Does anybody know good methods to debug autograd issues in dynamic graphs, especially with JAX or PyTorch?
If you’re hitting autograd issues in JAX or PyTorch, here’s what works for me:
- First, check that gradients are even enabled: in PyTorch, make sure requires_grad=True; in JAX, use jax.grad only on functions with real float outputs.
- Use gradient checkers: PyTorch’s gradcheck or JAX’s check_grads help spot silent failures.
- Debug with hooks or prints: PyTorch has register_hook() on tensors to inspect gradients, and in JAX, jax.debug.print() is a lifesaver inside jit.
- Simplify the code: isolate the function, drop the model size, and test with dummy data. Most bugs pop up when the setup is too complex.
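A minimal sketch of the PyTorch side of those checks (the function f here is just a toy stand-in for your own):

```python
import torch
from torch.autograd import gradcheck

# Toy stand-in for your own function.
def f(x):
    return (x ** 2).sum()

# 1. Gradients only flow if requires_grad is set.
x = torch.randn(4, dtype=torch.double, requires_grad=True)

# 2. gradcheck compares analytic vs. numerical gradients (use double precision).
assert gradcheck(f, (x,), eps=1e-6, atol=1e-4)

# 3. A hook lets you inspect the gradient arriving at x during backward().
x.register_hook(lambda g: print("grad into x:", g))
f(x).backward()
```

The JAX analogues are jax.test_util.check_grads and jax.debug.print, but the shape of the workflow is the same.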
In short: test small, print often, and trust the math to guide you.
I'm working on an object detection project. Which datasets have you found most useful?
Nice object detection projects can be a lot of fun (and frustrating, in a good way). The dataset you choose really depends on what you’re detecting, but some are just solid all around.
If you’re working on general-purpose detection (stuff like people, cars, animals, etc.), COCO is probably the go-to. It’s massive, diverse, and has great annotations (bounding boxes, segmentation masks, keypoints). Pascal VOC is a bit older, but still clean and good for smaller-scale testing or benchmarking.
If you’re dealing with specific domains, it’s also worth hunting for well-curated niche datasets.
Honestly, I’ve found that combining a good base dataset (like COCO) with a small, clean custom dataset for your specific use case usually gives the best results. You don’t always need 100,000 images, just the right ones.
If you tell me what you’re trying to detect, I might be able to suggest something more specific.
My training loss on my transformer model just won’t settle down, it keeps jumping all over the place. Could this be a learning rate issue or something else?
Yeah, that kind of erratic loss can definitely be frustrating. From what you’re describing, it could be a learning rate issue — that’s often the first thing I’d look at. When the learning rate is too high, the model starts overshooting during optimization, kind of like it’s bouncing around instead of settling into a groove. Lowering it, even just a bit, can sometimes calm things down noticeably.
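You can see that overshooting on a toy problem (plain gradient descent on f(x) = x², nothing model-specific):

```python
# Plain gradient descent on f(x) = x^2; the gradient is 2x.
def run(lr, steps=20):
    x = 1.0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

stable = run(0.1)    # step factor 0.8: x shrinks toward the minimum
bouncing = run(1.1)  # step factor -1.2: x flips sign and grows every step
```

Same math, just one dimension: past a threshold, each update overshoots the minimum by more than it gains, so the loss oscillates and grows instead of settling.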
But it’s not always that simple. Sometimes the issue isn’t just the learning rate itself, but how it’s changing over time — especially if you’re using a transformer. Those models really like having a learning rate warmup in the beginning and a proper decay afterward. If your schedule’s too aggressive or missing altogether, it could explain the instability.
Also, not to freak you out, but sometimes the root cause is buried in something like bad input data or tiny batch sizes that make your training super noisy. Even things like not clipping gradients can silently cause chaos behind the scenes.
If you want to dig deeper, feel free to share a few details like your learning rate, optimizer, and whether you’re using any warmup. Sometimes just tweaking one thing makes a world of difference.
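If you want to try a warmup-plus-decay schedule, here’s a minimal sketch using PyTorch’s LambdaLR (the model, peak LR, and warmup length below are made-up placeholders):

```python
import torch

model = torch.nn.Linear(512, 512)                     # placeholder model
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)  # lr here is the peak
warmup_steps = 1000                                   # made-up warmup length

def lr_lambda(step):
    # Linear warmup to the peak, then inverse-sqrt decay
    # (the classic transformer schedule shape).
    if step < warmup_steps:
        return (step + 1) / warmup_steps
    return (warmup_steps / (step + 1)) ** 0.5

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)

# In the training loop, per batch:
#   loss.backward()
#   torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # also tames spikes
#   opt.step()
#   sched.step()
```

Gradient clipping (the clip_grad_norm_ line) is the other cheap stabilizer worth trying alongside the schedule.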
What are the most beginner-friendly tools/platforms to prototype a voice assistant (e.g., Rasa, Dialogflow, Alexa Skills Kit)?
If you’re just getting started, Dialogflow is probably the easiest way in — it’s intuitive, mostly no-code, and you’ll have a working voice assistant in no time. If you’re more technical and want control, Rasa is great but takes more setup. Alexa Skills Kit is perfect if you're building specifically for Alexa devices.
I tried using AI tools for resume writing but got generic results. Any recommendations for better options?
Totally get that a lot of AI resume tools spit out the same cookie-cutter stuff that doesn’t really help you stand out. If it feels like you’re reading a robot’s idea of a “strong communicator,” you’re not alone.
Instead, try tools that let you guide the output more personally. Teal is a good one; it lets you tailor resumes to specific job descriptions and keeps track of your applications. It’s smart but still gives you control.
Rezi is another solid option; it’s AI-powered but focused on keyword optimization, which can actually help with beating ATS filters without sounding fake.
That said, the best results usually come from using AI as a starting point, not the final draft. Start with your real experience, feed that in, and then tweak what the AI gives you. Or better yet, treat the AI like a writing buddy: ask it to reword something you wrote, instead of writing from scratch.
If you want, I can help you craft or edit a resume paragraph that actually sounds like you. Just send over what you’ve got.
What's the best way to normalize data without leaking info from the test set into the training process?
When I first started working with machine learning, I made this classic mistake: I normalized my entire dataset before splitting it. And guess what? My model performed great. A little too great. 😅
Turns out, I was leaking information from the test set into training without even realizing it.
Here’s what I do now (and always recommend):
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.fit(X_train)  # learn mean/std from the training set only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)  # reuse the training statistics
That way, your model learns only from the training data — just like it would in a real-world setting. No sneak peeks at the test set.
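And if you cross-validate, wrapping the scaler in a sklearn Pipeline re-fits it inside each fold automatically. A quick sketch with made-up data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Made-up data purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

# The scaler is fit on each training fold only, never on the held-out fold.
pipe = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(pipe, X, y, cv=5)
```

Same idea as above, just automated so you can’t accidentally fit on the held-out data.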
Trust me, once you catch this, you’ll never scale data the old way again. It’s a small thing, but it makes a huge difference in keeping your model honest.
I trained my model, but it's performing too well on validation — could this be data leakage? How do I check for that?
I once trained a model that was performing way too well on the validation set — like, suspiciously good. At first, I was excited… but something felt off. Turned out, it was data leakage.
Here’s what I did to figure it out: I rechecked my data splits and found that some similar entries had ended up in both training and validation sets.
Lesson learned: if your model feels like it’s “too perfect,” always check for leakage. It’ll save you a ton of headaches later.
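One cheap way to do that recheck is to look for rows that appear in both splits. A minimal sketch with made-up frames (train_df and test_df stand in for your own):

```python
import pandas as pd

# Made-up splits; swap in your own frames.
train_df = pd.DataFrame({"text": ["a", "b", "c"]})
test_df = pd.DataFrame({"text": ["c", "d"]})

# An inner merge on the shared columns surfaces exact duplicates across splits.
overlap = pd.merge(train_df, test_df, how="inner")
print(len(overlap), "rows appear in both splits")
```

Exact-match checks won’t catch near-duplicates, but they’re a good first pass before reaching for fuzzier comparisons.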