
Technomantic


Discuss, Ask, and Learn AI Together

Join Technomantic, a vibrant AI community where you can discuss, ask, and learn all about artificial intelligence.

  1. Asked: May 29, 2025 · In: Deep Learning

    Anybody knows good methods to debug autograd issues in dynamic graphs, especially with JAX or PyTorch?

    Hassaan Arif (Enlightened) · Answered on May 31, 2025 at 1:28 pm


    If you’re hitting autograd issues in JAX or PyTorch, here’s what works for me:

    First, check gradients are even enabled – in PyTorch, make sure requires_grad=True. In JAX, use jax.grad only on functions with real float outputs.

    Use gradient checkers – PyTorch’s gradcheck or JAX’s check_grads help spot silent failures.

    Debug with hooks or prints – PyTorch has register_hook() on tensors to inspect gradients. In JAX, jax.debug.print() is a lifesaver inside jit.

    Simplify the code – isolate the function, drop the model size, and test with dummy data. Most bugs pop up when the setup is too complex.

    In short: test small, print often, and trust the math to guide you.
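The gradient-checker step can be sketched without any framework: compare an analytic derivative against a central finite difference, which is essentially what PyTorch's gradcheck and JAX's check_grads automate. A minimal pure-Python sketch (function names here are illustrative):

```python
def finite_diff_check(f, df, x, eps=1e-6, tol=1e-4):
    """Compare analytic derivative df(x) against a central finite difference of f."""
    numeric = (f(x + eps) - f(x - eps)) / (2 * eps)
    analytic = df(x)
    return abs(numeric - analytic) < tol

# Example: f(x) = x**3, so the correct derivative is 3*x**2.
print(finite_diff_check(lambda x: x**3, lambda x: 3 * x**2, 2.0))  # True
print(finite_diff_check(lambda x: x**3, lambda x: 2 * x, 2.0))     # False (buggy gradient)
```

The framework checkers do the same comparison over every input element, which is why they catch silent failures that a single forward pass never would.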

  2. Asked: May 31, 2025 · In: Computer Vision

    I'm working on an object detection project. Which datasets have you found most useful?

    Maya (Beginner) · Answered on May 31, 2025 at 1:48 am


    Nice! Object detection projects can be a lot of fun (and frustrating, in a good way). The dataset you choose really depends on what you're detecting, but some are just solid all around.
    If you’re working on general-purpose detection — stuff like people, cars, animals, etc. — COCO is probably the go-to. It’s massive, diverse, and has great annotations (bounding boxes, segmentation masks, keypoints). Pascal VOC is a bit older, but still clean and good for smaller-scale testing or benchmarking.
    If you’re dealing with specific domains, there are some really well-curated niche datasets:

    • Open Images (by Google) – huge and broad, but the annotations can be messy.
    • KITTI – great if you’re doing object detection in a self-driving or road scene context.
    • Roboflow Universe – surprisingly helpful. It has tons of community-uploaded datasets, and sometimes you’ll find exactly what you need, already annotated and ready to go.

    Honestly, I’ve found that combining a good base dataset (like COCO) with a small, clean custom dataset for your specific use case usually gives the best results. You don’t always need 100,000 images, just the right ones.
    If you tell me what you’re trying to detect, I might be able to suggest something more specific.
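On the "combine a base dataset with a small custom set" point: the fiddly part is usually reconciling class labels between the two sources. A toy sketch of merging class lists into one shared label space (all dataset and class names below are made up for illustration):

```python
def merge_label_spaces(*label_sets):
    """Build one shared class -> id map from several datasets' class lists."""
    merged = {}
    for labels in label_sets:
        for name in labels:
            merged.setdefault(name, len(merged))  # new classes get the next free id
    return merged

coco_like = ["person", "car", "dog"]  # classes from the base dataset
custom = ["car", "forklift"]          # classes from a small custom dataset

mapping = merge_label_spaces(coco_like, custom)
print(mapping)  # {'person': 0, 'car': 1, 'dog': 2, 'forklift': 3}
```

Shared classes ("car" here) keep one id, so annotations from both sources can be remapped consistently before training.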

  3. Asked: May 30, 2025 · In: Natural Language Processing (NLP)

    My training loss on my transformer model just won’t settle down, it keeps jumping all over the place. Could this be a learning rate issue or something else?

    Rundu (Beginner) · Answered on May 31, 2025 at 1:45 am


    Yeah, that kind of erratic loss can definitely be frustrating. From what you’re describing, it could be a learning rate issue — that’s often the first thing I’d look at. When the learning rate is too high, the model starts overshooting during optimization, kind of like it’s bouncing around instead of settling into a groove. Lowering it, even just a bit, can sometimes calm things down noticeably.
    But it’s not always that simple. Sometimes the issue isn’t just the learning rate itself, but how it’s changing over time — especially if you’re using a transformer. Those models really like having a learning rate warmup in the beginning and a proper decay afterward. If your schedule’s too aggressive or missing altogether, it could explain the instability.
    Also, not to freak you out, but sometimes the root cause is buried in something like bad input data or tiny batch sizes that make your training super noisy. Even things like not clipping gradients can silently cause chaos behind the scenes.
    If you want to dig deeper, feel free to share a few details like your learning rate, optimizer, and whether you’re using any warmup. Sometimes just tweaking one thing makes a world of difference.
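The warmup-plus-decay schedule mentioned above fits in a few lines. Linear warmup followed by cosine decay is a common transformer recipe; the base_lr and step counts below are just illustrative defaults, not tuned values:

```python
import math

def lr_schedule(step, base_lr=3e-4, warmup_steps=1000, total_steps=100_000):
    """Linear warmup to base_lr, then cosine decay toward zero."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps            # ramp up from ~0
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))  # smooth decay to 0
```

If the loss bounces early in training, a quick check is whether steps inside the warmup window are accidentally getting the full base_lr.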

  4. Asked: May 29, 2025 · In: AI Tools

    What are the most beginner-friendly tools/platforms to prototype a voice assistant (e.g., Rasa, Dialogflow, Alexa Skills Kit)?

    Rundu (Beginner) · Answered on May 31, 2025 at 1:45 am


    If you’re just getting started, Dialogflow is probably the easiest way in — it’s intuitive, mostly no-code, and you’ll have a working voice assistant in no time. If you’re more technical and want control, Rasa is great but takes more setup.
    Alexa Skills Kit is perfect if you’re building specifically for Alexa. And if you prefer a visual, drag-and-drop approach, Voiceflow is super beginner-friendly and fun to prototype with. It really comes down to how hands-on you want to be and where you want your assistant to live.

  5. Asked: May 31, 2025 · In: AI Tools

    I tried using AI tools for resume writing but got generic results. Any recommendations for better options?

    Runki (Beginner) · Answered on May 31, 2025 at 1:45 am


    Totally get that; a lot of AI resume tools spit out the same cookie-cutter stuff that doesn’t really help you stand out. If it feels like you’re reading a robot’s idea of a “strong communicator,” you’re not alone.
    Instead, try tools that let you guide the output more personally. Teal is a good one: it lets you tailor resumes to specific job descriptions and keeps track of your applications. It’s smart but still gives you control.
    Rezi is another solid option; it’s AI-powered but focused on keyword optimization, which can actually help with beating ATS filters without sounding fake.
    That said, the best results usually come from using AI as a starting point, not the final draft. Start with your real experience, feed that in, and then tweak what the AI gives you. Or better yet, treat the AI like a writing buddy: ask it to reword something you wrote, instead of writing from scratch.
    If you want, I can help you craft or edit a resume paragraph that actually sounds like you; just send over what you’ve got.

  6. Asked: May 29, 2025 · In: Artificial Intelligence

    What's the best way to normalize data without leaking info from the test set into the training process?

    Charlesg (Beginner) · Answered on May 29, 2025 at 10:25 pm


    When I first started working with machine learning, I made this classic mistake: I normalized my entire dataset before splitting it. And guess what? My model performed great, a little too great. 😅
    Turns out, I was leaking information from the test set into training without even realizing it.

    Here’s what I do now (and always recommend):

    1. First, split your data into train and test (or train/val/test).
    2. Fit your scaler only on the training set — not the whole dataset.

      python
      from sklearn.preprocessing import StandardScaler

      scaler = StandardScaler()
      scaler.fit(X_train)
    3. Then use that same scaler to transform both the training and test sets.

      python
      X_train_scaled = scaler.transform(X_train)
      X_test_scaled = scaler.transform(X_test)

    That way, your model learns only from the training data — just like it would in a real-world setting. No sneak peeks at the test set.
    Trust me, once you catch this, you’ll never scale data the old way again. It’s a small thing, but it makes a huge difference in keeping your model honest.
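The same fit-on-train-only rule in plain NumPy, for anyone not using scikit-learn (the toy arrays below are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(5.0, 2.0, size=(100, 3))  # toy training data
X_test = rng.normal(5.0, 2.0, size=(20, 3))    # toy held-out data

# Statistics come from the training set ONLY.
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

# The same mu/sigma are applied to both splits -- no peeking at the test set.
X_train_scaled = (X_train - mu) / sigma
X_test_scaled = (X_test - mu) / sigma
```

After this, the training split has mean ~0 and std ~1 per feature, while the test split is close but not exactly standardized, which is expected and is exactly the point: test statistics never influenced the scaling.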

  7. Asked: May 29, 2025 · In: Machine Learning

    I trained my model, but it's performing too well on validation — could this be data leakage? How do I check for that?

    joseph1 (Beginner) · Answered on May 29, 2025 at 9:49 pm


    I once trained a model that was performing way too well on the validation set — like, suspiciously good. At first, I was excited… but something felt off. Turned out, it was data leakage.
    Here’s what I did to figure it out:

    • I rechecked my data splits and found that some similar entries had ended up in both training and validation.
    • I reviewed my features — one of them was indirectly revealing the target.
    • I even tested a basic model, and it still performed too well, which confirmed my suspicion.

    Lesson learned: if your model feels like it’s “too perfect,” always check for leakage. It’ll save you a ton of headaches later.
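The first check above, looking for rows that ended up in both splits, can start with a simple exact-duplicate count (a toy sketch; real leakage checks often also need fuzzy or near-duplicate matching):

```python
def split_overlap(train_rows, val_rows):
    """Count exact-duplicate rows that appear in both splits (a common leakage source)."""
    train_set = {tuple(row) for row in train_rows}
    return sum(1 for row in val_rows if tuple(row) in train_set)

# Toy example: one row leaked from train into validation.
train = [[1, 2], [3, 4], [5, 6]]
val = [[3, 4], [7, 8]]
print(split_overlap(train, val))  # 1
```

Anything above zero here means your validation score is partly measuring memorization, not generalization.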





Technomantic

Technomantic is an AI platform for asking questions, solving AI problems, and connecting with others on machine learning, ChatGPT, NLP, and prompt engineering topics.


© 2025 Technomantic. All Rights Reserved