
Abhyudya Bhatnagar

About

Bengaluru, Karnataka, India

Contact Abhyudya regarding: 
Flexible work
Starting at USD 10/hour

Résumé


Jobs
  • GTM Engineering Intern
    INFLXD
    Jul 2025 - Current (9 months)
    Developed an end-to-end Prompt Generator backend using FastAPI + Supabase, powering 500+ finance-domain prompts across VC, PE, Sell-Side, FP&A, and IR. Optimized fuzzy-search and metadata-ranking algorithms for semantic retrieval with under-10 ms query latency. Implemented Supabase data pipelines enabling query-based filtering, prompt generation, and multi-model analytics. Integrated PostHog dashboards tracking search behavior and lead-magnet performance, guiding SEO and roadmap decisions.
  • Gen AI Intern
    Friska.AI
    Mar 2025 - Jul 2025 (5 months)
    Engineered a duplex conversational-AI pipeline (under 200 ms latency) integrating STT, ASR, and TTS. Architected a dual-LLM Q&A system, boosting data throughput 5× and cutting manual work by 80%. Trained domain-adapted LLMs on medical-nutrition datasets to deliver accurate, context-aware dietary recommendations.
  • Developer Intern
    Procohat
    Jun 2024 - Aug 2024 (3 months)
    Designed Power BI dashboards for real-time insights, improving decision-making and UX. Migrated 7 years of physical records to a virtual data lake via SAP ABAP, increasing data accessibility by 25%.
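The fuzzy-search plus metadata-ranking approach mentioned in the INFLXD role can be sketched roughly as follows. This is a minimal illustration using Python's stdlib difflib; the record fields, scoring weights, and blending formula are assumptions, not the actual INFLXD implementation:

```python
from difflib import SequenceMatcher

# Hypothetical prompt records; the metadata fields are illustrative only.
PROMPTS = [
    {"title": "DCF valuation walkthrough", "domain": "PE", "uses": 120},
    {"title": "Sell-side equity research summary", "domain": "Sell-Side", "uses": 300},
    {"title": "VC term sheet analysis", "domain": "VC", "uses": 80},
]

def fuzzy_score(query: str, text: str) -> float:
    """Similarity in [0, 1] between the query and a prompt title."""
    return SequenceMatcher(None, query.lower(), text.lower()).ratio()

def rank(query: str, prompts: list[dict], weight_meta: float = 0.2) -> list[dict]:
    """Blend fuzzy text similarity with a usage-based metadata boost."""
    max_uses = max(p["uses"] for p in prompts)
    scored = [
        (fuzzy_score(query, p["title"]) + weight_meta * p["uses"] / max_uses, p)
        for p in prompts
    ]
    return [p for _, p in sorted(scored, key=lambda s: s[0], reverse=True)]

results = rank("dcf valuation", PROMPTS)
```

A real deployment would precompute the metadata scores and cut off low-similarity matches, but the two-term blend above captures the basic fuzzy-plus-ranking idea.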
Education
  • Indian Institute of Information Technology
    Bachelor of Technology in Computer Science
    Nov 2022 - Current (3 years 5 months)
Projects (professional or personal)
  • DocuMind | Python
    Feb 2025
    Developed an AI-powered document Q&A system using Retrieval-Augmented Generation (RAG) with Qdrant vector storage, cutting query latency 4×. Integrated Google's Gemma 2 27B model for context-aware responses, improving answer quality by 11% in internal benchmark testing and reducing average response time for complex queries by 21 ms.
  • Knowledge Distillation Implementation | Python, Jupyter
    Jan 2025
    Achieved 94.4% accuracy by distilling BERT into DistilBERT, reducing model size by 40% while retaining 97% of the teacher's performance. Implemented a custom KL-divergence loss with temperature scaling to improve training efficiency.
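A temperature-scaled distillation loss of the kind the last project describes can be sketched as follows. This is a minimal NumPy illustration of KL divergence between softened teacher and student distributions (the T² factor follows the standard knowledge-distillation recipe); the actual training code, logits, and temperature are assumptions:

```python
import numpy as np

def softmax(logits: np.ndarray, T: float) -> np.ndarray:
    """Temperature-softened softmax along the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T: float = 2.0) -> float:
    """KL(teacher_T || student_T) * T^2, averaged over the batch.

    The T^2 factor keeps gradient magnitudes comparable across temperatures.
    """
    p = softmax(np.asarray(teacher_logits, dtype=float), T)
    q = softmax(np.asarray(student_logits, dtype=float), T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(kl.mean() * T**2)

# Illustrative logits for a batch of two 3-class examples.
teacher = [[4.0, 1.0, 0.5], [0.2, 3.0, 0.1]]
student = [[3.5, 1.2, 0.8], [0.1, 2.5, 0.4]]
loss = distillation_loss(teacher, student, T=2.0)
```

In practice this soft-target term is usually combined with the ordinary cross-entropy on hard labels, weighted by a mixing coefficient.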