
Rama Krishna Reddy Molakaseema

About


Buffalo, New York, United States

Contact Rama regarding:
Full-time jobs


Résumé


Jobs
  • Data Engineer
    LPL Financial
    May 2024 - Current (1 year 10 months)
    • Designed and deployed a high-throughput, real-time data pipeline using AWS Kinesis, enabling seamless processing of 500K+ user contacts and transactions and boosting ingestion performance by 75% for mission-critical systems.
    • Built scalable ETL workflows with AWS Glue and Python (Pandas, PySpark) to unify disparate datasets, streamline complex transformations, and cut processing time by 60%.
    • Implemented real-time stream processing with AWS Lambda and batch processing with Hadoop, integrating Scikit-learn for anomaly detection to accelerate ML-driven insights and decision-making (a hypothetical Lambda/Kinesis sketch follows the job list).
  • Data Engineer
    Tiger Analytics
    Jun 2022 - Jul 2023 (1 year 2 months)
    • Architected a secure, scalable ETL pipeline to migrate data from SQL Server to Azure Data Lake, improving regulatory compliance and integration efficiency for critical business operations.
    • Accelerated data ingestion by 60% using Azure Data Factory with Data Flows and Triggers, while optimizing storage with the Parquet format for better performance.
    • Optimized transformation workflows in Azure Databricks with PySpark and Pandas, cutting processing time by 50% and enabling near real-time analytics (a PySpark sketch follows the job list).
    • Enhanced scalability and performance for 500GB+ datasets through Azure Synapse, supporting complex queries and high-speed analytics across business units.
    • Improved data retrieval speed by 50% via strategic partitioning and indexing in …
  • Associate Data Engineer
    Genpact
    May 2021 - May 2022 (1 year 1 month)
    • Architected a high-throughput, real-time analytics pipeline with Apache Kafka, seamlessly processing over 1 million messages per minute to power immediate, data-driven decisions (a consumer sketch follows the job list).
    • Accelerated data processing for 500 GB of daily input by optimizing distributed data workflows using Apache Spark, significantly improving pipeline efficiency.
    • Built robust ETL systems in Python (Pandas, NumPy) to integrate, cleanse, and transform data from 15+ disparate sources, loading 2 TB of structured data into AWS Redshift for advanced analytics.
    • Crafted complex, performance-optimized SQL queries in AWS Redshift using window functions and multi-level joins to analyze over 10 TB of data, driving accurate, high-value reporting.
    • Automated 50+ missi…
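A minimal sketch of the kind of Lambda-side stream processing described in the LPL Financial role: decode each Kinesis record, parse the JSON payload, and hand it to downstream handling. The field names, the route_transaction helper, and the batch-failure response shape are assumptions for illustration, not details taken from the résumé.

```python
# Hypothetical AWS Lambda handler for a Kinesis-backed pipeline.
import base64
import json

def route_transaction(payload: dict) -> None:
    # Placeholder for downstream handling (e.g. write to a datastore or queue).
    print(f"processing event for user {payload.get('user_id')}")

def handler(event, context):
    """Entry point invoked by AWS Lambda for a batch of Kinesis records."""
    for record in event["Records"]:
        # Kinesis delivers record data base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        route_transaction(payload)
    # Assumes the event source mapping reports batch item failures; here none.
    return {"batchItemFailures": []}
```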
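For the Tiger Analytics role, a minimal PySpark sketch of a Databricks-style transformation that reads raw data, normalizes a few columns, and writes date-partitioned Parquet. The paths, column names, and partitioning scheme are assumptions; only the general read-transform-write-Parquet shape mirrors the description.

```python
# Hypothetical PySpark transformation writing partitioned Parquet output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw CSV landed in the data lake (path is illustrative).
raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

cleaned = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))   # parse timestamps
    .withColumn("amount", F.col("amount").cast("double")) # enforce numeric type
    .dropDuplicates(["order_id"])                          # remove replays
    .filter(F.col("amount").isNotNull())
)

# Parquet with date partitioning keeps downstream scans cheap.
(
    cleaned
    .withColumn("order_date", F.to_date("order_ts"))
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("/mnt/curated/orders/")
)
```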
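For the Genpact role, a minimal consumer sketch of a Kafka-based pipeline, assuming the kafka-python client; the topic name, consumer group, and message schema are illustrative only.

```python
# Hypothetical Kafka consumer for a real-time analytics pipeline.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "transactions",                      # topic name is an assumption
    bootstrap_servers=["localhost:9092"],
    group_id="analytics-pipeline",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Downstream handling would aggregate or forward the event; print as a stub.
    print(event.get("transaction_id"), event.get("amount"))
```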
Education
  • Master of Science
    University at Buffalo-The State University of New York
    Aug 2023 - Dec 2024 (1 year 5 months)
Projects (professional or personal)
  • AI-Powered Chatbot for Trading Insights
    Built an end-to-end AI chatbot using Flask, LangChain, AstraDB, and the Gemini API, integrating a Retrieval-Augmented Generation (RAG) model to deliver real-time trading insights with 95% response accuracy while handling 500+ concurrent queries (a schematic RAG sketch follows the project list).
  • Metric Learning for Image Similarity Search
    Developed a self-supervised machine learning pipeline with a hybrid loss function (Triplet Loss and Categorical Cross-Entropy) and optimized data pipelines, improving image similarity search precision by 15% and model accuracy by 20% (a minimal hybrid-loss sketch follows the project list).
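A schematic sketch of the retrieval-augmented generation flow behind the chatbot project, exposed through a Flask endpoint. The retrieve_context and generate_answer helpers stand in for the AstraDB vector lookup and the Gemini call and are purely hypothetical; only the overall retrieve-then-generate shape mirrors the project description.

```python
# Hypothetical Flask endpoint wrapping a RAG pipeline.
from flask import Flask, jsonify, request

app = Flask(__name__)

def retrieve_context(question: str, k: int = 4) -> list[str]:
    # Placeholder: query a vector store for the k most similar passages.
    return ["<retrieved passage 1>", "<retrieved passage 2>"]

def generate_answer(question: str, passages: list[str]) -> str:
    # Placeholder: send the question plus retrieved passages to an LLM.
    prompt = "Context:\n" + "\n".join(passages) + f"\n\nQuestion: {question}"
    return f"(model answer to: {question})"

@app.post("/chat")
def chat():
    question = request.get_json(force=True).get("question", "")
    passages = retrieve_context(question)          # retrieval step
    answer = generate_answer(question, passages)   # generation step
    return jsonify({"answer": answer, "sources": passages})

if __name__ == "__main__":
    app.run(port=8000)
```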
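For the metric-learning project, a minimal PyTorch sketch of a hybrid objective of the kind described: a triplet margin term on the embeddings plus categorical cross-entropy on a classification head. The backbone, input size, loss weighting, and margin are assumptions for illustration.

```python
# Hypothetical hybrid triplet + cross-entropy loss for metric learning.
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    def __init__(self, embed_dim: int = 128, num_classes: int = 10):
        super().__init__()
        # Tiny MLP backbone sized for 28x28 inputs, purely illustrative.
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        emb = self.backbone(x)
        return emb, self.classifier(emb)

triplet_loss = nn.TripletMarginLoss(margin=1.0)
ce_loss = nn.CrossEntropyLoss()

def hybrid_loss(model, anchor, positive, negative, labels, alpha: float = 0.5):
    """Weighted sum of triplet loss on embeddings and cross-entropy on logits."""
    a_emb, a_logits = model(anchor)
    p_emb, _ = model(positive)
    n_emb, _ = model(negative)
    return alpha * triplet_loss(a_emb, p_emb, n_emb) + (1 - alpha) * ce_loss(a_logits, labels)
```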