Lakshmi Sowjanya
About
Sr. Cloud Data Engineer at Centene Corporation | Open for contract on C2C
North Brunswick Township, New Jersey, United States
Around 10 years of technical expertise across the complete software development life cycle (SDLC), with hands-on experience in cloud environments such as AWS, Azure, and GCP. Strong expertise in building data lakes and data marts on the Big Data/Hadoop framework. Worked extensively with streaming and batch-processing frameworks such as Kafka, Spark, and NiFi.
• Excellent working experience with Scrum/Agile, Iterative, and Waterfall project execution methodologies.
• Expertise in Hadoop architecture and components such as HDFS, YARN, High Availability, Job Tracker, Task Tracker, Name Node, Data Node, and the MapReduce programming paradigm.
• Good experience in core Python object-oriented programming.
• Good experience communicating with devices using Python (e.g., ports and sockets).
• Experience with Delta Lake on AWS and Azure.
• Experience in design, development, data migration, testing, support, and maintenance using Redshift databases.
• Experience with Apache Hadoop technologies including the Hadoop Distributed File System (HDFS), the MapReduce framework, Hive, Pig, PySpark, Sqoop, Oozie, HBase, Spark, Scala, and Python.
• Experience using Microsoft Azure SQL Database, Data Lake, Azure ML, Azure Data Factory, Functions, Databricks, and HDInsight.
• Hands-on experience importing and exporting data between relational databases and HDFS, Hive, and HBase using Sqoop.
• Experience in AWS cloud solution development using Lambda, SQS, SNS, DynamoDB, Athena, S3, EMR, EC2, Redshift, AWS Glue, CloudWatch, Step Functions, and CloudFormation.
• Experience with SAS programming.
• Experienced in processing real-time data using Kafka producers and stream processors; implemented stream processing with Kinesis, landing data into an S3 data lake.
• Experience implementing multi-tenant models for the Hadoop 2.0 ecosystem using various big data technologies.
• Working experience with big data in the cloud on AWS EC2 and Microsoft Azure; handled Redshift and DynamoDB databases holding up to 300 TB of data.
• Extensive experience migrating on-premises Hadoop platforms to cloud solutions on AWS and Azure.
• Experience writing ETL frameworks in Python and PySpark to process huge volumes of data daily.
• Strong experience implementing data models and loading unstructured data using HBase.
Contact Lakshmi regarding:
• Work
• Full-time jobs