-Experience in the design, development, and implementation of Big Data applications using Hadoop ecosystem frameworks and tools such as HDFS, MapReduce, YARN, Pig, Hive, Sqoop, Spark, Scala, Storm, HBase, Kafka, Flume, NiFi, Impala, Oozie, ZooKeeper, and Airflow.
-Expertise in developing Scala and Java applications, with good working knowledge of Python.
-Expertise in ingesting, processing, exporting, and analyzing terabytes of structured and unstructured data on Hadoop clusters in the Healthcare, Insurance, and Technology domains.
-Experience working with various SDLC methodologies such as Waterfall, Agile Scrum, and TDD for developing and delivering applications.
-Experience in gathering and analyzing requirements, providing estimates, implementing solutions, and conducting peer code reviews.
-In-depth knowledge of Hadoop architecture and MapReduce concepts, with experience working with Hadoop components such as HDFS, JobTracker, TaskTracker, NameNode, and DataNode.
-Demonstrated experience delivering data and analytics solutions leveraging AWS, Azure, or similar cloud data lake platforms.
-Experience streaming data from various sources, both cloud (AWS, Azure) and on-premises, using Spark.
-Hands-on experience with AWS (Amazon Web Services), including Elastic MapReduce (EMR), S3 storage, EC2 instances, and data warehousing.