• 4+ years of experience as a Data Engineer across database development, ETL development, data modeling, report development, and big data technologies.
• Proficient in the complete Software Development Life Cycle (SDLC) under Agile and Waterfall methodologies.
• Experienced in developing web applications using R, Python, and SQL.
• Proficient with Python development environments such as PyCharm and Jupyter Notebook.
• Well-versed in building high-performance, scalable solutions with Hadoop ecosystem tools such as MapReduce, Hive, Apache Spark, and Pig.
• Experienced in developing reports and dashboards using Tableau visualizations.
• Experienced in developing SSIS packages to extract, transform, and load (ETL) data into data warehouses and data marts from heterogeneous sources.
• Knowledgeable in cloud platforms including Amazon Web Services, Azure, and Databricks (integrated with both Azure and AWS).
• Hands-on experience in data analysis using Python libraries such as NumPy, Pandas, and SciPy, and visualization libraries such as Seaborn and Matplotlib.
• Working knowledge of configuring and administering MySQL and MongoDB databases.