Senior Data Engineer at Everst | Torre

Senior Data Engineer

Connecting the best talent with leading companies. ⛰
Full-time

Legal agreement: Contractor

Currency exchange and taxes to be paid by:

Candidate

Compensation
USD 2K–4K/month
Non-negotiable
Location

Remote (anywhere)
Posted over 2 years ago

Requirements and responsibilities


✨ Our client MedeLoop is looking for their next Senior Data Engineer ✨

Job Description:

As a Senior Data Engineer, you will be responsible for architecting, developing, and maintaining our data infrastructure and ETL pipelines. You will work closely with data scientists, analysts, and other stakeholders to ensure seamless data integration and reliable data processing. Your expertise will help us optimize data storage, improve data quality, and streamline data movement, contributing significantly to data governance and security.

Responsibilities:

🔸Design, develop, and maintain scalable data pipelines and ETL workflows to ingest, process, and transform data from diverse sources into our data warehouse and data lakes.
🔸Collaborate with cross-functional teams to understand data requirements and design solutions that support data analytics, reporting, and machine learning initiatives.
🔸Optimize data pipelines for performance, scalability, and cost efficiency on cloud platforms (AWS/Azure/GCP).
🔸Implement data validation, quality checks, and error handling mechanisms to ensure data integrity.
🔸Leverage big data technologies such as Apache Spark and Hadoop for distributed data processing.
🔸Monitor and troubleshoot data pipelines to ensure high availability and reliability.
🔸Champion data engineering best practices, code standards, and documentation.
🔸Stay up to date with industry trends and emerging technologies to drive continuous improvement.

Requirements:

✅Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
✅Proven experience (5+ years) as a Data Engineer, developing and maintaining data pipelines in a production environment.
✅Fluent English communication skills at C1–C2 level to collaborate effectively and present ideas to an international team.
✅Strong proficiency in Python and SQL for data processing and querying.
✅Hands-on experience with big data technologies such as Apache Spark and Hadoop.
✅Solid knowledge of cloud services (AWS/Azure/GCP) and experience deploying data infrastructure in a cloud environment.
✅Familiarity with data warehousing concepts and data modeling techniques.
✅Proficiency in using ETL tools like Apache Airflow, Apache NiFi, or Talend.
✅Experience with containerization (e.g., Docker) and container orchestration (e.g., Kubernetes).
✅Strong understanding of data governance, security, and data compliance practices.
✅Excellent problem-solving skills and the ability to work in a collaborative team environment.

Preferred (Not Required):

✨Experience with machine learning platforms like TensorFlow, PyTorch, or Scikit-learn.
✨Knowledge of data visualization tools like Tableau, Power BI, or Looker.
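To give a flavor of the pipeline work described above — ingest, validate, transform, load, with data-quality checks — here is a minimal sketch in plain Python. The field names and in-memory "warehouse" are purely illustrative assumptions, not part of the role description; a production pipeline would run inside an orchestrator such as Apache Airflow and write to a real warehouse.

```python
# Minimal extract-validate-transform-load sketch. All names (patient_id,
# measurement) are hypothetical examples; real sources, schemas, and sinks
# would be defined by the actual project.

def extract(source):
    """Ingest raw records from any iterable source."""
    return list(source)

def validate(records):
    """Data-quality check: separate records missing required fields."""
    required = {"patient_id", "measurement"}
    valid, rejected = [], []
    for rec in records:
        (valid if required <= rec.keys() else rejected).append(rec)
    return valid, rejected

def transform(records):
    """Normalize field types for the (illustrative) warehouse schema."""
    return [
        {"patient_id": str(r["patient_id"]),
         "measurement": float(r["measurement"])}
        for r in records
    ]

def load(records, warehouse):
    """Append transformed rows to the target table (a list stands in here)."""
    warehouse.extend(records)
    return len(records)

if __name__ == "__main__":
    raw = [
        {"patient_id": 1, "measurement": "98.6"},
        {"measurement": "101.2"},  # missing patient_id -> rejected
    ]
    warehouse = []
    valid, rejected = validate(extract(raw))
    loaded = load(transform(valid), warehouse)
    print(loaded, len(rejected))  # 1 1
```

In practice each stage would be a task in a DAG, with the rejected-record count feeding monitoring and alerting — the "error handling mechanisms" the role calls for.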