Sr Software Data Engineer at TubeScience | Torre

You'll build and maintain data pipelines, optimizing ingestion of high-volume ad performance data.
Full-time

Legal agreement: Contractor

Currency exchange and taxes to be paid by: Candidate

Compensation
USD 5K–8.5K/month
Negotiable
Location
Remote (for Argentina residents)
Remote (for Brazil residents)
Remote (for Colombia residents)
Remote (for Peru residents)
Published 10 months ago

Responsibilities & more


What you’ll do:

- Build & Maintain Ingestion Pipelines: Design and build robust, maintainable data pipelines, primarily focused on ingesting ad performance data from sources such as Meta, YouTube, and TikTok.
- API Interactions: Set up and manage API interactions with various data sources to facilitate smooth, efficient, and reliable data pulls.
- Infrastructure Setup: Establish the infrastructure necessary for automated, scheduled data pulls, including observability tools and custom settings for complex ingestion tasks.
- Custom Logic Integration: Integrate custom ingestion logic into robust systems such as Airbyte, Airflow, Dagster, or Temporal to ensure seamless data flows.
- Pipeline Architecture Improvements: Regularly review our current data ingestion pipeline architecture, identify areas for improvement, and propose optimizations that improve performance and scalability.
- Collaborate & Lead: Work closely with cross-functional teams to understand data needs, and mentor junior engineers in building robust data pipelines.

What we’re looking for:

- Proven experience in data engineering, with a focus on building, scaling, and maintaining data pipelines and architecture.
- Proficiency in Python: building robust ingestion systems, handling API calls, and creating custom logic for data management.
- Advanced SQL knowledge: experience with complex queries, query optimization, and managing large datasets.
- Strong understanding of data ingestion/integration tools such as Airbyte, Airflow, Dagster, or Temporal.
- Excellent problem-solving and architecture skills: ability to analyze and optimize ingestion pipelines for better performance, scalability, and observability.
- ETL expertise: strong experience designing and implementing ETL processes to efficiently move, clean, and process data from source systems to data warehouses or data lakes (Coalesce/dbt/Databricks).
- Solid communication skills: comfortable collaborating with cross-functional, distributed teams and guiding junior engineers in English.
- Familiarity with cloud platforms (AWS required; GCP a plus).

Nice to have:

- Experience with Java for building custom API endpoints.
- Experience with React frontends for dashboards/admin tools.
- Experience using AI tools for data enrichment and classification.