Requirements:
- Experience with Azure Container Apps and Service Bus.
- Proficiency in Microsoft Fabric: Lakehouse Delta Tables, Warehouse, Environments, Notebooks, Pipelines, Dataflow Gen2, Semantic Models.
- Strong knowledge of PostgreSQL.
- Advanced skills in Python, PySpark, and SQL.
- Optional Azure services: API Management, Event Hubs, Application Insights, Key Vault, Logic Apps, Data Lake Storage Gen2.
- CI/CD: Experience with GitLab or Azure DevOps.
- Other optional tools: Jira, Confluence, Service Bus Explorer, DBeaver, Storage Explorer, OneLake, Postman.
- English level: B1 or higher.
Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines and solutions using Microsoft Azure and Microsoft Fabric components.
- Implement and manage data solutions within Microsoft Fabric, including Lakehouse Delta Tables, Warehouse, Environments, Notebooks, Pipelines, Dataflow Gen2, and Semantic Models.
- Work extensively with large datasets, ensuring data quality, consistency, and reliability across all platforms.
- Develop, optimize, and troubleshoot queries and scripts using SQL, Python, and PySpark for data extraction, transformation, and loading (ETL/ELT) processes (see the first sketch after this list).
- Manage and optimize PostgreSQL databases, ensuring performance, availability, and data integrity (see the query-plan sketch after this list).
- Deploy and orchestrate microservices with Azure Container Apps, and handle message queuing with Azure Service Bus (see the Service Bus sketch after this list).
- Collaborate with data scientists, analysts, and other engineering teams to understand data requirements and deliver actionable insights.
- Implement and maintain CI/CD pipelines using tools like GitLab or Azure DevOps for automated deployments and infrastructure changes.
- Monitor system performance, troubleshoot issues, and ensure the stability and security of data platforms.
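To give a flavor of the PySpark and Lakehouse work described above, here is a minimal sketch of an ETL step in a Microsoft Fabric notebook. The file path, column names, and table name are hypothetical placeholders, and `spark` is the session that Fabric notebooks provide automatically.

```python
# Minimal ETL sketch for a Fabric notebook: read raw CSV from the
# Lakehouse Files area, clean it, and persist it as a Delta table.
# "Files/raw/orders.csv", the column names, and "orders_clean" are
# hypothetical placeholders.
from pyspark.sql import functions as F

raw = spark.read.option("header", True).csv("Files/raw/orders.csv")

cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)

# In a Fabric Lakehouse, saveAsTable writes a managed Delta table.
cleaned.write.mode("overwrite").format("delta").saveAsTable("orders_clean")
```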
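For the PostgreSQL responsibility, a typical troubleshooting step is inspecting a slow query's execution plan. This is a hedged sketch using psycopg2; the connection string, table, and filter value are placeholders, not details from this role.

```python
# Sketch: fetch the execution plan of a suspect query with
# EXPLAIN (ANALYZE, BUFFERS). DSN and query are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=app user=etl host=db.example.internal")
with conn, conn.cursor() as cur:
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) "
        "SELECT * FROM orders WHERE customer_id = %s",
        (42,),
    )
    # Each row of the result set is one line of the plan.
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```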
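For the Service Bus responsibility, this sketch receives and settles a small batch of queue messages with the azure-servicebus SDK (v7); the connection string and queue name are assumed placeholders.

```python
# Sketch with the azure-servicebus SDK: receive a batch of messages
# from a queue and complete each one. Connection string and queue
# name are placeholders.
from azure.servicebus import ServiceBusClient

CONN_STR = "<service-bus-connection-string>"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(queue_name="ingest") as receiver:
        for msg in receiver.receive_messages(max_message_count=10,
                                             max_wait_time=5):
            print(str(msg))                 # message body as text
            receiver.complete_message(msg)  # remove it from the queue
```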