MLOps / LLMOps Engineer at Irth Solutions | Torre

MLOps / LLMOps Engineer

You'll operationalize cutting-edge ML/GenAI solutions on a multi-cloud Lakehouse, ensuring secure, high-performance AI.
Full-time

Legal agreement: Employment

Provide your expected compensation while applying
Remote (for India residents)
Shared by
Emma of Torre.ai
5 days ago

Requirements and responsibilities


Company Summary

Irth Solutions is a software product company building cutting-edge technology platforms that enable data-driven insights across Damage Prevention, Asset Integrity, Land Management, and Stakeholder Engagement. With a strong product culture, collaborative environment, and high growth potential, Irth offers opportunities to work on enterprise-scale data and AI platforms. Irth is building a governed, multi-cloud Databricks Lakehouse to support analytics, AI/ML innovation, and customer-facing AI products across AWS, Azure, and GCP.

Job Summary

As an MLOps / LLMOps Engineer, you will design, automate, and operate scalable ML and LLM systems on Irth's enterprise Lakehouse platform. You will work closely with Data Science, Engineering, and Product teams to deploy reliable, secure, and production-ready ML and GenAI solutions. This role focuses on operationalizing ML models, building CI/CD pipelines, ensuring governance and compliance, and maintaining high-performance, observable AI systems.

Primary Responsibilities

ML/LLM Platform Development
- Operationalize model training, evaluation, packaging, and deployment using Databricks, Delta Lake, and medallion architecture.
- Implement Unity Catalog model governance, lineage tracking, and access control.
- Develop reusable job templates, cluster policies, and standardized deployment patterns.

ML/LLM Production Deployment
- Deploy and manage ML and GenAI solutions including risk scoring, anomaly detection, predictive maintenance, NLP, and RAG pipelines.
- Build and optimize LLM pipelines using vector databases, model serving endpoints, and inference workflows.
- Optimize models using quantization, caching, and performance tuning techniques.
- Implement batch and real-time inference pipelines with defined SLAs.

Reliability, Security & Compliance
- Implement data contracts, schema validation, and data quality checks across ML pipelines.
- Ensure secure handling of sensitive data, including PII detection, classification, and obfuscation.
- Maintain full lineage from data sources to deployed models and serving endpoints.
- Enforce data residency, governance, and compliance policies.

CI/CD Automation & Testing
- Implement CI/CD pipelines using GitHub Actions and Databricks Asset Bundles.
- Automate deployments across DEV, QA, and PROD environments.
- Develop unit and integration tests for data pipelines and ML models.
- Ensure version control, reproducibility, and automated deployment workflows.

Observability & Operations
- Monitor pipeline health, model performance, drift, and system reliability.
- Implement alerting, incident response workflows, and automated ticketing.
- Track LLM performance metrics including latency, hallucination rates, and API costs.
- Develop runbooks, disaster recovery procedures, and operational documentation.

FinOps & Cost Optimization
- Apply tagging policies and cost tracking for ML infrastructure.
- Support budget monitoring, cost optimization, and resource management.

Skills & Experience

Required:
- 3–5 years of experience in MLOps, LLMOps, or ML platform engineering roles.
- Hands-on experience with Databricks, Delta Lake, Unity Catalog, and ML deployment workflows.
- Strong experience with CI/CD pipelines using GitHub Actions and infrastructure automation.
- Experience implementing data quality validation, schema governance, and data contracts.
- Experience building production-grade ML pipelines with monitoring and observability.
- Strong security knowledge, including RBAC, encryption, data residency, and governance practices.
- Proficiency in Python, SQL, and distributed data processing frameworks.

Preferred:
- Experience with LLM pipelines, prompt engineering, RAG workflows, and model optimization.
- Experience with vector databases, model serving, and MLflow.
- Experience with Azure and AWS cloud platforms, including security and networking.
- Experience with geospatial data and analytics.
- Familiarity with Power BI, semantic layers, and enterprise analytics platforms.
- Experience with disaster recovery, FinOps, and enterprise-scale ML operations.

Education

Bachelor's or master's degree in Computer Science, Software Engineering, or a related field, or equivalent professional experience.

Benefits

What's in it for you:
- Being an integral part of a dynamic, growing company that is well respected in its industry.
- Competitive pay based on experience.
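For candidates unfamiliar with the data-contract and PII-obfuscation responsibilities listed above, here is a minimal sketch of what such checks can look like. The schema, field names, and email pattern are illustrative assumptions for this posting, not Irth's actual contracts or tooling.

```python
# Sketch of a data-contract check with naive PII obfuscation.
# CONTRACT and the email regex are hypothetical examples.
import re

CONTRACT = {            # expected column -> expected Python type
    "ticket_id": int,
    "risk_score": float,
    "notes": str,
}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # simplistic email detector

def validate(record: dict) -> list[str]:
    """Return a list of contract violations for one record."""
    issues = []
    for col, typ in CONTRACT.items():
        if col not in record:
            issues.append(f"missing column: {col}")
        elif not isinstance(record[col], typ):
            issues.append(f"bad type for {col}: expected {typ.__name__}")
    return issues

def mask_pii(text: str) -> str:
    """Obfuscate email addresses before a record leaves the pipeline."""
    return EMAIL_RE.sub("[REDACTED]", text)

# Example usage: a conforming record yields no issues; a bad one is flagged.
ok_record = {"ticket_id": 1, "risk_score": 0.7, "notes": "ok"}
bad_record = {"ticket_id": "1", "risk_score": 0.7}
print(validate(ok_record))
print(validate(bad_record))
print(mask_pii("contact a.b@example.com"))
```

In production these checks would typically run inside the ML pipeline itself (e.g., as Delta Lake constraints or pipeline expectations) rather than as standalone Python, but the underlying idea — a declared schema plus automated enforcement — is the same.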