DevOps Engineer (AWS / Kubernetes / Terraform) at Linqia | Torre

Now hiring!

DevOps Engineer (AWS / Kubernetes / Terraform)

You'll build a cloud-native team that automates and optimizes a leading AI platform for global brands.
Full-time

Legal agreement: Contractor

Currency exchange and taxes to be paid by: Company

Compensation: USD 4,000/month (non-negotiable)
Location:
Remote (for Argentina residents)
Remote (for Bolivia residents)
Remote (for Brazil residents)
Remote (for Chile residents)
Posted 24 days ago

Requirements and responsibilities


Linqia is the leader in the influencer marketing industry. We are a growing tech start-up, having achieved 100% year-over-year growth and reached break-even. At Linqia, we partner with the world's largest brands, including Danone, AB InBev, Kimberly-Clark, Unilever, and Walmart, to build compelling and effective influencer marketing campaigns. Our AI-driven platform and team of experts are leading the transformation of influencer marketing. We value intelligence, recognize talent, and have instilled a culture that supports career development and growth for our employees. We thrive on innovation and accountability, with a customer-first attitude that adds true value to everything we touch. Our team members are smart, hard-working, have integrity, and love to have fun as we play to win.

SOFTWARE AND INFRASTRUCTURE DEVOPS ENGINEER

Experience level:
- 3+ years of experience supporting production SaaS in AWS.

Location:
- Anywhere in LATAM, preferably in Colombia.

Employment type:
- Full-time contract.

ABOUT THE ROLE:
- Build out a cloud-native team that owns the entire software delivery life cycle on Amazon Web Services.
- Combine deep Kubernetes expertise with Python and shell scripting to automate, monitor, and continuously improve the Linqia platform while driving FinOps practices to keep our cloud footprint efficient.
- Work in a GitOps culture where every change is delivered through pull requests and rolled out by automated pipelines.

WHAT YOU WILL DO:
- Design, maintain, and evolve our AWS account structure, VPC networking, IAM policies, security boundaries, and cost-management controls using Terraform and the AWS console.
- Maintain secure networking layers with AWS load balancers, ingress controllers, service-mesh policies, network policies, and zero-trust principles.
- Operate and harden production-grade Kubernetes clusters on AWS EKS, including upgrades, service mesh, policy management, and multi-cluster architectures driven by Argo CD.
- Build reusable infrastructure-as-code modules with Terraform that provision cloud resources in minutes while enforcing tagging standards and least-privilege access.
- Create self-service CI/CD pipelines in Jenkins and GitHub Actions for fast, safe releases with automated testing and promotion across environments.
- Deliver real-time observability with Datadog, Prometheus, Grafana, CloudWatch, and OpenTelemetry, and use these tools to help diagnose and resolve production issues.
- Administer and maintain purpose-built Linux VMs via configuration-management tools such as Puppet, Ansible, or Chef.
- Deploy, scale, and maintain databases on AWS (Aurora, PostgreSQL, MySQL, OpenSearch, etc.), maintaining high database performance and uptime, optimizing tables and datasets, and ensuring disaster-recovery protocols are in place.
- Support developers by maintaining Podman-based local dev boxes and Kubernetes staging environments that mirror production, ensuring a smooth hand-off from local code to cloud-native deployments.
- Implement FinOps practices: track and forecast AWS spend, enforce cost-allocation tagging (see the sketch after this list), identify rightsizing opportunities, manage Savings Plans or Reserved Instances, and build cost-optimization dashboards for engineering and finance stakeholders.
- Write automation utilities and command-line tools in Python and craft shell scripts that glue components and workflows together.
- Champion reliability through incident reviews, capacity planning, game days, chaos testing, and service-level-objective tracking.
- Collaborate in Agile rituals, plan sprints, refine backlog tickets, and pair with peers to spread DevOps and FinOps best practices.
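To make the FinOps and Python-automation bullets concrete, here is a minimal sketch of the kind of command-line utility the role describes: a boto3 script that audits EC2 instances for missing cost-allocation tags. The REQUIRED_TAGS keys and the default region are hypothetical placeholders, not Linqia's actual standards.

"""Audit EC2 instances for missing cost-allocation tags.

A minimal sketch of the kind of Python utility this role describes.
REQUIRED_TAGS and the default region are hypothetical examples.
"""
import boto3

REQUIRED_TAGS = {"team", "env", "cost-center"}  # hypothetical tag keys

def find_untagged_instances(region: str = "us-east-1") -> list[dict]:
    """Return instances missing any required cost-allocation tag."""
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    # Paginate so the audit works in accounts with many instances.
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    offenders.append({
                        "InstanceId": instance["InstanceId"],
                        "MissingTags": sorted(missing),
                    })
    return offenders

if __name__ == "__main__":
    for row in find_untagged_instances():
        print(f"{row['InstanceId']}: missing {', '.join(row['MissingTags'])}")

A script like this could feed a cost-optimization dashboard or fail a CI check when untagged resources appear.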
WHAT YOU BRING:
- Bachelor's degree in Computer Science or equivalent practical experience.
- Three-plus years working with cloud infrastructure or platform engineering focused on AWS.
- Deep hands-on experience with Kubernetes, preferably EKS, covering upgrades, networking, storage, RBAC, and custom resources (see the sketch after this section).
- Proficiency in Python and Bash or Zsh scripting.
- Strong understanding of core AWS services: EC2, VPC, IAM, ALB, S3, RDS, CloudFormation, and CloudWatch.
- Demonstrated experience applying FinOps principles: cost monitoring, forecasting, and optimization on AWS.
- Solid experience with Docker and container runtimes, with emphasis on Podman for local development environments.
- Hands-on practice with configuration-management tools such as Ansible or Puppet and infrastructure as code with Terraform.
- Proven use of Datadog for metrics, logs, and APM, plus familiarity with Prometheus and Grafana dashboards.
- Comfort with Git-based workflows, feature branching, and pull-request reviews.
- Strong SQL skills and a deep understanding of relational database internals.
- Competence in Linux administration, process troubleshooting, and performance tuning.
- Practical knowledge of TCP/IP, HTTP, TLS, DNS, and common networking tools.
- Clear communication skills and the ability to translate complex technical topics for diverse audiences.
- Familiarity with Scrum or Kanban and a continuous-improvement mindset.

EXTRA CREDIT:
- AWS certifications such as Solutions Architect, DevOps Engineer, or FinOps Practitioner.
- Experience with AWS security tooling: GuardDuty, Security Hub, IAM Access Analyzer, and KMS.
- Building data pipelines with Apache Spark, Flink, or similar frameworks.
- Implementing event-driven architectures with Kafka Streams or KSQL.
- Applying SRE practices such as error budgets and service-level dashboards.
- Exposure to machine-learning workflows, ModelOps, or MLOps in production.
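As an illustration of the Kubernetes depth listed above, here is a minimal sketch using the official kubernetes Python client to flag pods that are not Ready; it assumes a kubeconfig is already pointed at the target EKS cluster.

"""List pods that are not Ready across all namespaces.

A minimal sketch using the official kubernetes Python client;
assumes a kubeconfig is configured for the target cluster.
"""
from kubernetes import client, config

def unready_pods() -> list[str]:
    """Return namespace/name for every pod lacking a Ready=True condition."""
    config.load_kube_config()  # use load_incluster_config() when run in-cluster
    v1 = client.CoreV1Api()
    offenders = []
    for pod in v1.list_pod_for_all_namespaces().items:
        conditions = pod.status.conditions or []  # may be None for pending pods
        ready = any(c.type == "Ready" and c.status == "True" for c in conditions)
        if not ready:
            offenders.append(f"{pod.metadata.namespace}/{pod.metadata.name}")
    return offenders

if __name__ == "__main__":
    for name in unready_pods():
        print(name)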

Indefinitely open
