Principal Engineer - AI at Safe Security | Torre

Principal Engineer - AI

You will architect AI's future in cyber risk.
Full-time

Legal agreement: Employment

Provide your expected compensation while applying
Bengaluru, Karnataka, India
Posted 6 months ago

Requirements and responsibilities


At SAFE Security, our mission is bold and ambitious: We Will Build CyberAGI - a super-specialized system of intelligence that autonomously predicts, detects, and remediates threats. This isn't just a vision - it's the future we're building every day, with the best minds in AI, cybersecurity, and risk. At SAFE, we empower individuals and teams with the freedom and responsibility to align their goals, ensuring we all move towards this goal together.

We operate with radical transparency, autonomy, and accountability - there's no room for brilliant jerks. We embrace a culture-first approach, offering an unlimited vacation policy, a high-trust work environment, and a commitment to continuous learning. For us, Culture is Our Strategy - check out our Culture Memo to dive deeper into what makes SAFE unique.

As a Principal Engineer - AI, you will define and lead the technical direction of the AI systems that power Safe's CRQ, CTEM, and TPRM products, including agentic workflows, RAG pipelines, LLM orchestration, and AI-native developer tooling. You'll be the hands-on architect behind Safe's AI engineering stack, bridging model intelligence with production-grade infrastructure. You'll collaborate with product, data, and platform teams to design scalable, explainable, and enterprise-ready systems. This is a high-impact technical leadership role that will shape how AI is built, deployed, and governed across Safe.

Core Responsibilities:

- Architect Safe's AI Systems: Design and scale AI-driven components - LLM orchestration, retrieval-augmented generation (RAG), vector stores, prompt pipelines, and AI microservices. Drive architecture for AI observability, safety, and evaluation (precision, recall, F1, hallucination detection, cost metrics).
- Productionize AI Agents: Build multi-turn, goal-oriented agent systems that automate reasoning across TPRM, CTEM, and CRQ domains (e.g., control reviews, issue RCA, automated responses). Ensure reliability, traceability, and deterministic behavior in production.
- AI Infrastructure & Platform Ownership: Partner with Platform and DevOps teams to operationalize model serving (AWS SageMaker, Bedrock, or self-hosted Llama), build AI APIs, and manage model lifecycle and versioning. Establish feature stores, embedding management, and in-memory retrieval layers.
- Data Pipeline & Knowledge Graph Integration: Work with Data Engineering to design pipelines for structured and unstructured data ingestion, semantic indexing, and context retrieval (Snowflake + Iceberg + LlamaIndex).
- AI Evaluation, Monitoring & Governance: Define internal frameworks for golden-dataset validation, LLM evaluation (Langfuse/LangSmith), and safety enforcement policies. Implement human-in-the-loop (HITL) mechanisms and continuous feedback loops.
- Mentor & Multiply: Guide AI and backend engineers on architectural design, experimentation methodologies, and prompt optimization. Collaborate with product leaders to translate abstract AI goals into measurable engineering deliverables.

Minimum Qualifications:

- Experience: 12+ years of total software engineering experience, including 4+ years building AI/ML systems or large-scale data/LLM infrastructure.
- Core technical skills:
  - Strong programming fundamentals in Python, Go, or TypeScript
  - Deep understanding of LLM-based architectures, prompt engineering, and RAG pipelines
  - Hands-on experience with LangChain, LlamaIndex, or equivalent orchestration frameworks
  - Vector databases (FAISS, Pinecone, Weaviate, Redis Vector, or Milvus)
  - Cloud model deployment (AWS SageMaker, Bedrock, Vertex AI, or custom inference APIs)
  - Data systems: Snowflake, Iceberg, S3, Postgres/MySQL
- MLOps & infrastructure: Familiarity with model versioning, CI/CD for ML, and performance optimization for real-time inference.
- Applied AI focus: Practical understanding of evaluation metrics, hallucination detection, RAG reliability, and enterprise AI safety.

Preferred Qualifications:

- Experience integrating AI into cybersecurity or risk management products
- Familiarity with multi-agent systems and autonomous workflows (CrewAI, LangGraph, AutoGen)
- Experience building AI evaluation dashboards and AI observability stacks
- Knowledge of knowledge graphs, semantic search, or retrieval pipelines
- Exposure to data governance, compliance, or SOC 2/ISO 27001 environments
- Published research, open-source contributions, or prior leadership of AI teams is a strong plus

If you're passionate about cyber risk, thrive in a fast-paced environment, and want to be part of a team that's redefining security, we want to hear from you! 🚀

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
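For candidates less familiar with the RAG pipelines named in the responsibilities: retrieval-augmented generation centers on one step, embedding a query and fetching the nearest stored chunks before prompting an LLM. A minimal sketch of that retrieval step in plain Python (toy three-dimensional "embeddings" and hypothetical chunk names for illustration only; a real system would use a vector database such as FAISS or Pinecone):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, store, k=2):
    # store: list of (chunk_text, embedding) pairs - a stand-in
    # for a vector index. Rank chunks by similarity to the query.
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy embeddings; in practice these come from an embedding model.
store = [
    ("Control review procedure",  [0.9, 0.1, 0.0]),
    ("Incident RCA template",     [0.1, 0.8, 0.1]),
    ("Vendor risk questionnaire", [0.2, 0.1, 0.9]),
]
print(top_k([0.85, 0.2, 0.05], store, k=1))  # → ['Control review procedure']
```

The retrieved chunks would then be placed into the LLM prompt as context; everything else in a production RAG stack (chunking, semantic indexing, re-ranking) elaborates on this core loop.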
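The evaluation metrics named above (precision, recall, F1) reduce to simple counts over labeled examples. A minimal sketch, assuming binary relevance labels on sets of item IDs (illustrative only, not Safe's evaluation framework):

```python
def precision_recall_f1(predicted, actual):
    # predicted, actual: sets of item IDs flagged as positive.
    tp = len(predicted & actual)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)             # harmonic mean
    return precision, recall, f1

p, r, f1 = precision_recall_f1({"a", "b", "c"}, {"b", "c", "d"})
print(round(p, 2), round(r, 2), round(f1, 2))  # → 0.67 0.67 0.67
```

Golden-dataset validation, as mentioned in the governance responsibility, typically means tracking these numbers over a fixed labeled set across model and prompt revisions.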