Head of ML Infra at Sourcegraph | Torre

Full-time
Compensation: USD 243k/year
Location: Remote (anywhere)
Posted over 2 years ago

Requirements and responsibilities


Our mission at Sourcegraph is to make it so that everyone can code. Our code graph powers Cody, the most powerful and accurate code AI for writing, fixing, and maintaining code. Our customers include 4 of the 5 FAANG companies, 4 of the top 10 banks, government organizations, Uber, Plaid, and many other companies building the software that pushes the world forward. We've raised $225M at a $2.625B valuation from Andreessen Horowitz, Sequoia, Redpoint, Craft, and others.

We are creating a machine learning team at Sourcegraph, aimed at building the most powerful coding assistant in the world. Many companies are trying, but Sourcegraph has a unique advantage: our rich code graph. In the world of prompting LLMs, context is key, and for creating the right context, Sourcegraph's code data is simply the best you can get.

We are looking for a seasoned and deeply technical ML-engineering leader, with a strong AI background and experience with both smaller models and the new LLM ecosystem, who can help us deliver the world's best coding assistant and ML-powered developer tooling.

Responsibilities:
- Define our short-term roadmap for ML infrastructure on GCP.
- Set up the at-scale infrastructure for running benchmarks that compare coding assistants.
- Define a strategy for acquiring GPUs at scale for various personas.
- Define a rough roadmap for cost-optimizing our ML spend.
- Define our on-prem/self-hosted roadmap and recommended configurations for ML infra.
- Stay up to speed on, and drive, Sourcegraph's ML Infra strategy.
- Hire a world-class team of ML engineers.
- Deliver an ML-driven quality, benchmarking, and evaluation framework for coding assistants that runs at scale.
- Establish a longer-term roadmap that keeps us aligned with expected advances in LLMs.
- Run dozens to hundreds of experiments with prompting, embedding, fine-tuning, and other techniques.