AI Scientist, Safety at Mistral AI | Torre

AI Scientist, Safety

You'll shape the ethical future of AI by safeguarding groundbreaking LLMs.
Full-time

Legal agreement: Employment

Provide your expected compensation while applying
Hybrid (Paris, France)
Posted 6 months ago

Requirements and responsibilities


About Mistral

At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.

We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.

We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, the USA, the UK, Germany and Singapore. We are creative, low-ego and team-spirited.

Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture at https://mistral.ai/careers.

Role Summary

We are seeking an AI Scientist, Safety to evaluate, enhance, and build safety mechanisms for our large language models (LLMs). This role involves identifying and addressing potential risks, biases, and misuses of LLMs, ensuring that our AI systems are ethical, fair, and beneficial to society. You will work to monitor models, prevent misuse, and ensure user well-being, applying your technical skills to uphold principles of safety, transparency, and oversight.

Location

Paris or London

What you will do

Adversarial & Fairness Testing
- Design and execute adversarial attacks to uncover vulnerabilities in LLMs.
- Evaluate potential risks and harms associated with LLM outputs.
- Assess LLMs for biases and unfairness in their responses, and develop strategies to mitigate these issues.

Tools & Monitoring
- Develop monitoring systems (e.g. moderation tools) to detect unwanted behaviors in Mistral's products.
- Build robust and reliable multi-layered defenses for real-time improvement of safety mechanisms that work at scale.
- Investigate and respond to incidents involving LLM misuse or harmful outputs, and develop post-incident recommendations.
- Analyze user reports of inappropriate content or accounts.
- Contribute to the development of AI ethics policies and guidelines that govern the responsible use of LLMs.

Safety Fine-Tuning
- Work on safety tuning to improve the robustness of models.
- Collaborate with the AI development team to create and implement safety measures, such as content filters, moderation tools, and model fine-tuning techniques.
- Keep up to date with the latest research and trends in AI safety, LLMs, and responsible AI, and continuously improve our safety practices.

About you
- You have a degree in Computer Science, AI, Machine Learning, or a related field. Advanced degrees (MSc, PhD) are preferred.
- You are familiar with Python and are a highly proficient software engineer in at least one programming language (e.g. Python, Rust, Go, Java). You have hands-on experience with AI frameworks and tools (e.g. TensorFlow, PyTorch, JAX).
- You have strong technical engineering competence: you can design complex software and make it usable in production.
- You have a strong scientific track record in your field.
- You are a self-starter, autonomous and low-ego.
- You are collaborative and have a real team-player mindset.

Note that this is not an exhaustive or necessary list of requirements; please consider applying if you believe you have the skills to contribute to Mistral's mission.

Now, it would be ideal if
- You have proven experience in AI safety, responsible AI, or a related field. Familiarity with LLMs and their potential risks is essential.
- You have hands-on experience with generative AI, e.g. experience with transformer-based models, broad knowledge of the field of AI, and specific knowledge of or interest in fine-tuning and using language models for applications.
- You are able to navigate the full MLOps technical stack, with a focus on architecture development and on model evaluation and usage.

Benefits

France
💰 Competitive cash salary and equity
🥕 Food: daily lunch vouchers
🥎 Sport: monthly contribution to a Gympass subscription
🚴 Transportation: monthly contribution to a mobility pass
🧑‍⚕️ Health: full health insurance for you and your family
🍼 Parental: generous parental leave policy
🌎 Visa sponsorship

UK
💰 Competitive cash salary and equity
🚑 Insurance
🚴 Transportation: reimbursement of office parking charges, or £90/month for public transport
🥎 Sport: £90/month reimbursement for gym membership
🥕 Meal voucher: £200 monthly allowance for your meals
💰 Pension plan: SmartPension (5% employee and 3% employer contributions)

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.