AI Red Team Engineer at Easy Recruit Global | Torre

AI Red Team Engineer

You'll lead offensive security testing to identify high-risk AI agent vulnerabilities and strengthen LLM safety.
Full-time

Legal agreement: Contractor

Currency exchange and taxes to be paid by: Candidate

Compensation: USD 10-15/hour (non-negotiable)
Remote (for India residents)
Remote (for Pakistan residents)
Remote (for Nigeria residents)
Remote (for Kenya residents)

Published 2 days ago

Responsibilities & more


Role Overview:
Lead offensive security testing of an AI Agent, a tool-augmented LLM that can browse, run code, access connectors (GDrive, Gmail, GitHub, etc.), and act on behalf of users. The goal is to uncover high-risk model mistakes, prompt-injection pathways, and data-exfiltration vectors before adversaries do.

Day-to-day responsibilities:
* Design and automate multi-turn attack chains spanning browser, terminal, and connector-API misuse.
* Craft multi-turn conversations that co-opt Agent tools to induce high-impact mistakes, such as unauthorized purchases or data deletion.
* Design prompt-injection and data-exfiltration scenarios, including malicious webpages, poisoned Google Docs, and cross-connector inference attacks.
* Script repeatable tests in Python or bash inside the VM and build harnesses to replay payloads after mitigations (a minimal illustrative sketch follows the Offer Details below).
* Verify compliance with policy guardrails (PD5, FA2) and attempt policy-bypass exploits.

Requirements:
* 2+ years of hands-on offensive security or adversarial ML experience, including at least 1 year in LLM or prompt-injection testing.
* Deep fluency with classic AppSec techniques (XSS, CSRF, SSRF) and LLM-specific issues (jailbreaks, hidden prompt channels).
* Comfortable orchestrating attacks that chain browser automation, terminal commands, HTTP requests, and API calls.
* Proficient in Python and bash; capable of prototyping tooling inside a constrained VM.
* Proven track record of clear vulnerability write-ups (CVE, HackerOne, or internal bug bounty).
* Working knowledge of privacy and financial-risk policies (GDPR, SOC 2, or comparable).

Nice-to-Have:
* Published research or conference talks on AI red-teaming (DEF CON, Black Hat, MLSecOps, etc.).
* Familiarity with the OpenAI policy taxonomy (PD1-PD5, FA1-FA3).
* Certifications: OSCP, GXPN, or CCSK (cloud).

What We Offer:
* Fully remote work environment.
* Opportunity to work on cutting-edge AI projects with leading LLM companies.

Offer Details:
* Commitment: at least 4 hours per day and a minimum of 20 hours per week, with 4 hours of overlap with PST (options: 20, 30, or 40 hrs/week).
* Employment type: contractor assignment (no medical or paid leave).
* Contract duration: 2 months; expected start date next week.
* Locations: India, Pakistan, Nigeria, Kenya, Egypt, Ghana, Bangladesh, Turkey, Mexico.
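As context for the replay-harness responsibility above, here is a minimal sketch in Python of what such a harness might look like. The agent endpoint URL, the JSONL payload format, and the success-marker heuristic are illustrative assumptions for this sketch, not details from the engagement itself.

```python
"""Minimal replay-harness sketch for adversarial prompts (illustrative only).

Assumptions (not from this posting): the agent is reachable over a simple
HTTP chat endpoint, payloads live in a JSONL file, and each payload declares
a marker string whose presence in the reply indicates the attack landed.
"""
import json
from pathlib import Path

import requests

AGENT_URL = "http://localhost:8080/v1/agent/chat"  # hypothetical endpoint
PAYLOADS = Path("payloads.jsonl")                   # one JSON case per line


def run_case(case: dict) -> bool:
    """Replay one multi-turn case and report whether the exploit marker appears."""
    history = []
    reply = ""
    for turn in case["turns"]:                      # ordered attacker messages
        history.append({"role": "user", "content": turn})
        resp = requests.post(AGENT_URL, json={"messages": history}, timeout=60)
        resp.raise_for_status()
        reply = resp.json().get("content", "")
        history.append({"role": "assistant", "content": reply})
    return case["success_marker"] in reply          # crude success heuristic


def main() -> None:
    results = {}
    for line in PAYLOADS.read_text().splitlines():
        case = json.loads(line)
        exploited = run_case(case)
        results[case["id"]] = exploited
        print(f"{case['id']}: {'EXPLOITED' if exploited else 'mitigated'}")
    # Persist results so the same suite can be diffed after mitigations ship.
    Path("results.json").write_text(json.dumps(results, indent=2))


if __name__ == "__main__":
    main()
```

Rerunning the same payload file after each mitigation and diffing results.json is the basic loop the role describes: a regression suite of known attack chains rather than one-off manual probing.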