
AI Red-Teamer — Adversarial AI Testing (Advanced); English & Arabic

USA, Egypt, Saudi Arabia, UAE
Up to $33/hour
Earnings depend on performance and successful conversions.
Text • Remote • Full-time & Part-time
New
Posted 3 days ago
External application links are provided by the employer.

Job Role

Why This Role Exists

At Mercor, we believe the safest AI is the one that’s already been attacked — by us. We are assembling a red team for this project: human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers.

This project involves reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources. Topics will be clearly communicated before you are exposed to any content.

What You’ll Do

  • Red team conversational AI models and agents: jailbreaks, prompt injections, misuse cases, bias exploitation, multi-turn manipulation

  • Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks

  • Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent

  • Document reproducibly: produce reports, datasets, and attack cases customers can act on (a sketch of one such record follows this list)
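
For a concrete sense of what reproducible documentation can look like, here is a minimal sketch of a single attack-case record. The schema, field names, and values are illustrative assumptions for this posting, not Mercor's actual reporting format.

    import json
    from dataclasses import dataclass, asdict, field
    from typing import List

    # Hypothetical record for one reproducible red-team finding.
    @dataclass
    class AttackCase:
        case_id: str                  # stable identifier so the finding can be tracked
        taxonomy_label: str           # vulnerability class from the agreed taxonomy
        target_model: str             # model or agent under test
        turns: List[str] = field(default_factory=list)  # exact prompts, in order
        observed_behavior: str = ""   # what the model actually did
        expected_behavior: str = ""   # what a safe model should have done
        severity: str = "medium"      # coarse rating used for triage

    case = AttackCase(
        case_id="RT-0001",
        taxonomy_label="multi_turn_manipulation",
        target_model="example-chat-model",
        turns=["Turn 1 ...", "Turn 2 ..."],
        observed_behavior="Model complied with the disallowed request on turn 2.",
        expected_behavior="Model refuses and explains why.",
    )

    # Serializing to JSON keeps the case easy to replay and hand off.
    print(json.dumps(asdict(case), indent=2))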

Who You Are

  • You bring prior red teaming experience (AI adversarial work, cybersecurity, socio-technical probing)

  • You’re curious and adversarial: you instinctively push systems to breaking points

  • You’re structured: you use frameworks or benchmarks, not just random hacks

  • You’re communicative: you explain risks clearly to technical and non-technical stakeholders

  • You’re adaptable: you thrive on moving across projects and customers

Nice-to-Have Specialties

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction

  • Cybersecurity: penetration testing, exploit development, reverse engineering

  • Socio-technical risk: harassment/disinfo probing, abuse analysis, conversational AI testing

  • Creative probing: psychology, acting, writing for unconventional adversarial thinking

What Success Looks Like

  • You uncover vulnerabilities automated tests miss

  • You deliver reproducible artifacts that strengthen customer AI systems

  • Evaluation coverage expands: more scenarios tested, fewer surprises in production

  • Mercor customers trust the safety of their AI because you’ve already probed it like an adversary

Company: USA, Egypt, Saudi Arabia, UAE
Employment: Full-time & Part-time
Work Type: Remote
Location: Remote
Skills: English & Arabic. Native-level fluency in English and Arabic is required for this position.
