Quick Summary
About 10a Labs: 10a Labs is the safety and threat-intelligence layer trusted by frontier AI labs, AI unicorns, Fortune 10 companies, and leading global technology platforms.
About 10a Labs: 10a Labs is the safety and threat-intelligence layer trusted by frontier AI labs, AI unicorns, Fortune 10 companies, and leading global technology platforms. Our adversarial red teaming, model evaluations, and intelligence collection enable engineering, safety, and security teams to stay ahead of evolving threats and deploy AI systems safely.
- Develop and run adversarial test suites—both manual and scripted—for LLMs and image / video models.
- Craft multilingual prompts, jailbreaks, and escalation chains targeting policy edge cases.
Analyze outputs, triage failures, and write concise vulnerability reports. - Contribute to internal tooling (e.g., prompt libraries, scenario generators, dashboards).
- Has 2-4 years of experience in red-teaming, security research, trust & safety, or related fields.
- Is comfortable scripting basic tests (Python, Bash, or similar) and working in Jupyter or prompt-engineering tools.
- Communicates clearly in English and at least one additional language (ideally major non-English language relevant to global threat landscapes).
- Thinks like an adversary, documents findings crisply, and iterates quickly.
Requirements
~1 min read- Bachelor’s degree—or equivalent experience—in CS, data science, linguistics, international studies, or security.
- Basic proficiency with Python and command-line tools.
- Demonstrated interest in AI safety, adversarial ML, or abuse detection.
- Strong writing skills for short vulnerability reports and long-form analyses.
- Ability to rapidly context switch across domains, modalities, and abuse areas.
- Excited to work in a fast-paced and ambiguous space.
Nice to Have
~1 min read- Full professional proficiency in Arabic, Chinese, Farsi, Portuguese, Russian, or Spanish, as well as English.
- Prior work in content moderation, disinformation analysis, or cyber-threat intelligence.
- Experience with prompt-automation frameworks (e.g., Promptfoo, LangChain, Garak).
Familiarity with vector search or LLM fine-tuning workflows. - Formal training or certification in red-teaming or penetration testing.
What We Offer
~1 min readListing Details
- Posted
- April 9, 2026
- First seen
- March 26, 2026
- Last seen
- April 12, 2026
Posting Health
- Days active
- 16
- Repost count
- 0
- Trust Level
- 68%
- Scored at
- April 12, 2026
Signal breakdown
Please let 10Alabs know you found this job on Jobera.
4 other jobs at 10Alabs
View all →Explore open roles at 10Alabs.
Stay ahead of the market
Get the latest job openings, salary trends, and hiring insights delivered to your inbox every week.
No spam. Unsubscribe at any time.