Quick Summary
About 10a Labs: 10a Labs is the safety and threat-intelligence layer trusted by frontier AI labs, AI unicorns, Fortune 10 companies, and leading global technology platforms.
About the Role
Protection Science Engineering is an interdisciplinary role mixing data science, machine learning, investigation, and policy/protocol development. As a Protection Scientist Engineer on the client team, you will help design and build systems that proactively identify and enforce on abuse of their AI products. This includes ensuring robust abuse monitoring is in place for new products, sustaining monitoring for existing products, and prototyping and incubating defenses against the highest-risk harms. You will also respond to and investigate critical escalations, especially those not caught by existing safety systems. This requires developing an understanding of the products and data, and working cross-functionally with product, policy, and engineering teams.
You will need strong SQL and Python skills to query, transform, and understand data, and to build and improve prototype detection systems. An investigative mindset is key, along with experience identifying and enforcing on bad actors (in any industry). A background in data science, machine learning and classification basics, AI, and/or threat investigation is a plus.
Responsibilities
- Scope and implement abuse monitoring requirements for new product launches.
- Improve processes to sustain monitoring operations for existing products, including developing approaches to automate monitoring subtasks.
- Prototype and mature into production systems of detection, review, and enforcement of abuse for major harms.
- Work with Product, Policy, Ops, and Investigative teams to understand key risks and how to identify and address them, and with Engineering teams to ensure we have sufficient data and scaled tooling.
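The query-then-detect workflow described above can be sketched in Python and SQL. This is a hypothetical illustration of a first-pass detection prototype, not 10a Labs' or the client's actual tooling: the table schema, column names, blocklist terms, and function name are all invented for the example.

```python
# Minimal sketch of a prototype abuse-detection pass: pull usage records
# with SQL, apply a simple heuristic in Python, and surface flagged
# record IDs for human review. All names here are illustrative only.
import sqlite3

# Toy heuristic terms; a real system would use classifiers, not a blocklist.
BLOCKLIST = {"credit card dump", "stolen credentials"}

def flag_records(conn):
    """Query recent records via SQL and flag ones matching the heuristic."""
    rows = conn.execute(
        "SELECT id, content FROM usage_events ORDER BY id"
    ).fetchall()
    return [
        rid for rid, content in rows
        if any(term in content.lower() for term in BLOCKLIST)
    ]

# Self-contained demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE usage_events (id INTEGER PRIMARY KEY, content TEXT)"
)
conn.executemany(
    "INSERT INTO usage_events (content) VALUES (?)",
    [("how do I bake bread",), ("selling a credit card dump",)],
)
flagged = flag_records(conn)
print(flagged)  # → [2]: IDs routed to human review
```

In practice, prototypes like this are the starting point the role describes: a heuristic pass is iterated on with reviewer feedback and matured into production detection, review, and enforcement systems.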
Requirements
- Ability to work remotely on GMT; must be geographically located in the UK.
- Quantitative and coding background, including statistics/metrics and proficiency in Python and SQL.
- Experience in identifying (and ideally enforcing on) bad actors with scaled tooling.
- Ability to be on-call approximately once a quarter, which involves resolving urgent escalations outside normal work hours, including occasional evenings and weekends.
- Ability to rapidly context-switch across domains, modalities, and abuse areas, including high-severity areas such as violent activities and child safety.
- Excitement about working in a fast-paced, ambiguous, and purposeful space with high impact across users and beyond, and about learning quickly.
- Background in machine learning and classification, especially on novel or poorly understood behaviours.
- Experience scaling and automating processes, especially with language models.
- Experience working with labelling or review teams, particularly at scale.
- Familiarity with one or more of the following areas: AI safety, child safety, abuse by nation-state and other malicious actors, digital mental health, fraud, radicalization/persuasion/grooming, or hateful activities and groups.
Listing Details
- Posted: April 10, 2026
- First seen: March 26, 2026
- Last seen: April 12, 2026