Lila Sciences · ~19 days ago
USD 224,000-336,000/yr

Staff / Principal Research Engineer, AI Safety, Technical Mitigations

London · Cambridge · San Francisco, United States · Lead
Data Science · Other · Research Engineer

Your Impact at Lila

We're building a talent-dense, high-agency AI safety team at Lila that will engage all core teams within the organization (science, model training, lab integration, etc.) to prepare for risks from scientific superintelligence. The initial focus of this team will be to build and implement a bespoke safety strategy for Lila, tailored to its specific goals and deployment strategies. This will involve technical safety strategy development, broader ecosystem engagement, safety-focused evaluations, safety systems to mitigate risks, and a safety research agenda that explores longer-term needs such as oversight of superintelligent scientific systems.

We're seeking a Technical Mitigations Lead to lead the build-out of safety systems at Lila for the safe deployment of our scientific capabilities to the world. Given the novelty of Lila's workflows, which integrate frontier-class language models with narrow scientific tools and lab-based automation, this role will require the design and deployment of technical safeguards beyond the current state of the art.

We expect the person in this role to begin the initial mitigations build-out and then gradually build a team to support this function.

  • Set the build and research strategy for Lila's safety systems, spanning scientific data analysis and generation pipelines, safety post-training, refusal classifiers, automated safety-testing / red-teaming systems, and monitoring systems.
  • Conduct initial safeguards experimentation and build-out for Lila's specific scientific needs, then lead a small team to execute on the build and research agenda.
  • Lead safety systems research to push Lila's systems beyond the state of the art, given the needs of technical safeguards for both in silico and lab-based scientific workflows.
  • Partner closely with:
    • Other members of the safety team, such as domain-specific experts (bio, chem, materials) and eval build-out teams, and
    • Non-safety teams, such as core AI, lab automation, and product teams.
  • Contribute to broader, high-quality research efforts, as needed, for scientific capability evaluation and restriction.
  • Contribute to external communications on Lila's safety efforts.

What We're Looking For

  • Track record of building safety systems or classifiers, or conducting post-training for frontier-class problems (science, reasoning, programming, etc.).
  • 4-6+ years of hands-on engineering experience with ML systems.
  • Experience building scalable production systems, not just prototypes.
  • Demonstrated ability to set research directions for open problems in post-training, classifier build-outs, and other relevant systems.
  • Ability to communicate complex technical concepts and concerns to non-expert audiences effectively.

Nice to Have

  • Experience developing or applying ML to the biological or physical sciences.
  • Experience building safeguards against scientific risks from frontier models or narrow scientific tools.
  • Demonstrated ability to lead teams toward engineering goals.


What We Offer


We offer competitive base compensation with bonus potential and generous early-stage equity. Your final offer will reflect your background, expertise, and expected impact.

About Lila Sciences

Lila Sciences is building Scientific Superintelligence™ to solve humankind's greatest challenges. We believe science is the most inspiring frontier for AI. Rather than hard-coding expert knowledge into tools, Lila builds systems that can learn for themselves.

Lila combines advanced AI models with proprietary AI Science Factory™ instruments into an operating system for science that executes the entire scientific method autonomously, accelerating discovery at unprecedented speed, scale, and impact across medicine, materials, and energy. Learn more at www.lila.ai.

Guided by our core values of truth, trust, curiosity, grit, and velocity, we move with startup speed while tackling problems of historic importance. If this sounds like an environment you'd love to work in, even if you don't meet every qualification listed above, we encourage you to apply.

Lila Sciences is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.

Information you provide during your application process will be handled in accordance with our Candidate Privacy Policy.

Lila Sciences does not accept unsolicited resumes from any source other than candidates. The submission of unsolicited resumes by recruitment or staffing agencies to Lila Sciences or its employees is strictly prohibited unless contacted directly by Lila Sciences' internal Talent Acquisition team. Any resume submitted by an agency in the absence of a signed agreement will automatically become the property of Lila Sciences, and Lila Sciences will not owe any referral or other fees with respect thereto.


Listing Details

First seen: April 7, 2026
Last seen: April 27, 2026

