USD 9,700–19,000 / yr

Research Engineer Intern (Fall 2026)

San Francisco, United States · Full-Time · Entry level
AI Research Engineer · Research Engineer Intern · Data & AI

Introduction
The Center for AI Safety (CAIS) is a leading research and field-building organization on a mission to reduce societal-scale risks from AI. Alongside our sister organization, the CAIS Action Fund, we tackle the toughest AI issues with a mix of technical, societal and policy solutions.
 
As a research engineer intern, you will work closely with our researchers on projects in areas such as AI security, machine ethics, AI alignment, and benchmarking AI risks. You will be assigned a dedicated mentor throughout your internship, but we will ultimately treat you as a colleague: you will have the opportunity to propose your own experiments and projects and defend their impact. You will plan and run experiments, conduct code reviews, and work in a small team to produce a publication with outsized impact, leveraging our internal compute cluster to run experiments at scale on large language models.
 
Timing
This application is for the full-time fall internship position. Applications are due by May 29, 2026.

We are looking for candidates who:
  • Are a current PhD student or researcher in machine learning or a related field. Exceptional candidates with a strong publication record may be considered regardless of degree level.
  • Have co-authored at least one paper published at a top ML conference venue (e.g., NeurIPS, ICML, ICLR, ACL, CVPR). Workshop papers are considered, though peer-reviewed conference publications are strongly preferred. Publications in journals such as those from IEEE or Springer Nature are typically given less weight.
  • Have a track record of empirical research in AI or ML, particularly in AI safety-relevant areas (e.g. adversarial robustness, calibration, benchmarking). We weight empirical research heavily; candidates with primarily theoretical backgrounds are generally not a strong fit.
  • Alternatively, have made meaningful research contributions at a leading AI lab.
  • Are able to read an ML paper, understand the key result, and understand how it fits into the broader literature.
  • Are comfortable setting up, launching, and debugging ML experiments.
  • Are familiar with relevant frameworks and libraries (e.g., PyTorch).
  • Communicate clearly and promptly with teammates.
  • Take ownership of your individual part in a project.
Listing Details

Posted: March 5, 2026
First seen: March 26, 2026
Last seen: April 22, 2026

Posting Health

Days active: 26
Repost count: 0
Trust level: 37%
Scored at: April 22, 2026

    Signal breakdown

freshness · source trust · content trust · employer trust
