Cursor

Software Engineer, Agent Evaluation and Quality

United States · San Francisco · Full-time · Mid-level
Software Engineer · Software Engineering


Overview

Our mission is to automate coding. The first step in our journey is to build the best tool for professional programmers, using a combination of inventive research, design, and engineering. Our organization is very flat, and our team is small and talent dense.

Technical Tools
A/B testing

We particularly like people who are truth-seeking, passionate, and creative. We enjoy spirited debate, crazy ideas, and shipping code.

About the Role


As a Software Engineer on the Agent Quality team at Cursor, you’ll build the measurement, evaluation, and feedback-loop infrastructure that makes the Cursor core agent reliably better over time.

This role sits at the intersection of product, data, and engineering: you’ll instrument what matters, help define how we judge quality, build pipelines and tooling to analyze agent behavior at scale, and partner closely with research, product, and infrastructure teams to turn insights into improvements.

Your impact will compound across every Cursor product built on the shared harness—and across high-stakes decisions around model choice, quality, and cost.

  • Designing and building a best-in-class AI evaluation system: curated datasets, offline replay, scorers and judges, regression alerts, and dashboards.

  • Designing feedback loops from real usage: collecting, cleaning, and interpreting user signals to inform model and harness changes.

  • Developing analysis tooling and workflows for debugging agent behavior: deep dives on failure modes, clustering themes, and surfacing actionable insights.

  • Improving reliability and guardrails by making quality measurable and operational: defining “good/bad/degraded” sessions, alerting, and triage primitives.
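
The evaluation loop described in the bullets above (curated datasets, offline replay, scorers, regression alerts) can be sketched in a few lines. This is purely illustrative: `EvalCase`, `exact_match_scorer`, `run_eval`, and the 2% regression tolerance are assumptions for the sake of the sketch, not details of Cursor's actual system.

```python
# Minimal offline-eval sketch. All names and thresholds here are
# illustrative assumptions, not a real production system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One curated replay case: a prompt plus a reference answer."""
    prompt: str
    expected: str

def exact_match_scorer(output: str, case: EvalCase) -> float:
    """Toy scorer/judge: 1.0 if the agent output matches the reference."""
    return 1.0 if output.strip() == case.expected.strip() else 0.0

def run_eval(agent: Callable[[str], str],
             dataset: list[EvalCase],
             scorer: Callable[[str, EvalCase], float]) -> float:
    """Replay every curated case through the agent; return the mean score."""
    scores = [scorer(agent(case.prompt), case) for case in dataset]
    return sum(scores) / len(scores)

def regression_alert(baseline: float, candidate: float,
                     tolerance: float = 0.02) -> bool:
    """Fire an alert when the candidate drops more than `tolerance`
    below the baseline score (assumed 2% tolerance)."""
    return candidate < baseline - tolerance

# Tiny demo dataset and two stand-in "agents".
dataset = [EvalCase("2+2", "4"), EvalCase("capital of France", "Paris")]
good_agent = lambda p: {"2+2": "4", "capital of France": "Paris"}[p]
bad_agent = lambda p: "I don't know"

baseline = run_eval(good_agent, dataset, exact_match_scorer)
candidate = run_eval(bad_agent, dataset, exact_match_scorer)
print(regression_alert(baseline, candidate))  # True: quality regressed
```

In practice the scorer would be far richer (LLM judges, behavioral checks) and the alerting wired to dashboards, but the shape of the loop — dataset in, scores out, compare against a baseline — is the same.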

What We're Looking For

  • You’ve built and operated evaluation or measurement systems, such as AI evals, experimentation, ranking/relevance, or search quality. You can turn ambiguous “quality” questions into concrete metrics, pipelines, and decisions.

  • You have strong data acumen and can collaborate effectively with data scientists and researchers.

  • You have taste and strong opinions on model and agent behaviors. You stay up-to-date and informed on emerging research and industry trends.

  • You have strong software engineering fundamentals and enjoy shipping production systems.

If there appears to be a fit, we'll reach out to schedule 2-3 short technical interviews. Afterwards, we'll schedule an onsite at our office, where you'll work on a small project, discuss ideas, and meet the team.


Location & Eligibility

Where is the job
San Francisco, United States
On-site at the office
Who can apply
US

Listing Details

Posted
April 13, 2026
First seen
May 6, 2026
Last seen
May 8, 2026

Posting Health

Days active
0
Repost count
0
Trust Level
14%
Scored at
May 6, 2026

Signal breakdown

freshness · source trust · content trust · employer trust
