cartesia · 17mo ago · New

Inference Engineer

United States · San Francisco (HQ) · Full-time · Mid-level
Engineer

Technical Tools

distributed-systems, machine-learning

Our mission is to architect AI that learns from and interacts with the world like humans do.

We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.

We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks, and others. We're fortunate to have the support of many amazing advisors and 90+ angels across many industries, including the world's foremost experts in AI.

About the Role


We're hiring an Inference Engineer to advance our mission of building real-time multimodal intelligence.

  • Design and build a low-latency, scalable, and reliable model inference and serving stack for our cutting-edge foundation models using Transformers, SSMs, and hybrid models.

  • Work closely with our research team and product engineers to serve our suite of products in a fast, cost-effective, and reliable manner. 

  • Design and build robust inference infrastructure and monitoring for our products. 

  • Have significant autonomy to shape our products and directly impact how cutting-edge AI is applied across various devices and applications.

Given the scale and difficulty of the problems we work on, we value strong engineering skills at Cartesia.

  • Strong engineering skills: comfort navigating complex codebases and an eye for writing clean, maintainable code.

  • Experience building large-scale distributed systems with high demands on performance, reliability, and observability.

  • Technical leadership with the ability to execute and deliver zero-to-one results amidst ambiguity. 

  • Background in or experience working on inference pipelines with machine learning and generative models.

  • Experience applying state-of-the-art machine learning models and research to real-world problems.

  • Preferable: experience with inference frameworks such as vLLM or SGLang, or serving techniques such as continuous batching.

  • Preferable: experience working in CUDA, Triton, or similar.

🏢 In-office policy: We’re an in-person team based out of offices in 🇺🇸 San Francisco, 🇬🇧 London, and 🇮🇳 Bangalore. We love being in the office, hanging out together, and learning from each other every day.

What We Offer


Location & Eligibility

Where is the job: San Francisco (HQ), United States (on-site at the office)
Who can apply: US

Listing Details

Posted: December 12, 2024
First seen: May 5, 2026
Last seen: May 8, 2026

Posting Health

Days active: 0
Repost count: 0
Trust level: 14% (scored at May 6, 2026)

Signal breakdown: freshness, source trust, content trust, employer trust
