Applied Researcher, Audio Understanding
About Cartesia
Our mission is to architect AI that learns from and interacts with the world like humans do.
We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.
We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors, and 90+ angels across many industries, including the world's foremost experts in AI.
About the Role
As a Senior Applied Researcher in Audio Understanding, you will be responsible for tackling the most challenging problems in audio perception. Your work will go beyond traditional speech recognition to encompass the full spectrum of audio perception, from identifying speakers and interpreting emotion to understanding complex acoustic environments. You will lead high-impact projects that are critical to our mission of building truly aware AI.
Responsibilities
- Architect and develop novel, large-scale models for complex audio understanding tasks, including multi-speaker ASR, diarization, and non-speech audio classification, and deploy them to production at scale.
- Pioneer research in areas such as self-supervised learning for audio, few-shot learning, and robust audio-visual perception.
- Set new standards for how we evaluate and benchmark our audio understanding systems.
- Build large-scale pre-training and fine-tuning datasets for audio understanding capabilities.
Qualifications
- Deep expertise in ASR, audio understanding, language modeling, or generative modeling more broadly.
- Experience with large-scale training, GPU/TPU acceleration, and model optimization.
- Strong applied mindset: able to balance scientific novelty with product impact.
🏢 In-office policy: We're an in-person team based out of offices in 🇺🇸 San Francisco, 🇬🇧 London, and 🇮🇳 Bangalore. We love being in the office, hanging out together, and learning from each other every day.
What We Offer
Location & Eligibility
Listing Details
- Posted: September 16, 2025