Senior Applied Research Scientist - Foundation Models
Ambient.ai is the category creator and leader in Agentic Physical Security. Powered by Ambient Pulsar, the first reasoning Vision-Language Model purpose-built for physical security, our platform seamlessly integrates with existing security cameras and physical access control systems to unify monitoring, access control, threat assessment, response, and investigations through an always-on reasoning layer that augments security operators with superhuman capabilities. The results: 95% fewer false alarms, investigations 20x faster, and 10x faster response.
The momentum speaks for itself: we doubled new ARR in FY26, process 200M+ video hours per day, and have delivered results for world-class customers including Cisco, ServiceNow, SentinelOne, TikTok, Bayer, and MoMA. That momentum creates an environment where great people thrive, and it shows: we recently ranked #71 out of 500 on the Forbes Best Startup Employers list.
Founded in 2017 and backed by Andreessen Horowitz, Y Combinator, and Allegion Ventures, Ambient.ai is on a fast-paced journey to fulfill our mission: prevent every security incident possible.
Ready to learn more? Connect with us on LinkedIn and YouTube.
About the Role
Ambient.ai is hiring a Senior Applied Research Scientist to build the next generation of foundation models for computer vision. You will join a team responsible for building multimodal models with state-of-the-art performance on real-world vision benchmarks. In this role, you’ll own full-cycle model development: from pre-training and fine-tuning on image-language data to applying distillation and compression techniques for deployment. This is a hands-on, cross-functional role where your work will directly impact our mission of preventing every security incident possible.
Responsibilities
- Develop & Optimize VLMs: Design transformer-based vision-language models that understand images, videos, and text, and optimize them for real-time inference.
- Pre-training & Fine-tuning: Own the full training pipeline, from pre-training on image-text data to fine-tuning for Ambient.ai’s physical security domain and use cases.
- Model Compression & Optimization: Apply techniques such as distillation, quantization, and pruning to reduce model size and latency, enabling efficient edge deployment.
- Leverage Open-Source & Innovate: Use and extend state-of-the-art open-source models; prototype new architectures and training methods to advance Ambient.ai’s multimodal AI research.
- Cross-Team Collaboration: Work with engineering and product teams to integrate models into the platform, iterating on real-world feedback and deployment data to improve performance.
- Research & Experimentation: Stay current with vision, NLP, and multimodal AI research; design experiments to test new algorithms and continually enhance our core AI systems.
Qualifications
- Ph.D. or Master’s in CS, EE, or a related field, with a strong foundation in AI/ML (Ph.D. preferred, or Master’s with strong experience)
- Hands-on experience with CNNs, Transformers, and Vision Transformers (ViT), plus a strong understanding of vision-language models and how to fine-tune or adapt them
- Proven skills in model training and optimization, including fine-tuning on large datasets and applying distillation, quantization, or similar techniques; experience with foundation or multimodal models is a plus
- Strong problem-solving ability: quick prototyping, diagnosing failure cases, and iterating on solutions
- Startup experience preferred: comfortable with ambiguity, fast iteration, and owning projects end-to-end
What We Offer
Location & Eligibility
Listing Details
- Posted: July 29, 2025
- First seen: May 6, 2026
- Last seen: May 7, 2026