ambient.ai

Applied Research Scientist - Foundation Models

Redwood City · Full-time · Mid-level

Technical Tools

C++, Python, PyTorch, TensorFlow, A/B testing, deep learning

Ambient.ai is the category creator and leader in Agentic Physical Security. Powered by Ambient Pulsar, the first reasoning Vision-Language Model purpose-built for physical security, our platform seamlessly integrates with existing security cameras and physical access control systems to unify monitoring, access control, threat assessment, response, and investigations through an always-on reasoning layer that augments security operators with superhuman capabilities. The results: 95% fewer false alarms, investigations 20x faster, and 10x faster response.

The momentum speaks for itself: we doubled new ARR in FY26, we process 200M+ video hours per day, and have delivered results for world-class customers including Cisco, ServiceNow, SentinelOne, TikTok, Bayer, and MoMA. That kind of momentum creates an environment where great people thrive, and it shows: we recently ranked #71 out of 500 on the Forbes best startup employers list.

Founded in 2017 and backed by Andreessen Horowitz, Y Combinator, and Allegion Ventures, Ambient.ai is on a fast-paced journey to fulfill our mission: prevent every security incident possible.

Ready to learn more? Connect with us on LinkedIn and YouTube.

About the Role


Ambient.ai is hiring an Applied Research Scientist to build the next generation of foundation models for computer vision. You will join a team responsible for building multimodal models with state-of-the-art performance on real-world vision benchmarks. In this role, you’ll own full-cycle model development: from pre-training and fine-tuning on image-language data to applying distillation and compression techniques for deployment. This is a hands-on, cross-functional role where your work will directly impact our mission of preventing every security incident possible.

Responsibilities

  • Develop & Optimize VLMs: Design transformer-based vision-language models that understand images, videos, and text, and optimize them for real-time inference.

  • Pre-training & Fine-tuning: Own the full training pipeline—from pre-training on image-text data to fine-tuning for Ambient.ai’s physical security domain and use cases.

  • Model Compression & Optimization: Apply techniques like distillation, quantization, and pruning to reduce model size and latency, enabling efficient edge deployment.

  • Leverage Open-Source & Innovate: Use and extend state-of-the-art open-source models. Prototype new architectures and training methods to advance Ambient.ai’s multimodal AI research.

  • Cross-Team Collaboration: Work with engineering and product teams to integrate models into the platform. Iterate based on real-world feedback and deployment data to improve performance.

  • Research and Experimentation: Stay current with vision, NLP, and multimodal AI research. Design experiments to test new algorithms and continually enhance our core AI systems.

Qualifications

  • Ph.D. (preferred) or Master’s with strong experience in CS, EE, or a related field, with a solid foundation in AI/ML

  • Hands-on experience with CNNs, Transformers, and Vision Transformers (ViT). Strong understanding of vision-language models and how to fine-tune or adapt them

  • Proven skills in model training and optimization, including fine-tuning on large datasets and applying distillation, quantization, or similar techniques. Experience with foundation or multimodal models is a plus.

  • Strong problem-solving ability: quick prototyping, diagnosing failure cases, and iterating on solutions

  • Startup experience preferred: Comfortable with ambiguity, fast iteration, and owning projects end-to-end

What We Offer

  • We are creating an entirely new category within the $120+ billion physical security industry, and we’re looking for team members who share our passion for preventing every security incident possible
  • We have an impressive customer roster of F500 companies, including Adobe, SentinelOne, and TikTok
  • Regular full-time employees receive stock options for the opportunity to share ownership in the success of our company
  • Comprehensive health and welfare package (Medical, Dental, Vision, Life, EAP, Legal Services, 401k plan)
  • Flexible time off to rest and recharge, including Winter Break (time off between Christmas and New Year’s for most roles, depending on customer demand)
  • Everything you need to hit the ground running, including cutting-edge equipment and branded gear
  • A full range of opportunities to connect with your awesome co-workers
  • We love to hike, are foodies, and love music! Check out our most recent Ambient Spotify Playlist

Location & Eligibility

Location: Redwood City (Hybrid, some on-site time required)
Eligibility: Same as job location

Listing Details

Posted: October 22, 2025
