Machine Learning Operations Engineer
Quick Summary
Modulate is the leader in conversational voice intelligence. We enable enterprises to deeply understand how people communicate and take timely action based on those insights. Our products help detect harm, prevent fraud, and build safer, more trusted online and real-world voice environments. We are building a Conversation Intelligence Platform — APIs, workflows, and applications that bring voice understanding to customers at enterprise scale.
We’re looking for a Machine Learning Operations Engineer to own and scale the production inference systems behind Modulate’s machine learning models. This role will focus on ensuring high availability, reliability, and efficiency of deployed models across our APIs and enterprise products as we rapidly grow in customer usage and model demand.
- Own the reliability and performance of ML model inference systems in production
- Ensure high availability of deployed models across APIs and enterprise products
- Build systems to handle scaling, load variability, and production traffic growth
- Reduce operational burden through better tooling, automation, and processes
- Help define how Modulate runs ML systems at scale with reliability and efficiency
Responsibilities
- Deploy, monitor, and maintain production machine learning inference systems
- Oversee fleets of inference machines and ensure system health and performance
- Design monitoring, alerting, and incident response systems for ML workloads
- Participate in on-call rotations and lead incident response and debugging
- Build systems and processes for scaling inference infrastructure under variable load
- Improve reliability and observability of production ML services
- Collaborate on infrastructure-as-code for production deployments
- Support or contribute to GPU-based training and inference infrastructure
- Work closely with ML and engineering teams to ensure smooth model deployments
- (Optional growth area) Optimize model inference performance and latency
Requirements
- Experience deploying and maintaining production software systems
- Experience building monitoring and alerting systems for production environments
- Experience with on-call rotations and incident response
- Strong experience with AWS, Python, and Linux
- Exposure to PyTorch or similar ML frameworks
- Experience working with GPU-based applications and basic GPU tooling (drivers, runtime, monitoring)
- Strong debugging and systems-thinking skills
- Ability to operate calmly during production incidents
Nice to Have
- Experience with ML model serving systems or dedicated model servers
- Experience monitoring GPU performance for inference workloads
- Experience optimizing machine learning model inference
- Familiarity with audio or multimedia data (codecs, streaming, real-time systems)
- Experience with infrastructure-as-code (e.g., Terraform, CloudFormation)
What We Offer
Location & Eligibility
Listing Details
- Posted: April 16, 2026
- First seen: May 13, 2026
- Last seen: May 13, 2026