Staff Software Engineer, Inference Cloud


Overview

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.

Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

Location: Sunnyvale 

We're hiring a Staff Engineer to own major areas of the architecture of our Inference Cloud Platform. This team owns the cloud layer behind our Inference Service, with responsibility for availability, latency, reliability, and global scale. 

This is a hands-on individual contributor role for an engineer who wants to work on the hardest distributed systems problems in the stack: multi-region traffic architecture, graceful degradation under bursty AI workloads, performance at high QPS, and the operating model for a platform that has to stay fast and available under load. You'll write code, lead key architectural decisions in your domain, debug production issues, and help shape technical direction across adjacent teams.

If you're interested in building the next-generation architecture of a globally distributed inference platform, we'd like to talk. 

Responsibilities 

  • Platform Direction. Help shape the technical direction for the Inference Cloud Platform, including multi-region topology, failure domains, service boundaries, and system evolution over time, and own the roadmap for major technical areas. 
  • Core Cloud Systems. Design and build critical platform components such as service discovery, request routing, load balancing, caching, batching, and traffic management for AI inference workloads. 
  • Reliability & Performance. Architect active-active systems with rapid failover, graceful degradation, and clear SLOs. Drive system-level improvements in latency, throughput, capacity efficiency, and resilience under unpredictable demand. 
  • Traffic Control & Service Tiers. Define platform mechanisms for admission control, quota management, rate limiting, and differentiated quality of service across workload types and customer tiers. 
  • Execution on Critical Paths. Write and review production code in the most important parts of the platform. Make high-consequence architectural decisions within your area and set the technical bar through design reviews, code reviews, and sound engineering judgment. 
  • Production Leadership. Lead on the hardest production issues and cross-system bottlenecks. Drive observability, incident response, capacity planning, and post-incident improvement with a high standard for operational rigor.  
  • Technical Influence. Partner with ML, Product, Infrastructure, and Platform teams to translate product and business requirements into scalable system designs, and drive alignment on shared technical decisions within your domain and adjacent platform surfaces. 
  • Mentorship. Raise the effectiveness of senior engineers through design feedback, pairing, and clear technical standards. 
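
The admission-control and rate-limiting mechanisms described above are often built from a token bucket. As a rough illustration (a hypothetical minimal sketch, not Cerebras' actual implementation), a per-tenant token bucket admits bursts up to a capacity while enforcing a steady refill rate:

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter, a common admission-control primitive.

    `rate` is tokens refilled per second; `capacity` bounds burst size.
    `now` is injectable for testing (defaults to a monotonic clock).
    """

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.tokens = float(capacity)  # start full: allow an initial burst
        self.now = now
        self.last = now()

    def allow(self, cost=1.0):
        """Admit a request costing `cost` tokens, or reject it without blocking."""
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

In a real platform, differentiated service tiers would typically map to different `rate`/`capacity` settings per customer tier, with rejected requests shed early rather than queued.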

Skills & Qualifications 

  • 8+ years of experience in software engineering, with substantial individual contributor experience building and operating large-scale distributed systems or cloud infrastructure. 
  • Deep expertise in distributed systems architecture in cloud environments, including networking, compute orchestration, container platforms, and multi-region production services. 
  • Strong track record of making sound architectural decisions for highly available, latency-sensitive systems at scale. 
  • Experience optimizing latency, throughput, and efficiency in high-QPS systems. Experience with time-to-first-token (TTFT) and tail-latency reduction is a strong plus. 
  • Strong proficiency in backend or systems languages such as Go, C++, or Python, with the expectation that you can contribute production code directly. 
  • Experience designing observability and reliability practices, including metrics, logging, tracing, alerting, incident response, and SLO-driven operations. 
  • Ability to influence senior engineers and cross-functional partners through technical credibility, communication, and judgment, especially within your domain and adjacent systems. 
  • Experience with ML inference infrastructure, model serving systems, or GPU-accelerated workloads is a plus. 
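
The tail-latency and SLO work mentioned above usually starts from percentile measurement. As a small illustration (a generic sketch using the nearest-rank convention, not any specific monitoring stack), p50/p99 can be computed from latency samples like this:

```python
import math


def percentile(samples, p):
    """Nearest-rank percentile: the value at rank ceil(p/100 * n), 1-indexed.

    SLOs for latency-sensitive services are typically stated against tail
    percentiles (e.g. "p99 < 50 ms"), since averages hide slow requests.
    """
    if not samples:
        raise ValueError("no samples")
    s = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(s)))
    return s[k - 1]
```

Production systems usually approximate this with streaming histograms rather than sorting raw samples, but the reported quantity is the same.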

 

What We Offer

Build a breakthrough AI platform beyond the constraints of the GPU.
Publish and open-source your cutting-edge AI research.
Work on one of the fastest AI supercomputers in the world.
Enjoy job stability with startup vitality.
Work in a simple, non-corporate culture that respects individual beliefs.

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.



Location & Eligibility

Where is the job?
Sunnyvale, United States
On-site at the office
Who can apply?
Open to applicants worldwide
Listed under
United States

Listing Details

First seen
April 14, 2026
Last seen
April 29, 2026

Cerebras Systems

Cerebras Systems is revolutionizing AI acceleration with its innovative hardware solutions designed to enhance deep learning capabilities.

Employees
350
Founded
2016
