Staff Software Engineer, ML Performance & Systems
fal is the generative media ecosystem powering the next generation of AI products. We build the infrastructure, tools, and model access that teams need to move from idea to production, and do it at scale without compromise. For developers and enterprises, fal is the foundation that makes generative media not just possible, but practical: a unified platform where high-performance inference, orchestration, and observability come together to unlock new categories of AI-native products.
As generative media reshapes industries across a market projected to grow by hundreds of billions over the next decade, fal is becoming the ecosystem that ambitious teams build on.
About the Role

Responsibilities
- Help fal maintain its frontier position on model performance for generative media models.
- Design and implement novel approaches to model serving architecture on top of our in-house inference engine, focusing on maximizing throughput while minimizing latency and resource usage.
- Develop performance monitoring and profiling tools to identify bottlenecks and optimization opportunities.
- Work closely with our Applied ML team and customers (frontier labs in the media space) to ensure their workloads benefit from our accelerator.
Requirements
- Strong foundation in systems programming, with expertise in identifying and fixing bottlenecks.
- Deep understanding of the cutting-edge ML infrastructure stack (anything from PyTorch, TensorRT, and TransformerEngine to Nsight), including model compilation, quantization, and serving architectures; ideally, you closely follow developments in these systems as they happen.
- A fundamental view of the underlying hardware (NVIDIA-based systems at the moment), and the ability to go deeper into the stack when necessary to fix bottlenecks (e.g., custom GEMM kernels with CUTLASS for common shapes).
- Proficiency in Triton, or willingness to learn it backed by comparable experience in lower-level accelerator programming.
- Experience on the new frontier: multi-dimensional model parallelism (combining multiple parallelism techniques, such as tensor parallelism with context parallelism or sequence parallelism).
- Familiarity with the internals of Ring Attention, FlashAttention-3 (FA3), and FusedMLP implementations.
Location & Eligibility

We are currently hiring in downtown San Francisco.
Listing Details
- Posted: December 16, 2025
- First seen: March 26, 2026
- Last seen: May 8, 2026
Please let Fal know you found this job on Jobera.
