Fal
Posted 6 months ago

Staff Technical Lead for Inference & ML Performance

San Francisco · Lead
Technical Lead

Tags: notion · pytorch · performance-optimization

Overview

fal is the generative media ecosystem powering the next generation of AI products. We build the infrastructure, tools, and model access that teams need to move from idea to production, and do it at scale without compromise. For developers and enterprises, fal is the foundation that makes generative media not just possible, but practical: a unified platform where high-performance inference, orchestration, and observability come together to unlock new categories of AI-native products.

As generative media reshapes industries across a market projected to grow by hundreds of billions of dollars over the next decade, fal is becoming the ecosystem that ambitious teams build on.

You’ll shape the future of fal’s inference engine and ensure our generative models achieve best-in-class performance. Your work directly impacts our ability to rapidly deliver cutting-edge creative solutions to users, from individual creators to global brands.

Responsibilities

Day-to-day, and what success looks like
  • Set technical direction. Guide your team (kernels, applied performance, ML compilers, distributed inference) to build high-performance inference solutions. Success: fal’s inference engine consistently outperforms industry benchmarks in throughput, latency, and efficiency.
  • Hands-on IC leadership. Personally contribute to critical inference performance enhancements and optimizations. Success: you regularly ship code that significantly improves model serving performance.
  • Collaborate closely with research and applied ML teams. Influence model inference strategies and deployment techniques. Success: inference innovations move seamlessly and rapidly from research to production deployment.
  • Drive advanced performance optimizations. Implement model parallelism, kernel optimization, and compiler strategies. Success: performance bottlenecks are quickly identified and eliminated, dramatically improving inference speed and scalability.
  • Mentor and scale your team. Coach and grow your team of performance-focused engineers. Success: your team independently innovates, proactively solves complex performance challenges, and consistently levels up its skills.

You’ll be a great fit if you:
  • Are deeply experienced in ML performance optimization. You've optimized inference for large-scale generative models in production environments.
  • Understand the full ML performance stack. From PyTorch, TensorRT, TransformerEngine, Triton to CUTLASS kernels, you’ve navigated and optimized them all.
  • Know inference inside-out. Expert-level familiarity with advanced inference techniques: quantization, kernel authoring, compilation, model parallelism (TP, context/sequence parallel, expert parallel), distributed serving and profiling.
  • Lead from the front. You're a respected IC who enjoys getting hands-on with the toughest problems, demonstrating excellence to inspire your team.
  • Thrive in cross-functional collaboration. Comfortable interfacing closely with applied ML teams, researchers, and stakeholders.
Nice to have
  • Experience building inference engines specifically for diffusion and generative media models
  • Track record of industry-leading performance improvements (papers, open-source contributions, benchmarks)
  • Leadership experience in scaling technical teams

One of the highest-impact roles at one of the fastest-growing companies (revenue is growing 40% month over month, we are at 60x+ run rate compared to last year, and we raised our Series A, B, and C within the last 12 months), with a world-changing vision: hyperscaling human creativity.

Sound like your calling? Share your proudest optimization breakthrough, open-source contribution, or performance milestone with us. Let's set new standards for inference performance, together.

Location & Eligibility

Where the job is
San Francisco
On-site at the office
Who can apply
Same as job location
Listed under
Worldwide

Listing Details

Posted
October 29, 2025
First seen
March 26, 2026
Last seen
May 15, 2026

Posting Health

Days active
49
Repost count
0
Trust Level
31%
Scored at
May 15, 2026

Signal breakdown

freshness · source trust · content trust · employer trust
Company
Fal
Source: greenhouse
Employees
5
Founded
2004
