IFM US · 10 months ago
USD 150,000–450,000/yr
Research Scientist - Distributed Machine Learning
Data Science · Data Scientist · Research Scientist · Data · Data & AI
About the Institute of Foundation Models
We are a dedicated research lab for building, understanding, using, and risk-managing foundation models. Our mandate is to advance research, nurture the next generation of AI builders, and drive transformative contributions to a knowledge-driven economy.
As part of our team, you’ll have the opportunity to work on the core of cutting-edge foundation model training, alongside world-class researchers, data scientists, and engineers, tackling the most fundamental and impactful challenges in AI development. You will participate in the development of groundbreaking AI solutions that have the potential to reshape entire industries. Strategic and innovative problem-solving skills will be instrumental in establishing MBZUAI as a global hub for high-performance computing in deep learning, driving impactful discoveries that inspire the next generation of AI pioneers.
Role Overview
Build and scale distributed pre-training frameworks
· Set up DeepSpeed / FSDP / Megatron-LM across multi-node GPU clusters.
· Create robust launch scripts, resilient checkpointing, and job monitoring (e.g., NCCL/Gloo health, GPU utilization).
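For a flavor of what a multi-node launch of this kind involves, here is a minimal sketch using `torchrun` (node count, hostnames, port, and the `train.py` script name are all placeholders, not part of this posting):

```shell
#!/usr/bin/env bash
# Hypothetical 2-node, 8-GPU-per-node launch; MASTER_ADDR and train.py are placeholders.
export NCCL_DEBUG=INFO            # surface collective-ops issues in the logs
torchrun \
  --nnodes=2 \
  --nproc_per_node=8 \
  --rdzv_backend=c10d \
  --rdzv_endpoint="${MASTER_ADDR}:29500" \
  train.py
```

In practice the same script is typically wrapped in a Slurm or Kubernetes job so that failed ranks can rendezvous again and resume from the last checkpoint.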
Turn mathematical ideas into fast production code
· Prototype new optimizers or attention methods (e.g., in PyTorch, NumPy, or JAX).
· Convert them into efficient CUDA/Triton kernels with custom gradients and tests.
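As an illustration of the math-to-code translation this pair of bullets describes, below is a minimal sketch of a single-parameter Adam-style update in plain Python; a prototype of this shape would be validated numerically before being ported to a fused CUDA/Triton kernel:

```python
import math

def adam_step(x, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; returns (x, m, v)."""
    m = b1 * m + (1 - b1) * grad          # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    x = x - lr * m_hat / (math.sqrt(v_hat) + eps)
    return x, m, v

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x, m, v = 0.0, 0.0, 0.0
for t in range(1, 51):
    x, m, v = adam_step(x, 2 * (x - 3), m, v, t)
```

The custom-gradient tests mentioned above would compare exactly this kind of reference implementation against the fused kernel's output.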
Boost training efficiency and stability
· Lead mixed-precision training: push bf16, fp8, and other low-precision formats into daily runs, track their accuracy-vs-speed trade-offs, and analyze numeric stability.
· Apply kernel fusion, communication tuning, and memory optimization to reach state-of-the-art throughput.
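On the numeric-stability side, the essence of bf16 is keeping only the top 16 bits of an fp32 value. The pure-Python sketch below (truncation only, ignoring rounding modes; purely illustrative) shows the precision a bf16 cast gives up:

```python
import struct

def truncate_to_bf16(x: float) -> float:
    """Keep only the top 16 bits of an fp32 value (bf16 by truncation)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(truncate_to_bf16(1.2))   # 1.1953125: only ~3 decimal digits survive
```

Analyzing where that lost precision matters (loss scaling, accumulation order, optimizer state) is the kind of work this bullet refers to.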
Accelerate research velocity
· Build logging, metrics, and other experiment-tracking tools for rapid iteration.
· Design ablation studies and statistical tests that validate—or refute—new ideas.
· Mentor interns and junior engineers through clear async design docs and code reviews.
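The ablation-study bullet above involves more than dashboards; for instance, a simple permutation test (sketched here with made-up eval-loss numbers, not data from this posting) is one way to check whether a difference between two run groups is statistically meaningful:

```python
import random

def perm_test_pvalue(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical eval losses: baseline runs vs. an ablated variant.
baseline = [2.31, 2.29, 2.33, 2.30, 2.32]
ablated  = [2.41, 2.39, 2.44, 2.40, 2.42]
p = perm_test_pvalue(baseline, ablated)
```

A small p-value here would support keeping the ablated change out of the baseline; seeds and run counts matter, which is why the bullet pairs ablations with statistical tests.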
You’ll work side-by-side with researchers, ship production code, and shape the future of large language models.
Why You’ll Love This Job
· Frontier-scale impact – Train and ship cutting-edge models powering MBZUAI research and industry collaborations.
· Research × Engineering blend – Move breakthrough papers into real systems and publish your own results.
· End-to-end mastery – Touch everything from petabyte data loaders to custom low-level kernels—experience that’s rare elsewhere.
· Open, mission-driven science – Join a transparent culture tackling problems that truly advance AI.
· Founding-team growth – Help set direction for IFM U.S. and lead the next generation of AI development.
Key Responsibilities
· Framework Ownership – Productionize a PyTorch/JAX pre-training stack and keep it reliable at scale.
· Custom Optimizer Implementation – Code new algorithms in distributed frameworks directly from mathematical specs.
· Experiment Infrastructure – Build reusable modules, logging, and metrics dashboards that speed up research cycles.
· Performance Optimization – Apply kernel fusion, communication optimization, and memory management to thousands of GPU jobs.
· Distributed Debugging – Rapidly diagnose gradient synchronization, collective-ops, or fault-tolerance issues.
· Collaboration – Document designs clearly, run post-mortems, and partner with global research teams.
Qualifications
Must-Haves
· 5+ years of combined industry or hands-on research experience with large-scale deep-learning training.
· Led at least one large-scale transformer pre-training run.
· Expert in PyTorch or JAX/Flax, plus DeepSpeed, FSDP, Megatron-LM, or MosaicML Composer.
· Experience with distributed training at scale (100+ GPUs).
· Proven multi-node GPU work (Slurm, K8s, or Ray) and NCCL/GLOO debugging.
· Strong software engineering skills on large ML codebases.
· Ownership of mixed- or low-precision paths (bf16, fp8, 4-bit) with accuracy validation.
· Clear written communication (design docs, RFCs, post-mortems).
Nice-to-Haves
· NeurIPS / ICML / ICLR papers or open-source contributions to major ML frameworks.
· Experience implementing optimization algorithms (e.g., SGD variants, Adam, second-order methods).
· Background in numerical computing.
· Ability to translate math and build high-performance CUDA/Triton kernels.
Listing Details
- Posted: June 9, 2025
- First seen: March 26, 2026
- Last seen: April 25, 2026
Posting Health
- Days active: 29
- Repost count: 0
- Trust level: 42%
- Scored at: April 25, 2026
Salary: USD 150,000–450,000 per year
External application · ~5 min on IFM US's site
Please let IFM US know you found this job on Jobera.