Senior AI Researcher - Pre-training (f/m/d)
Our Mission
Aleph Alpha is one of the few companies in Europe doing serious foundation model pre-training. Our customers — in finance, manufacturing, and public administration — need models that understand German, meet European regulatory requirements, and work reliably in high-stakes settings. We’re building that in Heidelberg.
We are hiring a Senior AI Researcher to join our Pre-training team and advance the architecture and training of our next generation of foundation models. If you are excited about designing inference-efficient architectures, optimising training recipes that scale reliably, and training models on a large-scale cluster (thousands of NVIDIA Blackwell GPUs), we would love to hear from you.
We foster a culture built on ownership, autonomy, and empowerment. Teams and individual contributors are trusted to take responsibility for their work and drive meaningful impact. We maintain a flat organisational structure with efficient, supportive management that enables quick decision-making, open communication, and a strong sense of shared purpose. We collaborate closely on complex technical problems, working in pairs or using mob programming to resolve challenging issues.
About the Role
As a Senior AI Researcher in Pre-training, you will work on the core technical problems that determine whether large-scale pre-training succeeds: architecture, optimisation, stability, and scaling up.
You will work at the intersection of model architecture, training dynamics, and large-scale distributed training, translating empirical observations into principled training decisions. From small-scale proxy experiments to multi-thousand-GPU runs, you will ensure our models converge as expected and scale efficiently.
We are looking for someone who combines significant research experience with strong engineering ability. You should be comfortable reasoning mathematically about training behaviour, designing rigorous experiments, and maintaining a high-quality production codebase.
Your work is high-leverage: the training decisions you make directly determine model quality, run reliability, inference efficiency, and how quickly we can improve the next generation of models. You’ll have a direct hand in the models we ship.
Responsibilities
Requirements
You are proficient in Python and deeply familiar with PyTorch-based training workflows.
You have a strong track record in machine learning research and software engineering, demonstrated through shipped models, impactful open-source contributions, or published research.
You have a strong mathematical foundation and are comfortable reasoning formally about optimisation, scaling behaviour, and training dynamics.
You deeply understand transformer training dynamics, optimisation, and the behaviour of large distributed training jobs.
You can design rigorous experiments, reason clearly from noisy results, and translate empirical observations into robust training decisions.
You apply strong software engineering practices, including writing maintainable, well-tested code and supporting reproducible experimentation workflows.
You can implement complex model architectures efficiently and reliably, and debug subtle issues across model code, training dynamics, and distributed systems.
You collaborate effectively within a research and engineering team and communicate clearly about your work across Pre-training and the broader AAR/AA organisation.
You are able to work in Germany and collaborate regularly on site in Heidelberg as part of the Pre-training team.
You have experience training large language models (LLMs) or multimodal models on large GPU clusters.
You have experience with distributed training frameworks such as torchtitan, Megatron-LM, or DeepSpeed.
You have experience with scaling laws, hyperparameter transfer, or other methods for predicting large-scale training behaviour from smaller experiments.
You have experience diagnosing and improving training stability in large runs, including divergence, numerical instability, or optimiser pathologies.
You have experience profiling, debugging, or improving the performance of large distributed training jobs.
You are familiar with sparse training approaches such as Mixture-of-Experts and the associated systems and routing trade-offs.
You have a track record of research excellence demonstrated through publications in top-tier conferences (e.g. NeurIPS, ICML, ICLR), impactful open-source contributions, or other significant technical work.
We do not require prior experience in low-level kernel optimisation for this role, but we value curiosity about the hardware and systems constraints that shape model design and training at scale.
What We Offer
Location & Eligibility