
Member of Technical Staff (Open Role)

Toronto · Full-time · Lead


Overview

About the Team

Adaptive ML is a frontier AI startup building a Reinforcement Learning Operations (RLOps) platform that enables enterprises to specialize and deploy LLMs into production with measurable impact.

Technical Tools
Python · Rust · Distributed Systems · ETL · Machine Learning · Roadmap Planning

Our Technical Staff develops the foundational technology that powers Adaptive ML in alignment with requests and requirements from our Commercial and Product teams. We are committed to building robust, efficient technology and conducting at-scale, impactful research to drive our roadmap and deliver value to our customers.

About the Role


This is an open role describing a generic position on our Technical Staff. If any of the below seems like a fit, please apply!

As a Member of Technical Staff, you will contribute to building the foundational technology that powers Adaptive ML, primarily by working on our internal LLM stack, Adaptive Harmony. We believe that generative AI is best approached as "big science": combining large-scale engineering with rigorous empirical research. As such, we emphasize scalability and systematic, empirical demonstrations in our approach. We are looking for self-driven, business-minded, and ambitious individuals interested in supporting real-world deployments of a highly technical product. As this is an early role, you will have the opportunity to shape our research efforts and product as we grow.

Examples of tasks our Technical Team pursues on a daily basis:

  • Develop robust software in Rust, interfacing between easy-to-use Python recipes and high-performance, distributed training code running on hundreds of GPUs;

  • Profile and iterate on GPU inference kernels in Triton or CUDA, identifying memory bottlenecks, optimizing latency, and deciding how to adequately benchmark an inference service;

  • Develop and execute an experiment analyzing nuances between DPO and PPO in a fair and systematic way;

  • Build data pipelines to support reinforcement learning from noisy and diverse user interactions across varied tasks;

  • Experiment with new ways to combine adapters and steer the behavior of language models;

  • Build hardware correctness tests to identify and isolate faulty GPUs at scale.

Responsibilities


Generally,

  • Build the foundational technology powering Adaptive, with a focus on high-performance software engineering and large-scale RL research;

  • Contribute to our product roadmap by identifying promising trends and high-impact findings;

  • Report clearly on your work to a distributed collaborative team, with a bias for asynchronous written communication.

On the engineering side,

  • Write high-quality software in Rust, with a focus on performance and robustness;

  • Profile dedicated GPU kernels in CUDA or Triton, optimizing across latency/compute-bound regimes for complex workloads;

  • Identify and resolve bugs in large distributed systems, at the intersection of software and hardware correctness.

On the research side,

  • Conduct research on large language models or diffusion models, systematically exploring how reinforcement learning can be used to personalize models;

  • Reproduce results from the RL, LLM, and diffusion literature, distinguishing the noise from the groundbreaking;

  • Own a research agenda, with a bias for at-scale, systematic empirical research.

Nearly all members of our Technical Staff hold a position that is a blend of engineering and research.

The background below is only suggestive of a few pointers we believe could be relevant. We welcome applications from candidates with diverse backgrounds; do not hesitate to get in touch if you think you could be a great fit, even if the below doesn't fully describe you.

  • An M.Sc. or Ph.D. in computer science, or demonstrated experience in software engineering, preferably with a focus on machine learning;

  • Strong programming skills, especially regarding distributed problems where performance is key;

  • Contributions to relevant open-source projects, such as efficient implementations of models and RL;

  • A track record of publications at top-tier machine learning venues (e.g., NeurIPS, JMLR);

  • Passion for the future of generative AI, and eagerness to build foundational technology that helps machines deliver more singular experiences.

What We Offer

  • Comprehensive medical (health, dental, and vision) insurance;

  • 401(k) plan with 4% matching (or equivalent);

  • Unlimited PTO; we strongly encourage at least 5 weeks each year;

  • Mental health, wellness, and personal development stipends;

  • Visa sponsorship if you wish to relocate to New York or Paris.

Location & Eligibility

Where is the job: Toronto (Hybrid; some on-site time required)
Who can apply: Same as job location

Listing Details

Posted: February 3, 2025
First seen: May 5, 2026
Last seen: May 9, 2026

