datologyai

Research Scientist, Post-Training

Redwood City · Full-time · Mid-level


Technical Tools
PyTorch, Snowflake, deep learning, machine learning

Models are what they eat. But a large portion of training compute is wasted training on data that are already learned, irrelevant, or even harmful, leading to worse models that cost more to train and deploy.

At DatologyAI, we’ve built a state-of-the-art data curation suite that automatically curates and optimizes petabytes of data to create the best possible training data for your models. Training on curated data can dramatically reduce training time and cost (7-40x faster training, depending on the use case). It can dramatically increase model performance, as if you had trained on >10x more raw data, without increasing the cost of training. And it can allow smaller models with fewer than half the parameters to outperform larger models while using far less compute at inference time, substantially reducing the cost of deployment. For more details, check out our recent blog posts sharing our high-level results for text models and image-text models.

We raised a total of $57.5M across two rounds, a Seed and a Series A. Our investors include Felicis Ventures, Radical Ventures, Amplify Partners, Microsoft, Amazon, and AI visionaries like Geoff Hinton, Yann LeCun, Jeff Dean, and many others who deeply understand the importance and difficulty of identifying and optimizing the best possible training data for models. Our team has pioneered this frontier research area and has the deep expertise in both data research and data engineering necessary to solve this incredibly challenging problem and make data curation easy for anyone who wants to train their own model on their own data.

This role is based in Redwood City, CA. We are in office 4 days a week.

About the Role


We’re looking for a Research Scientist to lead work on post-training data curation for foundation models. You’ll design and implement algorithms to generate and improve instruction, preference, and other post-training datasets. You’ll also help bridge the gap between pre-training and post-training by exploring how to jointly optimize data across stages. This role requires strong scientific judgment, fluency with the deep learning literature, and a drive to turn research ideas into real-world impact. You’ll work autonomously, collaborate closely with engineers and product teams, and shape the future of data curation at DatologyAI.

  • Post-training data curation. You’ll conduct research on how to algorithmically curate post-training data: for example, how to generate and refine preference and instruction-following data, how to curate capability- and domain-specific data, and how to make post-training more effective, controllable, and generalizable.

  • Unifying pre-training and post-training data curation. Pushing the bounds on model capabilities requires unifying post-training and pre-training data curation. You will pursue research on end-to-end data curation: how to curate pre-training data to improve the post-trainability of models and how to jointly optimize pre- and post-training data curation, all in service of maximizing the final performance of post-trained models.

  • Transform messy literature into practical improvements. The research literature is vast, rife with ambiguity, and constantly evolving. You will use your skills as a scientist to source, vet, implement, and improve promising ideas, both from the literature and of your own devising.

  • Conduct science driven by real-world needs. At DatologyAI, we understand that conference reviewers and academic benchmarks don’t always incentivize the most impactful research. Your research will be guided by concrete customer needs and product improvements.

  • Nobody knows how to do your work better than you. We believe that scientists do their best work when they have the autonomy to pursue problems in the manner they prefer, and we will ensure that you are equipped with the context and resources you need to succeed.

  • Science is more than just experiments. We expect our Research Scientists to collaborate closely with engineers, talk to customers, and shape the product vision.
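As a toy illustration of what algorithmically curating post-training data can mean at its simplest, here is a minimal sketch that deduplicates and length-filters instruction-response pairs. The function name and thresholds are invented for illustration; real curation pipelines at this scale use far richer signals (quality scores, semantic deduplication, domain balancing) over petabytes of data:

```python
def curate(pairs, min_response_words=3):
    """Keep one copy of each (instruction, response) pair, dropping
    near-trivial responses. A stand-in for richer curation signals."""
    seen, kept = set(), []
    for instruction, response in pairs:
        key = (instruction.strip().lower(), response.strip().lower())
        if key in seen:
            continue  # exact duplicate (after normalization)
        if len(response.split()) < min_response_words:
            continue  # too short to carry a useful training signal
        seen.add(key)
        kept.append((instruction, response))
    return kept
```

Even this crude filter shows the core trade-off of curation: every heuristic that removes noise risks removing signal, which is why the role above frames curation as a research problem rather than a fixed pipeline.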

Requirements

  • 3+ years of deep learning research experience

  • Experience with post-training large vision, language, and multimodal models

  • Post-training algorithm development, data curation, and/or synthetic data methods for:

    • Preference-based tuning (e.g. DPO, RLVR, RRHF)

    • Alternative supervision & self-supervision techniques such as self-training and chain-of-thought distillation

    • SFT (e.g. instruction tuning and demonstration fine-tuning)

  • Post-training tooling development and engineering experience

  • Strong understanding of the fundamentals of deep learning

  • Sufficient software engineering and deep learning framework skills (PyTorch, or a willingness to learn it) to conduct large-scale research experiments and build production prototypes.

  • Demonstrated track record of success in deep learning research, whether through papers, tools, or other research artifacts.
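As a flavor of the preference-based tuning methods listed above, here is a framework-free sketch of the DPO objective on a single preference pair. The function name and scalar inputs are illustrative; a real implementation would operate on batched per-token log-probabilities in PyTorch:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are total sequence log-probabilities under the policy being
    tuned and under a frozen reference model; beta scales the implicit
    KL penalty. Loss = -log sigmoid(beta * (policy margin - ref margin)).
    """
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

When the policy matches the reference, the loss is log(2) ≈ 0.693; it shrinks as the policy prefers the chosen response more strongly than the reference does. Note how the data (which response is "chosen") entirely determines the gradient direction, which is why curating preference data is as consequential as the tuning algorithm itself.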

We would love it if candidates have:

  • Experience with data management and distributed data processing solutions (e.g. Spark, Snowflake, etc.)

  • Experience building + shipping ML products

Candidates do not need a PhD or extensive publications. Some of the best researchers we’ve worked with have no formal training in machine learning, and obtained all of their experience by working in industry and building products. We believe that adaptability, combined with exceptional communication and collaboration skills, is the most important ingredient for successful research in a startup environment.

What We Offer

The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.

  • 100% covered health benefits (medical, vision, and dental).
  • 401(k) plan with a generous 4% company match.
  • Unlimited PTO policy.
  • Annual $2,000 wellness stipend.
  • Annual $1,000 learning and development stipend.
  • Daily lunches and snacks provided in our office.
  • Relocation assistance for employees moving to the Bay Area.

Location & Eligibility

Where: Redwood City, on-site at the office
Who can apply: Same as job location

Listing Details

Posted: July 16, 2025
First seen: May 6, 2026
Last seen: May 8, 2026

Posting Health

Days active: 0
Repost count: 0
Trust Level: 14%
Scored at: May 6, 2026
Signal breakdown: freshness, source trust, content trust, employer trust
