Maincode

AI Researcher

Melbourne · Full-time · Mid-level

About the Role


Maincode builds foundation models from first principles on Australian infrastructure. We design architectures, run our own compute, shape the training process, and operate the systems that serve our models.

We built Matilda, the first large language model built and trained from scratch in Australia. Our new compute cluster is live; we are scaling the next version of Matilda and deploying it for public access.

We are looking for AI researchers who want to work on the core architecture, training, and evaluation of large-scale language models that power Matilda.

This role is not focused on incremental benchmarking or paper output. You will work directly with the engineers running large-scale training systems and help design models that learn efficiently and behave reliably in production.

Responsibilities


You will work across the model development loop, from research questions to training runs to evaluation.

This includes:

  • Designing and testing architecture changes and training regimes for large language models

  • Running controlled experiments at scale and isolating causal effects

  • Studying failure modes in reasoning, generalisation, robustness, and representation

  • Shaping objectives, data mixtures, and optimisation choices that influence model behaviour

  • Building and refining evaluations that measure capability and reliability, not just scores

  • Analysing training dynamics using logs, metrics, and model outputs

  • Collaborating with ML systems engineers on distributed training and training operations

  • Writing clear internal notes that turn experimental results into design decisions

You will spend substantial time in code, training runs, logs, and evaluation outputs. The goal is clarity about what improves the model and why.
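By way of illustration only, a first pass over a run's logs might look like the sketch below. The JSONL log format, the file path, and the `loss` metric name are assumptions made for the example, not a description of our stack.

```python
import json
from pathlib import Path

def load_metrics(log_path: Path) -> list[dict]:
    """Parse a one-JSON-object-per-line training log (hypothetical format)."""
    with log_path.open() as f:
        return [json.loads(line) for line in f if line.strip()]

def ema(values: list[float], alpha: float = 0.01) -> list[float]:
    """Exponential moving average, to smooth noisy per-step losses."""
    out: list[float] = []
    acc: float | None = None
    for v in values:
        acc = v if acc is None else alpha * v + (1 - alpha) * acc
        out.append(acc)
    return out

def flag_spikes(losses: list[float], trend: list[float], ratio: float = 1.5) -> list[int]:
    """Steps where raw loss jumps well above its smoothed trend: a cheap
    first-pass signal for instability or a bad data shard."""
    return [i for i, (l, t) in enumerate(zip(losses, trend)) if l > ratio * t]

records = load_metrics(Path("runs/run_001/metrics.jsonl"))  # hypothetical path
losses = [r["loss"] for r in records]
spikes = flag_spikes(losses, ema(losses))
print(f"{len(spikes)} suspicious steps; first few: {spikes[:5]}")
```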

We care about depth of reasoning, experimental discipline, and the ability to make progress under ambiguity.

We expect:

  • Hands-on experience writing and running production-grade ML or research code

  • Strong Python and experience with PyTorch or JAX

  • Solid understanding of transformer-based language models and the basics of pre-training and evaluation (see the sketch after this list)

  • Ability to design experiments, interpret results, and communicate tradeoffs clearly

  • Comfort working close to infrastructure, performance constraints, and operational reality

  • Interest in, and exposure to, reasoning-oriented architectures and training methods beyond standard approaches and standard LLMs
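
As a rough anchor for that pre-training baseline (a toy sketch under assumed shapes and hyperparameters, not our training code), a single causal-LM pre-training step in PyTorch looks something like this:

```python
import torch
import torch.nn as nn

# Toy decoder-only LM: an encoder stack with a causal mask plays the role of
# a decoder. Real models add rotary embeddings, careful init, and more.
class TinyLM(nn.Module):
    def __init__(self, vocab: int = 1000, d_model: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        n = tokens.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(n)
        return self.head(self.blocks(self.embed(tokens), mask=causal))

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
tokens = torch.randint(0, 1000, (8, 64))  # stand-in batch of token ids

# Next-token prediction: inputs and targets are the same sequence, offset by one.
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
)
loss.backward()
opt.step()
print(f"step loss: {loss.item():.3f}")
```

Real pre-training adds data pipelines, mixed precision, and learning-rate schedules; the point here is only the shifted next-token objective.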

Nice to Have

  • Experience with distributed training concepts and tooling (data parallel, tensor parallel, sharding, checkpointing)

  • Experience running training across multiple nodes and managing long training cycles

  • Familiarity with large-model training stacks and frameworks (for example Megatron-style systems, DeepSpeed-like tooling, FSDP, or similar; see the sketch after this list)

  • Comfort across the full workflow: training, evaluation, and deployment constraints

  • Experience working in ROCm-based environments
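
To make the distributed items above concrete, here is a minimal PyTorch FSDP sketch of sharded training with rank-0 full-state checkpointing. It assumes a torchrun launch; the model, data, objective, and paths are placeholders, and production stacks often use torch.distributed.checkpoint or framework-specific tooling instead.

```python
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import (
    FullStateDictConfig,
    FullyShardedDataParallel as FSDP,
    StateDictType,
)

# Assumes a torchrun launch, which sets RANK / LOCAL_RANK / WORLD_SIZE.
dist.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Stand-in model; a real run would wrap a transformer with an auto-wrap policy.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 2048), torch.nn.GELU(), torch.nn.Linear(2048, 512)
).cuda()
model = FSDP(model)  # parameters, grads, and optimizer state are sharded
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(1000):
    batch = torch.randn(8, 512, device="cuda")  # placeholder data
    loss = model(batch).pow(2).mean()           # placeholder objective
    loss.backward()
    opt.step()
    opt.zero_grad()

    if step % 500 == 0:
        # Gather a full (unsharded) state dict onto rank 0 and save it, so a
        # long run can resume after pre-emption or hardware failure.
        cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
        with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, cfg):
            state = model.state_dict()
        if dist.get_rank() == 0:
            torch.save(state, f"ckpt_step_{step:06d}.pt")  # placeholder path
        dist.barrier()

dist.destroy_process_group()
```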

This is hands-on research. You will use code as a primary tool for thinking.

You will be expected to:

  • Move between theory and implementation quickly and precisely

  • Prefer controlled experiments over broad sweeps

  • Use logs, metrics, and model behaviour to guide decisions

  • Work closely with engineering counterparts to scale and validate ideas

This is not a product research role. It is not prompt engineering. And it is not fine-tuning someone else’s model and shipping wrappers around external APIs.

You will work on Matilda, trained from scratch on our infrastructure, and pushed until its behaviour is understood and improved.

Maincode builds and operates the full stack: training infrastructure, model code, evaluation systems, and deployment. We run one of the largest private AI compute environments in Australia, built for the sole purpose of training and deploying large-scale models.

If you want to work directly on training and evaluating a large language model built from scratch, this is the only role in Australia that will put you inside that work.

This is a full-time role based in Melbourne, working closely with our in-person team. At this time we are not able to offer visa sponsorship, so applicants must have existing and unrestricted work rights in Australia.

Location & Eligibility

Where: Melbourne, on-site at the office
Who can apply: same as the job location

Listing Details

Posted: March 5, 2026