Staff ML Engineer - Road & Lane Detection
Quick Summary
For this position, we are open to hiring in either the Torc Montreal, Quebec (Canada) or Ann Arbor, MI (U.S.) office, in a hybrid capacity.
As a Staff Machine Learning Engineer focused on Road & Lane Detection, you will lead the model development efforts that enable Torc’s autonomous vehicles to perceive and interpret road geometry, lane structures, and drivable surfaces with precision and robustness.
You’ll define the next generation of deep learning architectures and data-driven approaches that extract high-fidelity road and lane semantics from multi-modal sensor data — driving critical improvements in perception accuracy, stability, and scalability.
This is a technical leadership role focused on model innovation and maturity, not downstream feature integration.
Responsibilities
- Own the model roadmap for Road & Lane Detection within the Model Dev ML org — from concept through production-grade model maturity.
- Research, design, and train advanced neural architectures (e.g., multi-camera BEV transformers, LiDAR-vision fusion models, topological lane graph networks) to detect, segment, and model road structures and lane connectivity.
- Lead data strategy for this domain — defining data curation, labeling policies, and active learning pipelines to capture long-tail scenarios (e.g., occlusions, complex merges, construction zones).
- Develop robust metrics and evaluation frameworks for lane and road geometry accuracy, temporal consistency, and cross-domain generalization.
- Advance foundational capabilities such as self-supervised pretraining, synthetic-to-real adaptation, and temporal modeling for road and lane understanding.
- Drive large-scale experiments — designing, running, and analyzing results from distributed training workflows and ablations to identify scalable improvements.
- Collaborate with other model development and perception teams to ensure model coherence and interface consistency.
- Mentor engineers and scientists, setting best practices for model training, evaluation, and code quality.
- Stay ahead of the research frontier by evaluating and adapting emerging techniques (e.g., BEV-based large models, vectorized map prediction, lane graph transformers) to production-grade perception.
Requirements
- 10+ years of experience developing deep learning models for perception or computer vision at scale.
- M.S. or Ph.D. in Computer Science, Electrical Engineering, Robotics, or a related field (or equivalent experience).
- Deep expertise in semantic and instance segmentation, BEV modeling, or scene topology estimation.
- Strong understanding of lane and road geometry modeling, camera calibration, and sensor projection.
- Proficiency with Python and modern ML frameworks (e.g., PyTorch, Lightning).
- Experience with distributed training pipelines, experiment management, and large-scale dataset handling.
- Proven leadership in guiding technical roadmaps, mentoring engineers, and driving measurable model improvements.
Nice to Have
- Experience developing ML models for autonomous driving, mapping, or ADAS systems.
- Familiarity with multi-modal fusion (camera, LiDAR, radar, HD maps).
- Hands-on experience with BEV-based and topological prediction models.
- Contributions to perception-related ML research (e.g., CVPR, NeurIPS, ICCV, ICLR, ICRA).
- Strong intuition for data quality, bias mitigation, and uncertainty modeling.
Compensation
$209,300–313,800 CAD
Job ID: R-102402
Listing Details
- First seen: March 26, 2026
- Last seen: April 20, 2026