Lead ML Engineer - Mapping
Quick Summary
May Mobility is transforming cities through autonomous technology to create a safer, greener, more accessible world. Based in Ann Arbor, Michigan, May develops and deploys autonomous vehicles (AVs) powered by our innovative Multi-Policy Decision Making (MPDM) technology that literally reimagines the way AVs think.
Our vehicles do more than just drive themselves - they provide value to communities, bridge public transit gaps and move people where they need to go safely, easily and with a lot more fun. We’re building the world’s best autonomy system to reimagine transit by minimizing congestion, expanding access and encouraging better land use in order to foster more green, vibrant and livable spaces. Since our founding in 2017, we’ve given more than 500,000 autonomous rides to real people around the globe. And we’re just getting started. We’re hiring people who share our passion for building the future, today, solving real-world problems and seeing the impact of their work. Join us.
Responsibilities
- Architect, design, and implement a production-grade lane and route network mapping stack, ensuring high-performance integration with the broader autonomy system.
- Lead the research, design, training, and validation of advanced neural architectures. This includes object detection, classification, segmentation, tracking, depth estimation, and 3D reconstruction to extract and model lane and route networks, alongside key semantic features (e.g., traffic signs, signals, and road markings), for automated mapping.
- Drive major feature development from inception to deployment. This includes high-level architecture design, rigorous code reviews, automated testing, mentorship of junior engineers, and resolution of technical issues.
- Own the end-to-end data strategy for the mapping domain. You will define data curation, auto-labeling, synthetic data, and active learning pipelines to capture and resolve long-tail scenarios.
- Develop robust metrics and evaluation frameworks for lane and route network accuracy, temporal consistency, and scaling across diverse Operational Design Domains (ODDs).
- Work independently with cross-functional teams to translate complex autonomy goals into clear software and system requirements.
- Collaborate with ML and Autonomy engineers to ensure the seamless deployment and validation of mapping features to the vehicle fleet.
- Stay at the research frontier by evaluating, adapting, and innovating on cutting-edge techniques. This includes online vectorized HD map construction, end-to-end mapping models, and vision/fusion foundation models to deliver production-ready solutions.
Requirements
Candidates most successful in this role typically hold the following qualifications or comparable knowledge or experience:
- Ph.D. or Master’s degree in Computer Science, Electrical Engineering, Robotics, or a related field with a strong mathematical and engineering foundation.
- 7+ years of industry experience developing and deploying ML/DL models for mapping or computer vision at scale.
- Deep expertise in several of the following areas:
  - Computer vision foundations: object detection, classification, segmentation, tracking, depth estimation, and 3D reconstruction.
  - Lane-level topology and connectivity, intersection modeling, and lane/road network graph construction.
  - Vectorized mapping networks (e.g., MapTR), BEV-based scene representation, and temporal modeling.
  - Self-supervised/semi-supervised learning and vision/fusion foundation models.
- Strong understanding of HD maps, including lane and road network geometry modeling, connectivity, and semantic attributes.
- Expertise in ML/DL development using PyTorch or TensorFlow, including experience with distributed training, synthetic data generation, large-scale dataset handling, and data curation strategies.
- Strong programming skills in Python and/or C++ with experience in modular software design and Linux-based development.
- Proven leadership in guiding technical roadmaps, mentoring engineers, and driving measurable improvements in model performance and system reliability.
- Strong communication skills with the ability to lead technical discussions and align with cross-functional teams.
Preferred Qualifications
- 10+ years of experience in ML/DL for autonomous driving or ADAS systems.
- Experience with feature extraction and/or fusion from both street-level and overhead imagery.
- Experience utilizing Vision-Language Models (VLMs) and/or Foundation Models for auto-labeling and long-tail (edge-case) detection.
- Expertise in ML optimization for real-time products with limited compute, such as quantization and pruning of large transformer models.
- A proven record of inventions and/or publication record at top-tier conferences (e.g., CVPR, NeurIPS, ICCV, ECCV, ICLR).
Working Conditions
- Standard office working conditions, including but not limited to:
  - Prolonged sitting
  - Prolonged standing
  - Prolonged computer use
- Travel required: Moderate (11%–25%)
What We Offer
Location & Eligibility
Posted: April 3, 2026