Senior Embedded Vision Engineer
Lime is the largest global shared micromobility business, operating in close to 30 countries across five continents. We’re on a mission to build a future where transportation is shared, affordable and carbon-free. Our electric bikes and scooters have powered more than one billion rides in cities around the world. Named a 2025 Time 100 Most Influential Company, Lime continues to set the pace for shared micromobility globally, spurring a new generation of clean alternatives to car ownership.
We are looking for a high-impact Senior Embedded Vision Engineer to help build and deploy the core perception capabilities for the Lime Vision team. In this role, you will focus on developing, optimizing, and deploying computer vision solutions that enable our vehicles to understand and interact with the real world in real time.
You will work on challenging, real-world problems in micromobility, such as tandem riding detection, precision parking validation, and sidewalk riding prevention, and help bring these solutions from concept to reliable deployment on edge hardware. This role requires strong expertise in Computer Vision, Machine Learning, and embedded systems, with a focus on performance, reliability, and scalability in production environments.
You will work closely with applied scientists, hardware/firmware engineers, and product teams to execute on the technical roadmap and ensure robust system performance in diverse and unpredictable real-world conditions.
This is a remote position with a requirement for candidates to reside in the United States to maintain effective collaboration across teams.
Responsibilities
- Model Development & Deployment: Design, train, and deploy computer vision models for real-time inference on edge devices, balancing accuracy, latency, and power constraints.
- Edge Optimization: Optimize models for embedded hardware (e.g., NVIDIA Jetson, Ambarella, ARM-based SoCs), including working within vendor SDKs to meet latency, memory, and power constraints.
- System Performance & Profiling: Identify and resolve performance bottlenecks across the full pipeline (data ingestion, preprocessing, inference, postprocessing) to meet strict real-time requirements.
- Sensor Fusion: Work with camera data alongside onboard sensors such as IMUs and GPS, contributing to multi-sensor fusion approaches to improve system robustness and state estimation (e.g., filtering and smoothing techniques such as Kalman filters).
- Robustness in the Real World: Build systems that perform reliably under challenging conditions such as varying lighting, motion, occlusions, and environmental noise.
- Cross-Functional Collaboration: Work closely with hardware, firmware, and applied science teams to integrate and deploy vision models on edge devices.
- Evaluation & Iteration: Contribute to model evaluation and validation efforts, helping identify performance gaps and inform improvements in data collection and model design.
- End-to-End Contribution: Participate in the full development lifecycle, from data understanding and model iteration to deployment and monitoring in the field.
Qualifications
- 5+ years of industry experience in Computer Vision and Machine Learning, with hands-on experience deploying models in real-world or embedded environments.
- Strong programming skills in C/C++ and Python, with experience developing and optimizing applications on resource-constrained devices. Experience working with SoC vendor SDKs, hardware interfaces, and low-level system components is a plus.
- Experience with edge AI and embedded platforms (e.g., NVIDIA Jetson, Ambarella, ARM-based SoCs), including performance optimization under latency and power constraints.
- Familiarity with model deployment and optimization frameworks such as TensorRT, ONNX Runtime, OpenVINO, or similar.
- Experience working with camera systems and additional sensors (e.g., IMU, GPS), with a practical understanding of sensor fusion techniques and state estimation methods.
- Solid understanding of computer vision techniques (e.g., detection, classification, segmentation) and practical experience applying them in production.
- Experience working with real-world data challenges, including noisy inputs, edge cases, and variability across environments.
- Strong debugging and problem-solving skills, particularly in constrained or low-visibility environments.
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Robotics, or a related field (or equivalent practical experience).
Listing Details
- Posted: March 20, 2026