Platform Support Engineer (APAC)
Lightning AI is the company behind PyTorch Lightning. Founded in 2019, we build an end-to-end platform for developing, training, and deploying AI systems—designed to take ideas from research to production with less friction.
Through our merger with Voltage Park, a neocloud and AI Factory, Lightning AI combines developer-first software with cost-efficient, large-scale compute. Teams get the tools they need for experimentation, training, and production inference, with security, observability, and control built in.
We serve solo researchers, startups, and large enterprises. Lightning AI operates globally with offices in New York City, San Francisco, Seattle, and London, and is backed by Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.
Responsibilities
- Partner directly with customer engineering teams running training and inference workloads in production
- Help customers diagnose and resolve complex distributed systems and ML infrastructure issues
- Act as a technical advisor during high-impact incidents and platform degradation events
- Translate infrastructure-level issues into actionable guidance for ML engineers
- Build credibility with customers through strong technical reasoning and clear communication
- Investigate failures involving distributed training, Kubernetes orchestration, GPU allocation, networking, and storage systems
- Troubleshoot issues involving PyTorch, CUDA, NCCL, and inference serving
- Analyze logs, metrics, traces, and system behavior to isolate root causes
- Debug containerized workloads running across Kubernetes and bare metal GPU environments
- Support customers scaling workloads across multi-node GPU systems
- Diagnose performance bottlenecks involving compute, memory, networking, or storage
- Identify recurring patterns across customer issues and drive long-term reliability improvements
- Contribute to post incident reviews and operational improvements
- Build internal tooling, automation, documentation, and runbooks
- Partner closely with infrastructure, networking, and platform engineering teams
- Help improve observability, operational visibility, and troubleshooting workflows
- Improve the customer experience through better processes and technical guidance
To set clear expectations:
- This is not a traditional help desk or ticket routing support role
- This is not purely customer success or account management
- This is not a backend engineering role
- This is not a passive escalation position
This role is for engineers who enjoy solving difficult technical problems while working closely with other engineers.
Requirements
- Strong software engineering and systems troubleshooting background
- Experience with Kubernetes and containerized environments
- Linux systems knowledge, including networking, storage, process management, and performance tuning
- Experience with cloud infrastructure and distributed systems
- Experience with observability and debugging tools such as Prometheus, Grafana, or OpenTelemetry
- Hands-on experience operating machine learning workloads in production or research environments
- Experience with distributed ML systems and tooling such as PyTorch, CUDA, or NCCL
- Familiarity with GPU infrastructure and orchestration
- Experience troubleshooting performance, reliability, or scaling issues in ML infrastructure
- Understanding of the operational challenges involved in running ML systems at scale
- Strong communication skills and ability to work directly with highly technical customers and engineering teams
- Comfortable operating in fast-moving, highly ambiguous environments
- Enjoys solving complex technical problems collaboratively
- Experience with large-scale model training or distributed inference systems
- Familiarity with Ray, Kubeflow, Slurm, or similar distributed scheduling platforms
- Experience with InfiniBand, RDMA, or high-performance networking
- Experience operating bare metal infrastructure
- Familiarity with storage systems commonly used in ML environments
- Experience working at an AI infrastructure, cloud, MLOps, or developer tooling company
- Contributions to platform engineering, developer infrastructure, or operational tooling projects
- Experience writing automation, tooling, or scripts in Python or similar languages
What We Offer
We offer a comprehensive and competitive benefits package designed to support our employees’ health, well-being, and long-term success. Benefits may vary by location, team, and role.
Listing Details
- Posted: May 15, 2026
- First seen: May 16, 2026
- Last seen: May 16, 2026