Ohme is on a mission to accelerate the global transition to clean, affordable energy. We do that by serving as an integrated hardware and software smart-grid platform, focused on the residential EV charging market.
The worlds of energy, transport and artificial intelligence are colliding, and Ohme is at the heart of this new era. By using technology and data integrations to connect cars, chargers, people, energy providers and more, Ohme has a powerful platform that puts the consumer at the core.
Ohme has been selling its chargers to consumers since mid-2019 and has seen exponential growth ever since. We now operate in multiple countries and have partnerships with VW, Mercedes, Octopus Energy and other innovative brands.
We are scaling up the business and building out the team for rapid growth. If you’re interested in joining a fast-growing cleantech venture on a data- and AI-first journey to speed up the global transition to clean, affordable energy, read on!
We’re hiring an AI Engineer to help us build, run and scale the agentic platform powering Ohme’s applied AI work. You’ll own and evolve our agent runtime (the middleware that orchestrates LLM calls, tool execution, MCP connections, memory and routing) and contribute across the wider platform: integration layers, internal MCP servers, tool design, evaluation, observability and the dev experience our team relies on.
Our agents are already live with internal and external users, and we’re scaling fast. The work this year is hardening what’s running, opening new use cases, and turning real production traffic into the feedback loop that shapes what we ship next.
This is a hands-on, engineering-led role at the intersection of applied AI, cloud infrastructure on AWS, modern agent protocols, and the energy, grid, EV charging and customer outcomes domains we work in every day. Our stack is Python and TypeScript on AWS, and MCP is central to how our agents reach the systems they need.
Responsibilities
- Own the design, build and operation of our agent runtime: the host service that orchestrates models, tools, MCP connections and memory for everything we ship.
- Build and harden the integration layer between agents and the systems they reach: our internal MCP server, third-party connectors, our data platform and our APIs.
- Contribute across the full stack (backend services, integration glue, AWS infrastructure-as-code, dev experience) to keep the platform scaling reliably as load and use cases grow.
- Maintain and evolve the evaluation, tracing and observability we rely on to measure agent quality and operate confidently in production.
- Keep agent services secure, cost-efficient and operationally healthy as they grow.
- Productionise patterns from the wider MCP and agent ecosystem (advanced tool use, code orchestration, agent skills, MCP server design, elicitation and sampling) and translate them into our stack.
- Partner with data scientists, AI engineers, applied AI analysts and product to ship end-to-end across applied AI use cases.
- Stay close to the fast-moving agent ecosystem (frontier model releases, MCP spec evolution, new client capabilities) and bring practical, high-leverage patterns back to the team. Set engineering standards as we go.
Must Have
- Production engineering rigour. Several years building and operating backend services in Python and/or TypeScript. You’ve shipped to real users, written tests that mattered, run on-call, and reasoned about cost and reliability, not just built prototypes.
- Hands-on with agents, with a production mindset. You’ve built seriously with LLMs: tool-using agents, multi-step workflows, agent memory, evaluation harnesses, whether in production, PoCs, hackathon projects, internal tools or ambitious side projects. You think about what would break under real load, how you’d know, and how you’d fix it. If you’ve shipped to real users, even better; if you haven’t yet, show us you’ve been pushing on the hard parts anyway.
- Deep MCP fluency, beyond "tools". You understand MCP as a protocol, not just a buzzword. You know the difference between tools, resources, prompts and sampling, you’ve used or built servers with elicitation, and you’re tracking newer capabilities like the MCP Apps extension. You can explain when MCP is the right choice over a direct API call or a CLI, and when it isn't.
- Daily driver of frontier coding agents. You use coding agents (e.g. Claude Code, OpenCode, Pi, Hermes or similar) as a core part of how you work. You can show what your workflow looks like, what you’ve automated, and where you’ve pushed the tooling, including authoring skills, sub-agents or plugins of your own.
- Cloud-native delivery. Comfortable shipping on AWS with infrastructure-as-code. You don’t need to have used our exact stack; pragmatic familiarity with cloud primitives and IaC of any flavour is what we care about.
- Obsessive curiosity and a builder’s instinct. The agent space is moving weekly and you’re the kind of engineer who keeps up because you can’t help it. You read the specs, try the new clients, automate the annoying parts of your own dev loop, and turn weekend experiments into things your team actually uses. Show us the receipts: dev-ex automations, PoCs, hackathon wins, side projects, anything where you saw a sharp edge and went after it.
Nice to Have
- You’ve built and shipped MCP servers or contributed to the open MCP ecosystem.
- You’ve thought about agent context efficiency: progressive disclosure, tool search, programmatic tool calling, code execution sandboxes, REPL-style tool patterns.
- You’ve designed agent skills/playbooks for procedural knowledge, separately from tools.
- You’ve built or used agent eval harnesses, golden datasets, or production tracing for agents.
- Frontend / UI experience: chat surfaces, internal tooling, agent-facing dashboards.
- AWS specifics: CDK, Bedrock, AgentCore, Lambda, DynamoDB, API Gateway.
- Background in energy, grid, EV charging, IoT-heavy or other data-rich domains.
We care about evidence over claims. Things that catch our attention: agents or services you’ve shipped to users, MCP servers or skills you’ve open-sourced, blog posts or talks where you’ve broken down an agent design, contributions to the MCP spec or SDKs, or a workflow setup you’d be proud to demo. If your best work isn’t public, tell us about it in the application.
Our Values
- Customer Obsessed: The customer is at the heart of everything we do.
- Brave: We try new things and lead through possibility.
- Collaborative: We believe our success is built on strong relationships.
- Do Good: We care about people and the environment.
- Progressive: We innovate, disrupt and are always learning.
Posted: May 8, 2026
