Software Engineer III/Senior, Data Platform
Quick Summary
You’ll design and evolve the pipelines and orchestration systems that move data across ngrok - from product events to financial reporting. Ingestion, transformation, modeling, reliability.
ngrok is an all-in-one cloud networking platform that secures, transforms, and routes traffic to services running anywhere. Instead of cobbling together nginx, NLBs, VPNs, model routers, and oodles of other tools, developers solve every networking problem with one gateway. Doesn’t matter if they’re sharing localhost or running AI workloads in production.
We're trusted by more than 9 million developers at companies like GitHub, Okta, HashiCorp, and Twilio. What started as a way to put your local app on a public URL has grown into a universal gateway for API delivery, AI inference, device fleets, and site-to-site connectivity. It’s the same ngrok that millions of developers have loved and leaned on every day for years, now with the power to run production traffic at scale.
A few things you should know:
- We are obsessed with our pets, Viper sunglasses and Bufo (yes, the toad)
- We have a designated Chief Emoji Officer - they are vital to our success!
- We like software that’s serious and culture that’s not
Still reading? Good. There's more below worth your time.
The Data Platform team owns the data platform and analytics systems that power decision-making across ngrok. We handle ingestion, modeling, metrics, and reporting: the systems that make sure every event is counted correctly and every number in a deck can be defended.
We manage about 500 TiB of data, run a Dagster instance with over 1,600 assets, maintain 550+ dbt models, and own Flink streaming pipelines that process ~22,000 messages per second on average.
This data is used across all teams @ ngrok, from marketing to financial reporting. Our systems must be correct, explainable, and resilient under real-world conditions: traffic spikes, schema changes, late-arriving events, and other challenges that come from running a large, globally distributed system.
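Late-arriving events, one of the hazards called out above, are a classic pipeline problem: an aggregate gets published, then an event shows up whose timestamp belongs to a window that's already closed. A minimal, purely illustrative sketch of a watermark-style lateness check in plain Python (not ngrok's actual code; the 15-minute tolerance and function names are assumptions):

```python
from datetime import datetime, timedelta

# Hypothetical tolerance: how far the watermark may advance past an
# event's own timestamp before the event counts as "late".
ALLOWED_LATENESS = timedelta(minutes=15)

def is_late(event_time: datetime, watermark: datetime) -> bool:
    """An event is late if the pipeline's watermark has moved more than
    ALLOWED_LATENESS past the event's timestamp. Late events would be
    routed to a backfill path instead of silently skewing published
    aggregates."""
    return watermark - event_time > ALLOWED_LATENESS

watermark = datetime(2026, 3, 4, 12, 0)
on_time = datetime(2026, 3, 4, 11, 50)  # 10 min behind the watermark
late = datetime(2026, 3, 4, 11, 30)     # 30 min behind the watermark

print(is_late(on_time, watermark))  # False
print(is_late(late, watermark))     # True
```

Real streaming engines such as Flink implement this idea natively with event-time watermarks; the sketch just shows the invariant being enforced.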
We treat data as a product: reliable, observable, well-modeled, and thoughtfully designed. Our Data Platform team is part of the Engineering organization and doesn’t live in a silo.
Responsibilities
- Build the data backbone: You’ll design and evolve the pipelines and orchestration systems that move data across ngrok - from product events to financial reporting. Ingestion, transformation, modeling, reliability. The foundation everything else depends on.
- Make the numbers make sense: You’ll own core business and product datasets - usage, revenue, growth, performance - and ensure they’re accurate, reconciled, and trusted. No mystery metrics. No “why doesn’t this match?” Slack threads.
- Turn raw events into decision-ready insight: You’ll build and refine the models that power dashboards, planning, forecasting, and experimentation. Clean schemas. Durable definitions. Metrics people actually align on.
- Raise the bar on data reliability: You’ll implement validation, testing, observability, and monitoring across our data systems. Pipelines shouldn’t silently fail. Dashboards shouldn’t drift. Finance shouldn’t find surprises.
- Own the platform as it scales: You’ll improve performance, cost efficiency, and architectural design across our data stack (Airbyte, Dagster, dbt, Athena, Flink, Superset, and beyond). As traffic and customers grow, the data platform keeps up.
- Partner across the company: You’ll work closely with Product, Engineering, GTM, Finance, and Leadership. They’ll have hard questions. You’ll build the systems that make those answers obvious.
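The reconciliation work described above ("accurate, reconciled, and trusted") boils down to comparing the same figure computed two ways and failing loudly on divergence. A purely illustrative sketch in plain Python (the function name, tolerance, and numbers are assumptions, not ngrok's implementation):

```python
# Compare a metric computed from raw events against the figure reported
# downstream, and raise if they diverge beyond a small relative tolerance.
def reconcile(raw_total: float, reported_total: float,
              rel_tol: float = 1e-6) -> None:
    drift = abs(raw_total - reported_total)
    limit = rel_tol * max(abs(raw_total), abs(reported_total), 1.0)
    if drift > limit:
        raise ValueError(
            f"reconciliation failed: raw={raw_total} "
            f"reported={reported_total} drift={drift:.6f} "
            f"(limit {limit:.6f})"
        )

reconcile(1_234_567.0, 1_234_567.0)      # totals match: passes silently
try:
    reconcile(1_234_567.0, 1_234_000.0)  # totals drift: raises
except ValueError as exc:
    print(exc)
```

In practice a check like this would run as a scheduled asset or dbt test rather than inline code, but the contract is the same: mismatches surface as failures, not as "why doesn't this match?" threads.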
Qualifications
- You’re familiar with Python, SQL, and Scala
- You’re also comfortable in a language such as Go, Rust, C++, or Java (with bonus points for Go)
- You’re comfortable writing production-quality code and treating data systems like real software, not just queries in a notebook
- You’re interested in AWS infrastructure and Kubernetes, managed through Infrastructure as Code (Terraform or similar) — not click ops
- You’ve built and operated large-scale event streams, product telemetry, or high-volume ingestion pipelines in production
- You enjoy thinking about data models, invariants, lineage, and failure modes
- You care about data quality and observability, and you design systems that make errors visible, not silent
- You’re the person people ping when the numbers don’t add up-and you actually enjoy figuring out why
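"Making errors visible, not silent" often starts with schema checks: when an upstream producer changes shape, the pipeline should surface the drift instead of dropping or null-filling fields quietly. A purely illustrative sketch in plain Python (the field names and record are hypothetical):

```python
# Hypothetical expected schema for one event type.
EXPECTED_FIELDS = {"event_id", "account_id", "event_time", "bytes_out"}

def schema_drift(record: dict) -> tuple[set, set]:
    """Return (missing_fields, unexpected_fields) for one record, so
    drift can be logged, alerted on, or routed to quarantine."""
    keys = set(record)
    return EXPECTED_FIELDS - keys, keys - EXPECTED_FIELDS

missing, extra = schema_drift({
    "event_id": "e1",
    "account_id": "a1",
    "event_time": "2026-03-04T12:00:00Z",
    "region": "us",  # new field the schema doesn't know about
})
print(sorted(missing))  # ['bytes_out']
print(sorted(extra))    # ['region']
```

Production systems usually enforce this with schema registries or contract tests (Protobuf, Iceberg schema evolution, dbt tests); the sketch only shows the invariant.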
Bonus points for experience with:
- Usage-based billing, metering, revenue, or financial reporting systems
- Event-driven or streaming data architectures
- Customer-facing dashboards or internal executive reporting
ngrok runs entirely on AWS. Engineers develop using remote development tools or SSH connections to EC2 environments that run a full Kubernetes cluster of the ngrok stack, closely mirroring production.
We self-host a large part of our data stack on Kubernetes, namely Dagster, Superset, Airbyte, and Flink. Our core data warehouse is Athena, with data stored as Apache Iceberg tables. We also run some workloads on our ClickHouse Cloud infrastructure.
Our data codebase is a mix of Python and Scala 3. We use dbt for SQL modeling. All data tools are fully integrated with the rest of our developer platform.
The ngrok codebase is primarily Go and TypeScript. We use Postgres for persistence, Kafka for streaming, Protobuf for service boundaries, and Kubernetes, Terraform, Helm, and Buildkite to operate and ship reliably. React is used for user interfaces, and GitHub supports our development workflows and remembers everything.
This is a remote position for candidates outside of the Bay Area and a hybrid role for candidates within commuting distance to San Francisco. Our Bay Area employees commute to the office on Tuesdays and Wednesdays.
All candidates must be US-based, and legally authorized to work in the United States.
At this time, ngrok is unable to provide visa sponsorship for this position. Applicants must be authorized to work in the United States on a permanent, ongoing basis without the need for current or future sponsorship.
What We Offer
- Tier 1 (SF, LA, Seattle, NYC): $180,000 – $225,000
- Tier 2 (rest of US): $165,600 – $207,000
- Tier 1 (SF, LA, Seattle, NYC): $160,000 – $200,000
- Tier 2 (rest of US): $147,200 – $184,000
Job level and actual compensation will be evaluated based on factors including, but not limited to, qualifications objectively assessed during the interview process (including skills and prior relevant experience, potential impact, and scope of role), internal equity with other team members, market data, and specific work location. We provide an attractive mix of salary and equity. #LI-Remote
Posted: March 4, 2026