Senior Data Engineer (Remote, US)
Quick Summary
Openly is seeking a Senior Data Engineer to be a technical leader on its data engineering team, designing and delivering scalable, real-time data pipelines on GCP using tools such as BigQuery, Kafka (Aiven/Debezium), and Airflow.
Openly is rebuilding insurance from the ground up. We are re-envisioning and enhancing every aspect of the customer experience. Doing this requires a rapidly growing team of exceptional, curious, empathetic people with a wide range of skill sets, spanning technology, data science, product, marketing, sales, service, claims handling, finance, etc.
We created Openly because we saw an evident gap in the market for premium insurance made simple. Consumers deserve more complete coverage at competitive prices.
- The Price Difference: Using cutting-edge data and technology, we provide you with customizable, competitive prices to protect your most valuable assets.
- The Policy Difference: Coverages are truly customizable to meet your individual protection needs, for both standard coverages and optional add-ons.
- The Experience Difference: From tailored claims handling to highly responsive customer service, we are focused on making the home insurance purchasing process a better overall experience.
At Openly, our people are just as important as our product. For us, collaboration, communication, and work-life balance are more than nice-to-haves; they're the must-haves that make us who we are. We believe a great company is the result of a shared set of values, so we look for these qualities in every candidate we hire.
- Integrity
- Empathy
- Teamwork
- Curiosity
- Urgency
We've designed our hiring process with you, the candidate, in mind. At every step, you have the chance to present your strengths and learn more about what makes Openly a great place to work.
We embrace individuality and believe diverse teams are winning teams. Our commitment to inclusion across race, gender, age, religion, identity, and experience drives us forward every day.
We are seeking a Senior Data Engineer to be a technical leader on our data engineering team. This role involves designing and delivering robust, scalable data solutions for Openly's insurance platform. You will apply deep expertise across the data lifecycle, from architecture and pipeline design to optimization and mentoring, to shape how we build, manage, and access data for our products and business intelligence.
Responsibilities
- Architect, build, and maintain high-quality, scalable data pipelines, data models, and data infrastructure across our GCP-based platform.
- Lead technical design decisions related to data architecture, pipeline orchestration, data modeling, and cloud infrastructure.
- Develop and optimize complex SQL queries and data transformations in BigQuery and PostgreSQL, ensuring performance, reliability, and correctness.
- Write production-grade code in Python and/or Go to build and enhance data management frameworks, services, and pipeline tooling.
- Partner with data science, business intelligence, product, and operations teams to translate business requirements into reliable data solutions.
- Own and improve the full Software Development Lifecycle (SDLC) for data projects: design, development, testing, CI/CD deployment, monitoring, and maintenance.
- Leverage streaming and event-driven architectures using Kafka (Aiven/Debezium) to build real-time data pipelines.
- Utilize and optimize distributed data processing frameworks such as Apache Spark for large-scale data transformations.
- Mentor and provide technical guidance to junior and mid-level data engineers; conduct code reviews and promote engineering best practices.
- Share your knowledge within the data engineering team and the broader engineering organization through all-hands presentations, learning hours, domain meetings, and written documentation.
Our Tech Stack
- Backend/Core: Go & PostgreSQL
- Frontend: Browser-based, VueJS, Vite, Webpack, Nuxt & Tailwind
- Research/Data Science: R, ArcGIS, Vertex, & Python
- Data: GCP GCS, BigQuery, Composer/Airflow, Cloud Functions, Postgres, SQL, Python, Aiven Debezium and Kafka, Fivetran
- Infrastructure: Google Cloud, specifically Cloud Run, Kubernetes, Pub/Sub, BigQuery, and CloudSQL, managed with Terraform. We use GitHub for code hosting, DataDog for monitoring, PagerDuty for on-call, and CircleCI for running our CI/CD pipelines.
- Remote work tools: Slack, Zoom
Requirements
- 4+ years of data engineering and data management experience, with a proven track record of delivering complex, production-grade data systems.
- Strong proficiency in SQL and SQL optimization, including query tuning, indexing strategies, execution plan analysis, and data modeling in BigQuery and PostgreSQL.
- Expert-level scripting and programming in Python; experience with Go is a strong plus.
- Deep expertise with Google Cloud Platform (GCP), including BigQuery, GCS, Composer/Airflow, Cloud Functions, Cloud Run, Pub/Sub, and CloudSQL.
- Proven experience building and operating event-driven and streaming data pipelines using Kafka or similar technologies (Aiven/Debezium experience a plus).
- Strong understanding of modern data warehouse and Lakehouse architectures, including multi-layered data modeling patterns (bronze/silver/gold or equivalent).
- Infrastructure as Code (IaC) experience with Terraform to define, manage, and version cloud data infrastructure.
- Solid understanding of Software Development Lifecycle (SDLC) best practices: CI/CD pipelines, automated testing, code review processes, code repositories, and deployment management.
- Experience with data replication tools (e.g., Fivetran, Debezium) and understanding of CDC (Change Data Capture) patterns.
- Ability to independently drive data architecture decisions, translate business requirements into source-to-target data mappings, and deliver working, maintainable solutions.
- Strong communication skills; able to effectively collaborate with and educate both technical and non-technical stakeholders.
- Experience mentoring engineers and leading technical initiatives within a team environment.
Nice to Have
- Hands-on experience with Apache Spark or other distributed data processing frameworks for large-scale batch and/or streaming workloads.
- Familiarity with data observability and monitoring tools (e.g., DataDog, Monte Carlo, Great Expectations).
- Prior experience in a regulated industry such as insurance, finance, or healthcare.
- Contributions to open-source data engineering projects or internal data platform tooling.
- Experience with AI tools such as Claude, Copilot, or equivalent.
What We Offer
Listing Details
- Posted: March 20, 2026