Senior / Staff Full-Stack Data Engineer – Databricks (Pune)
At Codvo, we are committed to building scalable, future-ready data platforms that power business impact. We believe in a culture of innovation, collaboration, and growth, where engineers can experiment, learn, and thrive. Join us to be part of a team that solves complex data challenges with creativity and cutting-edge technology.
We are seeking a Senior / Staff Full-Stack Data Engineer with deep Databricks expertise to design, build, and operate scalable data and machine learning pipelines. This role works closely with data scientists, platform teams, and application engineers to productionize analytics and ML workloads with high reliability, performance, and cost efficiency.
Responsibilities
Design, build, and maintain ETL/ELT pipelines on Databricks using Spark, Delta Lake, and Databricks Workflows
Build and operate batch and real-time data pipelines for ingestion, transformation, and orchestration
Operationalize machine learning inference pipelines authored by data scientists (batch and real-time)
Ensure consistency between model training and inference environments
Implement data quality checks, validation rules, monitoring, alerting, and automated recovery
Collaborate with data scientists to productionize models and optimize inference performance and cost
Implement CI/CD, DevOps, and MLOps best practices for data pipelines and ML workflows
Optimize compute, storage, and job configurations for performance and cost efficiency
Implement and manage enterprise data governance using Unity Catalog (schemas, lineage, ownership, documentation)
Work with Databricks infrastructure and platform configurations
Requirements
Strong hands-on experience with Databricks, Apache Spark, and Delta Lake
Proven experience building and operating production-grade data pipelines
Experience operationalizing machine learning models and inference pipelines
Strong understanding of data reliability, observability, and monitoring practices
Experience with CI/CD, DevOps, and MLOps workflows
Experience working with cloud platforms (AWS or Azure)
Familiarity with Unity Catalog and enterprise data governance concepts
Experience with spec-driven development and coding agents
Nice to Have
Experience with Databricks infrastructure tuning and cost optimization
Exposure to streaming frameworks and real-time data processing
Experience with Infrastructure-as-Code (Terraform or similar)
What Success Looks Like
Reliable, scalable, and cost-efficient Databricks data and ML pipelines
Smooth productionization of ML models with strong collaboration across teams
High data quality, observability, and platform stability
Well-governed data assets with clear ownership and lineage
Location & Eligibility
Location: Pune
Listing Details
- First seen: May 6, 2026
- Last seen: May 8, 2026
Please let codvo-team know you found this job on Jobera.