codvo-team · posted ~2 days ago

Fullstack Data Engineer (India) (Remote)

India · Pune · Remote · Mid-level
Data Engineer · Data


Technical Tools
Angular, AWS, Azure, Docker, FastAPI, Flask, GCP, GitHub Actions, Java, Jenkins, Kafka, Kubernetes, Looker, Power BI, Python, React, Scala, Snowflake, Spark, SQL, Tableau, CI/CD, data visualization, database design, ETL, machine learning, microservices, networking, streaming data

We are looking for a highly skilled Full Stack Data Engineer with expertise in data technologies such as Snowflake, Azure Data Factory, and Databricks to design, develop, and optimize end-to-end data pipelines, data platforms, and analytics solutions. This role combines strong data engineering, cloud platform expertise, and software engineering skills to deliver scalable, production-grade solutions.

Responsibilities


• Design and develop ETL/ELT pipelines on platforms such as Databricks (PySpark, Delta Lake, SQL), Informatica, Teradata, and Snowflake.
• Architect data models (batch and streaming) for analytics, ML, and reporting.
• Optimize performance of large-scale distributed data processing jobs.
• Implement CI/CD pipelines for Databricks workflows using GitHub Actions, Azure DevOps, or similar.
• Build and maintain APIs, dashboards, or applications that consume processed data (the full-stack aspect of the role).
• Collaborate with data scientists, analysts, and business stakeholders to deliver solutions.
• Ensure data quality, lineage, governance, and security compliance.
• Deploy solutions across cloud environments (Azure, AWS, or GCP).

Requirements


Core Databricks Skills:

• Strong in PySpark, Delta Lake, and Databricks SQL.
• Experience with Databricks Workflows, Unity Catalog, and Delta Live Tables.
• Experience with Snowflake and with data engineering patterns such as ETL and ELT.

Programming & Full Stack:

• Python (mandatory); expert-level SQL.
• Exposure to Java/Scala (for Spark jobs).
• Knowledge of APIs, microservices (FastAPI/Flask), or basic front-end development (React/Angular) is a plus.

Cloud Platforms:

• Proficiency with at least one of: Azure Databricks, AWS Databricks, or GCP Databricks.
• Knowledge of cloud storage (ADLS, S3, GCS), IAM, and networking.

DevOps & CI/CD:

• Git and CI/CD tools (GitHub Actions, Azure DevOps, Jenkins).
• Containerization with Docker; Kubernetes is a plus.

Data Engineering Foundations:

• Data modeling (OLTP/OLAP).
• Batch and streaming data processing (Kafka, Event Hub, Kinesis).
• Data governance and compliance (Unity Catalog, Lakehouse security).

Nice-to-Have

• Experience with machine learning pipelines (MLflow, Feature Store).
• Knowledge of data visualization tools (Power BI, Tableau, Looker).
• Exposure to graph databases (Neo4j) or RAG/LLM pipelines.
• Experience with Informatica and Teradata for ETL/ELT workloads.

Qualifications


• Bachelor’s or Master’s in Computer Science, Data Engineering, or a related field.
• 4–7 years of experience in data engineering, with deep expertise in Databricks.
• Strong problem-solving and analytical skills.
• Ability to work in fusion teams (business + engineering + AI/ML).
• Clear communication and documentation abilities.

At Codvo, we are committed to building scalable, future-ready data platforms that power business impact. We believe in a culture of innovation, collaboration, and growth, where engineers can experiment, learn, and thrive. Join us to be part of a team that solves complex data challenges with creativity and cutting-edge technology.


Location & Eligibility

Where is the job: Pune, India (remote within one country)
Who can apply: India (IN)

Listing Details

First seen: May 6, 2026
Last seen: May 8, 2026

Posting Health

Days active: 0
Repost count: 0
Trust level: 46%
Scored at: May 6, 2026

