Senior Data Engineer / Databricks
Quick Summary
We are looking for an experienced Data Engineer with strong hands-on experience in Databricks-based data platforms. The role focuses on building, optimizing, and maintaining scalable data pipelines, data lakes, and Lakehouse solutions that enable advanced analytics and data-driven products.
You will work on heavy data processing tasks, third-party integrations, ETL/ELT pipelines, and orchestration of data workloads in cloud environments. If you enjoy working with Databricks, Spark, large datasets, and modern cloud data stacks, and you are not constrained by a single programming language or tool, this role could be a great fit.
Responsibilities
- Take ownership of data engineering features, architecture, and code quality
- Design, implement, and maintain Databricks-based data pipelines and workflows
- Build and optimize ETL/ELT processes using Apache Spark on Databricks
- Design and manage data lakes and Lakehouse architectures (Delta Lake)
- Integrate diverse data sources and ensure reliable data ingestion
- Automate orchestration, scheduling, and monitoring of Databricks jobs
- Design and implement fault-tolerant and scalable data processing workflows
- Ensure high data quality, consistency, and accuracy across the platform
- Make informed decisions about storage, compute, and performance optimization
- Collaborate with analytics, BI, and business stakeholders to support data-driven products
Requirements
- 7+ years of relevant experience as a Data Engineer
- Strong hands-on experience with Databricks and Apache Spark
- Proficiency in Python or Scala (both strongly preferred)
- Very good knowledge of SQL, relational databases, and data warehousing concepts
- Solid experience with ETL/ELT principles and data pipeline design
- Hands-on experience with cloud platforms (Azure, AWS, or GCP), preferably running Databricks workloads
- Experience working with distributed systems and large-scale data processing
- Familiarity with Unix-like operating systems
- Experience with version control systems
- Strong communication skills and English language proficiency
Nice to Have
- Databricks certifications (Professional level)
- Experience with Delta Lake, performance tuning, and cost optimization
- Experience with streaming technologies (Kafka or similar)
- Knowledge of workflow orchestration tools (Databricks Workflows, Airflow, etc.)
- Experience with cloud-native and serverless data architectures
- Familiarity with containerization and virtualization (Docker, Kubernetes)
- Experience building data assets that directly support analytics and business decision-making
Listing Details
- Posted
- April 30, 2026
- First seen
- May 5, 2026
- Last seen
- May 6, 2026
Please let htecgroup know you found this job on Jobera.