Truecaller

Senior Data Engineer

Bangalore · Senior
Engineering · Data Engineering · Data & AI


Truecaller's mission is to build trust in communication by making it safer, smarter, and more efficient. Born in Sweden and trusted by the world. Here's why we stand out:

  • We are trusted by over 450 million active users every month across 190+ countries
  • We identify over 15 billion calls daily, helping users avoid spam and scams
  • We are powered by a team of 450+ employees from 45+ nationalities

We always look for people who take initiative, own their work, and keep raising the bar. An entrepreneurial mindset matters here, especially when it turns bold ideas into real actions. We stay collaborative and focused, always searching for smarter paths forward. If you want to make an impact and grow with a team that inspires millions, you’ll fit right in.

You will play an important role in the development of data pipelines, frameworks and models that support our understanding of users and enable better product decisions. You will contribute to empowering the product teams with a complete self-serve analytics platform by working on scalable and robust solutions while collaborating with data engineers, data scientists and data analysts across the company.

Responsibilities

  • Design, develop, and maintain scalable data pipelines to process and analyze large data sets in real-time and batch environments.
  • Take ownership of ETL pipelines and play a key role in the team.
  • Collaborate with data scientists, analysts, and stakeholders to gather data requirements, translate them into robust ETL solutions, and optimize the data flows.
  • Implement best practices for data ingestion, transformation, and data quality to ensure data consistency and accuracy.
  • Develop, test, and deploy complex data models and ensure the performance, reliability, and security of the infrastructure.
  • Own the architecture and design of data pipelines and systems, ensuring they are aligned with business needs and capable of handling growing volumes of data.
  • Make data-driven decisions, informed by past experience.
  • Monitor data pipeline performance and troubleshoot any issues related to data ingestion, processing, or extraction.
  • Work with big data technologies to enable storage, processing, and analysis of massive datasets.
  • Ensure compliance with data protection and privacy regulations, particularly in regions like the EU where GDPR compliance is essential.
Requirements

  • 6+ years of experience as a Data Engineer.
  • Hands-on experience with Airflow for managing workflows and building complex data pipelines in a production environment.
  • Experience working with big data and ETL development.
  • Strong proficiency in SQL and experience working with relational databases.
  • Programming skills in Apache Spark (PySpark or Scala), Kafka, or Flink.
  • Experience working with cloud computing services (e.g., GCP, AWS, Azure).
  • Experience with data science workflows.
  • Experience in data modeling and creating data lakes using GCP services like BigQuery and Cloud Storage.
  • Expertise in containerization and orchestration using Docker and Kubernetes (GKE) for scaling applications and services on GCP.
  • Experience building data models and transformations using dbt, following software engineering best practices (modularity, testing).
  • Version-control experience with Git and familiarity with CI/CD pipelines (e.g., GitHub Actions).
  • Strong understanding of data security, encryption, and GCP IAM roles to ensure privacy and compliance (especially in relation to GDPR and other regulations).
  • Experience in ML model lifecycle management (deployment, versioning, and retraining) using GCP tools such as Vertex AI, TensorFlow Extended (TFX), or Kubeflow.
  • Experience working with data analysts and data scientists to build production systems.
  • Excellent problem-solving and communication skills, both with peers and with experts from other areas.
  • Self-motivated, with a proven ability to take initiative and solve problems.
  • Familiarity with event-driven architecture and microservices using Cloud Pub/Sub, Cloud Run, or GKE to build highly scalable, resilient, and loosely coupled systems.
  • Proficiency in backend programming languages like Go, Python, Java, or Scala specifically for building highly scalable, low-latency data services and APIs.
  • Hands-on experience in designing and implementing RESTful APIs or gRPC services for seamless integration with data pipelines and external systems.
  • Hands-on experience with GCP-native tools for advanced analytics, such as Looker, Data Studio, or BigQuery BI Engine, for building visualizations and reporting dashboards.
  • Knowledge of real-time data processing and analytics using Apache Flink, Kafka Streams, or Druid for ultra-low latency use cases.
  • Experience with data observability tools such as Monte Carlo, Databand.ai, or OpenLineage, ensuring the integrity and quality of data across pipelines.
  • Experience optimizing Cloud Storage, BigQuery partitioning, and clustering strategies for large-scale datasets, ensuring cost-effectiveness and query performance.
  • Domain knowledge in specific industries (e.g., telecom, calls, and message communication) where large-scale data pipelines and regulatory compliance are critical, allowing you to bring domain-specific expertise to complex challenges.

What We Offer

A comprehensive compensation package: learning and development allowance, voluntary provident fund (VPF) and/or National Pension Scheme (NPS) tax-saving options, and crèche allowance.
Modern tools to do your best work: Choose your preferred computer and phone within our budget, so you can work comfortably and efficiently.
A people-focused office culture: We value in-person collaboration and follow an office-first model, with some flexibility. Our offices offer a vibrant environment with opportunities to learn, connect, and recharge, from breakfast, lunch and quiet spaces to team activities such as movie nights, tech meetups, and cultural events. There's something for everyone.
Truecaller’s “Lab Days” offer a space for imagination: 5 days each quarter, where everyone steps away from their normal tasks to explore new, bold ideas and build things they’ve always wanted to. It’s a space where curiosity leads the way, and prototypes take shape. Some concepts even make it into production, and a few have grown into real features used by millions today. Lab Days allow you to be creative, learn fast, and help shape Truecaller's future.

We will fill the position as soon as we find the right candidate, so please send your application as soon as possible. As part of the recruitment process, we will conduct a background check.

We only accept applications in English.

Location & Eligibility

Where is the job: Bangalore (on-site at the office)
Who can apply: Same as job location
Listed under: Worldwide

Listing Details

Posted: April 14, 2026
First seen: April 14, 2026
Last seen: April 29, 2026

