Quick Summary
About PayPay India: PayPay is a fintech company whose service is enjoyed by 70 million users (as of July 2025), merely seven years since its launch in Japan in 2018. The company is now home to a very diverse team of members from more than 50 countries.
To build our payment services, we received technical cooperation from Paytm, a large payment-service company in India, and based on their customer-first technologies we created and expanded our smartphone payment service in Japan. We have therefore decided to establish a development base in India, a major IT country with many talented engineers, as evidenced by the cutting-edge mobile payment services that continue to emerge there.
PayPay's rapid growth is expanding our product teams and underscores the need for a resilient Data Engineering Platform to support increasing business demands. The Data Pipeline team creates, deploys, and manages this platform using leading technologies such as Databricks, Delta Lake, Spark, PySpark, Scala, and the AWS suite.
We are actively seeking skilled Data Engineers to join our team and contribute to scaling our platform across the organization.
Responsibilities
- Create and manage robust data ingestion pipelines leveraging Databricks, Airflow, Kafka, and Terraform.
- Ensure high performance, reliability, and efficiency by optimizing large-scale data pipelines.
- Develop data processing workflows using Databricks, Delta Lake, and Spark technologies.
- Maintain and improve the Data Lakehouse, utilizing Unity Catalog for efficient data management and discovery.
- Construct automation, frameworks, and enhanced tools to streamline data engineering workflows.
- Collaborate across teams to facilitate smooth data flow and integration.
- Enforce best practices in observability, data governance, security, and regulatory compliance.
Requirements
- Minimum 7 years as a Data Engineer or similar role.
- Hands-on experience with Databricks, Delta Lake, Spark, and Scala.
- Proven ability to design, build, and operate Data Lakes or Data Warehouses.
- Proficiency with Data Orchestration tools (Airflow, Dagster, Prefect).
- Familiarity with Change Data Capture tools (Canal, Debezium, Maxwell).
- Strong command of at least one primary language (Scala, Python, etc.) and SQL.
- Experience with data catalog and metadata management (Unity Catalog, AWS Lake Formation).
- Experience in Infrastructure as Code (IaC) using Terraform.
- Excellent problem-solving and debugging abilities for complex data challenges.
- Strong communication and collaboration skills.
- Capability to make informed decisions, learn quickly, and consider complex technical contexts.
- Ability to leverage AI/LLM-based tools in daily workflows (e.g., code development, reviews, testing, debugging, documentation) while ensuring human oversight, judgment, and accountability drive the final product.
*Please note that you cannot apply to PayPay (Japan-based) jobs or other positions in parallel or in duplicate.
- Please refer to PayPay's 5 senses to learn what we value at work.
Location & Eligibility
- Full Time
- Gurugram (WeWork)
*The development center requires you to work from the Gurugram office to establish a strong core team.
Listing Details
- Posted: May 8, 2026
- First seen: May 8, 2026
- Last seen: May 11, 2026