Quick Summary
Are you obsessed with data, partner success, taking action, and changing the game? If you have a whole lot of hustle and a touch of nerd, come work with Pattern! We want you to use your skills to push one of the fastest-growing companies headquartered in the US to the top of the list.
Designing and implementing robust ETL/ELT pipelines using Airflow, DBT, and cloud-native architectures.
Writing sophisticated, production-grade Python code to automate data orchestration and processing.
Building/optimizing complex SQL queries and dimensional models for OLAP- and OLTP-based systems.
Collaborating with cross-functional teams to ingest and harmonize data from dozens of global marketplaces.
Building and maintaining infrastructure-as-code and containerized workflows to ensure platform reliability.
Leveraging AI thoughtfully to optimize processes and workflows.
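The pipeline work described above can be sketched, at its simplest, as an extract/transform/load sequence. The snippet below is a minimal, hypothetical illustration in plain Python — the feed, field names, and in-memory "warehouse" are invented for the example and are not Pattern's actual stack (which, per this posting, centers on Airflow and DBT):

```python
import json
from datetime import date

# Hypothetical marketplace feed -- illustrative only, not real Pattern data.
RAW_FEED = '[{"sku": "A1", "units": "3", "price": "19.99"},' \
           ' {"sku": "B2", "units": "0", "price": "5.00"}]'

def extract(feed: str) -> list[dict]:
    """Parse the raw JSON feed into records."""
    return json.loads(feed)

def transform(records: list[dict]) -> list[dict]:
    """Cast string fields to numbers, compute revenue, drop zero-unit rows."""
    out = []
    for r in records:
        units = int(r["units"])
        if units == 0:
            continue  # filter out rows with nothing sold
        out.append({
            "sku": r["sku"],
            "units": units,
            "revenue": round(units * float(r["price"]), 2),
            "loaded_on": date.today().isoformat(),
        })
    return out

def load(records: list[dict], sink: list) -> None:
    """Append cleaned records to a sink standing in for a warehouse table."""
    sink.extend(records)

warehouse: list[dict] = []
load(transform(extract(RAW_FEED)), warehouse)
print(warehouse)
```

In a production setting each step would typically be an Airflow task (or a DBT model for the transform), but the extract → transform → load shape is the same.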
Bachelor’s degree in Computer Science, Data Science, or a related technical field (or equivalent experience).
7+ years of professional data engineering experience with a heavy focus on ETL/ELT and data modeling.
5+ years of expert-level SQL, including window functions, CTEs, and deep performance tuning.
4+ years of professional Python development specifically tailored for data pipelines and tooling.
3+ years of hands-on experience building/optimizing large-scale data warehouses like Snowflake, BigQuery, or Redshift.
Proficiency with open-source frameworks such as Apache Spark, Trino, Kafka, and Debezium.
A "Data Fanatic" mindset with experience handling petabyte-scale diverse datasets.
Successfully executing the migration or optimization of massive data streams with zero downtime.
Consistently delivering clean, well-documented, and high-quality code that sets the standard for the engineering team.
Acting as a 'Doer' by taking the initiative to resolve platform bottlenecks before they impact partners.
Elevating the technical bar of the team through mentorship and the introduction of innovative engineering practices.
Opportunity to lead major architectural shifts within a rapidly expanding global tech company.
Regular networking and collaboration with high-level technical leadership and AI experts.
Upward mobility toward Staff Data Engineer or specialized technical leadership roles.
Continuous learning opportunities with cutting-edge technologies like Apache Iceberg and real-time streaming architectures.
30 Days: Complete onboarding, gain a deep understanding of current data architectures, and begin contributing to existing projects.
60 Days: Identify and implement at least one major performance optimization within the data environment and lead a small-scale pipeline project.
90 Days: Take responsibility for a significant segment of data processes, collaborating with other engineers and contributing to the long-term roadmap for lakehouse integration.
This role reports directly to the Director of Data Engineering.
You will be joining a growing team of data professionals spanning multiple geographies.
In this role, you will collaborate closely with Data Scientists, Software Engineers, AI Engineers, and Product Managers as well as other departments including Marketing and Sales.
Game Changers: A game changer is someone who looks at problems with an open mind and shares new ideas with team members, regularly reassesses existing plans and attaches realistic timelines to goals, makes profitable, productive, and innovative contributions, and actively pursues improvements to Pattern's processes and outcomes.
Data Fanatics: A data fanatic is someone who recognizes problems and seeks to understand them through data, draws unbiased conclusions based on data that lead to actionable solutions, and continues to track the effects of those solutions using data.
Partner Obsessed: An individual who is partner obsessed clearly explains the status of projects to partners and relies on constructive feedback, actively listens to partners' expectations and delivers results that exceed them, prioritizes the needs of their partners, and takes the time to create a personable experience for those interacting with Pattern.
Team of Doers: Someone who is part of a team of doers uplifts team members and recognizes their specific contributions, takes initiative to help in any circumstance, actively contributes to supporting improvements, and holds themselves accountable to the team as well as to partners.
Phone Interview with Talent Acquisition
Video Interview
Onsite Interview
Executive Review
Offer
Strong Nice-to-Haves: Expertise in AWS services (Terraform, EKS, Lambda), experience with Apache Iceberg or Delta Lake, and a background in real-time streaming (Kafka/Kinesis).
Interview Tips: Be prepared to discuss your experience managing large-scale data outages or complex optimizations; highlight any 'Partner Obsessed' moments where your data work solved a critical business problem; and demonstrate your 'Data Fanatic' nature through a deep dive into a past side project or complex pipeline you built.
Listing Details
- Posted: May 6, 2026
- First seen: May 7, 2026
- Last seen: May 8, 2026
Pattern is a leading ecommerce acceleration platform that provides a comprehensive range of services to help brands grow in the digital marketplace.