decentriq

Senior Data Pipeline Engineer - Freelance (80-100%, remote/Zurich/Berlin)

Switzerland · Zürich · Freelance · Senior
Other · Engineer

Quick Summary

Overview

Decentriq is the rising leader in data-clean-room technology. With Decentriq, advertisers, retailers, and publishers securely collaborate on 1st-party data for optimal audience targeting and campaign measurement.

Requirements Summary

(Must have) Bachelor/Master/PhD in Computer Science, Data Engineering, or a related field and 5+ years of professional experience.

Technical Tools
airflow · python · rust · scala · spark · adtech · etl

Headquartered in Zürich, Decentriq is trusted by renowned institutions in the DACH market and beyond, such as RTL Ad Alliance, Publicis Media, and PostFinance.

Our analytics & ML pipelines are the heartbeat of this platform. Built in Python and Apache Spark, they run in Databricks workspaces. We are looking for a freelance Senior Data Pipeline Engineer (≥ 80%, start as soon as possible) for 6 months, with possible conversion to a full-time role, to support our team during a crunch driven by customer demand. The role can be fully remote (± 4 h CET) or based in our Zürich or Berlin office.

Would you like to help us make the advertising industry ready for the 1st-party era? Then we’d love to hear from you!

  • Own, Design, Build & Operate Data Pipelines – Take responsibility for our Spark-based pipeline, from development through production and monitoring.
  • Advance our ML Models – Improve and productionise models for AdTech use cases such as lookalike and demographics modelling.
  • AI-Powered Productivity – Leverage LLM-based code assistants, design generators, and test-automation tools to move faster and raise the quality bar. Share your workflows with the team.
  • Drive Continuous Improvement – Profile, benchmark, and tune Spark workloads, introduce best practices in orchestration & observability, and keep our tech stack future-proof.

Requirements

  • (Must have) Bachelor/Master/PhD in Computer Science, Data Engineering, or a related field and 5+ years of professional experience.
  • (Must have) Expert-level Python and PySpark/Scala Spark experience
  • (Must have) Proven track record building resilient, production-grade data pipelines with rigorous data-quality and validation checks.
  • (Must have) Data-platform skills: operating Spark clusters, job schedulers, or orchestration frameworks (Airflow, Dagster, custom schedulers).
  • (Plus) Working knowledge of the ML lifecycle and model serving; familiarity with techniques for audience segmentation or lookalike modelling is a big plus.
  • (Plus) Rust proficiency (we use it for backend services and compute-heavy client-side modules).
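
To make the "rigorous data-quality and validation checks" requirement concrete, here is a minimal plain-Python sketch of the kind of batch-validation gate such a pipeline might run before publishing data. All names and thresholds are hypothetical illustrations, not Decentriq's actual code; the real pipelines described above run on PySpark in Databricks.

```python
# Illustrative sketch only: a minimal data-quality gate of the kind a
# pipeline might run before publishing a table. Column names and the
# null-rate threshold are hypothetical.

def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def validate_batch(rows, required_columns, max_null_rate=0.01):
    """Return a list of human-readable violations; an empty list means the batch passes."""
    violations = []
    for col in required_columns:
        rate = null_rate(rows, col)
        if rate > max_null_rate:
            violations.append(f"{col}: null rate {rate:.1%} exceeds {max_null_rate:.1%}")
    return violations

if __name__ == "__main__":
    batch = [
        {"user_id": "u1", "segment": "auto"},
        {"user_id": "u2", "segment": None},
        {"user_id": None, "segment": "travel"},
    ]
    # Both columns have a 33.3% null rate, so both violate the 1% default.
    print(validate_batch(batch, ["user_id", "segment"]))
```

In a production Spark job the same idea would typically be expressed with DataFrame aggregations and wired into the orchestrator so a failing check blocks the downstream write.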

What We Offer

  • Competitive rates
  • Ownership, not just execution
  • An amazing and fun team distributed all over Europe

Location & Eligibility

Where is the job: Zürich, Switzerland (on-site at the office)
Who can apply: CH

Listing Details

Posted: January 7, 2026
First seen: May 6, 2026
Last seen: May 8, 2026

