Samba TV · posted 1 month ago
PLN 180,000–330,000/yr

Data Scientist

Warsaw · International · Full-Time Employee · Entry level
Data Science · Data Scientist · Data · Data & AI

Overview
Samba is an AI-powered media intelligence company on a mission to give marketers the complete picture of their audiences. Our AI indexes media consumption across millions of smart TVs and 2.5 billion web pages, combining that data with third-party signals through the Samba Knowledge Graph, a map of the real interests, behaviors, and purchase intent of 1.5 billion user profiles globally. Brands, agencies, publishers, and platforms use Samba to make smarter decisions across every stage of the marketing funnel.

As a mid-level Data Scientist at Samba in Warsaw, you will own end-to-end delivery of significant data science projects with minimal guidance. You are a reliable, autonomous contributor with deep expertise in at least one of Samba's core domains — measurement or audience modelling — and the technical range to build production-ready solutions using modern ML and AI methodologies. You'll work closely with peers, product, and engineering, and play an active role in mentoring junior data scientists on the team.
  • Own end-to-end delivery of significant data science projects — from problem scoping and approach design through to production deployment
  • Make sound, independently reasoned decisions on methodology, model selection, and evaluation; document them clearly in technical solution documents covering problem statement, approach, metrics, and timeline
  • Lead solution design for your own initiatives; break down complex epics into well-scoped user stories with clear acceptance criteria, adopting DataOps and MLOps best practices throughout — experiment tracking, pipeline orchestration, model monitoring, and reproducibility
  • Build production-quality Python and PySpark code on Databricks — well-tested, documented, and reusable — and implement advanced ML and AI-powered workflows including entity resolution, probabilistic record linkage, embedding-based matching, semantic similarity, and LLM-augmented pipelines
  • Develop and maintain reusable tools, libraries, and documentation that improve team efficiency and technical standards; conduct code reviews with constructive, specific feedback that raises the bar
  • Mentor junior data scientists on technical execution, code quality, and career development; lead internal talks or workshops on ML topics
  • Collaborate cross-functionally with product, engineering, and operations — translate business requirements into technical specifications, partner with data engineering on scalable pipeline design, and participate in cross-functional design reviews and working groups
  • Bachelor's degree in Statistics, Data Science, Computer Science, Mathematics, or a related quantitative field required; Master's strongly preferred
  • 3–5 years of hands-on data science experience with demonstrated ability to own and deliver complex, multi-sprint projects independently
  • Advanced Python with production-quality code, testing, and documentation; strong SQL and PySpark for billion-row datasets
  • Databricks workflows, Delta Lake, and job orchestration; working knowledge of cloud platforms (AWS or GCP)
  • Solid command of core ML — regression, classification, clustering, model evaluation, and experimental design — applied to complex, high-volume data
  • Proficiency with MLOps practices: experiment tracking, pipeline orchestration (Airflow), and reproducible model deployment
  • Exposure to modern AI methodologies: RAG systems, LLM-augmented models, vector databases, and semantic search
  • Strong communicator — able to translate technical work into clear documentation, user stories, and cross-functional conversations
  • Demonstrated ability to mentor junior data scientists and contribute to team standards
  • Hands-on experience with knowledge graph construction, entity resolution, or semantic data modeling (RDF, OWL, SPARQL, or equivalent graph frameworks)
  • Familiarity with probabilistic record linkage, identity graph approaches, or embedding-based entity matching at scale
  • Experience with causal inference methods (A/B testing, synthetic control, uplift modeling)
  • Experience with deduplication, enrichment, or web-to-TV linkage problems
  • Background in media, ad tech, or measurement — TV viewership (ACR/STB data), digital audience modeling, cross-platform measurement (linear + CTV/OTT), or identity resolution in privacy-constrained environments
  • Familiarity with the measurement and identity vendor landscape (Nielsen, Comscore, LiveRamp, The Trade Desk)
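To give candidates a feel for the entity-resolution and record-linkage work mentioned above, here is a minimal, self-contained sketch of fuzzy matching between two lists of company records. It uses character n-gram count vectors with cosine similarity as a cheap stand-in for learned embeddings; this is a toy illustration, not Samba's actual pipeline, and all names and thresholds are hypothetical.

```python
from collections import Counter
import math

def ngram_vector(text: str, n: int = 3) -> Counter:
    """Character n-gram counts: a cheap sparse 'embedding' of a record string."""
    s = f"  {text.lower()}  "  # pad so boundary characters form n-grams too
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def link_records(left, right, threshold=0.4):
    """Pair each left record with its highest-scoring right candidate,
    keeping only pairs above the similarity threshold."""
    pairs = []
    for rec in left:
        scored = [(cosine(ngram_vector(rec), ngram_vector(cand)), cand)
                  for cand in right]
        score, best = max(scored)
        if score >= threshold:
            pairs.append((rec, best, round(score, 2)))
    return pairs

pairs = link_records(
    ["Acme Corp.", "Globex LLC"],
    ["ACME Corporation", "Globex, L.L.C.", "Initech"],
)
```

In production this style of matching is typically replaced by learned embeddings, blocking strategies to avoid all-pairs comparison, and probabilistic (Fellegi–Sunter-style) scoring, but the shape of the problem — vectorize, score, threshold — is the same.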
Location & Eligibility

    Where is the job
    Warsaw
    On-site at the office
    Who can apply
    Same as job location
    Listed under
    Worldwide

    Listing Details

    Posted
    March 5, 2026
    First seen
    March 26, 2026
    Last seen
    April 30, 2026

    Posting Health

    Days active
    34
    Repost count
    0
    Trust Level
    42%
    Scored at
    April 30, 2026

    Signal breakdown

    freshness · source trust · content trust · employer trust
