Coupa

Sr. Lead Software Engineer - 11282

US Remote · Senior
Other · Data Engineering · Software Engineering · Lead Software Engineer · Data Platform Software Engineer


Technical Tools
AWS · Azure · BigQuery · GCP · Python · Snowflake · SQL · code review · database design · ETL · mentoring
Coupa makes margins multiply through its community-generated AI and industry-leading total spend management platform for businesses large and small. Coupa AI is informed by trillions of dollars of direct and indirect spend data across a global network of 10M+ buyers and suppliers. We empower you with the ability to predict, prescribe, and automate smarter, more profitable business decisions to improve operating margins.

Why join Coupa?

🔹 Pioneering Technology: At Coupa, we're at the forefront of innovation, leveraging the latest technology to empower our customers with greater efficiency and visibility in their spend.
🔹 Collaborative Culture: We value collaboration and teamwork, and our culture is driven by transparency, openness, and a shared commitment to excellence.
🔹 Global Impact: Join a company where your work has a global, measurable impact on our clients, the business, and each other. 

Learn more on the Life at Coupa blog and hear from our employees about their experiences working at Coupa.

The Impact of a Sr. Lead Software Engineer at Coupa: 
 
At Coupa, we are building a centralized, modern data platform on Apache Iceberg, designed to operate across a multi-cloud environment and power customer-facing analytics at scale. Our platform processes large volumes of data every day, combining batch ingestion and CDC-based streaming events, and serves data that is directly consumed by our customers.
 
This is a senior, hands-on individual contributor role with high ownership and technical depth. A key part of the role is continuously improving latency, reliability, and performance as the platform scales.
 
You will work on challenging, real-world data problems—ranging from high-throughput ingestion to enabling sub-second data retrieval and query performance—while influencing architecture and best practices across the broader data ecosystem. If you enjoy solving complex data platform problems, working close to production systems, and seeing your work directly impact customers, this role offers constant opportunities to make a meaningful impact without becoming repetitive.
  • Design and implement scalable, high-throughput data ingestion systems that integrate internal and external data across domains
  • Design and build core data platform components, including ingestion, validation, orchestration, and lineage, with a focus on code quality and reliability
  • Build and evolve a centralized data lake using Apache Iceberg (or similar table formats)
  • Work across multi-cloud environments (AWS, GCP, Azure) to design and implement cloud-agnostic data ingestion and processing patterns
  • Contribute hands-on to the Semantic layer, ensuring data is easy to consume for BI and analytics teams
  • Partner with Senior Data Engineers, Platform Engineers, and Analytics Engineers to align how data is produced, stored, and consumed
  • Establish practical engineering standards for testing, observability, and operational excellence
  • Provide technical leadership through mentorship, code reviews, and design discussions, while remaining hands-on
  • 8-10+ years of experience in software or platform engineering, with a focus on building scalable data and analytics platforms
  • Strong understanding of data ingestion patterns at scale, including CDC, and how data should be modeled and stored in a data lake for fast, efficient retrieval
  • Proven experience building and operating large-scale data pipelines in production
  • Experience working with modern data warehouses such as Databricks, BigQuery, or Snowflake
  • Strong proficiency in Python and SQL, with a focus on writing production-quality, maintainable, and testable code
  • Hands-on experience working with cloud data services in AWS, GCP, or Azure
  • Experience working with query engines such as Presto or Trino to enable fast, reliable analytics over data lakes
  • Familiarity with Lakehouse architectures and table formats such as Iceberg or Delta Lake
  • Familiarity with data governance, lineage, metadata, cataloging, and data quality practices
  • Nice to have:
  • Exposure to semantic layers, metrics frameworks, or BI-friendly data modeling
  • Experience supporting analytics or AI/ML workloads
  • Candidates could be based out of any of these locations: San Francisco, Seattle, New York, Austin, Chicago, Phoenix, Northern Virginia (Fairfax, Arlington, Richmond) OR remote in the US.

    Location & Eligibility

    Where is the job
    Worldwide
    Fully remote, anywhere in the world
    Who can apply
    Same as job location
    Listed under
    Worldwide

    Listing Details

    Posted
    March 10, 2026
    First seen
    March 27, 2026
    Last seen
    May 10, 2026

    Posting Health

    Days active
    43
    Repost count
    0
    Trust Level
    32%
    Scored at
    May 10, 2026

    Signal breakdown

    freshness · source trust · content trust · employer trust
