Senior Data Engineer
Quick Summary
Filevine is a Legal AI company delivering Legal Operating Intelligence for the future of legal work. Grounded in a singular system of truth, Filevine brings together data, documents, and workflows.
Responsibilities
- Own and evolve our agentic data modeling and natural language data retrieval (text-to-SQL) capabilities: build and curate semantic models, refine prompts, expand verified question libraries, and measure answer quality so that natural-language analytics get more accurate over time.
- Design and build batch and streaming data pipelines that ingest, transform, and model data from Filevine's product, CRM, billing, and telemetry systems into trusted, well-documented data products.
- Build the data foundations that power agentic AI workflows and LOIS — including feature pipelines, retrieval datasets, and low-latency serving paths for LLM-based reasoning over customer data.
- Establish reliability and governance standards including data quality checks, lineage, monitoring, incident response, access control, and PII handling consistent with our compliance posture.
- Partner with product and engineering stakeholders to define event contracts, model business concepts (matters, firms, users, billing) consistently, and reduce ambiguity across downstream consumers.
- Lead the evaluation and adoption of emerging tools across the modern data stack, recommending right-fit solutions that align with Filevine's strategic and security goals.
- Provide technical mentorship within the Data Engineering team, contribute to code reviews and design documents (DDs/ADRs), and help raise the bar on data engineering practice at Filevine.
- Participate in on-call rotations to maintain SLAs for production data pipelines and analytics surfaces.
Qualifications
- 5+ years of professional data engineering or backend engineering experience, with a proven track record of delivering production-grade data systems that drive measurable business outcomes.
- Significant hands-on experience operating a modern cloud data warehouse in production (e.g., Snowflake, BigQuery, Redshift, Databricks, Synapse, or equivalent) — including performance tuning, warehouse and cost management, role-based access control, and orchestration of warehouse-native compute (stored procedures, UDFs, streams/tasks, or equivalent).
- Demonstrated experience building with Agentic AI or LLM-powered systems in production — e.g., RAG pipelines, tool-using agents, MCP servers, warehouse-native LLM functions (such as Snowflake Cortex, BigQuery ML, or Databricks AI), or comparable frameworks.
- Expertise in advanced SQL and Python for building reliable, well-tested data pipelines and transformations.
- Experience with modern data modeling and transformation tooling such as dbt, including testing, documentation, and backward-compatible model design that supports self-service analytics.
- Experience with workflow orchestration (Airflow, Dagster, or similar) and cloud-native deployment on AWS, Azure, or GCP.
- Strong fundamentals in data modeling (dimensional, star/snowflake schemas), distributed systems, performance tuning, and data quality / observability principles.
- Professional experience with modern software development methodologies: Agile/Kanban, Git, CI/CD, and DevOps.
- Excellent written and verbal communication skills, with the ability to explain complex technical and data concepts to both technical and non-technical stakeholders.
- B.S., M.S., or Ph.D. in Computer Science, Information Systems, Engineering, or a related field — or equivalent professional experience.
Nice to Have
- Hands-on Snowflake experience, including Snowpipe, streams/tasks, data sharing, and cost/governance tuning at scale.
- Experience with Snowflake Cortex Analyst specifically, including authoring and iterating on semantic models and verified queries.
- .NET / C# experience, or familiarity with reading and integrating against a .NET-based application backend.
- Experience using modern UI development tools, particularly Svelte or React.
- Experience supporting machine learning workflows: feature stores, training datasets, or real-time scoring infrastructure.
- Experience in SaaS or product-led growth environments, including product analytics and revenue/usage telemetry.
- Infrastructure-as-code experience (Terraform), containerization (Docker, Kubernetes), and deployment automation (e.g., Octopus Deploy).
- Familiarity with the legal tech domain, document-heavy data, or working with unstructured data at scale.
- Track record of mentoring engineers and contributing to hiring and team-building.
Why This Role Matters
- You will be a core builder of the data and AI foundations that LOIS and Filevine's product surfaces are built on.
- Your work will directly shape how legal professionals query, reason over, and act on their data — and will determine how fast, accurate, and trustworthy our agentic AI experiences become.
Location & Eligibility
Listing Details
- Posted: May 13, 2026
Filevine is a cloud-based legal technology company providing case management, document management, and automation tools for law firms and legal departments. It aims to simplify and elevate complex legal work.