
Artificial Intelligence, Technical Referent - AI Lab

Madrid · Full Time · Mid-level
Data Science · Other · Artificial Intelligence

Why should you join dLocal?
 
dLocal enables the biggest companies in the world to collect payments in 40 countries in emerging markets. Global brands rely on us to increase conversion rates and simplify payment expansion effortlessly. As both a payments processor and a merchant of record where we operate, we make it possible for our merchants to make inroads into the world’s fastest-growing, emerging markets. 
 
By joining us you will be a part of an amazing global team that makes it all happen. Being a part of dLocal means working with 1000+ teammates from 30+ different nationalities and developing an international career that impacts millions of people’s daily lives. We are builders, we never run from a challenge, we are customer-centric, and if this sounds like you, we know you will thrive in our team.
 
 
 

What's the Opportunity? 
You'll join the AI Lab, a team whose mission is to validate high-value emerging AI and automation technologies and de-risk their adoption across dLocal. This is a rare opportunity to work at the frontier of applied AI in fintech: running rigorous experiments on the latest models and tools, and turning results into decisions that shape how a global payments company like dLocal adopts AI.
 

As Technical Referent, you will be the go-to expert for technology scouting and evaluation within dLocal. You will run instrumented spikes and benchmarks on emerging AI technologies, produce clear recommendations for internal teams (business, legal, IT, etc.) at dLocal, and coordinate hand-offs to the teams that take validated technologies into production.

You will partner closely with the engineering teams responsible for taking your proofs of concept into production (either domain teams or platformization teams), covering platform infrastructure, enablement programs, IT automations, and knowledge systems. Your core focus is evaluation and recommendation, not long-term ownership of production systems.

 
 

Technology Scouting & Evaluation

  • Run short, instrumented spikes and benchmarking on new models, tools and frameworks: LLMs, vector databases, orchestration frameworks, copilots, assistants and more.

  • Compare vendor and open-source options, documenting trade-offs across quality, cost, latency, security and integration complexity.

  • Deliver concise decision memos with clear recommendations: adopt, watch, or avoid.

Evaluation Harnesses & Sandboxes

  • Design and maintain evaluation environments (e.g., datasets, prompts, scenarios, telemetry) to test models under realistic constraints.

  • Build automation and tooling to measure quality, robustness, latency and cost, including regression tracking over time.

  • Ensure every evaluated technology has benchmark coverage and a documented view of its risks and limitations.
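The evaluation-harness responsibilities above can be illustrated with a minimal sketch, using only the Python standard library. The model, prices, tolerance, and baseline here are all hypothetical stand-ins, not a real harness: the idea is simply to score scenarios for quality, latency and cost, then compare against a stored baseline to catch regressions over time.

```python
import statistics
import time
from dataclasses import dataclass

@dataclass
class EvalResult:
    scenario: str
    correct: bool
    latency_s: float
    cost_usd: float

def run_eval(model_fn, scenarios, usd_per_call):
    """Run each scenario through the model, recording quality, latency and cost."""
    results = []
    for prompt, expected in scenarios:
        start = time.perf_counter()
        answer = model_fn(prompt)
        elapsed = time.perf_counter() - start
        results.append(EvalResult(prompt, expected in answer.lower(), elapsed, usd_per_call))
    return results

def summarize(results):
    return {
        "accuracy": sum(r.correct for r in results) / len(results),
        "p50_latency_s": statistics.median(r.latency_s for r in results),
        "total_cost_usd": sum(r.cost_usd for r in results),
    }

def is_regression(summary, baseline, max_accuracy_drop=0.02):
    """Flag a run whose accuracy fell more than the allowed tolerance vs. baseline."""
    return baseline["accuracy"] - summary["accuracy"] > max_accuracy_drop

# Hypothetical stand-in for a real model endpoint.
def toy_model(prompt):
    return "Paris" if "france" in prompt.lower() else "unknown"

scenarios = [
    ("What is the capital of France?", "paris"),
    ("What is the capital of Peru?", "lima"),
]
summary = summarize(run_eval(toy_model, scenarios, usd_per_call=0.001))
baseline = {"accuracy": 1.0}  # e.g., loaded from a stored baseline file
print(summary["accuracy"])               # 0.5 on this toy set
print(is_regression(summary, baseline))  # True: accuracy dropped beyond tolerance
```

In a real harness the baseline summary would be persisted per model and version, so that `is_regression` can track drift across repeated runs rather than a single comparison.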

Readiness Playbooks & Hand-offs

  • For promising technologies, produce readiness playbooks describing recommended patterns, guardrails and integration guidelines.

  • Coordinate with platformization teams to turn validated technologies into platformized capabilities.

  • Track which validated items progress to platformization or pilots, and capture learnings to sharpen future bets.

Governance, Risk & Standards

  • Work with Security, Legal, Compliance and other AI teams to document risk assessments, mitigations and governance recommendations for each evaluated technology.

  • Maintain checklists, decision templates and lightweight standards reusable across evaluations and by partner teams.

  • Incorporate learnings from third-party AI tooling already in use (e.g., external copilots, the AWS AI suite) into adoption guidelines.

Collaboration, Mentoring & Community

  • Partner with other AI teams and domain teams to ensure clear boundaries and smooth collaboration.

  • Participate in hiring as a technical evaluator and culture champion.

  • Mentor engineers in the Lab and adjacent teams on evaluation methods, benchmarking and experimental design.

  • Share knowledge through internal write-ups, tech talks and occasional external meetups and conferences.

Technical depth

  • 8+ years in software engineering; 3+ years working hands-on with LLMs and AI tooling.

  • Strong experience with distributed systems and event-driven architectures, both synchronous and asynchronous.

  • Proficiency with LangChain, LangGraph or similar orchestration frameworks, including custom tools and multi-step workflows.

  • Solid knowledge of AWS infrastructure and how to run evaluation workloads in a secure, cost-aware way.

  • Track record designing and running benchmarks comparing AI models and tools under real constraints.
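The orchestration-framework experience listed above (LangChain, LangGraph and similar, with custom tools and multi-step workflows) boils down to routing shared state through a graph of node functions. This is a hedged, library-free sketch of that pattern; the node names, state keys and the single fact in `retrieve` are illustrative stand-ins, not a real framework API.

```python
# Each node reads/updates shared state and returns the next node's name
# (None means the workflow is finished).
def plan(state):
    state["steps"] = ["retrieve", "answer"]
    return "retrieve"

def retrieve(state):
    state["context"] = "dLocal collects payments in 40 countries."
    return "answer"

def answer(state):
    state["output"] = f"Answer using: {state['context']}"
    return None

NODES = {"plan": plan, "retrieve": retrieve, "answer": answer}

def run_graph(start, state):
    """Walk the graph from `start`, letting each node choose its successor."""
    node = start
    while node is not None:
        node = NODES[node](state)
    return state

state = run_graph("plan", {})
print(state["output"])  # Answer using: dLocal collects payments in 40 countries.
```

Frameworks like LangGraph add typed state schemas, conditional edges and persistence on top of this core loop, but evaluating them starts from understanding this control flow.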

Evaluation & decision-making

  • Able to turn ambiguous "we should try this new thing" ideas into well-scoped evaluation plans with clear hypotheses and metrics.

  • Comfortable making trade-off calls (quality vs. latency vs. cost vs. vendor lock-in) and documenting them clearly.

  • Experience writing short, opinionated decision memos that help others move fast.

Collaboration & communication

  • Can explain technical results to non-specialists in concrete, concise terms.

  • Experience working with platform, product and operations teams to align evaluations with real use cases.

  • Able to influence without authority, aligning teams around shared standards and guardrails.

Mindset

  • Curious and biased toward experimentation, combined with disciplined measurement and risk awareness.

  • Comfortable in a small, high-leverage team without embedded PMs; you structure your own work and keep stakeholders informed.

  • Builder attitude: you prefer reusable tools, templates and playbooks over one-off work.

Listing Details

Posted: March 25, 2025
First seen: March 26, 2026
Last seen: April 24, 2026

Posting Health

Days active: 29
Repost count: 0
Trust Level: 33%
Scored at: April 24, 2026

Source: Lever

dLocal is a Uruguayan company that specializes in cross-border payments, providing innovative local payment solutions for emerging markets.

Employees: 750
Founded: 2016
