Shield AI
USD 320,000–490,000/yr

Principal Engineer, AI and Data Platform Engineering (R4941)

San Francisco, United States · Full-Time Employee
Platform Engineering

Overview

Founded in 2015, Shield AI is a venture-backed deep-tech company with the mission of protecting service members and civilians with intelligent systems. Its products include the V-BAT and X-BAT aircraft and the Hivemind Enterprise and Hivemind Vision product lines. With offices and facilities across the U.S., Europe, the Middle East, and the Asia-Pacific, Shield AI’s technology actively supports operations worldwide. For more information, visit www.shield.ai. Follow Shield AI on LinkedIn, X, Instagram, and YouTube.

Shield AI builds autonomy systems for defense applications, including air, maritime, and space platforms operating in complex and contested environments.  

We are establishing a centralized AI and Data Platform organization responsible for the infrastructure that underpins autonomy development across Hivemind and other programs. This team owns the systems used to train models, run simulation, manage data, and deploy models to operational environments.  

We are seeking a Principal Engineer who will scale an initial architecture into a platform that supports multiple autonomy programs.

Success in this role requires disciplined execution, delivering fast iteration for engineering teams while maintaining reliability, cost control, and architectural consistency as the system scales.  

The Principal Engineer is accountable for ensuring engineers can move efficiently from idea to trained model to deployed capability, and that infrastructure decisions reflect the realities of the domain, including simulation-driven development, continuously evolving multi-modal sensor data, and deployment to constrained and reliability-critical systems.  

This role spans the full lifecycle of autonomy development: training foundation models, running large-scale and multi-fidelity simulation, managing training data, evaluating models, and deploying optimized models to edge systems.

A key part of this role is defining how these capabilities extend beyond internal use. This includes establishing how Shield AI delivers AI infrastructure in customer environments across on-premise, cloud, hybrid, and sovereign or nationally constrained environments.  

Responsibilities

  • Platform Ownership: Define and operate the core AI and data platform across training, simulation, data management, evaluation, and deployment.
  • Compute Strategy and Infrastructure: Own where and how workloads run across on-premise, cloud, and hybrid environments. Drive capacity planning, utilization, and cost-per-compute decisions, including support for classified and air-gapped systems.
  • Training and Simulation Systems: Build infrastructure for distributed training (supervised learning, RL/MARL, foundation models) and large-scale, multi-fidelity simulation. Ensure training and simulation systems operate together without bottlenecks.
  • Data Platform: Ingest and manage multi-modal sensor data (EO, IR, radar, EW, IMU). Establish dataset versioning, data lineage, feature storage, data cataloging, and classification-aware storage and access controls.
  • MLOps, Evaluation, and Model Lifecycle: Establish a consistent workflow for experiment tracking, model registry, artifact provenance, and automated validation. Implement evaluation and V&V gates so models meet defined standards before deployment.
  • Deployment and Operational Feedback: Own the pipeline from training to deployment, including model optimization (e.g., distillation, quantization, pruning), deployment to edge systems, monitoring, drift detection, and retraining triggers.
  • Customer AI Infrastructure: Define how AI infrastructure is deployed in customer environments across on-premise, cloud, hybrid, and sovereign settings. Establish a consistent approach that avoids one-off solutions while adapting to operational constraints.
  • Platform Standardization: Define common tools, interfaces, and workflows across teams. Reduce duplication while maintaining flexibility where needed.
  • Cross-Team Partnership: Work directly with Hivemind and other autonomy teams to ensure the platform supports real workloads and evolves with program needs.
Success in this role looks like:

  • Faster iteration from idea to trained model to evaluated result
  • High utilization of compute resources with clear visibility into usage and cost
  • Simulation capacity that supports large-scale training without bottlenecks
  • Consistent end-to-end lifecycle: development, evaluation, deployment, monitoring, and retraining
  • Repeatable data loop: telemetry, scenario extraction, retraining, and redeployment
  • Reliable deployment of optimized models to edge systems
  • Broad platform adoption across autonomy programs
  • Repeatable approach for deploying AI infrastructure in customer environments
  • Representative performance targets:
      • Training iteration cycles measured in days, not weeks
      • Sustained high utilization of GPU resources under production workloads
Qualifications

  • Experience building and operating ML infrastructure at scale (100+ GPU clusters, distributed systems)
  • Experience defining compute strategy, including on-premise vs. cloud tradeoffs, capacity planning, and cost management
  • Strong understanding of ML workloads, including foundation models, RL/MARL, simulation-based training, and fine-tuning
  • Experience building data platforms with dataset versioning, lineage, and cataloging
  • Ability to debug and resolve system issues when needed
  • Experience in defense or classified environments (e.g., air-gapped systems, SCIFs)
  • Experience with simulation-heavy ML systems (robotics, autonomy, or similar domains)
  • Experience deploying and optimizing models for edge hardware
  • Familiarity with HPC systems (schedulers, parallel storage, high-speed networking)
You will define the infrastructure that supports the development and deployment of autonomy systems across Shield AI.

This role establishes the foundation for how models are trained, evaluated, and deployed, and directly impacts how quickly new capabilities are delivered into operational environments.

You will have ownership over systems and decisions that are often distributed across multiple teams at other organizations, with the opportunity to shape how AI infrastructure is built and used both internally and in customer environments.

Location & Eligibility

Location: San Francisco, United States (on-site at the office)
Eligibility: Open to applicants worldwide

Listing Details

Posted: May 14, 2026
First seen: May 14, 2026
Last seen: May 14, 2026
