Lead/Staff AI Runtime
Join us! 🚀
We usually respond within three days
Join FlexAI:
FlexAI is at the forefront of revolutionizing AI computing by reengineering infrastructure at the system level. Our groundbreaking architecture, combined with sophisticated software intelligence, abstraction, and an orchestration layer, allows developers to leverage a diverse array of compute, resulting in efficient, more reliable computing at a fraction of the cost. We are seeking a skilled and experienced Lead/Staff AI Runtime Engineer.
Founded by Brijesh Tripathi, who brings experience from Nvidia, Apple, Tesla, Intel, and Zoox, FlexAI is not just building a product – we’re shaping the future of AI. Our teams are strategically distributed across Paris, Silicon Valley, and Bangalore, united by a shared mission: to deliver more compute with less complexity.
Position Overview:
At FlexAI, we’re building a high-performance, cloud-agnostic AI compute platform designed for next-generation training and inference workloads. As Lead/Staff AI Runtime Engineer, you’ll play a pivotal role in the design, development, and optimization of the core runtime infrastructure that powers distributed training and deployment of large AI models (LLMs and beyond).
This is a hands-on leadership role - perfect for a systems-minded software engineer who thrives at the intersection of AI workloads, runtimes, and performance-critical infrastructure. You’ll own critical components of our PyTorch-based stack, lead technical direction, and collaborate across engineering, research, and product to push the boundaries of elastic, fault-tolerant, high-performance model execution.
What you’ll do:
Lead Runtime Design & Development
Own the core runtime architecture supporting AI training and inference at scale.
Design resilient and elastic runtime features (e.g. dynamic node scaling, job recovery) within our custom PyTorch stack.
Optimize distributed training reliability, orchestration, and job-level fault tolerance.
Drive Performance at Scale
Profile and enhance low-level system performance across training and inference pipelines.
Improve packaging, deployment, and integration of customer models in production environments.
Ensure consistent throughput, latency, and reliability metrics across multi-node, multi-GPU setups.
Build Internal Tooling & Frameworks
Design and maintain libraries and services that support model lifecycle: training, checkpointing, fault recovery, packaging, and deployment.
Implement observability hooks, diagnostics, and resilience mechanisms for deep learning workloads.
Champion best practices in CI/CD, testing, and software quality across the AI Runtime stack.
Collaborate & Mentor
Work cross-functionally with Research, Infrastructure, and Product teams to align runtime development with customer and platform needs.
Guide technical discussions, mentor junior engineers, and help scale the AI Runtime team’s capabilities.
What you’ll need to be successful:
8+ years of experience in systems/software engineering, with deep exposure to AI runtime, distributed systems, or compiler/runtime interaction.
Experience delivering Platform-as-a-Service (PaaS) offerings.
Proven experience optimizing and scaling deep learning runtimes (e.g. PyTorch, TensorFlow, JAX) for large-scale training and/or inference.
Strong programming skills in Python and C++ (Go or Rust is a plus).
Familiarity with distributed training frameworks, low-level performance tuning, and resource orchestration.
Experience working with multi-GPU, multi-node, or cloud-native AI workloads.
Solid understanding of containerized workloads, job scheduling, and failure recovery in production environments.
Bonus Points:
Contributions to PyTorch internals or open-source DL infrastructure projects.
Familiarity with LLM training pipelines, checkpointing, or elastic training orchestration.
Experience with Kubernetes, Ray, TorchElastic, or custom AI job orchestrators.
Background in systems research, compilers, or runtime architecture for HPC or ML.
Previous startup experience.
What we offer:
- A competitive salary and benefits package, tailored to recognize your dedication and contributions.
- The opportunity to collaborate with leading experts in AI and cloud computing, learning from the best and the brightest, fostering continuous growth.
- An environment that values innovation, collaboration, and mutual respect.
- Support for personal and professional development, empowering you with the tools and resources to elevate your skills and leave a lasting impact.
- A pivotal role in the AI revolution, shaping the technologies that power the innovations of tomorrow.
Offices:
Our teams are strategically distributed across three continents: Europe, North America, and Asia.
- Paris - HQ
- San Francisco (Bay Area) - US office
- Bangalore - India office
Location: Based in Paris (hybrid) or in the EU (fully remote), with at least two trips to Paris per month to sync with the team.
Apply NOW!
You’ve seen what this role entails. Now we want to hear from you! Does this opportunity align with your aspirations? If you’re even slightly curious, we encourage you to apply – it could be the start of something extraordinary!
At FlexAI, we believe diverse teams are the most innovative teams. We’re committed to creating an inclusive environment where everyone feels valued, and we proudly offer equal opportunities regardless of gender, sexual orientation, origin, disabilities, veteran status, or any other facets of your identity that make you uniquely you.
- Department: R&D SW
- Role: AI Stack Engineer
- Locations: EU (Remote), Paris
- Remote status: Hybrid
- Employment type: Full-time