Software Engineer - GPU Networking & Distributed Systems
Baseten
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E (https://www.baseten.co/blog/announcing-baseten-s-300m-series-e/), backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.
At Baseten, we are building the global operating system for distributed, heterogeneous AI hardware. We believe that as LLM and multi-modal workloads scale, the network is the computer. We are looking for foundational engineers to lead our GPU Networking efforts, making RDMA a first-class building block in our infrastructure and unlocking the next generation of distributed inference optimizations.
THE OPPORTUNITY
Networking and compute are no longer separate disciplines; they are converging. The massive throughput of H100, B200, and NVL72 architectures enables and demands a new approach where communication is co-optimized alongside computation. We are entering an era where the network is an active accelerator, leveraging smart hardware offloads and direct interconnects to ensure that data movement operates at wire-speed.
In this role, you will go beyond network configuration to architect the software fabric that unifies thousands of GPUs into a cohesive operating system. While you will leverage the best of the open-source ecosystem, you won't be limited by it. Where off-the-shelf solutions stop, you will build from scratch, engineering the primitives required to co-optimize communication and compute for Disaggregated Serving, Wide Expert Parallelism (WideEP), and lightning-fast cold starts.
WHAT YOU'LL DO
- Make RDMA First-Class: You will work on integrating RDMA/RoCE/InfiniBand capabilities directly into our inference stack, helping us move beyond TCP/IP to unlock order-of-magnitude improvements in bandwidth and latency.
- Optimize Distributed Inference: You will implement and tune the networking layers necessary for efficient Disaggregated KV Cache Offload and WideEP, ensuring seamless communication across NVLink and InfiniBand for our MoE models.
- Enable Serverless-Grade Startup Speeds for LLMs: You will work deeply with checkpointing and storage mechanisms to enable sub-10-second startup for trillion-parameter models.
- Deep-Dive into Hardware: You will characterize and validate networking performance on bleeding-edge clusters (H100/H200, B200/B300, GB200/GB300 NVL72), writing the acceptance tests that ensure our hardware delivers peak achievable throughput and minimal latency.
- Build Observability: You will design the tools that let us visualize packet flow, congestion, and effective bandwidth across the G