
Software Engineer - GPU Kernels

Baseten
10 months ago
Full-time
Remote
Worldwide
Engineering
ABOUT BASETEN

Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E (https://www.baseten.co/blog/announcing-baseten-s-300m-series-e/), backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.

THE ROLE

We’re seeking a GPU Kernel Engineer to join our team at the cutting edge of AI acceleration, where your code directly impacts the performance of state-of-the-art machine learning models. As a GPU Kernel Engineer, you'll craft the foundation that powers modern AI workloads, optimizing every microsecond of computation to enable breakthrough applications.

You'll work in a fast-paced, intellectually stimulating environment where technical excellence is paramount and your contributions directly influence production systems serving millions of users across numerous products. This role offers exceptional growth potential for engineers passionate about low-level optimization and high-impact systems work.

EXAMPLE INITIATIVES

You'll get to work on these types of projects as part of our Model Performance team:

- Baseten Embeddings Inference: The fastest embeddings solution available https://www.baseten.co/blog/introducing-baseten-embeddings-inference-bei/

- The Baseten Inference Stack https://www.baseten.co/resources/guide/the-baseten-inference-stack/

- Driving model performance optimization https://www.baseten.co/blog/driving-model-performance-optimization-2024-highlights/

RESPONSIBILITIES

Core Engineering Responsibilities

- Design and implement high-performance GPU kernels for key ML operations, including matrix multiplications, attention mechanisms, and mixture-of-experts routing

- Write and optimize code using CUDA, PTX assembly, and architecture-specific techniques

- Apply advanced performance optimization methods such as memory coalescing, warp-level programming, tensor core acceleration, and compute/memory overlap
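To give a concrete sense of one of the techniques above, here is a minimal CUDA sketch of memory coalescing (kernel names, shapes, and launch parameters are invented for illustration, not taken from Baseten's codebase). It contrasts an access pattern where consecutive threads in a warp touch consecutive addresses with one where they stride across rows:

```cuda
#include <cuda_runtime.h>

// Illustrative only: two ways to scale a row-major rows x cols matrix.
// In the coalesced version, adjacent threads in a warp read adjacent
// floats, so the hardware merges the reads into a few wide memory
// transactions; in the strided version, adjacent threads are `cols`
// elements apart, forcing far more transactions for the same work.

__global__ void scale_coalesced(const float* in, float* out,
                                int rows, int cols, float s) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // fast-moving index
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row < rows && col < cols) {
        int idx = row * cols + col;   // neighboring threads -> neighboring addresses
        out[idx] = in[idx] * s;
    }
}

__global__ void scale_strided(const float* in, float* out,
                              int rows, int cols, float s) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;  // fast-moving index
    int col = blockIdx.y * blockDim.y + threadIdx.y;
    if (row < rows && col < cols) {
        int idx = row * cols + col;   // neighboring threads stride by `cols`
        out[idx] = in[idx] * s;
    }
}
```

Both kernels compute the same result; the difference shows up in tools like Nsight Compute as a gap in memory throughput and transactions per request.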

Performance & Innovation

- Implement cutting-edge features like quantization (FP8/FP4), sparsity, and compute/communication overlap

- Identify and resolve performance bottlenecks using tools like Nsight Systems, Nsight Compute, and Torch Profiler

- Collaborate with research teams to productionize theoretical advancements

Impact & Collaboration

- Contribute to internal and open-source GPU libraries

- Present technical contributions at industry conferences (e.g., NVIDIA GTC, AWS re:Invent)

REQUIREMENTS

- Strong understanding of GPU architecture and programming paradigms:

  - Memory hierarchy (global, shared, registers, L1/L2 cache)

  - Thread/block/grid organization
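As a rough sketch of the two concepts above (memory hierarchy and thread/block/grid organization), here is a minimal CUDA block reduction; the kernel name and block size are invented for the example:

```cuda
#include <cuda_runtime.h>

// Illustrative sketch: each block of BLOCK threads stages a tile of the
// input in fast on-chip shared memory, reduces it cooperatively, and
// writes one partial sum per block back to global memory. Loop counters
// like `stride` live in registers, the fastest level of the hierarchy.

#define BLOCK 256

__global__ void block_sum(const float* in, float* partial, int n) {
    __shared__ float tile[BLOCK];                      // per-block shared memory
    int gid = blockIdx.x * blockDim.x + threadIdx.x;   // global thread id
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f;    // global -> shared
    __syncthreads();                                   // block-wide barrier

    // Tree reduction within the block, halving active threads each step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        partial[blockIdx.x] = tile[0];                 // shared -> global
}

// Launch as a 1-D grid covering n elements:
// block_sum<<<(n + BLOCK - 1) / BLOCK, BLOCK>>>(d_in, d_partial, n);
```

The grid/block split is what the "thread/block/grid organization" bullet refers to: the grid covers the whole problem, each block owns a tile it can synchronize over, and each thread owns one element.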