
Software Engineer - Model APIs

Baseten
7 months ago
Full-time
Remote
Worldwide
ABOUT BASETEN

Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E (https://www.baseten.co/blog/announcing-baseten-s-300m-series-e/), backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to for shipping AI products.

THE ROLE:

Baseten’s Model Performance (MP) team is responsible for ensuring the models running on our platform are fast, reliable, and cost‑efficient. As part of this team, you’ll focus on Model APIs — the infrastructure powering our hosted API endpoints for the latest open‑source models. This work spans distributed systems, model serving, and developer experience. You’ll join a small, high‑impact team operating at the intersection of product, model performance, and infra, helping to define how developers interact with AI models at scale.

RESPONSIBILITIES:

- Design, build, and operate the Model APIs surface with a focus on advanced inference capabilities: structured outputs (JSON mode, grammar-constrained generation), tool/function calling, and multi-modal serving

- Profile and optimize TensorRT-LLM kernels, analyze CUDA kernel performance, implement custom CUDA operators, tune memory allocation patterns for maximum throughput, and optimize communication patterns across multi-GPU setups

- Productionize performance improvements across runtimes with a deep understanding of their internals: speculative decoding implementations, guided generation for structured outputs, and custom scheduling and routing algorithms for high-performance serving

- Build comprehensive benchmarking frameworks that measure real-world performance across different model architectures, batch sizes, sequence lengths, and hardware configurations

- Ship runtime-level optimizations (e.g., TensorRT, TensorRT‑LLM): speculative decoding, quantization, batching, and KV‑cache reuse.

- Instrument deep observability (metrics, traces, logs) and build repeatable benchmarks to measure speed, reliability, and quality.

- Implement platform fundamentals: API versioning, validation, usage metering, quotas, and authentication.

- Collaborate closely with other teams to deliver robust, developer‑friendly model serving experiences.

REQUIREMENTS:

- 3+ years of experience building and operating distributed systems or large‑scale APIs.

- Proven track record of owning low‑latency, reliable backend services (rate‑limiting, auth, quotas, metering, migrations).

- Infra instincts with performance sensibilities: profiling, tracing, capacity planning, and SLO management.

- Comfortable debugging complex systems, from runtime internals to GPU execution.