Software Engineer - Model Performance
Baseten
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E https://www.baseten.co/blog/announcing-baseten-s-300m-series-e/, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.
THE ROLE
Are you passionate about advancing the application of artificial intelligence? We are looking for a Software Engineer focused on ML performance to join our dynamic team. This role is ideal for someone who thrives in a fast-paced startup environment and is eager to make significant contributions to the exciting field of LLM inference. If you are a backend engineer who loves making things faster and is excited about open-source ML models, we look forward to your application.
EXAMPLE INITIATIVES
You'll get to work on these types of projects as part of our Model Performance team:
- Baseten Embeddings Inference: The fastest embeddings solution available https://www.baseten.co/blog/introducing-baseten-embeddings-inference-bei/
- The Baseten Inference Stack https://www.baseten.co/resources/guide/the-baseten-inference-stack/
- Driving model performance optimization https://www.baseten.co/blog/driving-model-performance-optimization-2024-highlights/
RESPONSIBILITIES
- Implement, refine, and productionize cutting-edge techniques (quantization, speculative decoding, KV cache reuse, chunked prefill, and LoRA) for ML model inference and infrastructure.
- Dive deep into the underlying codebases of TensorRT, PyTorch, TensorRT-LLM, vLLM, SGLang, CUDA, and other libraries to debug ML performance issues.
- Apply and scale optimization techniques across a wide range of ML models, particularly large language models.
- Collaborate with a diverse team to design and implement innovative solutions.
- Own projects from idea to production.
REQUIREMENTS
- Bachelor's, Master's, or Ph.D. degree in Computer Science, Engineering, Mathematics, or related field.
- Experience with one or more general-purpose programming languages, such as Python or C++.
- Familiarity with LLM optimization techniques (e.g., quantization, speculative decoding, continuous batching).
- Strong familiarity with ML libraries, especially PyTorch, TensorRT, or TensorRT-LLM.
- Demonstrated interest and experience in LLMs.
- Deep understanding of GPU architecture.
- Bonus:
  - Proficiency in optimizing the performance of software systems, particularly large language models (LLMs).
  - Experience with CUDA or similar technologies.
  - Deep understanding of software engineering principles and a proven track record of applying them.