Engineering Manager - Model Performance
Baseten
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E https://www.baseten.co/blog/announcing-baseten-s-300m-series-e/, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.
THE ROLE
Are you passionate about advancing the frontiers of artificial intelligence while leading a team of exceptional engineers? We are looking for a Tech Lead Manager focused on ML performance and inference. This role is ideal for someone with a strong engineering background who is eager to lead and mentor a team while remaining hands-on with technology. If you thrive in a fast-paced startup environment and are excited about both leadership and technical challenges, we want to hear from you.
EXAMPLE INITIATIVES
You'll get to work on these types of projects as part of our Model Performance team:
- Baseten Embeddings Inference: The fastest embeddings solution available https://www.baseten.co/blog/introducing-baseten-embeddings-inference-bei/
- The Baseten Inference Stack https://www.baseten.co/resources/guide/the-baseten-inference-stack/
- Driving model performance optimization https://www.baseten.co/blog/driving-model-performance-optimization-2024-highlights/
RESPONSIBILITIES
- Lead, mentor, and manage a team of engineers focused on developing and optimizing ML model inference and performance.
- Oversee technical strategy and architecture decisions, driving improvements across our engineering organization.
- Collaborate with cross-functional teams to ensure seamless integration and scalability of ML models in production environments.
- Dive into the codebase of frameworks like TensorRT, PyTorch, CUDA, and others to identify and solve complex performance bottlenecks.
- Drive the development and deployment of large-scale optimization techniques for various ML models, especially large language models (LLMs).
- Own the full lifecycle of projects from inception through delivery, including planning, execution, and resource management.
- Foster a collaborative, inclusive team environment that encourages continuous learning and growth.
REQUIREMENTS
- Bachelor's, Master's, or Ph.D. in Computer Science, Engineering, or a related field.
- 5+ years of professional experience in software engineering, with at least 2 years in a technical leadership role.
- Proven experience managing and mentoring teams of engineers.
- Expertise in one or more programming languages, such as Python, C++, or Go.
- In-depth understanding of ML model performance optimization, especially using libraries such as PyTorch.