Member of Technical Staff, Model Efficiency

Cohere
Full-time
Remote
Worldwide
Who are we?

Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

Why this role?

Our team is a fast-growing group of researchers and engineers focused on building reliable ML systems and pushing the boundaries of LLM inference efficiency. We develop techniques that improve how models execute in production, driving lower latency, higher throughput, and consistent quality across diverse workloads.

As an engineer on this team, you’ll work across the inference stack to improve core performance metrics by diving deep into model execution, identifying bottlenecks, and developing innovative optimizations. You’ll collaborate closely with modeling and systems teams to experiment, measure, and ship improvements that meaningfully accelerate inference. As the team evolves, you’ll have opportunities to build expertise in advanced performance techniques, including GPU/CUDA optimizations, kernel-level improvements, and model execution strategies for MoE and large-scale architectures.

Please note: We have offices in Toronto, Montreal, San Francisco, New York, Paris, Seoul and London. We embrace a remote-friendly environment, and as part of this approach, we strategically distribute teams based on interests, expertise, and time zones to promote collaboration and flexibility. The Model Efficiency team is concentrated in the EST and PST time zones, which are our preferred locations for this role.

You may be a good fit for the Model Efficiency team if you have:

- 5+ years of experience writing high-performance, production-quality code

- Strong programming skills in C++ or Python (Rust/Go also welcome)

- Experience working with large language models and familiarity with the LLM inference ecosystem (e.g., vLLM, SGLang)

- Ability to diagnose and resolve performance bottlenecks across the model execution stack

- A strong bias for action — you ship fast, measure impact, and iterate

It’s a big plus if you have experience with:

- GPU programming, CUDA, or low-level systems optimization

- Language modeling with transformers (MoE, speculative decoding, KV-cache optimizations)

- Scaling performance-critical distributed systems (e.g., computation, search,