Engineering Manager, Model Inference

Abridge
18 days ago
Full-time
Remote
Worldwide
ABOUT ABRIDGE

Abridge was founded in 2018 with the mission of powering deeper understanding in healthcare. Our AI-powered platform was purpose-built for medical conversations, improving clinical documentation efficiency while enabling clinicians to focus on what matters most—their patients.

Our enterprise-grade technology transforms patient-clinician conversations into structured clinical notes in real time, with deep EMR integrations. Powered by Linked Evidence and our purpose-built, auditable AI, we are the only company that maps AI-generated summaries to ground truth, helping providers quickly trust and verify the output. As pioneers in generative AI for healthcare, we are setting the industry standards for the responsible deployment of AI across health systems.

We are a growing team of practicing MDs, AI scientists, PhDs, creatives, technologists, and engineers working together to empower people and make care make more sense. We have offices located in the Mission District in San Francisco, the SoHo neighborhood of New York, and East Liberty in Pittsburgh.


THE ROLE

Our generative AI-powered products are transforming the practice of medicine—and the inference systems that power them need to be fast, reliable, and world-class. We’re looking for an Engineering Manager to lead and grow our Model Inference team.

The Inference team owns the end-to-end technical direction of how our models are served: from architecting low-latency, high-throughput infrastructure to pushing the frontier of LLM serving techniques. You’ll lead a high-performing team of AI inference engineers, partner closely with ML Research and the broader AI Platform, and ensure the systems underpinning every clinician interaction are operating at peak efficiency and reliability.


WHAT YOU’LL DO

- Lead and grow a high-performing team of AI inference engineers focused on building and scaling infrastructure for Abridge’s products and APIs

- Own the technical direction of our inference systems—making key decisions around batching, throughput, latency, and GPU utilization

- Architect and scale inference infrastructure for reliability, efficiency, and observability; lead incident response

- Benchmark and eliminate bottlenecks throughout the inference stack

- Partner with ML Research teams on model optimization, quantization, and deployment

- Develop APIs for AI inference used by both internal teams and external customers

- Recruit, mentor, and develop engineering talent; establish team processes, engineering standards, and operational excellence

- Work closely with the GenAI Platform, Data, and Product teams to plan and execute projects that directly impact clinicians and patients


WHAT YOU’LL BRING

- 5+ years of engineering experience, including 1+ years in a technical leadership or management role

- Deep, hands-on experience with ML systems and inference frameworks (e.g., PyTorch, TensorRT, vLLM, TensorFlow)

- Strong understanding of LLM architecture (e.g., Mult