Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
The Universes team within Research is responsible for training AI models to perform complex, difficult, long-horizon agentic tasks in ultra-realistic settings. We design and implement novel training environments that go far beyond what models can do today: environments where models learn to navigate ambiguity, handle interruptions, maintain context over extended interactions, and exercise judgment in open-ended scenarios.
We're looking for Research Engineers to help us build the next generation of training environments for capable and safe agentic AI.
This role blends research and engineering responsibilities, requiring you both to implement novel approaches and to contribute to research direction. You'll conduct fundamental research in reinforcement learning, design training environments and methodologies that push the state of the art, and build evaluations that measure genuine capability.
Build the next generation of agentic environments
Build rigorous evaluations that measure real capability
Collaborate across research and infrastructure teams to ship environments into production training
Debug and iterate rapidly across research and production ML stacks
Contribute to research culture through technical discussions and collaborative problem-solving
Are highly impact-driven: you care about outcomes, not activity
Operate with high agency
Have strong research taste or senior technical experience, with demonstrated judgment in identifying what actually matters in complex problem spaces
Can balance research exploration with engineering implementation
Are passionate about the potential impact of AI and are committed to developing safe and beneficial AI systems