Abuse Investigator (AI Self-Improvement Risk)
OpenAI
About the Team

OpenAI's mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real-world deployment and iterative updates based on what we learn. The Intelligence and Investigations team supports this by identifying and investigating misuses of our products, especially new and emerging classes of risk. This enables our partner teams to develop data-backed policies and safety mitigations. Precisely understanding how models behave in real-world conditions allows us to safely enable users to build useful things with our products.

About the Role

As an Abuse Investigator focused on AI Self-Autonomy and Agentic Risk on the Intelligence and Investigations team, you will be responsible for identifying and investigating cases where models exhibit autonomous or agentic behavior, including chaining capabilities, acting with increasing independence, or demonstrating patterns that may introduce safety risk. This includes detecting behaviors that are not explicitly intended, understood, or covered by existing safeguards. This role requires deep domain-specific expertise in identifying, understanding, and mitiga