
Senior AI Security Engineer

Menlo Security
2 months ago
Full-time
Remote
Worldwide
Remote Engineering
Menlo Security's (https://www.menlosecurity.com/) mission is to enable the world to connect, communicate, and collaborate securely without compromise. COVID-19 has made our mission all the more real. We support customers across a wide range of enterprises, including Fortune 500 companies, 9 of the 10 largest global banks, and the Department of Defense.

The world has fundamentally changed. We are growing from 400 employees into the next phase of our journey, and we need passionate talent filled with empathy and agility. The right candidate is ethical, hyper-organized, fanatical about seeing things through to completion, and service-oriented — humble enough to take feedback and coaching, yet confident enough to provide it.

Menlo is well-funded for growth and our investors are second to none. They include Vista Equity Partners ("Vista", http://www.vistaequitypartners.com/?utm_source=vistapressrelease&utm_campaign=menlosecurity), General Catalyst, JPMC, American Express, HSBC, and Ericsson Ventures.

We are seeking a Senior AI Security Engineer to focus on the emerging security challenges of autonomous AI agents. In this role, you will research, design, and implement novel techniques to detect and mitigate prompt poisoning, context manipulation, malicious agent behaviors, and other adversarial threats targeting agentic AI systems.

You will work closely with engineering teams to translate cutting-edge security research into practical, deployable security controls that protect agents operating in real-world environments, particularly when interacting with untrusted web content.

Core Responsibilities

- Research Emerging Agentic Threats: Investigate new attack vectors against AI agents, including prompt injection, context poisoning, adversarial content embedding, and misuse of agent planning and reasoning mechanisms.

- Architect Scalable Agentic Workflows: Design and implement robust, high-performance pipelines that secure agent-to-web interactions.

- Develop Novel Detection & Mitigation Techniques: Design and prototype new approaches for identifying malicious prompts, unsafe contextual signals, and adversarial behaviors in LLM-powered agents.

- Agent Security Controls: Implement detection and mitigation techniques as security controls within agentic runtimes, ensuring agents can safely reason over and act on external data sources.

- Collaborative Engineering: Partner closely with applied engineers to integrate research-driven security mechanisms into production systems, balancing security effectiveness with agent performance.

- Proactive Threat Modeling: Continuously evaluate the evolving AI threat landscape and anticipate future risks as agent capabilities and autonomy increase.

- Adversarial Resilience: Build defensive mechanisms within the browser surrogate to detect and neutralize complex context poisoning and injection attempts embedded in web content.

Qualifications

- BSc in Computer Science or significant experience in high-scale