Research
AI Security Researcher
Remote (US / EU / Israel) • Full-time
Pioneer the future of AI agent security. You'll research attack vectors, develop novel detection methods, and help define security standards for autonomous AI systems. This role shapes our long-term security roadmap.
What You'll Do
- Research emerging threats in AI agent systems: prompt injection, jailbreaks, tool misuse, and data exfiltration
- Develop threat models and security frameworks for agentic AI architectures
- Build detection algorithms for behavioral anomalies and adversarial attacks
- Create security benchmarks aligned with OWASP LLM Top 10 and emerging standards
- Publish research and contribute to the broader AI security community
- Collaborate with engineering to productionize research findings
What We're Looking For
- 5+ years in security research, AI/ML security, or adversarial machine learning
- Deep understanding of LLM architectures, transformer models, and their security implications
- Experience with AI red-teaming, prompt injection techniques, or model extraction attacks
- Strong programming skills in Python; familiarity with ML frameworks (PyTorch, HuggingFace)
- Track record of security research: papers, CVEs, blog posts, or open source tools
- Ability to communicate complex technical concepts to diverse audiences
Nice to Have
- Published research in AI security, adversarial ML, or AI safety
- Experience with agentic frameworks (LangChain, LangGraph, CrewAI) and protocols (MCP, A2A)
- Background in traditional application security or penetration testing
- Familiarity with authentication/authorization protocols (OAuth 2.0, OIDC, API security)
Interested in this role?
We'd love to hear from you. Apply now and tell us why you're excited about this opportunity.