
Role Overview
Lead the organization's AI security research function: build and direct a cross-functional team spanning AI security and threat intelligence, shape the research agenda, and turn findings into deployed defenses and product capabilities.

Key Responsibilities
- Build and manage a cross-functional research team across AI security and threat intelligence.
- Publish influential research and represent the organization at top-tier conferences.
- Develop and prototype defenses against attacks such as adversarial inputs, data poisoning, and model leakage (an illustrative adversarial-input sketch follows this list).
- Collaborate with product and engineering to integrate research into security tools and platforms.
- Partner with academic institutions for joint research, fellowships, and grants.
- Advise leadership on long-term risks, AI threat landscapes, and regulatory developments.
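
As a concrete illustration of the adversarial-input work above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) in Python; the PyTorch dependency, the toy linear model, the input shapes, and the epsilon value are all illustrative assumptions, not a prescribed stack.

```python
# A minimal FGSM sketch: perturb an input in the direction that
# maximizes the model's loss. Model, shapes, and epsilon are
# hypothetical stand-ins for illustration only.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return x perturbed by epsilon along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the gradient sign, then clamp to a valid input range.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: a toy linear classifier on flattened inputs.
model = nn.Linear(64, 10)
x = torch.rand(1, 64)            # one clean input
y = torch.tensor([3])            # its true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())   # perturbation bounded by epsilon
```

FGSM is the simplest gradient-based attack; stronger iterative variants such as PGD follow the same pattern, and defenses are typically evaluated against them.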

Qualifications
- 8+ years in AI/ML research, including 4+ years in adversarial ML or secure AI.
- Proven track record of managing research teams and translating research ideas into applied security outcomes.
- Expertise in threat modeling, adversarial testing, and secure ML pipelines (a simple pipeline-gate sketch follows this list).
- Strong communication and collaboration skills.
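
As a simplified stand-in for adversarial testing in a secure ML pipeline, the sketch below gates a build on accuracy under random bounded perturbation; a real pipeline would use a genuine attack (e.g., PGD) against a trained model, and the model, data, epsilon, and threshold here are hypothetical.

```python
# A minimal robustness-gate sketch for a CI step: fail if accuracy
# under random L-infinity-bounded noise drops below a threshold.
# All names and values are hypothetical placeholders; a real gate
# would use a true adversarial attack rather than random noise.
import torch
import torch.nn as nn

def accuracy(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    return (model(x).argmax(dim=1) == y).float().mean().item()

def robustness_gate(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                    epsilon: float = 0.05, min_accuracy: float = 0.7) -> bool:
    # Random sign noise, bounded by epsilon in the L-infinity norm.
    noise = epsilon * torch.sign(torch.randn_like(x))
    return accuracy(model, (x + noise).clamp(0.0, 1.0), y) >= min_accuracy

model = nn.Linear(64, 10)                        # untrained toy model
x, y = torch.rand(32, 64), torch.randint(0, 10, (32,))
print("gate passed:", robustness_gate(model, x, y))
```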

Preferred Experience
- Familiarity with privacy-enhancing technologies (e.g., differential privacy, federated learning); see the sketch after this list.
- Contributions to open-source projects or publicly disclosed vulnerability research in AI systems.
- Awareness of AI governance and compliance frameworks.
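
To make the differential-privacy item concrete, below is a minimal sketch of the Laplace mechanism for a counting query; the dataset, the epsilon value, and the NumPy dependency are illustrative assumptions.

```python
# A minimal Laplace-mechanism sketch: release a count with
# epsilon-differentially-private noise. Dataset and epsilon are
# hypothetical values chosen for illustration.
import numpy as np

def laplace_count(data: list, epsilon: float) -> float:
    """Return a noisy count; a counting query has sensitivity 1."""
    true_count = len(data)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

records = [1] * 1000             # hypothetical dataset of 1000 records
print(laplace_count(records, epsilon=0.5))
```

A counting query changes by at most 1 when a single record is added or removed, so scaling the Laplace noise to 1/epsilon yields epsilon-differential privacy.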