Principal Product Manager (AI Model Security)
Описание вакансии
TL;DR
Principal Product Manager (AI Model Security): Own the security hardening strategy for frontier models, making them resilient against adversarial attacks and purpose-built for security practitioners, with an emphasis on LLM security threats and red-teaming frameworks. Focus areas include designing evaluation benchmarks, mitigating zero-day exploit risks, and aligning model capabilities with real-world security workflows.
Location: Mountain View, United States (Hybrid: must be in office at least 4 days a week)
Salary: $188,000 – $304,200
Company
The AI Superintelligence Team (MAIST) is a startup-like unit dedicated to creating ultra-capable AI systems that remain controllable, safety-aligned, and anchored to human values.
What you will do
- Define and prioritize the security hardening strategy for frontier models across the OWASP LLM threat surface.
- Evaluate and mitigate risks of models being used to generate zero-day exploits, malware, or novel attack vectors.
- Design and iterate adversarial testing programs and red-teaming frameworks, both automated and human-driven.
- Partner with Azure Security and Security Copilot teams to translate product requirements into model training priorities.
- Build benchmark suites and evaluation frameworks to measure real-world security usefulness for practitioners.
- Establish security criteria for model launches and influence model training, fine-tuning, and RLHF.
Requirements
- Bachelor’s Degree and 5+ years of experience in product management, security engineering, or software development.
- Hands-on experience building, evaluating, or shipping ML-powered products or security tools.
- Deep familiarity with LLM security threats including prompt injection, jailbreaking, and data exfiltration.
- Proven track record of building evaluation systems, security benchmarks, or adversarial testing frameworks.
- Must be based in or able to work from the Mountain View office at least four days a week.
Nice to have
- Postgraduate degree in Computer Science, Security, or AI/ML.
- Experience in offensive security, penetration testing, or red teaming applied to AI systems.
- Familiarity with security tooling such as SIEM, SOAR, EDR, and threat intelligence platforms.
- Understanding of the model lifecycle from pre-training and RLHF to deployment and monitoring.
- Previous experience working within enterprise security organizations.
Culture & Benefits
- Startup-like environment with high ownership and direct accountability for production outcomes.
- Collaborative culture based on a growth mindset, respect, integrity, and accountability.
- Opportunity to work on the frontier of AI Superintelligence and shape the security of global-scale models.
- Comprehensive corporate benefits.