Company hidden
Posted 3 days ago

Principal Product Manager (AI Model Security)

$139,900 – $304,200
Work format
Hybrid
Employment type
Full-time
Grade
Senior
English
B2
Country
US
Vacancy from Hirify.Global, a list of international tech companies

Job description

TL;DR

Principal Product Manager (AI Model Security): Developing strategies to harden frontier AI models against adversarial attacks and integrating them into security workflows, with an emphasis on LLM security threats and red-teaming frameworks. The focus is on mitigating prompt injection, jailbreaking, and data exfiltration while defining security benchmarks for model launches.

Location: Redmond, United States. Must work from a designated Microsoft office at least four days a week if living within 50 miles of the location.

Salary: USD $139,900 – $304,200 per year

Company

hirify.global's Superintelligence team (MAIST) is a startup-like unit dedicated to creating ultra-capable AI systems that remain controllable, safety-aligned, and anchored to human values.

What you will do

  • Own the model security roadmap and prioritize hardening strategies against the full OWASP LLM threat surface.
  • Drive zero-day and exploit defense by building evaluation datasets and defining risk thresholds for model capabilities.
  • Design, run, and scale automated and human-driven red-teaming frameworks to probe model vulnerabilities.
  • Partner with Azure Security and Security Copilot teams to align model training with real-world security workflows.
  • Develop security-specific benchmarks and evaluation frameworks to measure real-world utility for practitioners.
  • Define security criteria and manage go/no-go decisions for model launches.

Requirements

  • Bachelor's degree and 5+ years of experience in product management, security engineering, or software development.
  • Hands-on experience building, evaluating, or shipping ML-powered products or security tools.
  • Deep expertise in LLM security threats, including prompt injection, jailbreaking, and adversarial attacks.
  • Proven track record of building evaluation systems or adversarial testing frameworks.
  • Ability to operate autonomously and drive projects from ambiguity to delivery.

Nice to have

  • Technical background or postgraduate degree in Computer Science, Security, or AI/ML.
  • Experience in offensive security, penetration testing, or red teaming applied to AI/ML systems.
  • Familiarity with SIEM, SOAR, EDR, and threat intelligence platforms.
  • Understanding of the full model lifecycle (pre-training, fine-tuning, RLHF, deployment).
  • Experience within enterprise security organizations like CrowdStrike or Palo Alto Networks.

Culture & Benefits

  • Startup-like high-ownership environment within a global corporation.
  • Culture based on a growth mindset, respect, integrity, and accountability.
  • Focus on creating an inclusive workplace where employees can thrive.
  • Competitive compensation and comprehensive corporate benefits.
