Назад
Company hidden
18 часов назад

Security Lead, Agentic Red Team (AI)

248 000 - 349 000$
Формат работы
onsite
Тип работы
fulltime
Грейд
lead
Английский
b2
Страна
US
Вакансия из списка Hirify.GlobalВакансия из Hirify Global, списка международных tech-компаний
Для мэтча и отклика нужен Plus

Мэтч & Сопровод

Для мэтча с этой вакансией нужен Plus

Описание вакансии

Текст:
/

TL;DR

Security Lead, Agentic Red Team (AI): Direct a specialized unit of AI Researchers and Offensive Security Engineers focused on adversarial AI and agentic exploitation, architecting complex, multi-turn attack scenarios. With an accent on influencing launch criteria and bridging manual exploration with automated regression pipelines. Focus on ensuring non-deterministic risks are identified, measured, and mitigated before deployment.

Location: This position requires working from Mountain View, California, US or New York City, New York, US.

Salary: $248,000–$349,000 USD (base salary) + bonus + equity + benefits

Company

hirify.global is a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence for widespread public benefit and scientific discovery.

What you will do

  • Lead a specialized red team focused on rapid, high-impact engagements targeting production-level AI models and systems.
  • Develop and carry out advanced attack sequences that focus on vulnerabilities unique to GenAI.
  • Collaborate with Google teams to engineer "Auto RedTeaming" solutions for automated regression testing.
  • Create innovative defense-in-depth frameworks and control systems to mitigate agentic logic errors.
  • Manage an evolving inventory of exploit primitives and agent-specific attack patterns.
  • Establish security scope focusing solely on agentic logic, model inference, and AI-centric exploits.

Requirements

  • Bachelor's degree in Computer Science, Information Security, or equivalent practical experience.
  • Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.
  • Deep technical understanding of LLM architectures and agentic workflows.
  • Proven ability to work in a consulting capacity with product teams, driving security improvements.
  • Experience managing or technically leading small, high-performance engineering teams.

Nice to have

  • Hands-on experience developing exploits for GenAI models.
  • Familiarity with AI safety benchmarks and evaluation frameworks.
  • Experience writing code (Python, Go, or C++) to build automated security tools or fuzzers.
  • Ability to communicate complex probabilistic risks to executive stakeholders.

Culture & Benefits

  • Value diversity of experience, knowledge, backgrounds, and perspectives.
  • Committed to equal employment opportunity regardless of sex, race, religion, or other protected basis.
  • Provides accommodations for disabilities or additional needs.
  • Full-time position offering base salary, bonus, equity, and benefits.

Будьте осторожны: если работодатель просит войти в их систему, используя iCloud/Google, прислать код/пароль, запустить код/ПО, не делайте этого - это мошенники. Обязательно жмите "Пожаловаться" или пишите в поддержку. Подробнее в гайде →

Текст вакансии взят без изменений

Источник - загрузка...