TL;DR
Head of Security Risk (AI): Owns the strategy, execution, and continuous improvement of hirify.global's security risk management program, building a team of risk engineers and serving as the central point for risk intake, triage, quantification, and assessment. The role centers on working with company leadership to build a risk governance structure that brings clarity and discipline to how hirify.global identifies, evaluates, escalates, and treats its most important security risks, particularly those at the frontier of AI risk.
Location: Hybrid in San Francisco, CA or New York City, NY. Staff are expected to be in one of our offices at least 25% of the time.
Salary: $345,000 - $410,000 USD
Company
hirify.global is a public benefit corporation focused on creating reliable, interpretable, and steerable AI systems for the benefit of society.
What you will do
- Own and steer the security risk management program end-to-end, including risk intake, assessment, quantification, and reporting.
- Manage and develop a team of risk engineers, setting priorities and coaching on assessment methodology.
- Design and operate risk intake and triage processes for vulnerabilities and risk submissions.
- Partner with leadership to facilitate an enterprise forum for risk escalation and strategic discussions.
- Lead risk quantification efforts through stress testing, scenario modeling, and deep dives into novel AI-specific risks.
- Oversee periodic and ad hoc security risk assessments across infrastructure, products, operations, and vendors.
- Collaborate with cross-functional teams to ensure risk assessments align with regulatory and compliance obligations (SOC 2, ISO 27001, HIPAA, EU AI Act, FedRAMP).
Requirements
- 15+ years of experience in security or risk management disciplines, with 5-7 years in a people leadership role.
- Experience building, transforming, or significantly scaling a security risk management program at a high-growth technology company.
- Hands-on experience with quantitative risk analysis (FAIR, scenario modeling, Monte Carlo simulation).
- Ability to engage executives on risk decisions, translating complex technical scenarios into clear business recommendations.
- Experience establishing risk governance structures (risk councils, steering committees, escalation frameworks).
- Bachelor's degree in a related field or equivalent experience.
Nice to have
- Deep expertise in risk assessment methodologies (NIST RMF, ISO 31000, FAIR, OCTAVE) and adapting them to novel risk domains.
- Experience assessing AI-specific risks (model security, adversarial attacks, data pipeline integrity, prompt injection).
- Background in stress testing methodologies from high-stakes environments.
- Experience presenting to boards, executive risk committees, or senior leadership.
- Experience with GRC platforms and risk management tooling (OneTrust, ServiceNow GRC, Archer, MetricStream).
Culture & Benefits
- Competitive compensation and benefits, including optional equity donation matching.
- Generous vacation and parental leave policies.
- Flexible working hours and a collaborative office space.
- Focus on high-impact AI research within a single cohesive team.
- Emphasis on treating AI research as an empirical science, similar to physics and biology.
- Strong value placed on communication skills and frequent research discussions.
- Visa sponsorship available, with reasonable efforts made to assist successful candidates.