TL;DR
Staff Red Team Engineer (AI Safeguards): Conduct comprehensive adversarial testing to uncover vulnerabilities in deployed AI systems and products, with an emphasis on technical infrastructure vulnerabilities, emergent risks from advanced AI capabilities, and novel abuse vectors unique to AI systems. Focus on simulating sophisticated threat actors, chaining multiple attack vectors, and developing automated testing frameworks for continuous assessment.
Location: Remote-friendly, with required travel to our offices in San Francisco, CA, or Washington, DC. We expect all staff to be in one of our offices at least 25% of the time. Visa sponsorship is available.
Salary: $300,000–$405,000 USD
Company
hirify.global is a public benefit corporation focused on creating reliable, interpretable, and steerable AI systems that are safe and beneficial for society.
What you will do
- Conduct comprehensive adversarial testing across hirify.global’s product surfaces, developing creative attack scenarios.
- Research and implement novel testing approaches for emerging AI capabilities (agent systems, tool use).
- Design and execute 'full kill chain' attacks emulating real-world threat actors.
- Build and maintain systematic testing methodologies and automated testing frameworks.
- Collaborate with Product, Engineering, and Policy teams to translate findings into improvements.
- Help establish metrics for measuring the effectiveness of detecting novel abuse.
Requirements
- Demonstrated experience in penetration testing, red teaming, or application security.
- Strong technical skills in web application security, including hands-on expertise with security testing tools.
- Track record of discovering novel attack vectors and chaining vulnerabilities.
- Public body of work such as CVEs, blog posts, or bug bounty reports.
- Adaptability to understand and build engagements around emerging threats.
- Strong written and verbal communication skills.
- Proven ability to think like an attacker.
- At least a Bachelor's degree in a related field or equivalent experience.
Nice to have
- Experience with AI/ML security or adversarial machine learning.
- Experience testing API security and rate limiting systems.
- Background in testing business logic vulnerabilities and authorization bypass.
- Familiarity with distributed systems and infrastructure security.
- Understanding of AI safety considerations beyond traditional security.
Culture & Benefits
- Work as a single cohesive team on a few large-scale AI research efforts, valuing impact over specific puzzles.
- Highly collaborative group with frequent research discussions.
- Competitive compensation and benefits.
- Optional equity donation matching.
- Generous vacation and parental leave.
- Flexible working hours.