TL;DR
Offensive Security Research Engineer (AI): Help mitigate the risks of AI systems by understanding and preventing LLM misuse, with an emphasis on exploitation, remediation, and developing defensive strategies. The role focuses on automating traditional attack techniques, identifying vulnerabilities at scale, and implementing a forward-looking security plan for AI.
Location: Must be based in San Francisco, CA, and work from the office at least 25% of the time. Visa sponsorship is available.
Salary: $320,000 – $405,000 USD
Company
hirify.global is a public benefit corporation headquartered in San Francisco with a mission to create reliable, interpretable, and steerable AI systems that are safe and beneficial for users and society.
What you will do
- Triage discovered vulnerabilities and coordinate with the external and open-source community on remediation efforts.
- Write scaffolds that automate traditional attack techniques, clarifying which defensive problems to prioritize (a minimal sketch follows this list).
- Research how adversaries might misuse LLMs to identify and exploit vulnerabilities at scale.
- Develop promising defensive strategies to mitigate the harmful misuse of models.
- Collaborate with a small, senior team to implement a forward-looking security plan.
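To make the scaffolding duty above concrete, here is a minimal sketch of the kind of loop such work might involve. Everything in it is a hypothetical stand-in, not the team's actual tooling: `query_model` substitutes for whatever LLM API is in use, `test_oracle` substitutes for a system under test, and the "attack" is a toy payload loop against a local oracle.

```python
"""Illustrative attack-automation scaffold: loop an LLM over a test oracle.

All names here (`query_model`, `test_oracle`) are hypothetical stand-ins.
"""

from dataclasses import dataclass


@dataclass
class Finding:
    payload: str
    response: str


def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned candidate payload."""
    return "payload-candidate"


def test_oracle(payload: str) -> str | None:
    """Stand-in for the system under test; flags a toy 'vulnerable' input."""
    return "crash" if "payload" in payload else None


def run_scaffold(seed: str, iterations: int = 5) -> list[Finding]:
    """Ask the model for a candidate, test it, keep hits, feed results back."""
    findings: list[Finding] = []
    prompt = seed
    for _ in range(iterations):
        candidate = query_model(prompt)
        result = test_oracle(candidate)
        if result is not None:
            findings.append(Finding(payload=candidate, response=result))
        # Feed the outcome back so the next candidate can build on it.
        prompt = f"{seed}\nLast candidate: {candidate!r} -> {result!r}"
    return findings


if __name__ == "__main__":
    for finding in run_scaffold("Generate a test input for the toy oracle."):
        print(finding)
```

The point of the sketch is the shape, not the content: a scaffold wraps a model call in a propose-test-record loop so that a repetitive manual technique can run unattended at scale.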
Requirements
- 3+ years of experience in pentesting, vulnerability research, or other offensive security roles.
- Senior-level knowledge in at least one area: reverse engineering, network security, exploitation, or physical security.
- Demonstrated commitment to thorough work and high-quality output.
- Software engineering experience.
- Proven ability to bring clarity and ownership to ambiguous technical problems.
- Demonstrated success in leading cross-functional security initiatives and navigating complex organizational dynamics.
- Bachelor's degree in a related field or equivalent experience.
Nice to have
- Published research on computer security, language modeling, or related topics, or given talks at conferences such as DEF CON, Black Hat, or CCC.
- Familiarity with large language models and how they work, e.g., having written agent scaffolds.
- Reported CVEs or received bug bounty awards for reported vulnerabilities.
- Contributed to open-source projects in LLM- or security-adjacent repositories.
Culture & Benefits
- Competitive compensation and benefits, including optional equity donation matching.
- Generous vacation and parental leave.
- Flexible working hours and a lovely office space for collaboration.
- Work as a single cohesive team on large-scale AI research efforts.
- An impact-driven culture focused on steerable, trustworthy AI, treating AI research as an empirical science.
- Highly collaborative group with frequent research discussions.