TL;DR
Researcher, Preparedness (AI Safety): Identify, track, and prepare for catastrophic risks from frontier AI models, with an emphasis on monitoring and predicting evolving capabilities and mitigating the associated risks. The focus is on designing new evaluations grounded in real threat models (including CBRN and cyber), maintaining existing evaluations, and producing auditable artifacts for high-stakes launches.
Location: Must be based in San Francisco, US. This role is restricted to individuals who are U.S. citizens, U.S. legal permanent residents, individuals granted asylum status, or admitted to the United States as refugees, due to U.S. Export Administration Regulations.
Salary: $310K – $460K
Company
hirify.global is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.
What you will do
- Identify emerging AI safety risks and new methodologies for exploring their impact.
- Build and continuously refine evaluations of frontier AI models to assess identified risks.
- Design and build scalable systems and processes to support these kinds of evaluations.
- Contribute to the refinement of risk management and overall best practice guidelines for AI safety evaluations.
Requirements
- Passionate and knowledgeable about short-term and long-term AI safety risks.
- Demonstrated ability to think outside the box and a robust “red-teaming mindset”.
- Experience in ML research engineering, ML observability and monitoring, or creating large language model-enabled applications.
- Able to operate effectively in a dynamic and extremely fast-paced research environment, as well as scope and deliver projects end-to-end.
- Must be a U.S. person (citizen, legal permanent resident, asylee, or refugee) for regulatory compliance.
Nice to have
- First-hand experience in red-teaming systems.
- A good understanding of the societal aspects of AI deployment.
- Excellent communication skills and the ability to work cross-functionally.
Culture & Benefits
- Work to ensure that general-purpose artificial intelligence benefits all of humanity.
- Push the boundaries of AI system capabilities and deploy those systems safely.
- Commitment to safety and human needs at the core of AI creation.
- An equal opportunity employer with a focus on diversity and inclusion.
- Committed to providing reasonable accommodations to applicants with disabilities.
Hiring process
- Background checks will be administered in accordance with applicable law, including the San Francisco Fair Chance Ordinance.