Назад
Company hidden
24 часа назад

Scientist/Sr. Scientist, AI Safety (AI)

Формат работы
onsite
Тип работы
fulltime
Грейд
middle/senior
Английский
b2
Страна
UK
Вакансия из списка Hirify.GlobalВакансия из Hirify Global, списка международных tech-компаний
Для мэтча и отклика нужен Plus

Мэтч & Сопровод

Для мэтча с этой вакансией нужен Plus

Описание вакансии

Текст:
/

TL;DR

Scientist/Sr. Scientist, AI Safety (AI): Building and implementing bespoke safety strategies for scientific superintelligence integrated with automated physical labs with an accent on evaluations and safeguards for scientific risks. Focus on developing technical collateral, understanding model capabilities across scientific and non-scientific domains, and conducting research for capability evaluation.

Location: Onsite in London, UK

Company

hirify.global is the world’s first scientific superintelligence platform and autonomous lab for life, chemistry, and materials science, pioneering a new age of boundless discovery by applying AI to every aspect of the scientific method.

What you will do

  • Build evaluations to test for scientific risks (both known and novel) from cutting edge scientific models integrated with automated physical labs.
  • Develop initial proof-of-concept safeguards, such as ML models to detect and block unsafe behavior from scientific AI models and physical lab outputs.
  • Understand a range of model capabilities, across primarily scientific but also non-scientific domains, to inform Lila's broader safety strategy.
  • Conduct broader, high-quality research efforts for scientific capability evaluation and restriction as needed.

Requirements

  • Hold a Bachelor's degree in a technical field (e.g., computer science, engineering, machine learning, mathematics, physics, statistics), or have related experience.
  • Possess strong programming skills in Python and experience with ML frameworks (e.g., Inspect) for large-scale evaluation and scaffolded testing.
  • Have experience in building evaluations or conducting red-teaming exercises for CBRN/cyber risks or frontier model capabilities.
  • Demonstrate experience in designing and/or implementing AI safety frameworks for frontier AI companies.
  • Be able to communicate complex technical concepts and concerns effectively to non-expert audiences.

Nice to have

  • Hold a Masters or PhD in a field relevant to safety evaluations of AI models in scientific domains, or a technical field.
  • Have publications in AI safety / evaluations / model behavior in top ML / AI conferences (NeurIPS, ICML, ICLR, ACL) or model release system cards.
  • Possess experience researching risks from novel science (e.g., biosecurity, computational biology) or working with narrow scientific tools.

Culture & Benefits

  • hirify.global is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.

Будьте осторожны: если работодатель просит войти в их систему, используя iCloud/Google, прислать код/пароль, запустить код/ПО, не делайте этого - это мошенники. Обязательно жмите "Пожаловаться" или пишите в поддержку. Подробнее в гайде →

Текст вакансии взят без изменений

Источник - загрузка...