Company hidden
2 days ago

Director of AI Quality & Safety (Legal & Regulatory)

Work format
Hybrid
Employment type
Full-time
Seniority
Director
English
B2
Country
UK/Netherlands
Listing from Hirify Global, a directory of international tech companies.

Описание вакансии


TL;DR

Director of AI Quality & Safety (Legal & Regulatory): establish and operationalize comprehensive quality and safety frameworks for AI-enabled products, content systems, and agentic workflows, with an emphasis on evaluation, reliability, compliance, and governance. The focus is on designing standardized evaluation pipelines for LLMs and RAG systems, enforcing SLAs for AI performance, ensuring content traceability, and aligning with regulatory standards such as the EU AI Act.

Location: Hybrid (8 days/month) in Alphen aan den Rijn, Netherlands or London, UK. Applicants may be required to appear onsite at a hirify.global office.

Company

hirify.global Legal & Regulatory is executing its North Star to become the Intelligent Orchestration Platform for legal and regulatory work.

What you will do

  • Design and implement evaluation frameworks for AI models, LLMs, RAG pipelines, and agentic systems, including benchmark datasets and continuous pipelines.
  • Extend software QA practices to AI outputs, define AI-specific SLAs/SLOs, and integrate quality gates into CI/CD.
  • Establish content validation frameworks with editorial experts, ensuring traceability to authoritative sources.
  • Define and enforce AI safety standards and bias mitigation, and ensure compliance with regulations such as the EU AI Act.
  • Build dashboards, track KPIs across AI lifecycle, benchmark against industry standards, and drive continuous improvement.
  • Define operating model, introduce review boards, and lead a small team for AI quality and safety.

Requirements

  • 10+ years in product quality, AI/ML systems, or a related discipline, with 3–5 years in AI-focused roles.
  • Demonstrated experience designing AI/ML evaluation frameworks (e.g., LLM evaluation, model validation).
  • Strong understanding of AI architectures (LLMs, RAG, agents), software quality engineering, and content validation.
  • Experience in regulatory or compliance-heavy environments (legal, financial, healthcare preferred).
  • Proven cross-functional leadership at senior levels; systems thinking, analytical rigor, risk awareness.

Nice to have

  • Familiarity with AI governance standards (e.g., EU AI Act).
  • Experience with human-in-the-loop systems, experimentation platforms, or AI observability.
  • Advanced degree in Computer Science, Data Science, Law, or a related field.

Culture & Benefits

  • Matrixed organization emphasizing influence without authority and pragmatic execution.
  • Focus on metric-driven decisions, high standards for quality, and thought leadership.

Hiring process

  • Interviews without AI tools or external prompts; no virtual backgrounds; may include in-person onsite interviews.
  • Use of AI-generated responses or third-party support disqualifies candidates.
