Company hidden
11 days ago

AI Safety Policy & Operations (AI)

Work format
remote (Global)
Employment type
fulltime
English
C1
Vacancy from Hirify.Global, a list of international tech companies

Job description

TL;DR

AI Safety Policy & Operations (AI): Designing and evolving safety policies for AI audio, image, video, and agentic systems, with an emphasis on global regulatory alignment and automation. The role focuses on building AI-powered systems for detection and moderation and on integrating safety into the product development lifecycle.

Location: This role is remote and can be performed from anywhere in the world. However, to facilitate working with the Safety team, we prefer candidates based in the GMT to GMT+3 time zones or the UK. If you prefer, you can work from our offices in Dublin or London.

Company

hirify.global is a research and product company defining the frontier of audio AI, aiming to build the most important audio AI platform in the world.

What you will do

  • Design and evolve safety policies for audio AI, image/video AI, and agentic safety, aligned with global regulatory developments (ISO 42001, the EU AI Act, the DSA, US state laws).
  • Build scalable, AI-powered systems and workflows that dramatically reduce response times and increase policy coverage.
  • Partner with Safety Engineers to translate policy requirements into automated detection, moderation, and enforcement systems (an illustrative sketch follows this list).
  • Drive cross-functional safety integration with product, engineering, legal, and operations teams.
  • Respond to safety policy escalations, partnering with moderation and investigations teams to resolve complex incidents.
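
To make the detection and enforcement bullet above more concrete, here is a minimal sketch of how a policy category might be mapped to automated actions. The categories, thresholds, and action names are assumptions invented for illustration; they are not the company's actual policy taxonomy or tooling.

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    REVIEW = "human_review"
    BLOCK = "block"


@dataclass
class PolicyRule:
    category: str            # policy category the rule covers
    review_threshold: float  # detector score that triggers human review
    block_threshold: float   # detector score that triggers automatic blocking


# Hypothetical policy taxonomy; real categories and thresholds would come from policy review.
RULES = {
    "voice_cloning_without_consent": PolicyRule("voice_cloning_without_consent", 0.5, 0.9),
    "synthetic_media_of_minors": PolicyRule("synthetic_media_of_minors", 0.2, 0.6),
}


def enforce(category: str, score: float) -> Action:
    """Map a detector score for one policy category to an enforcement action."""
    rule = RULES[category]
    if score >= rule.block_threshold:
        return Action.BLOCK
    if score >= rule.review_threshold:
        return Action.REVIEW
    return Action.ALLOW


if __name__ == "__main__":
    print(enforce("voice_cloning_without_consent", 0.95))  # Action.BLOCK
    print(enforce("synthetic_media_of_minors", 0.3))       # Action.REVIEW

In practice the scores would come from production classifiers and the thresholds from calibrated policy reviews; the sketch only shows the shape of a policy-to-enforcement mapping.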

Requirements

  • Broad experience across Trust & Safety: policy, operations, investigations, and content moderation.
  • Track record of owning and delivering safety outcomes end-to-end, ideally in fast-moving, engineering-first environments.
  • Deep familiarity with the global AI regulatory landscape: EU AI Act, DSA, US state laws, and emerging frameworks.
  • Technically conversant: comfortable with dashboards, SQL, and ML concepts, and able to read Python automation (see the example script after this list).
  • Strong risk calibration to balance user safety with product velocity.
  • Exceptional communicator who can translate complex safety considerations for various stakeholders.
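
As a rough indication of the "able to read Python automation" bar, the snippet below is the kind of small SQL-plus-Python script the role assumes a candidate can follow. The table name, columns, and escalation threshold are invented for the example; any real moderation schema would differ.

import sqlite3

# In-memory database standing in for a real moderation events store.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE moderation_events (item_id TEXT, category TEXT, score REAL, reviewed INTEGER)"
)
conn.executemany(
    "INSERT INTO moderation_events VALUES (?, ?, ?, ?)",
    [
        ("a1", "audio_deepfake", 0.97, 0),
        ("a2", "audio_deepfake", 0.41, 0),
        ("a3", "impersonation", 0.88, 1),
    ],
)

# Pull unreviewed items whose detector score exceeds an (assumed) escalation threshold.
rows = conn.execute(
    "SELECT item_id, category, score FROM moderation_events "
    "WHERE reviewed = 0 AND score > 0.9 "
    "ORDER BY score DESC"
).fetchall()

for item_id, category, score in rows:
    print(f"escalate {item_id}: {category} (score={score:.2f})")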

Nice to have

  • Audio or voice-specific Trust & Safety experience (voice cloning, synthetic media, audio deepfakes).
  • Experience in engineering-first organizations where safety shipped alongside product.
  • Background designing safety frameworks for enterprise customers or API platforms.
  • Familiarity with AI/ML pipelines and how to build guardrails into model deployment (a minimal guardrail sketch follows below).
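
For the guardrails point above, here is a hypothetical sketch of a wrapper pattern sometimes used when deploying a model behind safety checks: both the prompt and the output pass through a policy check before anything is returned. The classifier and the model here are stand-in stubs, not real services.

from typing import Callable


def looks_unsafe(text: str) -> bool:
    """Toy stand-in for a policy classifier; a real one would be an ML model or service."""
    return any(term in text.lower() for term in ("clone this voice", "impersonate"))


def with_guardrails(generate: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a generation function so unsafe prompts and outputs are refused before release."""
    def guarded(prompt: str) -> str:
        if looks_unsafe(prompt):
            return "[refused: prompt violates safety policy]"
        output = generate(prompt)
        if looks_unsafe(output):
            return "[refused: output violates safety policy]"
        return output
    return guarded


if __name__ == "__main__":
    fake_model = lambda prompt: f"generated audio description for: {prompt}"
    safe_generate = with_guardrails(fake_model)
    print(safe_generate("narrate a bedtime story"))
    print(safe_generate("clone this voice without consent"))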

Culture & Benefits

  • Innovative culture pushing the boundaries of what’s possible in AI.
  • Opportunities to drive impact beyond your immediate role and responsibilities.
  • Annual discretionary stipend for professional development.
  • Annual discretionary stipend for social travel to meet up with colleagues.
  • Annual company offsite in new locations (e.g., Croatia and Italy).
  • Monthly co-working stipend if you’re not located near one of our main hubs.


The job posting text is reproduced without changes.
