Company hidden
Posted 5 hours ago

Software Engineer, Cloud Inference Safeguards (AI)

$405,000 - $485,000 USD
Work format
hybrid
Employment type
full-time
Seniority
middle/senior
English
B2
Country
US
Relocation
US
A vacancy from Hirify.Global, a list of international tech companies
Job description


TL;DR

Software Engineer, Cloud Inference Safeguards (AI): Building and operating safety, oversight, and intervention mechanisms that protect AI models on third-party cloud service provider (CSP) platforms, with an emphasis on monitoring for misuse, enforcing policy, and ensuring data residency and privacy. Focus on real-time safeguards infrastructure, telemetry pipelines, and enforcement hooks to maintain a high safety bar.

Location: San Francisco, CA or Seattle, WA. Expect to be in one of our offices at least 25% of the time.

Salary: $405,000 - $485,000 USD

Company

hirify.global’s mission is to create reliable, interpretable, and steerable AI systems, ensuring AI is safe and beneficial for users and society.

What you will do

  • Build, deploy, and operate real-time safeguards infrastructure, including classifiers, rate limits, enforcement actions, and intervention hooks, embedded directly in the third-party CSP inference serving path.
  • Design and maintain the data residency and privacy architecture for safeguards signals on CSP platforms, ensuring abuse detection and model behavior monitoring while honoring regionalization boundaries and enterprise contractual commitments.
  • Develop telemetry, logging, and evaluation pipelines that provide Safeguards, Policy, and T&S operational teams with situational awareness over CSP traffic.
  • Identify the lowest-impact points in the CSP serving stack at which to gather signals or introduce interventions without degrading latency or stability, or complicating the overall architecture.
  • Own on-call responsibilities, drive root-cause analyses and postmortems for safeguards incidents on CSP platforms, and build systems that reduce the human intervention required to keep AI models safe.
  • Collaborate with Safeguards research, Policy & Enforcement, the Cloud Inference team, and CSP partner contacts to translate detection research and policy decisions into production enforcement within a partner’s cloud.
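
To give a flavor of the first responsibility above, here is a minimal, purely illustrative Python sketch of an enforcement hook in an inference serving path, combining a token-bucket rate limiter with a stub misuse classifier. All names and the classifier logic are hypothetical, not part of any real system described in this posting.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    """Simple token-bucket rate limiter (illustrative)."""
    rate: float       # tokens refilled per second
    capacity: float   # maximum burst size
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def __post_init__(self) -> None:
        self.tokens = self.capacity  # start full

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

def misuse_classifier(prompt: str) -> bool:
    """Stub classifier: flags prompts containing a blocked keyword."""
    return "forbidden" in prompt.lower()

def safeguarded_inference(prompt: str, bucket: TokenBucket,
                          model=lambda p: f"echo: {p}") -> dict:
    """Enforcement hook wrapping the model call in the request path."""
    if not bucket.allow():
        return {"status": "rate_limited"}
    if misuse_classifier(prompt):
        return {"status": "blocked"}
    return {"status": "ok", "output": model(prompt)}
```

In a real serving stack the classifier would be a learned model, the limiter would be distributed, and decisions would also emit telemetry; the sketch only shows how enforcement can sit inline on the request path.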

Requirements

  • Bachelor’s degree in Computer Science, Software Engineering, or comparable experience.
  • 4–10+ years of experience in high-scale, high-reliability software development, ideally with exposure to trust & safety, anti-abuse, fraud, or integrity systems.
  • Proficient in Python and comfortable working across the stack—from request-path services to data pipelines to internal tooling.
  • Able to think adversarially and design defense in depth rather than single points of enforcement.
  • Experience scaling infrastructure to accommodate rapid traffic growth while keeping latency and reliability within tight budgets.

Nice to have

  • Experience building trust and safety, anti-spam, fraud, or abuse detection and mitigation mechanisms for AI/ML systems, or the infrastructure to support these systems at scale.
  • Experience with machine learning serving infrastructure (GPUs/TPUs, inference servers, load balancing) and the operational realities of running models in production.
  • Familiarity with major cloud platform internals (IAM, network/service perimeter controls, regional resource constraints, cloud-native logging/monitoring), or experience shipping software that runs inside a partner's cloud rather than your own.

Culture & Benefits

  • Competitive compensation and benefits.
  • Optional equity donation matching.
  • Generous vacation and parental leave.
  • Flexible working hours.
  • Lovely office space in which to collaborate with colleagues.

Hiring process

  • We encourage you to apply even if you do not believe you meet every single qualification.
  • We think AI systems like the ones we're building have enormous social and ethical implications.
  • We strive to include a range of diverse perspectives on our team.
