Company hidden

ML Ops Infrastructure Engineer (AI)

$160,000 – $220,000
Work format
remote (USA only)
Employment type
full-time
Grade
senior
English
B2
Country
US

Job description


TL;DR

ML Ops Infrastructure Engineer (AI): build and maintain the critical CI/CD pipelines, deployment systems, and testing infrastructure needed to operationalize voice-native foundation models at scale, with an emphasis on CI/CD design, model evaluation, and production reliability. The focus is on solving complex concurrency and scaling challenges so that research-to-production transitions are automated, robust, and performant.

Location: Remote (USA). International candidates may be hired via an Employer of Record (EoR) model where supported.

Salary: $160,000 – $220,000

hirify.global is a leading platform for the voice AI economy, providing real-time speech APIs used by over 200,000 developers and major organizations to build production-grade voice agents.

What you will do

  • Design and build CI/CD pipelines for ML model development, validation, and production deployment.
  • Architect and maintain model deployment pipelines that facilitate secure and reliable transitions from staging to production.
  • Implement comprehensive monitoring for model performance, including accuracy, latency, drift detection, and regression alerts.
  • Develop automated retraining pipelines triggered by data changes or performance degradation.
  • Collaborate with research engineers to enforce model quality gates and optimize model serving infrastructure for throughput and cost.
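
The "model quality gates" in the responsibilities above can be as simple as a script that compares an evaluation report against fixed thresholds and fails the CI job on any violation. A minimal sketch, assuming a JSON metrics report produced by an earlier evaluation step; the metric names and threshold values here are hypothetical, not from the posting:

```python
# Minimal sketch of a CI model quality gate. The metrics file, metric
# names, and thresholds are illustrative assumptions.
import json
import sys

# Hypothetical promotion thresholds a team might enforce.
THRESHOLDS = {
    "wer": ("max", 0.12),             # word error rate must not exceed 12%
    "p95_latency_ms": ("max", 250.0), # tail latency budget
    "accuracy": ("min", 0.90),        # minimum acceptable accuracy
}

def evaluate_gate(metrics: dict) -> list[str]:
    """Return human-readable violations; an empty list means the gate passes."""
    violations = []
    for name, (kind, bound) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            violations.append(f"{name}: metric missing from report")
        elif kind == "max" and value > bound:
            violations.append(f"{name}: {value} exceeds limit {bound}")
        elif kind == "min" and value < bound:
            violations.append(f"{name}: {value} below minimum {bound}")
    return violations

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        report = json.load(f)
    problems = evaluate_gate(report)
    for p in problems:
        print("GATE FAIL:", p)
    sys.exit(1 if problems else 0)  # nonzero exit blocks the pipeline
```

Wired into a CI stage between evaluation and deployment, the nonzero exit status is what prevents a regressed model from reaching production.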

Requirements

  • 4+ years of experience in MLOps, DevOps, or infrastructure engineering focused on ML systems.
  • Strong proficiency in Python with a focus on automation and workflow tooling.
  • Deep experience building CI/CD pipelines for software and model delivery.
  • Hands-on experience with Docker and Kubernetes for containerized workloads.
  • Proven track record of deploying and serving ML models in production environments.
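
The drift-triggered retraining described in the responsibilities often hinges on a statistic such as the Population Stability Index (PSI) computed over binned feature or score distributions. A minimal sketch; the bin frequencies below are illustrative, not production data:

```python
# Minimal sketch of a drift signal via Population Stability Index (PSI),
# comparing a live (production) histogram against a training baseline.
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """PSI over pre-binned relative frequencies; >0.2 is a common alert level."""
    score = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)  # avoid log(0) on empty bins
        a = max(a, eps)
        score += (a - e) * math.log(a / e)
    return score

baseline = [0.25, 0.25, 0.25, 0.25]  # training distribution per bin
live     = [0.40, 0.30, 0.20, 0.10]  # hypothetical production distribution

drift = psi(baseline, live)
print(f"PSI = {drift:.3f}")  # values above ~0.2 would trigger a retraining alert
```

In a monitoring pipeline, a PSI above the alert threshold would raise a regression alert or kick off the automated retraining job rather than just printing.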

Nice to have

  • Experience with model serving frameworks such as NVIDIA Triton, TensorRT, or ONNX Runtime.
  • Background in speech, audio, or real-time media ML systems.
  • Proficiency with Infrastructure as Code (Terraform or Pulumi).
  • Experience with GPU-accelerated inference optimization and profiling.

Culture & Benefits

  • Unlimited PTO and 12 paid US company holidays.
  • Holistic health benefits including medical, dental, vision, and mental health support.
  • 401(k) plan with company match.
  • Stipends for personal productivity, home office upgrades, and continuous learning.
  • Culture of high experimentation with AI tools and cross-team collaboration.
