Company hidden
Posted 6 days ago

Lead AI Engineer (ML Systems)

Work format: onsite
Employment type: full-time
Grade: lead
English: B2
Country: US
Vacancy from the Hirify RU Global list, companies with Eastern European roots.

Job description

TL;DR

Lead AI Engineer (ML Systems): Designing and building engineering systems for model inference, fine-tuning, and evaluation to deploy and evolve AI research models reliably in production. Focus on optimizing inference for latency and throughput, implementing best practices for model lifecycle management, and providing technical leadership for ML system design.

Location: Onsite in San Francisco or Palo Alto, California

Company

hirify.global is seeking a Lead AI Engineer to join its AI Research Incubation Team, focusing on production ML systems.

What you will do

  • Design, build, and maintain model inference and serving systems, including AI gateway integration.
  • Own and evolve fine-tuning pipelines (e.g., LoRA / PEFT) using internal model tooling.
  • Develop and maintain model evaluation, regression detection, and rollout workflows.
  • Collaborate with AI researchers to transition research models into production.
  • Optimize inference systems for latency, throughput, stability, and cost efficiency.
  • Implement best practices for model versioning, deployment, rollback, and monitoring.

Requirements

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field.
  • 5+ years of experience in software engineering, with significant ownership of backend or distributed systems.
  • Strong proficiency in Python, with experience building production services.
  • Hands-on experience with AI/ML model serving, inference pipelines, or ML systems engineering.
  • Experience designing reliable, scalable systems for production environments.
  • Familiarity with cloud platforms (AWS, GCP) and containerized environments (Docker, Kubernetes).

Nice to have

  • Experience with fine-tuning techniques such as LoRA or PEFT.
  • Familiarity with model evaluation frameworks and regression testing.
  • Experience with GPU-based workloads or ML infrastructure.

Culture & Benefits

  • Opportunity to own systems that turn research models into production AI capabilities.
  • Work at the intersection of AI research and large-scale engineering systems.
  • Shape how models are trained, deployed, evaluated, and evolved.
  • Competitive compensation, benefits, and strong long-term growth opportunities.

Stay safe: if an employer asks you to log in to their system via iCloud/Google, send them a code/password, or run code/software, do not do it - these are scammers. Be sure to click "Report" or contact support. More details in the guide →

The vacancy text is reproduced without changes.
