Company hidden
2 hours ago

Staff + Senior Software Engineer (AI)

$300,000–$485,000
Work format
hybrid
Employment type
fulltime
Grade
senior/principal
English
B2
Country
US
Vacancy from Hirify.Global, a list of international tech companies

Job description

TL;DR

Staff + Senior Software Engineer (AI): Scaling and optimizing the Claude AI model to serve massive audiences of developers and enterprise companies across AWS, GCP, Azure, and future cloud service providers. Focus on navigating platform differences, building robust cross-provider abstractions, and making smart infrastructure decisions to ensure cost-effectiveness at massive scale.

Location: Hybrid, expected to be in San Francisco, New York City, or Seattle offices at least 25% of the time.

Salary: $300,000–$485,000 USD

Company

hirify.global's mission is to create reliable, interpretable, and steerable AI systems, aiming for safe and beneficial AI for users and society as a whole.

What you will do

  • Design and build infrastructure that serves Claude across multiple cloud service providers (CSPs).
  • Collaborate with CSP partner engineering teams to resolve operational issues and influence provider roadmaps.
  • Design and evolve CI/CD automation systems for reliable model version shipments.
  • Design interfaces and tooling abstractions across CSPs to enable cost-effective inference management.
  • Contribute to capacity planning and autoscaling strategies for dynamic supply-demand matching.
  • Optimize inference cost and performance across providers by designing workload placement and routing systems (a hypothetical sketch of such an abstraction follows this list).
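A purely illustrative sketch of the kind of cross-provider abstraction and cost-aware routing the responsibilities above describe; every class, backend name, and cost figure here is hypothetical and not taken from the posting:

```python
# Hypothetical sketch: a minimal cross-CSP inference abstraction with
# cost-aware routing. Names and numbers are invented for illustration only.
from dataclasses import dataclass
from typing import Protocol


class InferenceBackend(Protocol):
    """Interface each cloud-specific backend (e.g. AWS, GCP, Azure) would implement."""
    name: str
    cost_per_1k_tokens: float   # hypothetical per-provider cost metric
    available_capacity: int     # free concurrent request slots

    def generate(self, prompt: str, max_tokens: int) -> str: ...


@dataclass
class StaticBackend:
    """Toy backend used only to make the sketch runnable."""
    name: str
    cost_per_1k_tokens: float
    available_capacity: int

    def generate(self, prompt: str, max_tokens: int) -> str:
        return f"[{self.name}] completion for: {prompt[:30]}..."


def route_request(backends: list[InferenceBackend], prompt: str, max_tokens: int) -> str:
    """Send the request to the cheapest backend that still has capacity."""
    candidates = [b for b in backends if b.available_capacity > 0]
    if not candidates:
        raise RuntimeError("no backend has free capacity")
    best = min(candidates, key=lambda b: b.cost_per_1k_tokens)
    best.available_capacity -= 1
    return best.generate(prompt, max_tokens)


if __name__ == "__main__":
    fleet = [
        StaticBackend("aws-trainium", cost_per_1k_tokens=0.8, available_capacity=4),
        StaticBackend("gcp-tpu", cost_per_1k_tokens=1.0, available_capacity=2),
        StaticBackend("azure-gpu", cost_per_1k_tokens=1.2, available_capacity=0),
    ]
    print(route_request(fleet, "Explain cross-provider routing.", max_tokens=128))
```

In the actual role such an abstraction would presumably sit in front of real AWS/GCP/Azure serving stacks and feed capacity planning and autoscaling decisions; the sketch only shows the interface-plus-routing shape the bullets refer to.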

Requirements

  • Significant software engineering experience with high-performance, large-scale distributed systems.
  • Experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code, or container orchestration.
  • Strong interest in inference.
  • Thrive in cross-functional collaboration with internal teams and external partners.
  • Fast learner who can quickly ramp up on new technologies, hardware platforms, and provider ecosystems.
  • Highly autonomous and self-driven, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work.
  • Location: Must be able to work from San Francisco, New York City, or Seattle offices (hybrid, 25% onsite).
  • English: B2 required.

Nice to have

  • Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms.
  • Background in building platform-agnostic tooling or abstraction layers across cloud providers.
  • Hands-on experience with capacity management, cost optimization, or resource planning at scale.
  • Strong familiarity with LLM inference optimization, batching, caching, and serving strategies.
  • Experience with machine learning infrastructure, including GPUs, TPUs, Trainium, or other AI accelerators.

Culture & Benefits

  • Work as a single cohesive team on large-scale AI research efforts.
  • Value impact and advancing long-term goals of steerable, trustworthy AI.
  • Extremely collaborative group with frequent research discussions.
  • Competitive compensation and benefits, optional equity donation matching.
  • Generous vacation and parental leave, flexible working hours.
  • Lovely office space to collaborate with colleagues.
  • Visa sponsorship is available.


The vacancy text is reproduced without changes
