Company hidden
Posted 2 days ago

Senior Solutions Architect (AI Infrastructure)

Work format
onsite
Employment type
fulltime
Grade
senior
English
B2
Country
US
Listing from Hirify.Global, a list of international tech companies
Job description


TL;DR

Senior Solutions Architect (AI Infrastructure): translate workload requirements into scalable, production-ready infrastructure designs for AI customers and hyperscale partners, with an emphasis on GPU compute, backend fabric, frontend networking, storage systems, and cluster architecture. The focus is on bridging customer ambitions with hirify.global’s capabilities and feeding customer needs back into product and platform roadmaps.

Location: Onsite in Seattle, US

Company

hirify.global is building high-performance GPU infrastructure purpose-built for AI, partnering with AI-native companies and hyperscalers to deliver scalable, reliable, and performant compute environments for training and inference at scale.

What you will do

  • Engage with AI-native startups, enterprises, and hyperscalers to understand their training and inference workloads, model sizes, scaling strategies, and data pipeline requirements, translating these into infrastructure specifications.
  • Architect end-to-end GPU cluster solutions, including GPU selection, backend networking (InfiniBand / RoCE / Ethernet fabrics), frontend networking, storage systems (parallel file systems, object storage, NVMe tiers), and rack density considerations.
  • Produce high-level and low-level design (HLD/LLD) documentation and reference architectures.
  • Work with hyperscale partners to align on connectivity, interconnect, and hybrid deployments, integrating with public cloud networking and storage architectures.
  • Partner with infrastructure engineering, deployment, and operations teams to ensure designs are executable.

Requirements

  • 6–10+ years in solutions architecture, infrastructure engineering, or AI/HPC environments.
  • Strong knowledge of GPU-based systems and distributed training infrastructure.
  • Experience with backend networking (InfiniBand, RoCE, high-speed Ethernet).
  • Solid understanding of storage architectures for AI workloads.
  • Experience designing large-scale compute clusters.
  • Customer-facing experience with strong technical communication skills.

Nice to have

  • Experience working with hyperscalers (AWS, Azure, GCP) or large colocation providers.
  • Familiarity with NCCL, RDMA, CUDA, and distributed training frameworks.
  • Experience producing formal architecture documentation.
  • Understanding of cost modeling and capacity planning.

Culture & Benefits

  • Opportunity to work at the intersection of GPUs, high-speed networking, storage architecture, and production AI workloads.
  • Partner with AI-native companies and hyperscalers.
