Company hidden
4 days ago

AI Infrastructure Engineer

Work format
hybrid
Employment type
full-time
Seniority
senior
English
B2
Country
UK/Ireland/Germany
Vacancy from Hirify.Global, a list of international tech companies

Job description


TL;DR

AI Infrastructure Engineer (AI/LLM): implement and scale training pipelines for large transformer and LLM models, from data ingestion through distributed training and evaluation, with an emphasis on inference services, GPU-level performance, and production reliability. The focus is on optimizing kernels, building low-latency autoscaling services, and collaborating with ML scientists on cutting-edge methods.

Dublin, Ireland. Hybrid working policy: in the office at least three days per week.

Company

hirify.global is an AI customer-service company building advanced AI agents like Fin to transform customer experiences for global businesses.

What you will do

  • Implement and scale training pipelines for large transformer and LLM models, from data ingestion and preprocessing through distributed training and evaluation.
  • Build and optimize inference services for low-latency, high-reliability customer experiences, including autoscaling, routing, and fallbacks.
  • Work on GPU-level performance: tuning kernels, improving utilization, and identifying bottlenecks across the training and inference stacks.
  • Collaborate with ML scientists to implement cutting-edge training and inference methods and bring them to production.
  • Hire, mentor, and develop other engineers on the team.
  • Raise technical standards, reliability, and operational excellence across hirify.global’s AI platform.

Requirements

  • 5+ years of experience in software engineering with a track record of shipping high-quality products or platforms.
  • Degree in Computer Science, Computer Engineering, or related field (or equivalent experience).
  • Hands-on experience with model training (especially transformers and LLMs), model inference at scale, or low-level GPU work (e.g., CUDA, Triton).
  • Comfortable working in production environments at meaningful scale.
  • Deep knowledge of at least one programming language (e.g., Python, Ruby, Java, Go).
  • Strong communication skills and enjoyment of collaborating with engineers and non-engineers.

Nice to have

  • Experience at AI-native companies training or running inference for their own models.
  • Running training or inference workloads on Kubernetes.
  • Experience with AWS or other major cloud providers.
  • Production experience with Python in ML or infrastructure contexts.
  • Demonstrated passion for technology through personal projects, open source, or content.

Culture & Benefits

  • Competitive salary and equity.
  • Lunch every weekday, snacks, stocked kitchen.
  • Regular compensation reviews.
  • Unlimited access to Claude Code and AI tools; experimentation encouraged.
  • Pension scheme with match up to 4%, life assurance, comprehensive health and dental insurance.
  • Flexible paid time off, paid maternity/paternity leave, Cycle-to-Work Scheme.
  • MacBooks standard (Windows for some roles).

Be careful: if an employer asks you to sign in to their system via iCloud/Google, send a code/password, or run code/software, do not do it: these are scammers. Be sure to click "Report" or contact support. More details in the guide →