Company hidden
Updated 4 hours ago

Senior DevOps Engineer (Data Platform)

Work type
full-time
Grade
senior
English
B2
Country
Israel
Vacancy from Hirify Global, our list of international tech companies


Job description

TL;DR

Senior DevOps Engineer (Data Platform): Driving data infrastructure strategy and establishing standardized patterns for AI/ML workloads, with an emphasis on architectural decisions across data and engineering teams. Focus on building scalable analytics capabilities and ensuring performance and cost-effectiveness for critical analytics systems.

Location: Tel Aviv, Israel

Company

hirify.global is the leading platform to create and grow games and interactive experiences.

What you will do

  • Design and implement scalable data infrastructure across multiple environments using Kubernetes, orchestration platforms, and IaC to power the AI, ML, and analytics ecosystem
  • Shape the technical vision and roadmap for data infrastructure capabilities, aligning DevOps, Data Engineering, and ML teams around common patterns and practices
  • Improve data pipeline deployment, monitoring tools, and self-service capabilities that empower data teams to deliver insights faster with higher reliability
  • Build and optimize infrastructure that supports diverse data workloads from real-time streaming to batch processing, ensuring performance and cost-effectiveness for critical analytics systems
  • Collaborate with engineering leaders across data teams, champion modern infrastructure practices, and mentor team members to elevate how data systems are built, deployed, and operated at scale

Requirements

  • 3+ years of hands-on DevOps experience building, shipping, and operating production systems.
  • Coding proficiency in at least one language (e.g., Python or TypeScript); able to build production-grade automation and tools.
  • Deep experience with AWS, GCP, or Azure (core services, networking, IAM).
  • Strong understanding of Kubernetes as a system (routing/networking, scaling, security, observability, upgrades), with proven experience integrating data-centric components (e.g., Kafka, RDS, BigQuery, Aerospike).
  • Experience designing and implementing infrastructure automation using tools such as Terraform, Pulumi, or CloudFormation.
  • Experience implementing CI/CD pipelines and advanced delivery strategies using tools such as Argo CD / Argo Rollouts, GitHub Actions, or similar.
  • Experience with metrics, logs, and traces; actionable alerting and SLOs using tools such as Prometheus, Grafana, ELK/EFK, OpenTelemetry, or similar.

Nice to have

  • Demonstrated success building and optimizing data pipeline deployment using modern tools (Airflow, Prefect, Kubernetes operators) and implementing GitOps practices for data workloads
  • Track record of creating and improving self-service platforms, deployment tools, and monitoring solutions that measurably enhance data engineering team productivity
  • Extensive experience designing infrastructure for data-intensive workloads including streaming platforms (Kafka, Kinesis), data processing frameworks (Spark, Flink), storage solutions, and comprehensive observability systems

Culture & Benefits

  • Committed to fostering an inclusive, innovative environment and celebrating employees across age, race, color, ancestry, national origin, religion, disability, sex, gender identity or expression, and sexual orientation.
