Company hidden
6 days ago

Senior Software Engineer in Data Engineering (Game Analytics)

Work format: onsite
Employment type: full-time
Grade: senior
English: B2
Country: US
A vacancy from Hirify.Global, a list of international tech companies.

Job description

TL;DR

Senior Software Engineer in Data Engineering (Game Analytics): Building and optimizing high-throughput streaming and batch data processing services for a cutting-edge game analytics platform, with an emphasis on reliability and performance. The focus is on designing event-driven architectures, evolving event schemas, and ensuring scalability and fault tolerance of ingestion pipelines under heavy load.

Location: This is a full-time, in-office position based out of Rockstar’s newly renovated game development studio in Andover, MA.

Company

hirify.global creates world-class entertainment experiences.

What you will do

  • Design, build, and maintain high-throughput streaming and batch data processing services.
  • Develop and operate stream-based applications for real-time data transformation, enrichment, validation, and routing (see the sketch after this list).
  • Own and evolve event schemas and data contracts, including Avro schemas and Schema Registry governance.
  • Ensure scalability, fault tolerance, and performance of streaming and ingestion pipelines under heavy load.
  • Contribute to platform-level concerns such as deployment automation, observability, operational tooling, and CI/CD.
  • Participate in the design and implementation of cloud-native data infrastructure supporting real-time and batch workloads.
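
A minimal, illustrative Kafka Streams sketch in Java of the transform/enrich/route work described in the list above. The topic names, configuration values, and the validation and enrichment steps are hypothetical placeholders chosen for the example, not details taken from this posting.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Produced;

    public class GameEventRouter {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Hypothetical application id and broker address.
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "game-event-router");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Exactly-once processing, one of the delivery semantics mentioned under Requirements.
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);

            StreamsBuilder builder = new StreamsBuilder();

            // Read raw gameplay events, drop records that fail a basic validation check,
            // apply a stand-in enrichment step, and route the result to a downstream topic.
            KStream<String, String> raw =
                builder.stream("game-events-raw", Consumed.with(Serdes.String(), Serdes.String()));

            raw.filter((key, value) -> value != null && !value.isEmpty())
               .mapValues(String::trim)
               .to("game-events-clean", Produced.with(Serdes.String(), Serdes.String()));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }

In a real pipeline the String serdes would typically be replaced with Avro serdes backed by a Schema Registry; that wiring is omitted here to keep the sketch self-contained.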

Requirements

  • 5+ years of professional experience building production software systems, preferably in a distributed or data-intensive environment.
  • Strong experience with Java (and/or Scala) as well as Python in backend or data processing applications.
  • Experience designing and operating streaming systems using Kafka or Kafka Streams (or similar).
  • Experience working with event-driven architectures, including schema evolution and compatibility (see the sketch after this list).
  • Experience building real-time and/or near-real-time data pipelines at scale.
  • Solid understanding of distributed systems concepts (partitioning, fault tolerance, backpressure, exactly-once/at-least-once semantics).
  • Familiarity with Avro, Protobuf, or similar serialization formats and schema governance practices.
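
For the schema evolution and compatibility point above, a minimal sketch of verifying a backward-compatible change with Avro's SchemaCompatibility API in Java. The SessionStart schemas are hypothetical examples, not schemas used by this team.

    import org.apache.avro.Schema;
    import org.apache.avro.SchemaCompatibility;

    public class SchemaEvolutionCheck {
        public static void main(String[] args) {
            // v1: the schema that existing records were written with.
            Schema writer = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"SessionStart\",\"fields\":["
              + "{\"name\":\"playerId\",\"type\":\"string\"}]}");

            // v2: adds an optional field with a default, which keeps the change
            // backward compatible (a v2 reader can still decode v1 records).
            Schema reader = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"SessionStart\",\"fields\":["
              + "{\"name\":\"playerId\",\"type\":\"string\"},"
              + "{\"name\":\"platform\",\"type\":[\"null\",\"string\"],\"default\":null}]}");

            // Reports COMPATIBLE for this pair; removing a field without a default
            // or changing a field's type would report INCOMPATIBLE instead.
            System.out.println(
                SchemaCompatibility.checkReaderWriterCompatibility(reader, writer).getType());
        }
    }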

Nice to have

  • Experience with Databricks, particularly for ingestion, bronze-layer processing, or structured streaming.
  • Experience deploying and scaling applications in containerized environments (e.g., Kubernetes, AKS).
  • Experience working with artifact repositories (e.g., Artifactory, ProGet, Maven repositories).
  • Experience with Infrastructure-as-Code (e.g., Terraform, Databricks Asset Bundles).
  • Familiarity with the Microsoft Azure cloud ecosystem.
  • Familiarity with Apache Spark.
  • Familiarity with CI/CD pipelines, automated testing, and deployment workflows.

Culture & Benefits

  • Become part of a team working on rewarding, large-scale creative projects.
  • Collaborate with talented people in an inclusive, highly motivated environment.
  • The company is committed to creating a work environment that promotes equal opportunity, dignity, and respect.
  • Reasonable accommodations are provided to qualified job applicants with disabilities during the recruitment process.
  • Applications are encouraged from all suitable candidates regardless of age, disability, gender identity, sexual orientation, religion, belief, race, or any other protected category.
