Company hidden
1 day ago

Senior Distributed Systems Software Engineer (Big Data)

Work format
remote (USA only)
Employment type
full-time
Seniority
senior/lead/principal
English
B2
Country
US
Listing from Hirify.Global (the Hirify RU Global list of companies with Eastern European roots)
Job description

TL;DR

Senior Distributed Systems Software Engineer (Big Data): lead big data infrastructure projects and develop scalable data processing and analytics services using Spark, Trino, Airflow, and Kafka, with an emphasis on resilient infrastructure. The role focuses on designing distributed systems that manage thousands of compute nodes across multiple data centers and on resolving complex technical challenges.

Location: Must be based in the US

Company

hirify.global is a technology-driven company focused on enabling scalable, high-performance data processing and analytics.

What you will do

  • Build scalable data processing and analytics services using a big data stack, including Spark, Trino, Airflow, and Kafka, to support real-time and batch data workflows (a streaming sketch follows this list).
  • Design, develop, and operate resilient distributed systems that manage thousands of compute nodes across multiple data centers, ensuring scalability and high availability.
  • Resolve complex technical challenges and drive innovations that enhance system resilience, availability, and performance.
  • Manage the full lifecycle of services, balancing live-site reliability, feature development, and technical debt retirement.
  • Participate in the team’s on-call rotation to address complex, real-time issues, keeping critical services operational and highly available.
  • Provide mentorship and technical guidance to junior engineers, fostering growth, collaboration, and knowledge-sharing within the team.
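
For flavor, here is a minimal Python sketch of the kind of real-time workflow this stack supports: a Spark Structured Streaming job that reads JSON events from Kafka and writes windowed rollups to object storage. The broker addresses, topic, schema, and storage paths are all hypothetical; this is an illustration, not the team's actual pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, from_json, window
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# Hypothetical event schema.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("value", DoubleType()),
    StructField("ts", TimestampType()),
])

# Read a stream of JSON events from Kafka (broker and topic names are made up).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092,broker-2:9092")
    .option("subscribe", "events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Five-minute tumbling-window averages; the watermark bounds state for late data.
rollup = (
    events.withWatermark("ts", "10 minutes")
    .groupBy(window(col("ts"), "5 minutes"))
    .agg(avg("value").alias("avg_value"))
)

# Write results to object storage; checkpointing lets the query recover after failures.
query = (
    rollup.writeStream.outputMode("append")
    .format("parquet")
    .option("path", "s3a://example-bucket/rollups/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/rollups/")
    .start()
)
query.awaitTermination()
```

In a stack like the one described, a job of this shape would typically be deployed and supervised via Airflow, with the resulting tables queryable downstream through Trino.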

Requirements

  • Bachelor’s or Master’s in Computer Science, Engineering, or a related field with 5+ years of experience in distributed systems, big data, or similar roles.
  • Proficiency in cloud platforms (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and infrastructure-as-code (Terraform, Ansible).
  • Hands-on experience with Hadoop, Spark, Trino (or similar SQL query engines), Airflow, Kafka, and related ecosystems.
  • Strong skills in Python, Java, Scala, or other programming languages relevant to distributed systems.
  • Solid knowledge of distributed computing principles, data partitioning, fault tolerance, and performance tuning (see the partitioning sketch after this list).
  • Proven ability to troubleshoot complex system issues, optimizing for speed, efficiency, and scale.
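
As a deliberately simplified illustration of the data-partitioning and fault-tolerance principles listed above, the following Python sketch implements consistent hashing with virtual nodes. Node names and the virtual-node count are arbitrary, chosen only for the example.

```python
import bisect
import hashlib

def _hash64(key: str) -> int:
    """Stable 64-bit hash derived from MD5 (used for placement, not security)."""
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

class ConsistentHashRing:
    """Consistent hashing with virtual nodes: when a node joins or leaves,
    only the keys adjacent to its ring positions move, instead of nearly
    all keys as with naive hash(key) % n partitioning."""

    def __init__(self, nodes, vnodes=64):
        self._ring = []  # sorted list of (point, node) pairs
        self._vnodes = vnodes
        for node in nodes:
            self.add(node)

    def add(self, node):
        for i in range(self._vnodes):
            bisect.insort(self._ring, (_hash64(f"{node}#{i}"), node))

    def remove(self, node):
        self._ring = [(p, n) for p, n in self._ring if n != node]

    def owner(self, key):
        # The first ring point clockwise from the key's hash owns the key.
        i = bisect.bisect(self._ring, (_hash64(key), ""))
        return self._ring[i % len(self._ring)][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.owner("user:42"))      # e.g. node-b
ring.remove("node-b")             # simulate a node failure
print(ring.owner("user:42"))      # only keys node-b owned get remapped
```

The point of the ring is that losing a node remaps only the keys that node owned, which is what keeps rebalancing cheap when machines fail at the scale this role describes.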

Culture & Benefits

  • Technology-driven team focused on enabling scalable, high-performance data processing and analytics.
  • Use cutting-edge tools like Spark, Trino, Airflow, and Kafka to build resilient, scalable infrastructure for a variety of big data applications.
  • Opportunity to work on the Big Data Infrastructure team.
