Company hidden
23 hours ago

Big Data Software Engineer

Work format
hybrid
Employment type
fulltime
Level
senior
English
B2
Country
Israel
Vacancy from Hirify.Global, a list of international tech companies

Job description


TL;DR

Big Data Software Engineer: Building and maintaining high-throughput streaming systems that process 100B+ daily events, with an emphasis on real-time data processing pipelines and performance optimization. Focus on designing and implementing solutions using Kafka, Databricks/Spark, and distributed computing in a massive-scale environment.

Location: Hybrid (Tel Aviv, Israel)

Company

hirify.global is an Israeli-founded big data analytics company that tracks and analyzes tens of billions of ads daily for major global brands, operating at a massive scale.

What you will do

  • Join the Traffic Team, a core engineering team operating at the heart of the company's measurement system.
  • Build and maintain high-throughput streaming systems processing 100B+ daily events.
  • Tackle performance and optimization challenges that make interview questions actually relevant.
  • Design and implement real-time data processing pipelines using Kafka, Databricks/Spark, and distributed computing (a minimal pipeline sketch follows this list).
  • Lead projects end-to-end: design, development, integration, deployment, and production support.
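
To make the pipeline bullet concrete: below is a minimal sketch of the kind of Kafka-to-Spark pipeline the vacancy describes, assuming Spark Structured Streaming with a Kafka source and a Delta sink. The broker address, topic, and paths (broker:9092, ad-events, /tables/ad_event_counts) and the AdEvent payload shape are illustrative assumptions, not details from the vacancy.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    // Minimal sketch: consume ad events from Kafka, aggregate per-ad counts in
    // one-minute event-time windows, and append the results to a Delta table.
    object AdEventPipeline {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("ad-event-pipeline").getOrCreate()

        // Kafka source; broker and topic names are placeholders.
        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "ad-events")
          .load()

        // Extract the ad id from the JSON payload and window by event time,
        // tolerating 30 seconds of late-arriving data.
        val counts = raw
          .select(col("timestamp"),
                  get_json_object(col("value").cast("string"), "$.adId").as("adId"))
          .withWatermark("timestamp", "30 seconds")
          .groupBy(window(col("timestamp"), "1 minute"), col("adId"))
          .count()

        // Delta sink; checkpoint and table paths are placeholders.
        counts.writeStream
          .format("delta")
          .outputMode("append")
          .option("checkpointLocation", "/checkpoints/ad-events")
          .start("/tables/ad_event_counts")
          .awaitTermination()
      }
    }

The watermark plus append output mode means each window is emitted once, after it can no longer receive late events: the usual latency-versus-completeness trade-off in pipelines at this scale.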

Requirements

  • 5+ years of software development experience with JVM-based languages (Scala, Java, Kotlin) with strong functional programming skills.
  • Strong grasp of Computer Science fundamentals: functional programming paradigms, object-oriented design, data structures, concurrent/distributed systems.
  • Proven experience with high-scale, real-time streaming systems and big data processing.
  • Experience and deep understanding of a wide array of technologies, including:
      – Stream processing: Kafka, Kafka Streams, Flink, Spark Streaming, Pulsar (a Kafka Streams sketch follows this list)
      – Concurrency frameworks: Akka, Pekko
      – Data platforms: Databricks, Spark, Delta Lake
      – Microservices & containerization: Docker, Kubernetes
      – Modern databases: ClickHouse, Snowflake, BigQuery, Cassandra, MongoDB
      – Cloud infrastructure: GCP or AWS
  • Hands-on experience developing with AI tools.
  • Strong DevOps mindset: CI/CD pipelines (GitLab preferred), infrastructure as code, monitoring/alerting.
  • BSc in Computer Science or equivalent experience.
  • Excellent communication skills and ability to collaborate across teams.
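
Since the stream-processing item above puts Kafka Streams first, here is a minimal Scala sketch of a Streams topology that keeps a running count per key. The application id, broker, and topic names (ad-event-counter, broker:9092, ad-events, ad-event-counts) are assumptions for illustration, as is treating the record value as the ad id.

    import java.util.Properties
    import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
    import org.apache.kafka.streams.scala.ImplicitConversions._
    import org.apache.kafka.streams.scala.StreamsBuilder
    import org.apache.kafka.streams.scala.serialization.Serdes._

    // Minimal sketch: re-key each record by its payload and maintain a running
    // count per key, emitting updates to an output topic.
    object AdEventCounter {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ad-event-counter")
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092")

        val builder = new StreamsBuilder()
        builder
          .stream[String, String]("ad-events")   // topic name is a placeholder
          .groupBy((_, adId) => adId)            // assumes the value is the ad id
          .count()
          .toStream
          .to("ad-event-counts")

        new KafkaStreams(builder.build(), props).start()
      }
    }

Serdes are supplied through the implicit imports above; in a production system like the one described, values would more likely be Avro records backed by a schema registry than plain strings.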

Nice to have

  • Previous experience in ad-tech.
  • Experience with schema evolution and data serialization (Avro, Protobuf, Parquet); a small Avro compatibility sketch follows this list.
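
For the schema-evolution point: the standard backward-compatible change in Avro is adding a field with a default value, and the Avro library can verify this itself via SchemaCompatibility. A minimal sketch follows; the AdEvent schema and its fields are invented for illustration.

    import org.apache.avro.{Schema, SchemaCompatibility}

    object SchemaEvolutionCheck {
      // v1: the writer schema already used by producers (fields invented for illustration).
      private val v1 = new Schema.Parser().parse(
        """{"type":"record","name":"AdEvent","fields":[
          |  {"name":"adId","type":"string"},
          |  {"name":"ts","type":"long"}
          |]}""".stripMargin)

      // v2: adds a field with a default, so v2 readers can still decode v1 records.
      private val v2 = new Schema.Parser().parse(
        """{"type":"record","name":"AdEvent","fields":[
          |  {"name":"adId","type":"string"},
          |  {"name":"ts","type":"long"},
          |  {"name":"country","type":"string","default":"unknown"}
          |]}""".stripMargin)

      def main(args: Array[String]): Unit = {
        // Reader = v2, writer = v1: COMPATIBLE thanks to the default value.
        val result = SchemaCompatibility.checkReaderWriterCompatibility(v2, v1)
        println(result.getType)
      }
    }

The reverse direction (v1 readers, v2 writers) is also compatible, because Avro's schema resolution skips writer fields the reader does not know about.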

Culture & Benefits

  • Work in a fast-paced, high-scale environment solving complex challenges.
  • Opportunity to build products that have a huge impact on the industry and the web.
  • Operate at a massive scale, handling over 100B events per day and over 1M RPS at peak.
  • Process events in real-time at low latencies.
  • Join a global company with R&D centers in Tel Aviv, New York, Finland, Berlin, Belgium and San Diego.
