Company hidden
Posted 14 hours ago

Staff Software Engineer (Data Infrastructure)

$200,000 - $275,000
Work format
onsite
Employment type
fulltime
Grade
senior
English
C1
Country
US
Listing from Hirify Global, a list of international tech companies

Job description

TL;DR

Staff Software Engineer (Data Infrastructure): Architecting and building systems to ingest, store, and serve massive volumes of real-time operational data, with an emphasis on scalability and reliability at petabyte scale. Focus on designing high-throughput data integration platforms and optimizing distributed processing pipelines using Apache Spark and Apache Iceberg.

Location: Must be based in San Francisco, New York, or Washington, DC, and work onsite

Salary: $200,000 - $275,000 Annually

Company

An AI-enabled platform providing operational intelligence that helps public safety and government agencies make critical decisions with speed and accuracy.

What you will do

  • Architect and build the data layer for ingesting, storing, and serving petabyte-scale real-time operational data.
  • Design and operate a high-throughput, real-time data integration platform across diverse customer environments.
  • Build and optimize distributed data processing pipelines using Apache Spark and adjacent streaming technologies.
  • Drive performance, reliability, and cost efficiency across the full data infrastructure stack.
  • Collaborate with platform and product engineering teams to define data contracts, schemas, and integration patterns.
  • Establish best practices and tooling to raise the overall quality bar for data infrastructure.

Requirements

  • 8+ years of experience architecting and operating large-scale data infrastructure systems in production.
  • Deep expertise with open table formats, specifically Apache Iceberg (schema evolution, partitioning, compaction).
  • Extensive hands-on experience with Apache Spark for batch and streaming data processing.
  • Strong background in real-time data integration using Apache Kafka, Apache Flink, or equivalents.
  • Software engineering fundamentals in Python and/or Scala with a track record of production-quality code.
  • Experience with AWS (S3-based data lakes) and Kubernetes for containerized data workloads.

Culture & Benefits

  • Strong emphasis on empathy and ownership, prioritizing direct user feedback to improve solutions.
  • Collaborative environment with direct interaction with deployment teams and end-users.
  • Comprehensive compensation package including benefits, equity, and bonuses.
  • Opportunity to solve complex technical challenges with significant real-world impact on public safety.
