Company hidden
3 days ago

Senior Software Engineer (Distributed Systems)

$200,000 – $287,500
Work format
Hybrid
Employment type
Full-time
Grade
Senior
English
B2
Country
US
Vacancy from Hirify.Global, a list of international tech companies

Job description

TL;DR

Senior Software Engineer (Distributed Systems): building and scaling high-throughput data ingestion and processing pipelines for an AI-powered observability platform, with a focus on petabyte-scale telemetry and distributed-system reliability. The role centers on developing performance-critical components in Go and C++, driving OpenTelemetry contributions, and architecting low-latency solutions on AWS and Azure.

Location: Hybrid in Menlo Park, California (US)

Salary: $200,000 – $287,500

Company

hirify.global is a leading AI Data Cloud platform powering agentic enterprises with scalable data management and AI-powered observability tools.

What you will do

  • Design and scale high-throughput pipelines handling petabyte-scale telemetry, including logs, metrics, traces, and events.
  • Develop performance-critical distributed systems components in Go and/or C++ that operate across AWS and Azure.
  • Contribute to OpenTelemetry and drive the company's open-source strategy and external community engagement.
  • Architect solutions to ensure enterprise-grade availability and low latency under extreme data volumes.
  • Collaborate with SRE, product, and platform teams to define data reliability standards and improve detection-to-resolution times.
  • Mentor engineers across the organization and help shape the technical roadmap for the Data Management team.

Requirements

  • 5+ years of software engineering experience with deep expertise in distributed systems.
  • Proficiency in Go and/or C++ for writing high-performance, production-grade systems code.
  • Demonstrated experience designing and operating large-scale data ingestion or stream processing pipelines.
  • Strong fundamentals in systems programming: concurrency, memory management, networking, and I/O.
  • Hands-on experience building and running services across major cloud providers (AWS and/or Azure).
  • B.S. in Computer Science, Engineering, or equivalent practical experience.

Nice to have

  • Experience with OpenTelemetry SDKs, instrumentation, or ecosystem tooling.
  • Prior open-source contributions or project maintainership.
  • Familiarity with Apache Iceberg or other open table formats and data lakehouse architectures.
  • Background in observability, monitoring, or SRE.
  • Experience with multi-cloud data infrastructure or telemetry platforms at petabyte scale.

Culture & Benefits

  • Opportunity to work at massive scale, with genuine ownership of systems processing a petabyte of data per day.
  • Combination of startup-style velocity and the global reach and operational excellence of a leading data platform.
  • Experimental mindset and AI-native culture, treating AI as a high-trust collaborator.
  • Low-ego environment that values curiosity and rapid testing of emerging capabilities.
