Company hidden
Updated 3 days ago

Distributed Systems Engineer (Data Platform)

Work format
Hybrid
Employment type
Full-time
Seniority
Senior
English
B2
Country
UK, Portugal
This vacancy is from Hirify.Global, a list of international tech companies.

Job description

TL;DR

Distributed Systems Engineer (Data Platform): designing, building, and operating robust logging and audit-log platforms within hirify.global’s global network, with an emphasis on secure data transfer, scalability, and compliance. The focus is on developing high-performance data connectors, optimizing delivery pipelines for massive data volumes, and collaborating on architectural evolution.

Location: Hybrid, available in London (UK) or Lisbon (Portugal)

Company

hirify.global is a large-scale technology company focused on building a better Internet by providing a global network that protects and accelerates millions of websites and online applications.

What you will do

  • Design, build, and operate a robust logging platform, ensuring reliable logging and secure data transfer to various customer destinations and third-party integrations.
  • Develop and maintain high-performance data connectors and integrations for log-shipping products, focusing on usability, scalability, and data integrity.
  • Create and manage systems for handling comprehensive audit logs, ensuring secure delivery and adherence to strict compliance and performance standards.
  • Scale and optimize the data delivery pipeline to handle massive data volumes with low latency, identifying and removing bottlenecks.
  • Work closely with Product and other engineering teams to define requirements for new logging platforms and integrations.
  • Maintain the operational health of the log delivery platform through monitoring and participation in an on-call rotation.

Requirements

  • 3+ years of software development experience covering distributed systems and data pipelines.
  • Strong programming skills in Go, with a deep understanding of software development best practices for building resilient, high-throughput systems.
  • Hands-on experience with modern observability stacks, including Prometheus and Grafana, and understanding of high-cardinality metrics.
  • Strong knowledge of SQL, including experience with query optimization.
  • A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.
  • Ability to work collaboratively in a team environment and communicate effectively.

Nice to have

  • Experience with data streaming technologies (e.g., Kafka, Flink).
  • Experience with various logging platforms or SIEMs (e.g., Splunk, Datadog, Sumo Logic) and storage destinations (e.g., S3, R2, GCS).
  • Experience with Infrastructure-as-Code tools such as Salt or Terraform.
  • Experience with Linux container technologies, such as Docker and Kubernetes.

Culture & Benefits

  • Contribute to the mission of building a better Internet, protecting the free and open Internet.
  • Join a diverse and inclusive team, committed to individual development and learning new skills.
  • Participate in initiatives like Project Galileo, Athenian Project, and 1.1.1.1.
  • Work with cutting-edge technologies and contribute to solutions at the edge of internet infrastructure.
  • Work in an ambitious technology company with a focus on impact.

Stay alert: if an employer asks you to sign in to their system via iCloud/Google, to send a code or password, or to run code/software, do not do it; these are scammers. Be sure to click "Report" or contact support. See the guide for details →

The vacancy text is reproduced without changes.
