Company hidden
Posted 1 day ago

Lead Data Engineer

Work format
remote (Global)
Employment type
full-time
Grade
lead
English
B2
Country
Argentina
Vacancy from Hirify.Global, a list of international tech companies

Job description

TL;DR

Lead Data Engineer (Data): lead the migration of large-scale logs and distributed traces from existing analytics databases to ClickHouse, with a focus on designing and tuning distributed ClickHouse clusters (sharding, replication, partitioning, and storage layout) and on architecting high-throughput ingestion pipelines and scalable schemas optimized for real-time telemetry workloads.

Location: Remote

Company

hirify.global is a custom product engineering company that helps both multinational organizations and scaling startups solve their most complex business challenges.

What you will do

  • Lead the migration of large-scale logs and distributed traces from existing analytics databases to ClickHouse.
  • Design and tune distributed ClickHouse clusters, including sharding, replication, partitioning, and storage layout.
  • Architect high-throughput ingestion pipelines and scalable schemas optimized for real-time telemetry workloads.
  • Establish monitoring, alerting, and operational best practices, including Kubernetes deployment and TTL policies.
  • Partner with platform and SRE teams to ensure production readiness, reliability, and security of the platform.
  • Document architecture decisions, performance tuning approaches, and operational runbooks for the team.
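To give a concrete sense of the schema and retention work described above, here is a minimal sketch: a helper that emits DDL for a hypothetical MergeTree log table with daily partitioning and a TTL-based retention policy. The table name, columns, and retention window are illustrative assumptions, not taken from this posting.

```python
# Hypothetical sketch of the kind of ClickHouse schema work this role involves:
# a MergeTree table for log telemetry, partitioned by day, with TTL-based
# retention. All names and values here are illustrative assumptions.

def telemetry_logs_ddl(table: str = "logs", retention_days: int = 30) -> str:
    """Build a CREATE TABLE statement for a time-partitioned log table."""
    return f"""
CREATE TABLE {table}
(
    timestamp  DateTime64(3),
    trace_id   String,
    service    LowCardinality(String),
    severity   LowCardinality(String),
    message    String
)
ENGINE = MergeTree
PARTITION BY toDate(timestamp)  -- daily partitions make TTL drops cheap
ORDER BY (service, timestamp)   -- sort key tuned for service-scoped queries
TTL timestamp + INTERVAL {retention_days} DAY DELETE
""".strip()

ddl = telemetry_logs_ddl(retention_days=14)
print(ddl)
```

Partitioning by day means expired data is removed by dropping whole partitions rather than rewriting parts, which is one common way TTL policies like those mentioned above are kept cheap in practice.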

Requirements

  • 12+ years of experience in backend, infrastructure, or data platform engineering.
  • Strong understanding of distributed systems and high-ingestion telemetry architectures.
  • Experience designing schemas for billion-row-scale analytical or telemetry datasets.
  • Hands-on production experience with ClickHouse (MergeTree engines, indexing, and compression).
  • Expertise in SQL performance tuning and migrating data between large-scale analytical systems.
  • Experience deploying and operating stateful systems on Kubernetes using Terraform and Helm.
  • Practical experience with OpenTelemetry logs and traces.
  • Familiarity with ingestion pipelines like Kafka, Azure Event Hub, or Azure Data Explorer (Kusto).
  • Proven ability to optimize query latency and cost for high-cardinality datasets.
  • Proficiency in at least one major cloud provider: Azure, GCP, or AWS.
  • Experience implementing backup/restore strategies and resolving complex performance bottlenecks.
  • Ability to communicate clearly in global environments and understand complex technical documentation.

Culture & Benefits

  • Comprehensive company-paid medical insurance and mental health programs, 5 undocumented sick-leave days per year.
  • Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy license, language courses and company-paid certifications.
  • Long-term employment with 20 working-days paid vacation and local bank holidays.
  • 100% remote work mode.
  • Welcoming environment with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events.

Be careful: if an employer asks you to sign in to their system via iCloud/Google, to send a code/password, or to run code/software, do not do it: these are scammers. Be sure to click "Report" or contact support. See the guide for details.

The vacancy text is reproduced without changes
