Company hidden
23 hours ago

Senior Data Engineer (GCP)

Work format
hybrid
Employment type
full-time
Grade
senior
English
B2
Country
UK
Vacancy from Hirify.Global, a list of international tech companies

Job description


TL;DR

Senior Data Engineer (GCP): Building and optimizing high-scale, secure, automated data processing pipelines on GCP, with an emphasis on cloud compute efficiency, data governance, quality, and consumption. The focus is on designing and building production-grade pipeline code, solving challenges across data science disciplines such as classification and optimization, and ensuring data stability under heavy traffic.

Location: Hybrid in Edinburgh or London, UK

Company

hirify.global is an award-winning global leader and technology innovator in big data analytics and advertising, helping brands understand and effectively reach their best audiences.

What you will do

  • Design, build, monitor, and support large-scale data processing pipelines.
  • Support, mentor, and pair with other team members to advance team capabilities.
  • Help hirify.global explore and exploit new data streams to support commercial and technical growth.
  • Work closely with Product and deliver on fast-paced product decisions for customers.

Requirements

  • 5+ years of direct experience delivering robust, performant data pipelines in commercial financial environments.
  • Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices.
  • Mastery of building pipelines in GCP, maximizing the use of native and native-supporting technologies (e.g., Apache Airflow); see the sketch after this list.
  • Mastery of Python for data and computational tasks, with fluency in data cleansing, validation, and composition techniques.
  • Hands-on implementation and architectural familiarity with all forms of data sourcing (e.g., streaming data, relational/non-relational databases, distributed processing like Spark).
  • Advanced knowledge of cloud-based services, specifically GCP, and an excellent working understanding of server-side Linux.
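
As a rough illustration of the GCP/Airflow and Python bullets above, here is a minimal sketch of a daily Airflow DAG (assuming Airflow 2.4+). The DAG id, partition layout, and cleansing logic are hypothetical stand-ins, not this company's actual pipeline:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def cleanse_events(ds: str, **_) -> None:
    # Stand-in cleansing/validation step: a real pipeline would read the
    # day's raw events from GCS, drop malformed rows, enforce a schema,
    # and write a validated partition back out (e.g., to BigQuery).
    print(f"Cleansing and validating raw events for partition {ds}")


with DAG(
    dag_id="daily_events_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # one run per logical date
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
):
    PythonOperator(
        task_id="cleanse_events",
        python_callable=cleanse_events,  # Airflow injects `ds` from the run context
    )

In production this single task would typically be split into extract, validate, and load stages, with sensors and alerting around them.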

Nice to have

  • Experience optimizing both code and config in Spark, Hive, or similar tools.
  • Practical experience working with relational databases, including advanced operations such as partitioning and indexing.
  • Knowledge and experience with tools like AWS Athena or Google BigQuery to solve data-centric problems (see the sketch after this list).
  • Understanding and ability to innovate, apply, and optimize complex algorithms and statistical techniques to large data structures.
  • Experience with Python Notebooks, such as Jupyter, Zeppelin, or Google Datalab.
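
To make the BigQuery and partitioning bullets concrete, here is a minimal sketch using the google-cloud-bigquery Python client to run a parameterized query against a date-partitioned table. The project, dataset, table, and column names are invented for illustration:

from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Filtering on the partitioning column lets BigQuery prune partitions,
# so only the requested day's data is scanned (and billed).
sql = """
    SELECT user_id, COUNT(*) AS events
    FROM `my-project.analytics.events`   -- hypothetical date-partitioned table
    WHERE event_date = @day
    GROUP BY user_id
    ORDER BY events DESC
    LIMIT 100
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("day", "DATE", "2024-06-01"),
    ]
)

for row in client.query(sql, job_config=job_config).result():
    print(row.user_id, row.events)

The query parameter keeps the date filter out of the SQL string itself, which is the usual way to make such queries both safe and cacheable.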

Culture & Benefits

  • Work on fantastically high-scale systems processing over 350GB of data an hour and responding to 400,000 decision requests per second.
  • Join a growing team with big responsibilities, significant freedom, and ambitious goals, adhering to Lean Development.
  • Values centered on being Brave (innovation and growth mindset), Loving clients (client-obsessed, integrity), being Inclusive (empathetic, diverse, mutual respect), and Solutions-driven (action-oriented, agile).
  • Opportunity to take ownership and be accountable for outcomes, solving everyday challenges and achieving breakthroughs.


The vacancy text is reproduced without changes.
