Company hidden
Posted 8 hours ago

Data Engineer (Big Data)

Work format: hybrid
Employment type: full-time
Grade: senior
English: B2
Country: UK
Vacancy from Hirify.Global, a list of international tech companies.
Job description


TL;DR

Data Engineer (Big Data): Building and optimizing secure, automated, scalable data processing pipelines on GCP, with an emphasis on maximising cloud compute efficiency and data governance. The focus is on designing, building, monitoring, and supporting large-scale data processing pipelines and exploring new data streams.

Location: Hybrid role, based in Edinburgh or London, UK

Company

hirify.global is an award-winning global leader and technology innovator in big data analytics and advertising.

What you will do

  • Design, build, monitor, and support large-scale data processing pipelines.
  • Support, mentor, and pair with team members to advance team capabilities.
  • Explore and exploit new data streams for commercial and technical growth.
  • Work closely with Product to deliver against fast-paced decisions.

Requirements

  • 3+ years of direct experience delivering robust, performant data pipelines under strict SLAs and commercial cost constraints.
  • Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms.
  • Mastery of building pipelines in GCP, maximising the use of native and natively supported technologies (e.g. Apache Airflow).
  • Mastery of Python for data and computational tasks, with fluency in data cleansing, validation, and composition techniques.
  • Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e. streaming data, relational and non-relational databases, and distributed processing technologies (e.g. Spark).
  • Advanced knowledge of cloud-based services specifically GCP.
  • Excellent working understanding of server-side Linux.

Nice to have

  • Experience optimizing both code and config in Spark, Hive, or similar tools.
  • Practical experience working with relational databases, including advanced operations such as partitioning and indexing.
  • Knowledge and experience with tools like AWS Athena or Google BigQuery to solve data-centric problems.
  • Understanding and ability to innovate, apply, and optimize complex algorithms and statistical techniques to large data structures.
  • Experience with Python notebooks, such as Jupyter, Zeppelin, or Google Datalab, to analyze, prototype, and visualize data and algorithmic output.

Culture & Benefits

  • Work on fantastically high-scale systems with over 350GB of data an hour and 400,000 decision requests per second.
  • Join a growing team with big responsibilities and exciting challenges, aiming for a 10x increase in scale and intelligence.
  • Adhere to Lean Development principles, working with significant freedom and ambitious goals.
  • Be part of an innovative company with over 300 global employees across 14 offices in 11 countries.
  • Embrace values that are brave, client-obsessed, inclusive, and solutions-driven.

