Company hidden
Posted 1 day ago

Data Engineer (Azure Databricks)

Work format
remote (Global)
Employment type
full-time
Grade
middle/senior
English
B2
Listing from Hirify.Global — Hirify RU Global, a list of companies with Eastern European roots.

Job description

TL;DR

Data Engineer (Azure Databricks): build and maintain scalable, high-performance data pipelines with Azure Databricks, Confluent, DLT, Spark, and Delta Lake to power real-time and batch analytics for trading, risk, and operational use cases, with a focus on capital markets data and agile delivery. The role also covers production pipeline support: job monitoring, incident resolution, and performance tuning.

Company

hirify.global provides individuals in the world’s fast-developing economies with guidance, tools, and easy market access so they can trade and invest with confidence.

What you will do

  • Design, develop, and maintain robust data pipelines using Azure Databricks, Confluent, DLT, Spark, and Delta Lake to support trading and market data workflows.
  • Enhance existing data pipelines, ensuring continuity, scalability, and performance improvements.
  • Provide production pipeline support services, including job monitoring, incident resolution, and performance tuning in production environments.
  • Administer Databricks workspaces and Unity Catalog, including cluster configuration, job scheduling, access control, and workspace optimization.
  • Build and maintain CI/CD pipelines using GitLab, enabling automated testing, deployment, and versioning of data engineering code.
  • Collaborate with fellow team members, business analysts, and data architects to understand data requirements and deliver high-quality solutions.
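To illustrate the production-support side of the role (job monitoring, incident resolution), here is a minimal sketch in plain Python of a retry wrapper around a pipeline job. All names here (`run_with_retry`, the `on_incident` hook, `flaky_job`) are hypothetical; in practice this kind of logic would wrap a Databricks job run and escalate to the team's actual alerting channel.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline-monitor")

def run_with_retry(job, *, max_attempts=3, backoff_seconds=1.0, on_incident=None):
    """Run a pipeline job callable, retrying transient failures.

    `on_incident` is a hypothetical hook called with the final exception,
    so an alert can be raised before the failure is re-raised.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                if on_incident is not None:
                    on_incident(exc)  # escalate: incident resolution starts here
                raise
            time.sleep(backoff_seconds * attempt)  # linear backoff between retries

# Illustrative usage: a flaky job that succeeds on the third attempt.
calls = {"n": 0}

def flaky_job():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source timeout")
    return "loaded 1000 rows"

result = run_with_retry(flaky_job, backoff_seconds=0.0)
```

The same pattern transfers directly to monitoring scheduled Databricks jobs: the wrapper distinguishes transient failures (retried with backoff) from persistent ones (escalated as incidents).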

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 5+ years of experience in data engineering, with at least 2 years working with Azure Databricks.
  • Strong proficiency in PySpark, SQL, and Python.
  • Experience supporting production pipelines, including monitoring, alerting, and troubleshooting.
  • Experience with GitLab CI/CD, including pipeline configuration, runners, and integration with cloud services.
  • Familiarity with the capital markets domain: market data feeds, order books, trade execution, and risk metrics.
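For context on the GitLab CI/CD requirement, a pipeline for data engineering code might look roughly like the sketch below. Stage names, scripts, and the `DATABRICKS_JOB_ID` variable are illustrative assumptions, not the employer's actual setup.

```yaml
stages:
  - test
  - deploy

unit-tests:
  stage: test
  image: python:3.11
  script:
    - pip install -r requirements.txt
    - pytest tests/

deploy-job:
  stage: deploy
  image: python:3.11
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # deploy only from the main branch
  script:
    - pip install databricks-cli
    - databricks jobs run-now --job-id "$DATABRICKS_JOB_ID"
```

The `rules` block gates deployment to the main branch, while every branch runs the test stage — the "automated testing, deployment, and versioning" workflow the posting describes.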

Nice to have

  • Azure certifications (e.g., Azure Data Engineer Associate).
  • Experience with real-time data processing using Kafka or Event Hubs.

Culture & Benefits

  • Freedom to succeed is our core belief.
  • Learn from each other and from new projects.
  • Exchange information and best practices in an open-minded environment.
  • Advance by developing skills and taking on greater responsibility, with room for career progression and diversification.
  • Prosper by acquiring skills and by nurturing a team.
