Company hidden
5 hours ago

Mid-Level Data Engineer (Azure)

$100,000–$115,000
Work format
hybrid
Employment type
full-time
Level
middle
English
B2
Country
US
Listing from Hirify.Global, a list of international tech companies

Job description


TL;DR

Mid-Level Data Engineer (Azure/PySpark): building and scaling an enterprise data platform to support analytics, reporting, and AI-enabled use cases, with an emphasis on reliable data pipelines and curated datasets. The focus is on designing ETL/ELT workflows, optimizing Spark performance, and implementing scalable transformations on Microsoft Azure.

Location: Must be based in Denver, CO (Hybrid: 3 days on-site required)

Salary: $100,000–$115,000 + Bonus

Company

hirify.global Data Centers provides high-scale data center infrastructure, power, and connectivity for hyperscalers, cloud providers, and large enterprises globally.

What you will do

  • Design, build, and maintain scalable data pipelines using Python and PySpark on the Microsoft Azure platform.
  • Develop batch and incremental data pipelines leveraging Azure Data Factory and Azure Data Lake Storage Gen2.
  • Implement SQL- and Spark-based transformations to produce curated datasets for enterprise reporting and analytics.
  • Operate and optimize analytical workloads within Azure Synapse.
  • Collaborate with business analysts to translate requirements into practical, production-ready data solutions.
  • Structure and prepare data to support advanced analytics and AI-enabled use cases.
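
To illustrate the kind of work the "batch and incremental data pipelines" bullet refers to: a common Azure Data Factory / PySpark pattern is watermark-based incremental extraction, where each run pulls only rows modified since the last recorded high-water mark. The sketch below is not from the posting; it is a minimal stdlib-only Python illustration of the pattern, with an assumed `updated_at` change-tracking column and an in-memory table standing in for the real source.

```python
from datetime import datetime, timezone

# Toy source table; in practice this would be a database or lake table
# with an updated_at (or similar change-tracking) column.
SOURCE = [
    {"id": 1, "value": "a", "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "value": "b", "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
    {"id": 3, "value": "c", "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]

def incremental_extract(source, watermark):
    """Return rows newer than the stored watermark, plus the new watermark."""
    fresh = [row for row in source if row["updated_at"] > watermark]
    # If nothing changed, the watermark stays where it was.
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

# First run: watermark predates all data, so everything is extracted.
rows, wm = incremental_extract(SOURCE, datetime(2023, 12, 31, tzinfo=timezone.utc))
print(len(rows), wm.date())  # 3 2024-01-03

# Second run: nothing changed upstream, so nothing is re-extracted.
rows2, _ = incremental_extract(SOURCE, wm)
print(len(rows2))  # 0
```

In an actual pipeline the watermark would be persisted (e.g., in a control table) between runs, and the filtering would be pushed down to the source query rather than done in memory.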

Requirements

  • Bachelor’s degree in Engineering, Computer Science, Data Analytics, or equivalent experience.
  • 3–5 years of experience in data engineering or analytics engineering.
  • Proficiency in Python (including PySpark) and SQL for data processing and transformation.
  • Hands-on experience with Azure Data Factory, Azure Synapse, and Azure Data Lake Storage Gen2.
  • Solid understanding of ETL/ELT pipelines, data modeling (fact/dimension tables), and CI/CD workflows via GitHub or Azure DevOps.
  • Must be based in or able to work on-site in Denver, CO (3 days per week).

Nice to have

  • Experience with distributed data processing frameworks like Apache Spark.
  • Exposure to preparing data for machine learning or AI applications.
  • Familiarity with Azure Functions or Logic Apps for data workflows.
  • Experience in scaling or fast-paced organizations with evolving priorities.

Culture & Benefits

  • Competitive total compensation package with performance bonuses.
  • Comprehensive health benefits including medical, dental, and vision coverage.
  • 401k program with company match.
  • Paid time off, short/long-term disability, and employee assistance programs.
  • Professional environment based on a "No Ego and No Arrogance" culture, emphasizing mutual support and growth.

Be careful: if an employer asks you to log in to their system via iCloud/Google, send a code/password, or run code/software, do not do it; these are scammers. Be sure to click "Report" or contact support. More details in the guide →