Company hidden
Posted 1 day ago

MTS – Data Engineering (AI)

$139,900 – $331,200
Work format
hybrid
Employment type
full-time
Seniority
senior
English
B2
Country
US
Vacancy from Hirify.Global, a list of international tech companies

Job description


TL;DR

MTS – Data Engineering (AI): Architect and implement the data backbone for Copilot by building and optimizing high-scale ETL pipelines and experimentation frameworks. The role centers on solving complex data challenges, ensuring data quality, and designing scalable data architectures for machine learning model training and inference.

Location: Redmond or San Francisco area, United States. Required to be in office 3 days a week. Starting January 26, 2026, employees within 50 miles of a Microsoft office are expected to work from that office at least four days a week.

Salary: USD $139,900 – $331,200 per year

Company

hirify.global is a global technology corporation focused on empowering individuals and organizations to achieve more through innovative AI solutions, including Copilot.

What you will do

  • Build, maintain, and enhance data ETL pipelines for processing large-scale data with low latency and high throughput to support Copilot operations.
  • Design and maintain high-throughput, low-latency experimentation reporting pipelines for measuring model performance and user engagement.
  • Own data quality initiatives including monitoring, alerting, validation, and remediation processes.
  • Implement robust schema management solutions for quick and seamless schema evolution.
  • Develop and maintain data infrastructure supporting real-time and batch processing for machine learning model training and inference.
  • Collaborate with ML engineers and data scientists to optimize data access patterns and improve pipeline performance.

Requirements

  • Master’s Degree in Computer Science or related field AND 4+ years of experience in data engineering, OR Bachelor’s Degree AND 6+ years of experience.
  • Experience building and maintaining production data pipelines at scale.
  • Proficiency in writing production-quality Python, Scala, or Java code for data processing.
  • Demonstrated experience with data quality frameworks and monitoring solutions.
  • Ability to design scalable data architectures that handle growing data volumes.
  • Strong collaboration skills with cross-functional teams to translate data requirements into technical solutions.

Nice to have

  • Experience with technologies like Apache Spark, Kafka, or similar distributed processing frameworks.
  • Experience building and scaling experimentation frameworks.
  • Familiarity with cloud data platforms (Azure, AWS, or GCP) and their data services.
  • Experience with data orchestration frameworks such as Airflow, Prefect, or Dagster.
  • Experience with containerization technologies (Docker, Kubernetes) for data pipeline deployment.

Culture & Benefits

  • Join a team dedicated to a growth mindset, innovation, and collaboration to achieve shared goals.
  • Work in a culture of inclusion built on values of respect, integrity, and accountability.
  • Opportunity to make a real impact on millions of users worldwide through world-class data products.

Be careful: if an employer asks you to sign in to their system via iCloud/Google, to send a code or password, or to run code or software, do not do it; these are scammers. Be sure to click "Report" or contact support. More details in the guide →

The vacancy text is reproduced unchanged
