Company hidden
Posted 4 hours ago

Member of Technical Staff – Data Platform (AI)

$119,800 – $304,200 per year
Work format: Hybrid
Employment type: Full-time
Seniority: Middle/Senior
English: B2
Country: US

Job description


TL;DR

Member of Technical Staff – Data Platform (AI): designing and building data platforms that process data for AI models, with an emphasis on stream processing, lakehouse architecture, and developer experience. The focus is on modernizing the data stack with streaming architectures to reduce latency for AI inference.

Location: Mountain View, CA or Redmond, WA. By applying to this U.S. position, you are required to be local to the San Francisco area or the Redmond area and in the office 3 days a week.

Salary: USD $119,800 – $304,200 per year

Company

hirify.global’s mission is to empower every person and every organization on the planet to achieve more.

What you will do

  • Design and build frameworks based on Spark/Databricks to process datasets efficiently.
  • Modernize the data stack by moving from batch-heavy patterns to event-driven architectures.
  • Architect high-throughput pipelines capable of processing complex, non-tabular data for LLM pre-training, fine-tuning, and evaluation datasets.
  • Engineer high-throughput telemetry systems that capture user interactions with Copilot.
  • Define and deploy all storage, compute, and networking resources using IaC (Bicep/Terraform).
  • Build automated governance and observability systems that detect anomalies in the data mesh.
  • Optimize shuffle operations, partition strategies, and resource allocation to ensure platform cost-efficiency.

Requirements

  • Master’s or Bachelor’s degree in Computer Science, Math, Software Engineering, or a related field.
  • 3+ years of experience with a Master’s degree, or 4+ years with a Bachelor’s degree, in business analytics, data science, software development, data modeling, or data engineering.

Nice to have

  • 4+ years of experience in Software Engineering or Data Infrastructure.
  • Proficiency in Python, Scala, Java, or Go.
  • Understanding of massive-scale compute engines (e.g., Apache Spark, Flink, Ray, Trino, or Snowflake).
  • Experience architecting Lakehouse environments at scale (using Delta Lake, Iceberg, or Hudi).
  • Experience building internal developer platforms or “Data-as-a-Service” APIs.
  • Strong background in streaming technologies (Kafka, Azure EventHubs, Pulsar) and stateful stream processing.
  • Experience with container orchestration (Kubernetes) for deploying data applications.
  • Experience enabling AI/ML workloads (Feature Stores, Vector Databases).

Culture & Benefits

  • A growth mindset, innovation to empower others, and collaboration to realize shared goals.
  • Values of respect, integrity, and accountability to create a culture of inclusion.

