TL;DR
Member of Technical Staff – Data Platform (AI): building the "Paved Road" for AI by transforming raw, massive-scale signals into the fuel that powers training, inference, and evaluation for millions of users, with an emphasis on stream processing, lakehouse architecture, and developer experience. The role centers on hard problems in data reliability engineering and compute optimization.
Location: Must be local to the San Francisco or Redmond area and work in office 3 days a week. Starting January 26, 2026, employees living within 50 miles (U.S.) or 25 miles (non-U.S.) of their designated Microsoft office must work from that office at least four days a week.
Salary: USD $119,800 – $274,800 per year
Company
Microsoft’s mission is to empower every person and every organization on the planet to achieve more.
What you will do
- Design and build underlying frameworks (based on Spark/Databricks) that allow internal teams to process massive datasets efficiently.
- Modernize the data stack by moving from batch-heavy patterns to event-driven, streaming architectures that reduce latency for AI inference (see the sketch after this list).
- Architect high-throughput pipelines capable of processing complex, non-tabular data into LLM pre-training, fine-tuning, and evaluation datasets.
- Engineer the high-throughput telemetry systems that capture user interactions with Copilot, creating the critical data loops required for Reinforcement Learning and model evaluation.
- Define and deploy all storage, compute, and networking resources through infrastructure as code (Bicep/Terraform) rather than manual configuration.
- Optimize shuffle operations, partition strategies, and resource allocation to ensure the platform is as cost-efficient as it is fast.
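
To make the batch-to-streaming shift concrete, here is a minimal, hypothetical sketch of the kind of pipeline this role would own: a PySpark Structured Streaming job that reads interaction events from Kafka and lands them in a Delta Lake table. Every name in it (broker, topic, schema, paths) is an illustrative assumption, not a Microsoft internal.

```python
# Minimal sketch, assuming PySpark with the Kafka and Delta Lake connectors available.
# All names (broker, topic, schema, paths) are illustrative, not Microsoft internals.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("copilot-telemetry-stream").getOrCreate()

# Hypothetical schema for a user-interaction event.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("payload", StringType()),
    StructField("event_time", TimestampType()),
])

# Ingest events continuously as they arrive, instead of waiting for a nightly batch.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "copilot-events")             # assumed topic name
    .load()
)

# Kafka delivers raw bytes; decode and parse the JSON payload into typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Land the stream in a Delta table; partitioning by event_type is one small
# example of the partition-strategy tuning the role calls for.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/copilot-events")  # assumed path
    .partitionBy("event_type")
    .outputMode("append")
    .start("/tmp/delta/copilot-events")  # assumed table path
)
query.awaitTermination()
```

The point of the pattern is that events flow into the lakehouse as they happen rather than once a day, which is where the latency reduction for AI inference comes from; downstream training and evaluation jobs then read the same Delta table.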
Requirements
- Bachelor's or Master's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field.
- 4+ years of experience (3+ years with a Master's degree) in business analytics, data science, software development, data modeling, or data engineering, or equivalent experience.
- Proficiency in Python, Scala, Java, or Go.
- Demonstrated technical understanding of massive-scale compute engines (e.g., Apache Spark, Flink, Ray, Trino, or Snowflake).
- Experience architecting Lakehouse environments at scale (using Delta Lake, Iceberg, or Hudi).
- Experience building internal developer platforms or “Data-as-a-Service” APIs.
Culture & Benefits
- Employees come together with a growth mindset, innovate to empower others, and collaborate to realize shared goals.
- Microsoft builds on values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.