Engineering Manager, Data Infrastructure
Job description
TL;DR
Engineering Manager, Data Infrastructure: Lead a team responsible for the performance, reliability, scalability, and security of ’s data platform, with an emphasis on building and operating the foundational systems that power ingestion, transformation, analytics, and AI workloads at scale. Focus on keeping the platform robust, secure, and easy to operate, balancing people leadership with deep technical ownership.
Location: New York, NY/Bellevue, WA. While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.
Salary: $165,000 to $242,000
Company
delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence.
What you will do
- Lead a team of Software Engineers and Site Reliability Engineers responsible for the infrastructure that powers ’s data platform.
- Own the reliability, scalability, and performance of core systems such as compute engines, orchestration frameworks, and storage layers.
- Partner closely with Data Engineering teams, as well as cross-functional groups including Production Engineering, Developer Experience, Security Engineering, and IT Operations to ensure the platform is robust, secure, and easy to operate.
- Run and evolve engineering processes (e.g., agile development, backlog management) to drive predictable execution and continuous improvement.
- Set team goals and metrics (e.g., OKRs) and hold teams accountable to outcomes.
Requirements
- 7+ years of experience in software engineering, infrastructure engineering, or data platform engineering roles.
- 2+ years of experience managing engineering teams, including hiring, coaching, performance management, and career development.
- Experience leading teams through the full software development lifecycle (SDLC), including planning, execution, and delivery of complex technical initiatives.
- Strong hands-on experience operating and scaling data platform infrastructure (e.g., Spark, Airflow, Iceberg, StarRocks) in production environments.
- Deep expertise in Kubernetes and containerized software development, including cluster design, operations, and scaling in production environments.
- Ability to contribute code and technical solutions when needed, with proficiency in at least one programming language (Python, Java, Go, Rust).
- Applicants must be either (A) a U.S. person, (B) eligible to access export-controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency.
Nice to have
- Experience supporting high-scale data workloads (e.g., large-scale Spark clusters, real-time ingestion platforms).
- Experience working in environments with strict uptime and reliability requirements (e.g., ≥99.99% uptime).
- Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX.
- Experience building internal platforms that enable self-service analytics or developer productivity.
Culture & Benefits
- Medical, dental, and vision insurance - 100% paid for by .
- Flexible Spending Account and Health Savings Account.
- Tuition Reimbursement and Employee Stock Purchase Program (ESPP).
- Mental Wellness Benefits through Spring Health and Family-Forming support provided by Carrot.
- Flexible PTO and a casual work environment.
- 401(k) with a generous employer match.