Machine Learning & Data Engineer
Job description
TL;DR
Principal Machine Learning & Data Engineer (AI/AWS): Lead the design, build, and operation of an internal ML-and-data platform that powers customer interactions, with an emphasis on cloud-native pipelines, model-serving infrastructure, and MLOps best practices. Focus on architecting reliable, secure, and cost-efficient systems, implementing automated testing and CI/CD for high-volume workloads, and ensuring compliance with stringent privacy requirements.
Location: Remote (US), not eligible to be hired in CA, CT, NJ, NY, PA, WA
Salary: $184,500 - $271,300
Company
A product company shaping the future of communications, delivering innovative solutions and empowering developers worldwide to craft personalized customer experiences.
What you will do
- Architect and evolve the company’s end-to-end ML and real-time data platforms for reliability, security, and cost efficiency.
- Design scalable feature stores, streaming and batch pipelines, and low-latency model-serving layers on AWS.
- Implement MLOps best practices—automated testing, CI/CD, monitoring, and rollback—for hundreds of daily deployments.
- Own system design reviews, threat modeling, and performance tuning for high-volume communications workloads.
- Lead cross-functional engineering efforts, breaking down complex initiatives into executable roadmaps.
- Mentor staff and senior engineers, raising the technical bar through code reviews and pair programming.
Requirements
- Bachelor’s or higher in Computer Science, Engineering, Mathematics, or equivalent practical experience.
- 7+ years building and operating production data or machine-learning systems at scale.
- Expert fluency in Python and one compiled language (Java, Scala, Go, or C++).
- Hands-on mastery of distributed data frameworks (Spark/Flink), SQL/NoSQL stores, and streaming platforms (Kafka/Kinesis).
- Demonstrated success designing cloud-native architectures on AWS, including Terraform-managed infrastructure.
- Deep knowledge of container orchestration (Kubernetes/EKS), service-mesh networking, and autoscaling strategies.
- Practical experience implementing MLOps tooling such as MLflow, Kubeflow, SageMaker, or Vertex AI.
Nice to have
- Graduate degree focused on machine learning, distributed systems, or applied statistics.
- Contributions to open-source ML or data infrastructure projects.
- Experience with privacy-enhancing technologies (differential privacy, homomorphic encryption) or on-device inference.
- Background in conversational AI, real-time communications, or large-language-model deployment at scale.
Culture & Benefits
- Remote-first work culture with a strong emphasis on connection and global inclusion.
- AI is used to make the hiring process efficient, fair, and transparent, with all final decisions made by humans.
- Competitive pay, generous time off, and ample parental and wellness leave.
- Comprehensive healthcare and a retirement savings program.
- Empowerment to build positive change in communities through volunteering and donation efforts.
Hiring process
- Applications for this role will be accepted until May 21, 2026.