Job posting from a Telegram channel (channel name available after signing in)
Hirify AI assessment: 75 - good vacancy
The role is clearly defined, with an emphasis on modern ML practices, but no salary is listed, which may create uncertainty.
Tags: salary not listed, modern tech stack, startup environment, clear responsibilities
Job description
Senior ML Engineer – ML/Inference
Location: Remote | Company: MARA | Salary: discussed at interview | Employment: Full-time
MARA is redefining the future of sovereign, energy-aware AI infrastructure. We’re building a modular platform that unifies IaaS, PaaS, and SaaS, enabling governments, enterprises, and AI innovators to deploy, scale, and govern workloads across data centers, edge environments, and sovereign clouds.
ESSENTIAL DUTIES AND RESPONSIBILITIES:
- Own the end-to-end lifecycle of ML model deployment—from training artifacts to production inference services.
- Design, build, and maintain scalable inference pipelines using modern orchestration frameworks (e.g., Kubeflow, Airflow, Ray, MLflow).
- Implement and optimize model serving infrastructure for latency, throughput, and cost efficiency across GPU and CPU clusters.
- Develop and tune Retrieval-Augmented Generation (RAG) systems, including vector database configuration, embedding optimization, and retriever–generator orchestration.
- Collaborate with product and platform teams to integrate model APIs and agentic workflows into customer-facing systems.
- Evaluate, benchmark, and optimize large language and multimodal models using quantization, pruning, and distillation techniques.
- Design CI/CD workflows for ML systems, ensuring reproducibility, observability, and continuous delivery of model updates.
- Contribute to the development of internal tools for dataset management, feature stores, and evaluation pipelines.
- Monitor production model performance, detect drift, and drive improvements to reliability and explainability.
- Explore and integrate emerging agentic and orchestration frameworks (LangChain, LangGraph, CrewAI, etc.) to accelerate development of intelligent systems.
QUALIFICATIONS:
- 5+ years of experience in applied ML or ML infrastructure engineering.
- Proven expertise in model serving and inference optimization (TensorRT, ONNX, vLLM, Triton, DeepSpeed, or similar).
- Strong proficiency in Python, with experience building APIs and pipelines using FastAPI, PyTorch, and Hugging Face tooling.
- Experience configuring and tuning RAG systems (vector databases such as Milvus, Weaviate, LanceDB, or pgvector).
- Solid foundation in MLOps practices: versioning (MLflow, DVC), orchestration (Airflow, Kubeflow), and monitoring (Prometheus, Grafana, Sentry).
- Familiarity with distributed compute systems (Kubernetes, Ray, Slurm) and cloud ML stacks (AWS Sagemaker, GCP Vertex AI, Azure ML).
- Understanding of prompt engineering, agentic frameworks, and LLM evaluation.
- Strong collaboration and documentation skills, with ability to bridge ML research, DevOps, and product development.
The vacancy text is reproduced unchanged.