TL;DR
Staff Research Engineer (Pre-training): develop large language models from scratch and deploy them into production for coding tasks, with an emphasis on training foundation models. The role focuses on converting business requirements into technical specifications and improving existing subsystems.
Location: Amsterdam, Netherlands; Berlin, Germany; Limassol, Cyprus; London, United Kingdom; Munich, Germany; Paphos, Cyprus; Prague, Czech Republic; Remote, Germany; Warsaw, Poland; Yerevan, Armenia
Company
hirify.global is dedicated to creating robust and effective developer tools that automate routine checks and corrections, speeding up production and freeing developers to grow, discover, and create.
What you will do
- Work with stakeholders to convert business requirements into technical specifications.
- Train LLMs from scratch on a large GPU cluster.
- Collect and process pre-training and fine-tuning datasets.
- Support and improve existing subsystems.
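To give a flavor of the dataset work above, here is a minimal sketch of a text-filtering and exact-deduplication pass (the heuristics, thresholds, and function name are illustrative assumptions; production pre-training pipelines use far more elaborate quality filters and fuzzy deduplication):

```python
from hashlib import sha256

def clean_corpus(docs, min_chars=200, max_nonascii_ratio=0.3):
    """Filter and exact-deduplicate raw text documents (toy heuristics)."""
    seen = set()
    kept = []
    for text in docs:
        text = text.strip()
        if len(text) < min_chars:
            # Drop very short documents.
            continue
        nonascii = sum(ord(c) > 127 for c in text) / len(text)
        if nonascii > max_nonascii_ratio:
            # Drop likely garbled or binary-heavy text.
            continue
        digest = sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:
            # Exact deduplication by content hash.
            continue
        seen.add(digest)
        kept.append(text)
    return kept
```

In practice each filter would be a separate, measurable pipeline stage; this sketch only shows the overall shape of such a pass.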
Requirements
- Experience in design, deployment, and support of production ML systems.
- A strong theoretical background in NLP and transformer-based approaches.
- Proficiency with modern deep learning frameworks such as PyTorch and common libraries for NLP.
- Experience in distributed training of multi-billion parameter models.
- Attention to detail in everything you do and great communication skills.
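The distributed-training requirement above centers on data parallelism: each worker computes gradients on its own data shard, the gradients are averaged across workers (an all-reduce), and every replica applies the same update. A framework-free, single-process sketch of that averaging step (function names are illustrative; real systems use `torch.distributed` over NCCL):

```python
def allreduce_mean(worker_grads):
    """Average per-worker gradient vectors, as an all-reduce(mean) would.

    worker_grads: list of equal-length float lists, one per worker.
    """
    n_workers = len(worker_grads)
    dim = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n_workers for i in range(dim)]

def sgd_step(params, worker_grads, lr=0.1):
    """One data-parallel SGD step: average gradients, then update weights."""
    avg = allreduce_mean(worker_grads)
    return [p - lr * g for p, g in zip(params, avg)]
```

Because every replica sees the same averaged gradient, the replicas stay bit-identical after each step; at multi-billion-parameter scale this is combined with tensor and pipeline parallelism.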
Nice to have
- Experience with LLM inference frameworks such as vLLM, DeepSpeed, or TensorRT.
- Familiarity with LLM alignment techniques such as RLHF/RLAIF.
- MLOps tools and practices, including CI/CD for ML.
- Kubernetes (K8s) and Kubeflow.
- Scientific publications in the NLP field.
Culture & Benefits
- A cluster of hundreds of NVIDIA GPUs as training infrastructure.
- Git for source control management.
- Python, PyTorch, and HuggingFace as an ML stack.
- Kubeflow for ML pipelines and Weights & Biases for experiment tracking.
- TeamCity for CI automation.