Company hidden
Posted 6 days ago

Member Of Technical Staff, Applied Research (AI)

Work format
Onsite
Employment type
Full-time
Seniority
Middle/Senior
English
B2
Country
US

Job description


TL;DR

Member Of Technical Staff, Applied Research (AI): Build and deploy advanced generative AI models for customer products, with an emphasis on fine-tuning techniques and model evaluation. The role focuses on bridging the gap between customer data and inference infrastructure to solve real-world scalability and performance challenges.

Location: San Mateo, CA

Company

A fast-growing, Series C generative AI infrastructure company dedicated to high-performance model serving and LLM innovation.

What you will do

  • Collaborate directly with customers to understand their unique data and product requirements.
  • Tune and deploy models using SFT, DPO, and RL techniques tailored to specific use cases.
  • Develop and implement robust evaluation methodologies for LLMs, including benchmarks and custom evals.
  • Bridge the gap between customer-facing problems and internal model-serving infrastructure.
  • Diagnose and resolve system-wide performance and modeling issues to ensure production success.

Requirements

  • Strong experience with PyTorch and modern Transformer architectures.
  • Solid foundation in computer science, including concurrency, distributed systems, and data structures.
  • Hands-on experience in training, fine-tuning, or evaluating machine learning models.
  • Familiarity with current LLM research, model architectures, and training methodologies.
  • Proven ability to partner with customers to iterate on solutions based on real-world feedback.
  • Ability to operate effectively in a fast-paced, ambiguous environment.

Nice to have

  • Deep expertise in tuning techniques like SFT, DPO, and RL.
  • Experience with cloud-native infrastructure including Docker, Kubernetes, and enterprise data storage.
  • Proficiency with LLM benchmarking and error analysis.
  • Knowledge of enterprise infrastructure components like Databricks or SageMaker.

Culture & Benefits

  • Direct impact on global AI infrastructure development.
  • Collaborative environment focused on results and innovation without bureaucracy.
  • Opportunity to work with leading experts in ML and systems engineering.
  • Exposure to bleeding-edge AI models and inference speed technologies.
