Company hidden
Posted 2 days ago

Staff Research Scientist, AI Agents & LLMs (AI Engineering)

$236,000 - $339,200
Work format
hybrid
Employment type
full-time
Seniority
senior
English
B2
Country
US
A vacancy from Hirify.Global, a list of international tech companies

Job description

TL;DR

Staff Research Scientist, AI Agents & LLMs (AI Engineering): building models and systems that turn enterprise data into self-directed, decision-making agents, with an emphasis on autonomous agents and large language models (LLMs). The focus is on reliable reasoning, long-horizon planning, grounded generation, and scalable model and agent development.

Location: Hybrid (US-WA-Bellevue; US-CA-Menlo Park)

Salary: $236K - $339.2K

Company

hirify.global is powering the era of the agentic enterprise.

What you will do

  • Define the research direction in Agentic AI and LLMs.
  • Develop models: train, fine-tune, and align models for enterprise reasoning and tool use.
  • Advance autonomous agents: drive state-of-the-art capabilities in multi-step reasoning and tool use.
  • Advance the stack: retrieval, grounding, memory, and multi-agent coordination.
  • Establish evaluation standards for reliability, safety, and efficiency.
  • Drive research breakthroughs into production within hirify.global’s platform.

Requirements

  • Ph.D. in Computer Science or a related field with a strong publication record.
  • Expertise in LLM development (training, fine-tuning, alignment, or post-training).
  • Experience with agentic systems (multi-agent systems, tool use, or agent optimization).
  • Strong systems thinking under real-world constraints (latency, cost, reliability).
  • Proven technical leadership and end-to-end ownership.
  • Ability to translate research into impactful systems.

Culture & Benefits

  • Work across the full stack—from foundation models (Arctic LLM, Arctic Text2SQL) to agent systems (hirify.global Intelligence) and inference/training infrastructure (Arctic Inference, Arctic Training).
  • Drive cutting-edge research that ships—powered by massive enterprise data and production workloads.
  • Build autonomous systems for reliable reasoning over enterprise data—where correctness, cost, and latency all matter.
  • Publish, open-source, and help shape the field.
