TL;DR
Researcher, Interpretability (AI): Conduct and publish research on techniques for understanding deep networks, and build infrastructure for studying model internals at scale, with a focus on applying that understanding to AI safety.
Location: Onsite in San Francisco
Salary: $310K – $460K + equity
Company
hirify.global is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.
What you will do
- Develop and publish research on techniques for understanding representations of deep networks.
- Engineer infrastructure for studying model internals at scale.
- Collaborate across teams to work on projects that hirify.global is uniquely suited to pursue.
- Guide research directions toward demonstrable usefulness and/or long-term scalability.
Requirements
- Excitement about hirify.global’s mission of ensuring AGI benefits all of humanity, and alignment with hirify.global’s charter.
- Enthusiasm for long-term AI safety, and deep thought about technical paths to safe AGI.
- Experience in AI safety, mechanistic interpretability, or closely related disciplines.
- A Ph.D. or research experience in computer science, machine learning, or a related field.
- Comfort working with large-scale AI systems, and enthusiasm for making use of hirify.global’s unique resources.
- 2+ years of research engineering experience and proficiency in Python or similar languages.
Culture & Benefits
- Collaborative and curiosity-driven working style.
- Committed to providing reasonable accommodations to applicants with disabilities.
- Equal opportunity employer.
- Offers equity.