TL;DR
Safeguards Analyst, Account Abuse: Build and scale detection, enforcement, and operational capabilities to protect the platform against scaled abuse, with an emphasis on developing account signals, detecting fraud, and integrating third-party data. Focus on optimizing identity and account-linking signals, managing payment fraud, and ensuring policy compliance across products.
Location: Hybrid, requiring 25% office attendance in San Francisco, CA or New York City, NY. Visa sponsorship is available.
Salary: $230,000–$310,000 USD
Company
hirify.global is an AI safety and research company working to build reliable, interpretable, and steerable AI systems, dedicated to ensuring AI is safe and beneficial for users and society.
What you will do
- Develop and iterate on account signals and prevention frameworks for abuse detection.
- Optimize identity and account-linking signals using graph-based data infrastructure.
- Evaluate, integrate, and operationalize third-party vendor signals to improve detection.
- Build and maintain processes to assess new product launches for scaled abuse risks.
- Operationalize and iterate on enforcement tooling, including appeals workflows and user communications.
- Manage payment fraud and dispute operations to protect revenue and maintain payment partner standing.
Requirements
- 2+ years of experience in risk scoring, fraud detection, trust and safety, or policy enforcement.
- Hands-on experience building detection systems, risk models, or enforcement processes and workflows.
- Experience evaluating and integrating third-party data sources into detection pipelines.
- Strong SQL and Python skills for complex, multi-table data analysis.
- Familiarity with identity signals (device fingerprinting, account linking) or experience with appeals processes.
- Bachelor's degree in Computer Science, Data Science, or a related field, or equivalent practical experience.
- Willingness to engage with explicit content that may be sexual, violent, or psychologically disturbing in nature; includes on-call responsibilities.
Nice to have
- Experience with graph-based data, account-linking problems, or cross-functional process design.
- Comfort working with ambiguous, noisy data and extracting meaningful signals.
- Experience leveraging generative AI tools to support analytical, detection, or enforcement workflows.
- Background or interest in cybersecurity or threat intelligence.
Culture & Benefits
- Work as part of a single cohesive team on large-scale AI research efforts.
- Focus on high-impact work advancing steerable, trustworthy AI.
- Competitive compensation and comprehensive benefits package.
- Optional equity donation matching and generous vacation and parental leave.
- Flexible working hours and a collaborative office environment.
- A culture that values empirical science, collaboration, and strong communication skills.