Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment. As a member of the user well-being team, your initial focus will be on standing up detection, review, and escalation workflows for this domain — from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways. This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you'll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.
In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. The role also carries an on-call responsibility shared across the Policy and Enforcement teams.
Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy
Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas
Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces
Conduct deep-dive investigations into suspected exploitation activity — using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets — then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team
Study trends internally and in the broader ecosystem — including evolving trafficking and sextortion tactics — to anticipate how AI systems could be misused for exploitation as capabilities advance
Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material
Build and maintain relationships with external intelligence partners — including hotlines, NGOs, and industry hash-sharing consortia — to inform our approach and enable appropriate real-world escalation
3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field
Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation
Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization
Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations
Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure
Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning
Strong attention to detail and ability to maintain accurate documentation
Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams
Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)
Experience conducting open-source investigations or threat actor profiling in a trust & safety, intelligence, or law enforcement context
Experience working with generative AI products, including writing effective prompts for content review and enforcement
A deep interest in AI safety and responsible technology development
Experience standing up real-world harm escalation pathways or working with law enforcement referral processes