Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a rapidly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
We are looking for a Technical Influence Operations Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic’s AI systems for influence operations, disinformation campaigns, coordinated inauthentic behavior, and other forms of information manipulation.
You will work at the intersection of AI safety and information integrity, combining deep expertise in influence operations with technical investigation skills to identify threat actors who leverage AI to generate synthetic content, amplify narratives, manipulate public discourse, or undermine democratic processes. Your work will directly shape how Anthropic defends against one of the most rapidly evolving categories of AI misuse.
Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.