**“The world is in peril,” Anthropic’s Safeguards Research Team lead wrote in his resignation letter.**

Mrinank Sharma, a leading artificial intelligence safety researcher, has resigned from Anthropic, issuing an enigmatic warning about global “interconnected crises” and announcing his intention to become “invisible for a period of time.” Sharma, an Oxford graduate who led the Safeguards Research Team at the maker of the Claude chatbot, posted his resignation letter on X (formerly Twitter) on Monday, describing a growing personal reckoning with “our situation.”

“The world is in peril. And not just from AI or bioweapons, but from a whole series of interconnected crises unfolding right now,” Sharma wrote to his colleagues. His departure comes amid escalating tensions at the San Francisco-based AI lab, which is racing to develop increasingly powerful systems even as its own executives warn that these very technologies could pose significant risks to humanity.

> “I'll be moving back to the UK and letting myself become invisible for a period of time.” — mrinank (@MrinankSharma) February 9, 2026

The resignation also follows reports of a widening rift between Anthropic and the Pentagon over the military’s desire to deploy AI for autonomous weapons targeting without the safeguards the company has sought to impose.

Sharma’s resignation, which came just days after Anthropic released Opus 4.6, a more powerful iteration of its flagship Claude tool, hinted at internal friction over safety priorities. “Throughout my time here, I’ve repeatedly seen how difficult it is to truly let our values govern our actions,” he wrote. “I’ve witnessed this struggle within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society as well.”

Sharma’s team was established just over a year ago with a mandate to address AI security threats, including “model misuse and misalignment,” bioterrorism prevention, and “catastrophe prevention.” He expressed pride in his work developing defenses against AI-assisted bioweapons and in his “final project on understanding how AI assistants could make us less human or distort our humanity.” He now plans to return to the UK to “explore a poetry degree” and “become invisible for a period of time.”

Anthropic’s CEO, Dario Amodei, has repeatedly warned about the dangers posed by the very technology his company is commercializing. In a nearly 20,000-word essay last month, he cautioned that AI systems of “almost unimaginable power” are “imminent” and will “test who we are as a species.” Amodei warned of “autonomy risks,” where AI could “go rogue and overpower humanity,” and suggested the technology could enable “a global totalitarian dictatorship” through AI-powered surveillance and autonomous weapons.