Berk Kutay Gokmen
22 April 2026
A group of unauthorized users has managed to gain access to Anthropic’s artificial intelligence (AI) model, a powerful technology that can enable dangerous cyberattacks, Bloomberg reported on Wednesday.
A small group of users in a private online forum obtained access to Mythos on the same day Anthropic announced plans to release the model to a limited number of companies for testing, the report said, citing sources.
Since then, the group has been using Mythos regularly, though not for cybersecurity purposes. The sources supported this account with screenshots and a live demonstration of the model.
Anthropic has said that Mythos can identify and exploit vulnerabilities “in every major operating system and every major web browser when directed by a user to do so.”
Because of these capabilities, the company has restricted access to a select group of software providers so they can test and strengthen their systems against potential cyber threats.
The users reportedly used a combination of methods to gain entry, including leveraging access through a third-party contractor and employing common investigative tools used by cybersecurity researchers.
In response, an Anthropic spokesperson said the company is investigating claims of unauthorized access through a third-party vendor environment.
The company added that it has no evidence that the access reported by Bloomberg went beyond a third-party vendor’s environment or that it is impacting any of Anthropic’s systems.
The group is interested in experimenting with new models, not wreaking havoc with them, Bloomberg said, citing sources, who added that the group has not run any cybersecurity-related prompts on Mythos.