
Cybercriminals weaponize AI agents against healthcare
Cybercriminals are weaponizing AI agents to attack industries including healthcare, AI company Anthropic found.
Hackers are using agentic AI to carry out sophisticated cyberattacks that previously would have required years of training, according to the Aug. 27 report.
For instance, a cybercriminal employed Claude, Anthropic’s large language model, to extract data from 17 organizations, including healthcare organizations, over the previous month, the company determined. Using a technique dubbed “vibe hacking,” the attacker interacted with the chatbot in real time while carrying out the attacks.
“The actor’s systematic approach resulted in the compromise of personal records, including healthcare data, financial information, government credentials, and other sensitive information, with direct ransom demands occasionally exceeding $500,000,” the report found.
In response, Anthropic said it banned the associated accounts, instituted additional security controls and “shared technical indicators with key partners to help prevent similar abuse across the ecosystem.”