AI Agents Can Now Steal Millions From Crypto Contracts, New Research Shows

The study shows that advanced AI models such as Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 successfully extracted $4.6 million in simulated attacks on real smart contracts.

Artificial intelligence has reached a dangerous new stage. AI systems can now find and exploit vulnerabilities in smart contracts worth millions of dollars, according to groundbreaking research published by Anthropic.

These contracts were compromised after March 2025, meaning the AI could not have been aware of these specific vulnerabilities during training.

What makes this discovery alarming?

The research team created a benchmark called SCONE-bench using 405 smart contracts that had already been hacked between 2020 and 2025. When they tested 10 leading AI models against it, the results were striking: the AI agents successfully exploited 207 contracts, more than half of the set, and stole $550.1 million in simulated funds.

But the real shock came when the researchers tested only contracts that were compromised after March 2025. Even without prior knowledge of these specific attacks, the AI agents successfully exploited 19 out of 34 contracts, and Claude Opus 4.5 alone accounted for $4.5 million of the total.

The speed of improvement is equally alarming. The research found that AI exploit capabilities doubled every 1.3 months throughout 2025. At the same time, the cost of running these attacks fell by 70% in just six months.
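
To see what those trend lines imply if they were to hold steady, here is a back-of-the-envelope calculation in Python. This is an extrapolation for illustration only, not a figure from the study: a 1.3-month doubling time compounds to roughly a 600-fold capability increase over a year.

```python
# Back-of-the-envelope implications of the reported trends.
# Assumes (purely for illustration) that the rates stay constant.

doubling_months = 1.3
yearly_factor = 2 ** (12 / doubling_months)
print(f"{yearly_factor:.0f}x capability growth per year")  # ~601x

cost_drop = 0.70  # reported 70% cost decline over six months
print(f"cost after 6 months: {1 - cost_drop:.0%} of original")  # 30%
```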

Artificial intelligence discovers new vulnerabilities

The study went beyond recreating historical hacks. The researchers also tested the AI agents on 2,849 smart contracts recently deployed on the Binance Smart Chain that had no known security issues. Claude Sonnet 4.5 and GPT-5 found two brand-new vulnerabilities worth $3,694 in potential theft.

One vulnerability involved a token contract with a calculation function that was supposed to be read-only. The developers forgot to add the proper access modifier, allowing anyone to call the function and mint an unlimited number of tokens. The AI repeatedly called this function, inflated its token balance, and then sold the tokens for real money.

Source: @AnthropicAI
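
To make that failure mode concrete, here is a minimal Python sketch of the logic described above. The research summary does not name the contract, so all class and function names here are hypothetical, and a simple caller check stands in for a Solidity access-control modifier.

```python
# Toy model of the access-control bug: a mint-like function that was
# meant to be restricted but ships with no caller check at all.
# All names are illustrative, not taken from the real contract.

class VulnerableToken:
    def __init__(self, owner: str):
        self.owner = owner
        self.balances: dict[str, int] = {}

    def mint(self, caller: str, to: str, amount: int) -> None:
        # BUG: the developers intended something like
        #   if caller != self.owner: raise PermissionError(...)
        # but the check was never added, so any caller can mint.
        self.balances[to] = self.balances.get(to, 0) + amount


token = VulnerableToken(owner="0xDeveloper")
attacker = "0xAttacker"

# An attacker (or an AI agent) simply calls the unprotected function
# in a loop to inflate its balance, then sells the tokens.
for _ in range(1_000):
    token.mint(caller=attacker, to=attacker, amount=10**18)

print(token.balances[attacker])  # arbitrarily large balance
```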

The second bug affected a token-launch service. When token creators did not designate a fee recipient, anyone could claim to be the intended recipient and steal the accumulated trading fees. Four days after the AI discovered the bug, a real attacker used the same method to steal $1,000.
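
This pattern is also easy to model in a few lines of Python. The launch service is not named in the study, so the sketch below uses hypothetical stand-ins and assumes the fee-recipient slot defaults to an unset zero address that the first caller can claim.

```python
# Toy model of the unset-fee-recipient bug. The launchpad and its
# names are hypothetical stand-ins for the unnamed service.

UNSET = "0x0000000000000000000000000000000000000000"

class TokenLaunch:
    def __init__(self):
        # The creator deployed without naming a fee recipient,
        # leaving the slot at the zero address.
        self.fee_recipient = UNSET
        self.accrued_fees = 0

    def set_fee_recipient(self, caller: str) -> None:
        # BUG: nothing verifies the caller is the token's creator,
        # so the first address to call this becomes the recipient.
        if self.fee_recipient == UNSET:
            self.fee_recipient = caller

    def claim_fees(self, caller: str) -> int:
        if caller != self.fee_recipient:
            raise PermissionError("not the fee recipient")
        payout, self.accrued_fees = self.accrued_fees, 0
        return payout


launch = TokenLaunch()
launch.accrued_fees = 1_000  # fees accumulated from trading

attacker = "0xAttacker"
launch.set_fee_recipient(attacker)   # claims the vacant slot
print(launch.claim_fees(attacker))   # drains the accrued fees
```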

Real World Impact: Balancer Attack

The timing of this research is important. In November 2025, hackers exploited the Balancer protocol for more than $120 million using similar attack methods. The attack demonstrated that even well-established and well-vetted DeFi protocols remain vulnerable to sophisticated exploits.

Balancer had undergone multiple security audits and had operated for years without major incidents. Even so, attackers found a weakness in the protocol’s access-control system and drained funds across multiple blockchain networks.

Economics of AI-powered attacks

The cost structure of these AI attacks is remarkably efficient. Running GPT-5 across all 2,849 contracts cost just $3,476 in API fees: about $1.22 per contract scanned, or roughly $1,738 per vulnerability found.

This creates a lucrative scenario for attackers. With an average exploit value of $1,847 against a discovery cost of roughly $1,738, a hacker clears approximately $109 per successful attack. As AI models become cheaper and more capable, these economics will only improve for malicious actors.
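
The arithmetic behind those figures follows directly from the numbers above; the short calculation below simply reproduces them.

```python
# Reproducing the attack economics reported in the study.

api_cost = 3_476           # total GPT-5 API spend (USD)
contracts_scanned = 2_849
vulns_found = 2
avg_exploit_value = 1_847  # USD per exploitable contract

cost_per_contract = api_cost / contracts_scanned
cost_per_vuln = api_cost / vulns_found
profit_per_exploit = avg_exploit_value - cost_per_vuln

print(f"cost per contract scanned: ${cost_per_contract:.2f}")   # ~$1.22
print(f"cost per vulnerability found: ${cost_per_vuln:,.0f}")   # $1,738
print(f"profit per successful exploit: ${profit_per_exploit:.0f}")  # $109
```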

The research also revealed that exploit success does not depend on the complexity of the code. Instead, the amount of money locked in a contract determines how profitable an attack is, which means attackers are more likely to target high-value protocols than to hunt for more intricate bugs.

Beyond DeFi: Broader Security Implications

The researchers warn that these AI capabilities are not limited to blockchain systems. The same reasoning skills that allow AI agents to manipulate token balances and redirect fees could apply to traditional software, AI browser systems, and the infrastructure that supports digital assets.

As scanning becomes cheaper and more automated, the window between the deployment of new software and potential exploitation will continue to shrink. Developers will have less time to find and fix vulnerabilities before AI agents discover them.

The study’s authors stress that this technology cuts both ways. The same AI systems that can detect vulnerabilities can also help developers review their code and fix flaws before deployment. Organizations should adopt AI-powered defense systems to match the capabilities of potential attackers.

The security arms race begins

For the cryptocurrency industry, this means fundamental changes in how security is approached. Traditional auditing practices may not be sufficient when AI can comprehensively scan code for vulnerabilities at minimal cost. Projects will need constant monitoring and AI-powered defense systems to stay ahead of automated threats.

The researchers released their SCONE-bench dataset publicly to help developers test their smart contracts. While this creates some risk by providing attack tools, it also gives defenders the same capabilities to harden their systems before malicious actors strike.

The race between AI-powered attack and defense has begun. Organizations that adapt quickly to this new reality will survive, while those that do not may become the next headline in an increasingly dangerous digital landscape.
