Using AI models to generate exploits for cryptocurrency contracts looks like a promising business model, though not necessarily a legal one.
Researchers at University College London (UCL) and the University of Sydney in Australia have shown that AI agents can discover and exploit weaknesses in so-called smart contracts.
Smart contracts, which have never quite lived up to their name, are self-executing programs on various blockchains that carry out decentralized finance (DeFi) transactions when certain conditions are met.
Like most software of sufficient complexity, smart contracts have bugs, and those bugs can be exploited to steal funds. Last year, the cryptocurrency industry lost approximately $1.5 billion to hacks, according to web3 security platform vendor Immunefi [PDF]. Since 2017, an estimated $11.74 billion has been stolen from DeFi platforms.
AI agents, it seems, stand to make taking that money easier.
Arthur Gervais, associate professor of information security at UCL, and Liyi Zhou, a lecturer in computer science at the University of Sydney, have developed an AI agent system called A1 that uses various AI models from OpenAI, Google, DeepSeek, and Alibaba (Qwen) to generate exploits for vulnerable smart contracts.
They describe the system in a preprint paper titled “AI Agent Smart Contract Exploit Generation.”
Given a set of target parameters – the contract address and a block number – the agent chooses tools and gathers information to understand the contract's behavior and weaknesses. It then generates exploits in the form of Solidity contracts, which it tests against historical blockchain states.
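To make that pipeline concrete, here is a minimal sketch of what such a generate-and-validate loop could look like, assuming the five-attempt budget mentioned later in this piece. Every name in it (query_llm, run_on_fork, and so on) is hypothetical; the paper describes the approach, not this code.

```python
# Minimal, hypothetical sketch of an A1-style loop: gather context, ask an LLM
# for a Solidity exploit, validate against historical chain state, feed failures
# back. Illustrative only - not the authors' code.
from dataclasses import dataclass

@dataclass
class ExecutionResult:
    profit: float       # revenue in USD if the exploit succeeded
    error_trace: str    # revert reason / trace if it failed

def query_llm(prompt: str) -> str:
    raise NotImplementedError  # call to o3-pro, Gemini 2.5, DeepSeek R1, Qwen3, etc.

def run_on_fork(exploit_source: str, block_number: int) -> ExecutionResult:
    raise NotImplementedError  # concrete execution against a fork at block_number

def generate_exploit(address: str, block_number: int, budget: int = 5):
    """Iteratively ask an LLM for a Solidity exploit and validate each attempt."""
    context = f"target={address} block={block_number}"
    for _ in range(budget):
        exploit = query_llm(f"Write a Solidity exploit contract.\n{context}")
        result = run_on_fork(exploit, block_number)
        if result.profit > 0:
            return exploit, result.profit   # executable, validated exploit
        context += f"\nlast attempt failed: {result.error_trace}"
    return None, 0.0
```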
Asked to look for weaknesses in code, LLMs can find bugs – but AI-generated vulnerability reports are often prohibited by bug bounty programs.
So the A1 agent system incorporates a set of tools to make its exploits more reliable. These include a source code fetching tool that can resolve proxy contracts, plus individual tools for recovering constructor parameters, reading contract state, sanitizing code, executing test exploits, and calculating revenue.
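A rough illustration of how those six roles might be laid out, with invented labels; A1's actual tool interfaces aren't reproduced here.

```python
# Hypothetical labels for the six tool roles described above - illustrative only.
TOOLS = {
    "fetch_source":       "pull verified source code, resolving proxy contracts",
    "constructor_params": "recover the parameters the contract was deployed with",
    "read_state":         "query the contract's on-chain state at a given block",
    "sanitize_code":      "trim and clean source so the model sees relevant code",
    "execute_test":       "run a candidate exploit against a historical fork",
    "compute_revenue":    "normalize whatever was extracted into a dollar figure",
}
```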
“A1 performs end-to-end exploit generation,” Zhou told The Register in an email. “This is important. Unlike other LLM security tools, the output is not just a report, but actual executable code. A1 is really close to a human attacker.”
Tested on 36 real-world vulnerable contracts on the Ethereum and Binance chains, A1 showed a success rate of 62.96 percent (17 of 27) on the VERITE benchmark.
According to the authors, A1 also identified nine additional vulnerable contracts, five of which emerged after the training cutoff of the best-performing model, OpenAI's o3-pro. That's notable because it indicates the model isn't limited to regurgitating vulnerability information found in its training data.
“Across all 26 successful cases, A1 extracts up to 8.59 million USD per case and 9.33 million USD in total,” the paper reports. “Through 432 experiments across six LLMs, we analyze iteration-wise performance, showing diminishing returns with average marginal gains of +9.7 percent, +3.7 percent, +5.1 percent, and +2.8 percent for iterations 2–5 respectively, with per-experiment costs ranging from $0.01 to $3.59.”
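Reading those marginal gains as percentage points of success rate – our interpretation, not the paper's wording – a quick bit of arithmetic shows most of the benefit of iterating arrives early:

```python
# Marginal success-rate gains quoted above for iterations 2-5 (percentage points).
marginal_gains_pp = [9.7, 3.7, 5.1, 2.8]
print(f"iterating adds ~{sum(marginal_gains_pp):.1f} points over a single attempt")
print(f"iteration 2 alone contributes {marginal_gains_pp[0]} of those points")
```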
The researchers tested A1 with different LLMs: o3-pro (OpenAI o3-pro, o3-pro-2025-06-10), o3 (OpenAI o3, o3-2025-04-16), Gemini Pro (Google Gemini 2.5 Pro Preview, gemini-2.5-pro), Gemini Flash (Google Gemini 2.5 Flash Preview, gemini-2.5-flash-preview-04-17), R1 (DeepSeek R1-0528), and Qwen3 MoE (Qwen3-235B-A22B).
OpenAI's o3-pro and o3 achieved the highest success rates, 88.5 percent and 73.1 percent respectively, given a five-iteration budget for the model to interact with itself in the agent loop. The o3 models did so while maintaining high revenue optimization, capturing 69.2 percent and 65.4 percent of the maximum extractable revenue from the exploited contracts.
Exploits of this sort can also be identified using manual code analysis alongside static and dynamic fuzzing tools. But the authors note that manual approaches have their limits, given the size and complexity of smart contracts, the slowness and scarcity of human security experts, and the high false positive rates of current automated tools.
In theory, A1 could be deployed and earn more than its operating costs, assuming law enforcement didn't interfere.
“A system like A1 can turn a profit,” Zhou explained. “To give a concrete example [from the paper], Figure 5 shows that o3-pro remains profitable even if only 1 out of every 1,000 scans leads to a real vulnerability – as long as that vulnerability appeared within the last 30 days.”
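That claim is easy to sanity-check. Using the worst-case per-scan cost from the range quoted earlier, and a placeholder exploit value that is our assumption rather than a figure from the paper, the expected revenue per scan comfortably exceeds the cost:

```python
# Break-even sketch for the "1 in 1,000 scans" scenario. exploit_value is a
# hypothetical placeholder; cost_per_scan is the top of the quoted $0.01-$3.59 range.
cost_per_scan = 3.59                 # USD per scan, worst case
hit_rate = 1 / 1000                  # one real vulnerability per 1,000 scans
exploit_value = 10_000               # USD per hit - assumed for illustration

expected_revenue = hit_rate * exploit_value          # $10.00 per scan
print(f"${cost_per_scan:.2f} cost vs ${expected_revenue:.2f} expected revenue "
      f"per scan -> profitable: {expected_revenue > cost_per_scan}")
```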

Zhou said the time window matters because researchers have probably already found older vulnerabilities, and users may have patched them.
“Finding such fresh bugs is not easy, but it is possible, especially at scale. Once you discover a few valuable exploits, they can easily cover the cost of running thousands of scans. As AI models continue to improve, we expect both the chance of finding these vulnerabilities and the range of contracts covered to grow – making the system more effective over time.”
Asked whether A1 had spotted any zero-day vulnerabilities in the wild, Zhou replied, “No zero-days for this paper (yet).”
The paper concludes with a warning about the mismatch between attackers' incentives and defenders' rewards – whether attackers use AI tools or traditional ones. Essentially, the authors argue that either bug bounties need to approach the value of exploits, or the cost of defensive scanning must drop by an order of magnitude.
“Consider a single vulnerability requiring approximately 1,000 scans at a cost of $3,000 to find,” the paper says. “A $100,000 exploit yields a 33x return for the attacker, while a defender's $10,000 bounty covers only 3.3x.”
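The asymmetry is plain arithmetic: the same $3,000 scanning outlay buys very different returns depending on which side of the exploit you sit.

```python
# The paper's attacker-vs-defender return figures, reproduced as arithmetic.
scan_cost = 3_000        # USD to find one vulnerability (~1,000 scans)
exploit_value = 100_000  # USD available to an attacker
bounty = 10_000          # USD available to a defending whitehat

print(f"attacker return: {exploit_value / scan_cost:.1f}x")  # ~33.3x
print(f"defender return: {bounty / scan_cost:.1f}x")         # ~3.3x
```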
The risk of prison time may change that calculus to some extent. But given the current regulatory climate in the United States and an estimated cybercrime enforcement rate of 0.05 percent, the risk adjustment will be small.
Zhou argues that the cost gap between attack and defense is a serious challenge.
“My recommendation is that project teams should use tools like A1 themselves to continuously monitor their protocols, instead of waiting for third parties to find issues,” he said. “The upside for both project teams and attackers is the entire TVL [Total Value Locked of the smart contract], while whitehat bounties are often capped at 10 percent.”
“This asymmetry makes it difficult to compete without proactive security. If you rely on third parties, you are essentially trusting that they will act in good faith and settle for a 10 percent bounty – which, from a security perspective, is a very strange assumption. I usually assume that all players are financially rational when it comes to reporting security problems.”
In the July 8 draft of their paper, the researchers indicated they planned to release A1 as open source code. But Zhou said otherwise when asked about the code's availability.
“We have removed the open source release (the arXiv update will appear tomorrow) because we are not sure whether it's the right move, given A1's capability and the concerns above.” ®