AI Agents and Cybercrime: Why Identity Must Come First

We’re framing AI and cybercrime the wrong way. Most discussions still treat AI as a tool in fraudsters’ hands, but the bigger threat is that AI is starting to carry out critical parts of fraud itself. This shift tracks a major industry push toward autonomous AI agents, systems designed to plan and execute tasks with minimal human supervision, blurring the line between tool and actor.
Cybercrime has always carried a human signature, however complex the code. Someone wrote the malware, someone devised the phishing scheme, someone decided when to move laterally, which data to take and when to exfiltrate it. Security teams can follow that chain of decisions, even when they fail to stop the attack in time.
Recent AI-powered attacks suggest that this signature is starting to disappear. In February, an anonymous hacker reportedly used Anthropic’s chatbot to automate cyberattacks against Mexican government agencies. The reported theft ran to 150GB of leaked data, including voter records, civil registration files and employee information, exposing 195 million identities. What made the breach especially concerning was the manner in which it proceeded.
The system reportedly scanned government networks, found weak points and chose what to exploit on its own, without specific human instructions at each stage. Once inside, it ran parallel operations in real time, adapted as defenses changed and moved fast enough to escalate its access before responders could react. While this behavior resembles well-known automated penetration-testing tools, the degree of autonomy described marks a significant escalation.
In the end, investigators were left with a series of autonomous actions and no clear actor behind them. Traditional investigative methods produced no leads pointing to a known attacker or even a plausible suspect. What remained was an attack pattern bearing the marks of AI-driven execution. That is the strategic warning embedded in the breach.
The attacker fades from view
The Mexican case is important because it compresses several disturbing trends into one incident. AI has reduced the work required to identify vulnerabilities and generate attack code, speeding up execution once access is gained and making attribution difficult afterwards. This coincides with widespread warnings from cybersecurity firms and government agencies that AI is compressing the attack lifecycle from weeks to minutes.
Fraud is moving the same way. Deepfakes are no longer a novelty reserved for election clips or celebrity hoaxes; they are becoming a viable crime tool. In one prominent case in early 2024, a deepfaked video conference convinced an employee of UK engineering firm Arup to transfer $25 million. Insurers are also starting to price the losses caused by impersonation and reputational damage.
The same pattern now extends to ordinary users in personal settings. Fake celebrity endorsements continue to drive investment and consumer scams; high-profile figures such as Taylor Swift and Elon Musk have been used repeatedly in AI-generated fraud campaigns, showing how synthetic identity scales. Artificial voices and artificial personas are growing more convincing. The threat is no longer theoretical.
Researchers recently ran a controlled experiment to see how readily available AI tools could create convincing dating profiles and earn the trust of real users. The profiles cleared Tinder’s checks, matched with 296 users and got 40 of them to agree to in-person meetings. The most important lesson came after that first pass: once a profile seems trustworthy, the system can keep the conversation going with quick responses and enough consistency to feel human. At one point, the experiment was handling about 100 conversations at once. That is the change agencies need to pay attention to. Fraud now relies on artificial identities that remain believable long enough to move people from conversation to action.
The synthetic identity becomes a working instrument of deception, and the AI moves from trickery to execution. Agents can help attackers find weaknesses quickly, develop exploits quickly and compress the path from reconnaissance to damage. Major AI labs like Anthropic and OpenAI are building agent systems capable of taking multi-step actions, raising questions about how those systems are validated, deployed and monitored in real-world environments. Provenance now sits at the center of the security challenge.
Validation must move to the action layer
Conventional cybersecurity still assumes that critical attacks can ultimately be traced back to an individual, a group or an organization. That assumption weakens as AI takes over the execution layer. If a system can adapt in real time and leave only machine-generated traces behind, attribution becomes difficult in both operational and legal terms.
The challenge is technical, legal and jurisdictional at once. An AI system needs no fixed location and can operate across borders simultaneously, making traditional methods of attribution and enforcement ineffective. This is already at odds with emerging regulatory frameworks, which emphasize accountability but lack clear mechanisms for identifying autonomous systems in practice. That is why consequential AI actions must carry a cryptographic identity. A signed action creates a durable audit trail, helps establish accountability and gives investigators a solid foundation for later attribution.
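To make the idea concrete, here is a minimal sketch of what a signed agent action could look like, using Ed25519 keys from the open-source Python `cryptography` package. The agent identifier, record format and action names are illustrative assumptions, not an established standard.

```python
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real deployment the key would be provisioned when the agent is
# registered and kept in an HSM or secure enclave; generating it inline
# keeps this sketch self-contained.
agent_key = Ed25519PrivateKey.generate()

def sign_action(action: str, params: dict) -> dict:
    """Produce a tamper-evident record of a single agent action."""
    record = {
        "agent_id": "agent-7f3a",  # hypothetical registered identity
        "action": action,
        "params": params,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = agent_key.sign(payload).hex()
    return record

signed = sign_action("transfer_funds", {"amount": 100, "to": "acct-123"})
print(signed["signature"][:16], "...")  # the record is now verifiable
```

Any later tampering with the record invalidates the signature, which is what turns a pile of logs into evidence.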
The next layer of security, therefore, should focus on identity and attribution. If an AI program can affect money, access, ownership or sensitive data, its actions must be signed, recorded and traceable to an accountable entity operating within defined permissions. This could take the form of enforced identity layers for AI agents interacting with financial systems, consumer platforms or critical infrastructure, much as SSL certificates establish trust on the web.
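Continuing the sketch above (it reuses `agent_key` and `signed`), a platform-side check might verify the signature against a registered public key and confirm the action falls within the agent’s declared permissions before executing it. The registry and permission scopes here are hypothetical.

```python
import json

from cryptography.exceptions import InvalidSignature

# Hypothetical registry mapping agent identities to their public keys
# and the actions each is permitted to take.
registry = {
    "agent-7f3a": {
        "public_key": agent_key.public_key(),
        "allowed_actions": {"transfer_funds"},
    }
}

def verify_action(record: dict) -> bool:
    """Accept an action only if it is signed by a known agent and in scope."""
    entry = registry.get(record.get("agent_id"))
    if entry is None or record["action"] not in entry["allowed_actions"]:
        return False  # unknown agent, or action outside its permissions
    record = dict(record)  # work on a copy; keep the caller's record intact
    signature = bytes.fromhex(record.pop("signature"))
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        entry["public_key"].verify(signature, payload)
        return True
    except InvalidSignature:
        return False

print(verify_action(signed))  # True: genuine signature, permitted action
```

The design mirrors certificate validation on the web: an unsigned, forged or out-of-scope action is simply refused, regardless of how convincing the agent behind it appears.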
That would make attribution more credible and accountability more realistic. The goal matters more than any single implementation, whether it takes the form of Proof-of-Trust or some other machine-identity framework: systems that can act in sensitive domains must be identifiable.
The breach of Mexican government systems points to a broader shift that is already underway. As autonomous fraud agents proliferate, accountability must be embedded in AI systems before anonymous machine action becomes the norm. Without it, we risk entering an era where damage can be done at scale without clear attribution, undermining the cybersecurity, legal and financial systems built on the assumption of identifiable actors. The future of cybersecurity depends on ensuring that action in the digital world still carries a name, a signature and a chain of accountability. That is the layer we need to build now.
