In our digital era, cybercrime has long been a fact of life. However, advances in artificial intelligence (AI) threaten to shake up this status quo. The technology has not only enhanced the capabilities of cybercriminals but also made their attacks more effective. For instance, a hacker exploited Claude, a series of large language models (LLMs) created by U.S.-based company Anthropic, to extract approximately 150 gigabytes of data from agencies within the Mexican government. The technology allowed the assailant to evade the robust defense systems these agencies had in place to safeguard citizens’ information. Claude proved to be an invaluable accomplice, permitting the cybercriminal to walk away with highly sensitive data that included tax records and property agreements. The fact that a widely available LLM was manipulated to facilitate this caper has alarmed many cybersecurity experts. This incident may also reveal that operators are only scratching the surface of what AI-powered tools can do.
Cybersecurity authorities have seen their threat landscape change dramatically with AI’s rapid evolution. Specifically, the range of actors capable of inflicting damage has expanded and diversified. For veterans in this business, who have perfected their craft over the years, AI may be a welcome addition to their toolkits. Yet individuals with experience in cybercrime are not the only ones who may benefit from adopting solutions powered by this technology. Newcomers to this space, particularly those who may lack the skills and expertise expected from hackers, have found success relying on AI. These emerging trends may point towards an uncomfortable truth: cybercrime may become a much more profitable and accessible line of work. Stakeholders responsible for cybersecurity around the world, from policymakers to product developers, must come together if they hope to contain this ever-changing problem. Otherwise, they risk remaining one step behind technologically adept cybercriminals for the foreseeable future.
Aiding Experienced Operators
Professionals engaged in illicit activities online have much to gain from using AI. The most notable advantage may be how the technology could drastically scale up their targeted attacks. Wired observed how “vibe hacking,” a practice in which individuals use conversational, straightforward prompts to instruct an LLM to assist with an act of cybercrime, may be critical for achieving this goal. Hackers may ask an AI agent to generate adaptive malware, probe cyber defenses or even draft phishing emails that could be widely circulated in a social engineering scheme. By automating tasks that were once mundane or time-consuming, techniques like vibe hacking give savvy cybercriminals an opportunity to further refine their strategies, experimenting with the technology in ways that push the boundaries of what it can do for them. With AI integrated into their workflows, operators may be emboldened to plot bigger, and more disruptive, jobs.
Tools enhanced by AI may be a boon to cybercriminals looking to augment their capacities. Yet for the developers behind these products, this news is far from encouraging. The Verge unpacked how tactics like vibe hacking reveal that popular AI solutions can be swiftly and easily exploited by cybercriminals. As concern has grown regarding the security of these tools, companies have vowed to stop malign actors in their tracks. Specifically, they have implemented a raft of measures designed to reduce the likelihood that their products are misused. Even so, their ability to quickly mitigate the harms caused by tools they have put on the market has been called into question. While industry giants have devoted considerable time towards fortifying safety guardrails, inventive operators will rapidly test new methods of jailbreaking AI solutions. In this race, executives in Silicon Valley and elsewhere have found themselves struggling to keep pace with wily cybercriminals.
Upgrading Novice Hackers
Would-be operators have also greatly benefited from the AI boom. As this technology has grown in sophistication, it has simultaneously lowered the barrier to entry for hackers interested in the world of cybercrime. The Wall Street Journal highlighted how AI can automate and simplify many of the tasks required to execute an effective attack. A publicly available LLM can churn out code capable of infiltrating complex systems and stealing vital data with relative ease. It can also help with the nontechnical elements of cybercrime that require cunning and tact, such as improving the language of a phishing email to sound more believable to a mark. Critically, a hacker with limited skills can hand off these responsibilities to an AI-powered solution and still find success. Although they may not reach the highs of their more tenured peers, greener cybercriminals may still be able to make a tidy profit by embracing these tools.
At this juncture, experts do not believe an AI agent can replace a tried-and-tested hacker. It may, however, be a substitute for manpower. Forbes took note of how AI, and its gains in efficiency, could enable low-level cybercriminals to pull off attacks that once necessitated a team of professionals. Its value to these hackers goes beyond producing scripts on their behalf. AI tools can make tactical decisions regarding viable targets and monetization schemes on a tight timeline. Inexperienced cybercriminals who lack the resources to hire outside help may discover that this technology allows them to navigate high-pressure situations shaped by changing variables. When the outcome of an attack is in doubt, the versatility and competence offered by an AI solution could make the difference in avoiding the worst. Taken together, an outsider looking to break into this space might conclude that this technology makes cybercrime more effective and economical.
Not Playing Games
Regardless of their degree of expertise, cybercriminals have used AI to maintain an edge over authorities. The technology has made veteran hackers more productive, and as a result, their plans have become more daring. Meanwhile, the companies whose solutions are being misused by malign actors have scrambled to mount a response. Cybercriminals without much training have also seized this moment, exploiting AI tools currently on the market to their advantage. For these beginners, it has become clear that technical knowledge and resourceful personnel are no longer required to carry out a coordinated attack. As the barriers to cybercrime continue to crumble, and the tools available to hackers become more potent, more may start to believe that this line of work provides a simple and lucrative way to make a living. In reality, this shift will only ensure that spending time online becomes less safe and secure for users across the globe.
Considering what is at risk, stakeholders concerned about data security must rise to the occasion. Policymakers, particularly those based at data protection agencies, must take the time to reevaluate regulations centered on privacy. The speed at which AI has evolved has uncovered gaps in preexisting legislation; officials must be responsive to the nature of innovation and subsequently craft policies that are more adaptable. Regulators must also work with AI developers to clamp down on the criminal misuse of the latter’s solutions. Crucially, their outreach must emphasize that the rights of individuals cannot come second to pushing the boundaries of what AI can do. Finally, international cooperation could help turn the tide against a potential cybercrime surge. Sharing best practices on how to combat AI-enhanced cybercrime could yield major dividends. Together, these concrete measures would level the playing field for cybersecurity authorities, empowering them to more effectively tackle this new form of digital crime.
[Image by Tung Nguyen from Pixabay]
The views and opinions expressed in this article are those of the author.

Aaron Spitler is a researcher whose work lies at the intersection of emerging technologies and human rights. He has worked at numerous organizations, such as Harvard University’s Berkman Klein Center, the International Telecommunication Union (ITU) and the Internet Society. He received his master’s degree from Tufts University’s Fletcher School of Law and Diplomacy.

