When AI becomes a weapon in the cybersecurity arms race

A new era of AI-enabled cybercrime shows how a single attacker can automate ransomware, exposing major gaps in enterprise AI security posture and governance

Anthropic’s recently released threat intelligence report revealed an alarming evolution in AI-powered attacks: the company’s security team intercepted a lone hacker who had used artificial intelligence to run a one-person ransomware operation. The attack demonstrated how cybercriminals can use AI to automate complex operations that traditionally required entire criminal organisations. The hacker used AI coding agents to systematically identify vulnerable websites and web services, then deployed machine learning models to generate malicious code that exploited those vulnerabilities.

After stealing data from the compromised systems, the attacker employed large language models to analyse and prioritise the stolen information by sensitivity and extortion value, before sending automated ransom demands to the targeted companies. In total, the attacker carried out 17 ransomware attacks, demanding ransoms ranging from US$75,000 to US$500,000.

Öykü Işık, Professor of Digital Strategy and Cybersecurity at IMD, said cybercriminals move very fast and are not bound by rules or governance mechanisms that organisations need to comply with. Photo: IMD

What would traditionally require an entire criminal organisation had been condensed into a single operator leveraging AI’s capabilities. “This was one person, doing what would normally take a whole group of operators in a ransomware gang to do,” said Öykü Işık, Professor of Digital Strategy and Cybersecurity at IMD. “This is a very recent and very real example of how things are evolving, and companies need to be prepared.”

AI cybersecurity insights from industry

Professor Işık’s warning is borne out by industry research. IBM’s Cost of a Data Breach Report 2025: The AI Oversight Gap revealed alarming gaps in AI security governance across organisations worldwide. While only 13% of companies reported breaches involving AI models or applications, a staggering 97% of those organisations lacked proper AI access controls. An additional 8% of companies admitted they did not know whether they had been compromised through AI-related attacks, suggesting the true scope remained hidden.

The research exposed shadow AI as a significant vulnerability, with one in five organisations experiencing breaches due to unauthorised AI tools used by employees. These shadow AI incidents cost an average of $670,000 more than breaches at firms with controlled AI environments. Meanwhile, 63% of breached organisations either lacked AI governance policies entirely or were still developing them, with only 34% of those with policies conducting regular audits for unsanctioned AI tools.
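One way security teams try to surface shadow AI is by reviewing outbound traffic for connections to unsanctioned generative-AI services. The sketch below is a minimal illustration of that idea, not a method drawn from IBM’s report; the log format, column names, domain list and file path are all assumptions.

```python
# A minimal sketch of surfacing shadow AI by scanning egress proxy logs for
# traffic to generative-AI endpoints that have not been sanctioned.
# The log format, domain list, and file path are illustrative assumptions.
import csv
from collections import Counter

GENAI_DOMAINS = {"api.openai.com", "chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"api.openai.com"}  # hypothetical: the only approved enterprise integration

def shadow_ai_report(proxy_log_path: str) -> Counter:
    """Count requests per user to unsanctioned generative-AI domains."""
    hits: Counter = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: user, destination_host
            host = row["destination_host"].lower()
            if host in GENAI_DOMAINS and host not in SANCTIONED:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in shadow_ai_report("proxy_log.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

In practice, such a report would feed a governance conversation rather than a blocklist: the point is to make unsanctioned AI use visible so it can be brought under policy.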

Learn more: Company directors fall short of cyber security skills mark

IBM’s research also found cybercriminals had rapidly weaponised AI capabilities, with 16% of data breaches involving attackers using AI tools – primarily for AI-generated phishing campaigns (37% of cases) and deepfake impersonation attacks (35%). The most common entry point for AI-related breaches was compromised applications, APIs, and plug-ins within AI supply chains, with 60% of incidents leading to data compromise and 31% causing operational disruption.

These statistics underscored a critical reality: as AI democratised both attack and defence capabilities, business leaders faced an unprecedented challenge in balancing innovation with security imperatives.

AI’s double-edged impact on cybersecurity attack and defence capabilities

The artificial intelligence revolution has created a parallel transformation in both cybersecurity threats and defences, fundamentally altering how organisations approach digital risk management. Yenni Tim, Associate Professor in the School of Information Systems and Technology Management at UNSW Business School, identified this duality as central to understanding the cybersecurity implications of AI.

“There are two dimensions to consider: cybersecurity of AI and AI for cybersecurity,” Prof. Tim explained. “AI’s black-box nature makes securing its implementation and use more complex, while at the same time, AI provides defenders with powerful tools like advanced pattern recognition for more accurate threat detection. But those same capabilities lower the barrier for attackers, who can exploit AI to scale and automate malicious activities.”
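As a concrete illustration of the defensive side Prof. Tim describes, the sketch below applies a standard anomaly-detection model to flag unusual activity patterns. It is a simplified example built on assumed session features and synthetic data, not a production detection pipeline.

```python
# A minimal sketch of "AI for cybersecurity": using pattern recognition to flag
# anomalous activity. The feature set, synthetic data, and threshold are
# illustrative assumptions, not a production detection pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [logins_per_hour, MB_downloaded, distinct_hosts_contacted]
normal_sessions = np.random.default_rng(0).normal(
    loc=[5, 50, 3], scale=[2, 20, 1], size=(500, 3)
)
suspicious_sessions = np.array([[40, 900, 25], [60, 1200, 40]])  # e.g. automated exfiltration

# Train on baseline behaviour, then score new sessions against it
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

for session in suspicious_sessions:
    label = model.predict(session.reshape(1, -1))[0]  # -1 = anomaly, 1 = normal
    print(f"session {session} -> {'ALERT' if label == -1 else 'ok'}")
```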

UNSW Business School Associate Professor Yenni Tim says that AI has contributed to a technological arms race that necessitates a fundamental organisational shift from security to digital resilience. Photo: UNSW Sydney

The democratisation of AI capabilities had indeed lowered entry barriers for cybercriminals. Prof. Işık observed how traditional hacking requirements had diminished: “We do see, unfortunately, that the cybercrime market is a very lucrative market. And recently, the entry barrier to that cybercrime market is getting lower and lower through the use of AI,” she said.

The underground economy has rapidly adapted to these opportunities. Dark web marketplaces offer specialised large language models designed specifically for criminal purposes, with subscription services providing hacking capabilities for as little as US$90 per month, according to Prof. Işık. “These criminals move very fast, and they are very agile. They’re not bound by rules or governance mechanisms that organisations need to comply with.”

IBM’s research confirmed this trend, revealing that 16% of data breaches involved attackers using AI, with AI-generated phishing attacks accounting for 37% of these incidents and deepfake impersonation attacks representing 35%. The speed and sophistication of AI-enabled attacks has outpaced many organisations’ defensive capabilities: research published in Harvard Business Review found that the entire phishing process can be automated with LLMs, cutting the cost of phishing attacks by more than 95% while achieving equal or greater success rates.


However, the defensive applications of AI offer substantial benefits for organisations willing to invest appropriately. “AI is also a great friend for cybersecurity, but unfortunately, that side is developing more slowly than the attack side,” Prof. Işık acknowledged. Organisations that implemented AI extensively throughout their security operations demonstrated measurably superior outcomes, reducing breach costs by an average of $1.9 million and shortening breach lifecycles by 80 days compared to organisations with limited AI security integration.

Prof. Tim emphasised that this technological arms race necessitated a fundamental shift in organisational thinking. “This is why the conversation needs to move from security alone to digital resilience,” she said. “Resilience provides a capacity lens to understand the extent to which a business can defend, respond, and recover from disruptions, including cyberattacks.”

Building digital resilience beyond traditional cybersecurity frameworks

The concept of digital resilience represented a paradigm shift from reactive security measures towards proactive organisational capacity building. Prof. Tim’s research highlighted this evolution as essential for addressing AI-powered threats that traditional cybersecurity approaches struggled to counter effectively.

“Resilience is often misunderstood as a technical issue – having the most advanced systems. In reality, it is a socio-technical capacity. Resilience emerges when assets and human abilities are mobilised together through activities that enable the organisation to continue functioning, adapt to disruption, and advance over time,” she explained.

IMD Professor Öykü Işık suggests organisations ‘think like a thief’ to help protect themselves by taking a more proactive cybersecurity position. Photo: Adobe Stock

This framework comprised three interconnected layers that organisations needed to develop systematically. The foundational layer addressed assets and abilities that could be drawn upon during crises. The operational layer focused on activities that mobilised and coordinated these resources effectively. The strategic layer encompassed goals of continuity, adaptation, and advancement that guided resilience efforts.

“For AI-powered threats, this means leaders cannot stop at acquiring tools,” Prof. Tim explained. “They must also invest in building the abilities of their people to use AI effectively, securely, and responsibly. Only then can assets and abilities reinforce one another to support different objectives to collectively maintain resilience.”

Prof. Işık approached resilience through the lens of proactive threat anticipation. “I talk about organisations ‘thinking like a thief’ to help protect themselves from a cybersecurity perspective. What do I mean? Since the advent of the web, organisations have managed, to a certain extent, to protect themselves by taking a very reactive stance on this issue. So, thinking like a thief is more about pushing them to be proactive.”

This mindset required organisations to systematically evaluate their vulnerabilities from an attacker’s perspective. “If I were a black hat hacker, for example, how would I breach my systems? That kind of thinking is a great way to start thinking proactively on this topic,” Prof. Işık explained.


The human element is critical in building organisational resilience. Despite technological advances, Prof. Işık noted that attackers continue to exploit human vulnerabilities as their primary strategy. “I would suspect that the majority of LLM use cases still target humans rather than technical vulnerabilities,” she observed. “Still, the human element is the most targeted one in cybersecurity. So, the more prepared we are from a behaviour perspective, the better organisations will be.”

The benefits of this approach are outlined in IBM’s AI cybersecurity research. Organisations that used AI extensively throughout their security operations saved an average of $1.9 million in breach costs and reduced breach lifecycles by 80 days. This dual capability contributed to the first global decline in average breach costs in five years, dropping 9% to $4.44 million; however, recovery remained challenging, with 76% of organisations taking more than 100 days to fully recover from incidents.

Achieving cross-functional cybersecurity ownership in AI-enabled environments

The traditional approach of isolating cybersecurity responsibilities within IT departments has become inadequate for AI-enabled environments, where technology decisions occur across multiple organisational functions. Prof. Işık identified the fundamental challenge as shifting organisational risk perception from a technical to a business responsibility. “It comes down to recognising cyber risk as a business risk,” she said. “That’s really the starting point for genuine cross-functional ownership.”

The rise of quantum computing presents a range of medium- and long-term strategic cybersecurity risks. Photo: Adobe Stock

Prof. Işık cited high-profile failures that demonstrate the systemic nature of cybersecurity risk. In Sweden, 200 municipalities were shut down by a cyberattack. “Apparently, these 200 municipalities all used the same cloud HR software provider – so this was a supply chain attack,” explained Prof. Işık, who noted that such incidents highlight how traditional risk assessment approaches fail to account for interconnected digital dependencies.

In response, she stated that effective cross-functional ownership requires embedding cybersecurity considerations into strategic planning and performance management processes. “Why don’t we make cyber resilience part of our organisations’ strategic planning cycles? And why don’t we help executives take responsibility by including this in their performance reviews?” Prof. Işık asked.

Another important step is to distribute accountability across business functions, based on decision-making authority. “Business executives need to see how their decisions change digital risks in the organisation,” said Prof. Işık. “If we can hold them accountable for that, then that is a good starting point to diffuse that risk across the organisation and not just leave that responsibility to the Chief Information Security Officer.”

Learn more: UNSW PRAxIS (Practice x Implementation Science) Lab

Prof. Tim’s research lab, UNSW PRAxIS, has a portfolio of ongoing projects on the responsible use of AI in businesses. Emerging findings from these projects indicate that siloed ownership of cybersecurity is a common and critical vulnerability that organisations must address systematically. “This siloing is common: cybersecurity is often seen as an IT problem. But in AI-enabled environments, that view is no longer adequate,” Prof. Tim explained.

The distributed nature of AI adoption amplifies this challenge. Unlike previous technologies that remained within controlled IT environments, AI tools have proliferated across business functions, enabling individual employees to make technology decisions with security implications. “AI amplifies this need because it is a general-purpose technology,” said Prof. Tim. “Individuals across functions now have greater influence over how technologies are configured and used, which means ownership must be distributed.”

She agreed with Prof. Işık’s perspective that traditional technological safeguards, while necessary, are insufficient without the corresponding development of human capabilities. “Technological guardrails remain essential, but they must be paired with knowledge building that cultivates stewardship abilities across the workforce,” she said. “When employees understand their role in shaping secure and responsible use, resilience becomes embedded across the organisation rather than isolated in IT.”

Business leaders need to consider how AI fits with existing processes, how it shapes employees’ work satisfaction and capabilities, and whether it enhances organisational coherence. Photo: Adobe Stock

The emerging quantum cybersecurity threat

Alongside AI-enabled attacks, the rise of quantum computing presents what Prof. Işık describes as medium- and long-term strategic risks. While current quantum capabilities remain limited to specific problem domains, broader accessibility could fundamentally alter the cryptographic assumptions underlying digital security.

“The moment this capability becomes widely accessible – over the cloud, for example, this gives rise to a new range of threats,” said Prof. Işık. “You can even go to Amazon today, which has an S3 (simple storage service) cloud computing environment that you can block time on. So, we are slowly getting there.”

Threat actors have already begun preparing for quantum decryption capabilities through “harvest now, decrypt later” strategies, collecting encrypted data for future exploitation. “We know that they have already been doing this,” said Prof. Işık. “They are sitting on encrypted data that they will be able to decrypt with quantum computing capability, because the RSA encryption that we heavily depend on is breakable with quantum computers.”

Organisational preparation for post-quantum cryptography remains inadequate, despite the availability of solutions. While quantum-safe encryption already exists and some institutions are actively vetting these algorithms, Prof. Işık noted that organisations still need to invest time and resources in the process and develop a roadmap for transitioning from RSA to quantum-safe encryption systems.
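As a rough illustration of an early step in such a roadmap, the sketch below checks which public-facing services still present RSA (or elliptic-curve) certificates, both of which are vulnerable to a sufficiently powerful quantum computer. The host list is a hypothetical inventory, and a real migration plan would cover far more than TLS certificates.

```python
# A minimal sketch of one step in a post-quantum migration roadmap: inventorying
# TLS certificates to find services that still rely on RSA. The host list is an
# illustrative assumption, not a complete cryptographic inventory.
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

HOSTS = ["example.com", "internal-api.example.net"]  # hypothetical inventory targets

def certificate_key_info(host: str, port: int = 443) -> str:
    """Fetch a host's TLS certificate and report its public-key algorithm."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"{host}: RSA-{key.key_size} (quantum-vulnerable, plan migration)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"{host}: ECC {key.curve.name} (also quantum-vulnerable)"
    return f"{host}: {type(key).__name__}"

if __name__ == "__main__":
    for host in HOSTS:
        try:
            print(certificate_key_info(host))
        except OSError as exc:
            print(f"{host}: unreachable ({exc})")
```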

Learn more: Four cybersecurity misconceptions placing your business at risk

Executive awareness of quantum risks is particularly limited, Prof. Işık added. “It might be only one in 50 organisations that say they are on top of this if you were to question them about quantum-safe transition planning,” she said.

Strategic leadership approaches for balancing AI ambition with cyber vigilance

Executive leaders face the complex challenge of leveraging AI’s transformative potential while maintaining appropriate security postures that protect organisational assets and stakeholder interests. Prof. Tim’s research suggests that successful leaders approach AI integration as both opportunity assessment and organisational stress-testing.

“Leaders should treat AI integration as both an opportunity and a stress test of organisational resilience. The question is not simply how much you can scale or automate, but whether AI is being integrated in ways that strengthen the organisation’s capacity rather than strain it,” Prof. Tim explained.

Learn more: As cyberthreats evolve, businesses need more than just tech solutions

This perspective requires leaders to evaluate AI initiatives holistically rather than focusing solely on efficiency metrics. For leaders, she said this means considering broader implications, such as how AI fits with existing processes, how it shapes employees’ work satisfaction and capabilities, and whether it enhances rather than erodes organisational coherence.

“Most importantly, leaders need to view AI as part of a living system that evolves over time,” said Prof. Tim. “Short-term efficiency gains can easily create long-term fragility, which is why employees must be continuously supported to develop the stewardship capabilities needed to adapt these systems.”

Six AI cybersecurity questions for business leaders to consider

  1. Has the right risk tolerance for AI technologies been established, and is it understood by all risk owners?
  2. Is there a proper balancing of the risks against the rewards when new AI projects are considered?
  3. Is there an effective process in place to govern and keep track of the deployment of AI projects within the organisation? (See the illustrative sketch after this list.)
  4. Is there a clear understanding of the organisation-specific vulnerabilities and cyber risks related to the use or adoption of AI technologies?
  5. Is there clarity on which stakeholders within the organisation need to be involved in assessing and mitigating the cyber risks from AI adoption?
  6. Are there assurance processes in place to ensure that AI deployments are consistent with the organisation’s broader policies and legal and regulatory obligations (for example, relating to data protection or health and safety)?

Source: Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards, World Economic Forum
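As a rough illustration of questions 1, 3, 5 and 6, the sketch below shows what a simple AI deployment register might look like. The fields and example entries are illustrative assumptions, not a schema prescribed by the World Economic Forum.

```python
# A minimal sketch of an AI deployment register that tracks projects, their risk
# owners, and audit status. Fields and example entries are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIDeployment:
    name: str
    business_function: str          # question 5: which stakeholders are involved
    risk_owner: str
    risk_tolerance: str             # question 1: e.g. "low", "medium", "high"
    sanctioned: bool = True         # unsanctioned entries flag potential shadow AI
    last_audit: date | None = None  # question 6: assurance reviews

    def overdue_for_audit(self, today: date, max_days: int = 180) -> bool:
        return self.last_audit is None or (today - self.last_audit).days > max_days

register: list[AIDeployment] = [
    AIDeployment("Customer-support chatbot", "Customer Service", "COO", "medium",
                 last_audit=date(2025, 3, 1)),
    AIDeployment("Code-generation assistant", "Engineering", "CTO", "low"),
]

for deployment in register:
    if deployment.overdue_for_audit(date.today()):
        print(f"AUDIT NEEDED: {deployment.name} (owner: {deployment.risk_owner})")
```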
