Why boards should be nervous about the rise of “shadow AI”

Shadow AI is reshaping organisational risk as employees use generative AI tools without oversight, exposing governance gaps, data risk and compliance challenges

In March 2023, engineers at Samsung’s semiconductor division were given permission to use ChatGPT to assist with their work. Within 20 days, the company had recorded three separate incidents in which employees fed sensitive proprietary data into the publicly available tool. One engineer entered Samsung’s source code into the chatbot to fix a bug; another pasted in code to optimise a test sequence for identifying defective chips; and a third recorded a confidential internal meeting, transcribed it, and input it into ChatGPT to generate meeting notes. None acted with malicious intent; the engineers were simply trying to do their jobs more efficiently.

The consequences were immediate and irreversible. Because ChatGPT retains user input data to train itself, Samsung’s trade secrets were effectively transferred to OpenAI’s servers. Samsung notified staff that it was banning generative AI tools, concerned that data transmitted to such platforms was stored on external servers, making it difficult to retrieve or delete. The episode generated global headlines, triggered disciplinary investigations, and exposed a governance failure that had unfolded entirely within the window of normal, goal-directed employee behaviour. The board of one of the world’s largest technology companies had no processes in place to prevent the incident.

Because ChatGPT retains user input data to train itself, it retained sensitive, proprietary data provided by Samsung software engineers. Photo: Adobe Stock

Samsung’s experience was an early and unusually visible example of what researchers and risk professionals now call “shadow AI,” the widespread use of generative AI tools by employees without explicit employer approval, disclosure, or governance oversight. The 2024 Work Trend Index from Microsoft and LinkedIn, which surveyed 31,000 people across 31 countries, found that 78% of AI users were bringing their own tools to work, putting company data at risk. IBM’s 2025 Cost of a Data Breach Report found that one in five organisations reported a breach due to shadow AI, with high shadow AI usage adding an average of $670,000 to breach costs. A further 63% of breached organisations had no AI governance policy in place, and of those that did, only 34% audited for unsanctioned AI.

What made Samsung’s experience relevant was not simply what went wrong, but what it revealed about the structural gap between how boards governed technology and how employees actually used it. The engineers were not rogue actors. No framework existed to tell them what responsible use looked like, what data was off-limits, or what the consequences of a leak would be. That gap between permitting AI access and governing its use defines the shadow AI problem at an organisational level.

A governance blind spot hiding in plain sight

Chris Jackson, Professor in the School of Management and Governance at UNSW Business School, is direct about the seriousness of the issue. To illustrate the point, he first put the question to an approved AI platform that the university uses.

The response came back polished and comprehensive. It identified shadow AI as a serious governance issue, reflecting unmanaged technological adoption outside formal oversight, and exposing organisations to legal, security, reputational, and strategic risks. It noted that boards were structurally blind to unsanctioned AI because reporting lines emphasised approved systems and formal projects, while employee-driven experimentation happened informally using personal accounts or embedded third-party tools. It concluded with a set of board-level recommendations: establish clear AI governance frameworks, mandate disclosure of AI usage, invest in enterprise-approved tools, embed AI risk into existing risk committees, and foster responsible experimentation within defined guardrails rather than attempting prohibition.

Learn more: Toby Walsh debunks the top 12 artificial intelligence myths

Prof. Jackson’s assessment was candid. The answer was, in his words, “reputable, clever and thoughtful – the kind of answer that would take me an hour to do and, even then, it would not be so good.” But it contained nothing that would give an article genuine analytical weight.

He then spent a further two-and-a-half hours developing his own response. “I did not in fact save time and effort,” he noted, “but I learned a lot and thought about it, which is what is really needed as opposed to simply flicking you the answer.” That observation cuts to the heart of what he sees as the most underappreciated risk associated with shadow AI – not that employees are using it, but that organisations will fail to understand what responsible use actually requires.

More than rule-breaking: the misalignment argument

Prof. Jackson’s central argument is that shadow AI is less a symptom of employee misconduct than of organisational failure. “Shadow AI actually often keeps people aligned and focused on the task and expedites good work to achieve the organisational goals,” said Prof. Jackson, who observed that AI use generally produces gains in quality and productivity and can serve as a strong starting point for complex projects.

He noted that the problem is not the technology itself, but the absence of frameworks to govern how its outputs are evaluated and applied. “The reality is that shadow AI often, but not always, produces rational, ethical and responsible decisions – more so than the human,” Prof. Jackson said. But the qualifier matters enormously. AI systems produce outputs that are often polished and plausible, even when the underlying reasoning could be flawed or oversimplified. Trusting those outputs without scrutiny, he suggested, was “like trusting your best friend in a bar after a few drinks with the decision-making of your business”.

UNSW Business School Professor Chris Jackson says that if organisations do not make AI available, workers will opt for shadow AI whether the organisation likes it or not. Photo: UNSW Sydney

Prof. Jackson’s perspective is supported by a substantial body of research on AI overreliance. A Microsoft Research literature review synthesising approximately 60 papers on the topic found that users’ acceptance of incorrect AI outputs is a consistent, well-documented problem, with negative consequences including poor human-AI team performance and ineffective human oversight.

A 2025 review published on arXiv confirmed that even skilled professionals over-relied on inaccurate outputs that sounded plausible, especially when they lacked the time or resources to evaluate them critically. Crucially for Prof. Jackson’s “bar friend” analogy, the same research found that as AI capabilities advanced and outputs became more convincing, detecting errors became more difficult, not easier – making human oversight more essential, not less.

The structural blindness of boards

Boards struggle to recognise shadow AI for structural reasons, not negligence. Governance frameworks are built around formal reporting lines, approved systems and sanctioned projects. However, employee-driven experimentation occurs at the level of individual workflows within business units, often using personal accounts or AI capabilities embedded in already-approved tools. As such, warning signals never reach the board table because there are no mechanisms to carry them there.

The scale of the issue was captured by KPMG’s report, Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025, which surveyed more than 48,000 people across 47 countries. It found that more than half of employees (57%) hid their use of AI and presented AI-generated work as their own, at a time when governance of responsible AI was trailing significantly behind adoption. The reasons were structural rather than personal: only 40% of respondents said their workplace had a policy or guidance on the use of generative AI, and half said they felt pressured to use AI for fear of being left behind. The invisibility of shadow AI is not accidental. It is a rational response to workplaces that create performance incentives around AI’s outputs while providing almost no frameworks to define acceptable processes.

Learn more: How AI is changing work and boosting economic productivity

This blind spot cannot be addressed through prohibition, according to Prof. Jackson. “If AI is not available, then workers will opt for shadow AI whether the organisation likes it or not – this cannot really be avoided,” he said.

Samsung’s own experience illustrated the point: having banned generative AI tools after the 2023 incidents, it subsequently expanded employee access to ChatGPT across group companies with enhanced security guidelines, conceding that governed access was more protective than outright restriction. “Use of AI should not be policed too much,” Prof. Jackson said, “because a sensible organisation should seek to protect itself by keeping all of this in-house and accept that they cannot stop the use of shadow AI.”

The oversight problem: effort, incentives and organisational reluctance

The hidden costs of responsible AI use are rarely as visible as its efficiency gains. A Harvard Business School and Boston Consulting Group study of 758 knowledge workers found that, for tasks beyond AI’s capability boundary, workers using AI performed 19 percentage points worse than those working without it, and those who experienced performance decline tended to adopt AI outputs blindly rather than interrogate them. The incentive to defer was structural: speed and output were measured; verification effort was not.

“In particular, organisations need to develop policies and frameworks for the way in which AI outputs are used,” Prof. Jackson said. “There is the danger that people will adopt the outputs and models of AI without understanding and reflecting on the sense of it, the usefulness of it and its limitations.” The danger is “exacerbated when shadow AI is used” because, without sanctioned tools and frameworks, no institutional mechanism exists to catch errors before they reach decisions or client-facing outputs. “The reality is that organisations and staff will be reluctant to understand that processes associated with oversight are time-consuming and mentally effortful for people,” he said.

Organisations need to adopt new AI governance frameworks that recognise that consequential decisions occur at the level of individual workflows and outside any board-aligned reporting line. Photo: Adobe Stock

The governance gap is not a niche compliance shortfall. A survey of US public company directors found that while 62% of boards held regular AI discussions, only 27% had formally embedded AI governance into their committee charters (the oversight mechanism through which risk is actually governed). A similar NACD survey captured the asymmetry precisely: while 95% of senior leaders said their organisations were investing in AI, only 34% were incorporating AI governance.

What should boards do now?

Prof. Jackson’s recommendations reflect the logic of his analysis. Organisations need to encourage AI adoption within formal systems, develop clear policies for how outputs are used and attributed, and accept that eliminating shadow AI is not a realistic goal.

Rather, the objective should be to make governed use the path of least resistance. “Organisations need to adopt new frameworks on such processes,” he said, with the emphasis on “new”. Existing IT governance and risk committee structures are not built for a world in which the most consequential tool-use decisions happen at the level of individual workflows, at speed, and outside any reporting line that reaches a board.


Research drawing on interviews with directors from more than 50 companies suggests that handling AI solely through audit committees risks fracturing the conversation, leaving the strategic and value-creating dimensions of AI to ad hoc agendas rather than structured oversight. Instead, boards need to assign clear committee responsibility, build AI literacy among existing directors, demand regular disclosure from management on actual AI use across the organisation (including unsanctioned use) and treat AI risk as a standing item rather than an occasional briefing.

A 2025 study in the California Management Review put the expertise gap in stark relief: of the top 50 US companies by market capitalisation, only six had directors with AI backgrounds, meaning most boards are being asked to govern one of the most consequential risks on their agenda without the foundational knowledge to interrogate what management tells them.

"I might emphasise that I think good practice with AI can improve quality, but to achieve that quality, much human time is needed. So it should not be used to speed up decision-making, yet this is probably how organisations will try to implement it," Prof. Jackson concluded.
