Shadow AI: Execs breaking the rules

Published on 18/11/2025 | Written by Heather Wright


And the security training and literacy paradox…

When it comes to breaking the rules on AI use, the culprits aren’t just junior staff – they’re sitting in the boardroom.

A new report from cybersecurity and risk management vendor UpGuard shows that senior leaders, including CISOs, are among the worst offenders in using unauthorised AI tools, despite being responsible for enforcing security policies. Even more surprising is a paradox: Employees who receive AI safety training are often the most likely to engage in shadow AI practices.

“As employees’ knowledge of AI risks increases, so does their confidence in making judgements about that risk, even at the expense of following company policies.”

The survey of 1,500 workers globally, including in Australia and New Zealand, shows more than 80 percent of workers, including 88 percent of security professionals, are using unapproved AI tools at work, actively bypassing corporate governance at all levels. Sixty-nine percent of CISOs are doing so daily.

Across the board, just 17 percent say they use only company-approved AI tools.

Asia Pacific users reported lower rates of shadow AI, but even there more than 50 percent say they use it regularly; add in those using it ‘occasionally for a specific one-off’ and the figure nears 80 percent.

At every level of seniority, the report found ‘substantial’ use of shadow AI, with senior leadership reporting by far the highest rate of regular shadow AI use – 50 percent more likely than other employees – despite being responsible for setting and enforcing security policies.

“Whether that is surprising depends on one’s opinions about executives,” quips the report.

Use was high across a range of sectors including finance, IT, manufacturing and healthcare.

Asked why they were using unapproved tools, respondents didn’t give the expected answer of the quality or speed of their chosen tool. Instead, they said they used unapproved tools because it was easier, opening the door for security teams to compete on convenience.

But the report also highlights another paradox: Increased knowledge of AI governance is resulting in increased shadow AI, with employees who better understand AI security requirements and risks more likely to use unapproved AI tools. The 40 percent of employees who received AI training were the most likely to use shadow AI, blowing UpGuard’s early hypothesis – that employees used shadow AI because of a knowledge gap and a lack of understanding of the risks of unapproved tools – out of the water.

Instead, rather than curbing risky behaviour, AI training appears to fuel overconfidence, creating ‘AI power users’ who bypass controls in pursuit of productivity.

“The data suggests that as employees’ knowledge of AI risks increases, so does their confidence in making judgements about that risk, even at the expense of following company policies,” the report notes.

“While this does not mean we should avoid AI security awareness training, it certainly indicates it is not sufficient, and that such programs need new approaches in order to succeed.”

Fewer than half of workers said they recalled their companies’ policies around AI usage. At the same time, 70 percent report being aware of sensitive data being shared with AI apps in their workplace.

The report notes nearly half (45 percent) of employees find ways around AI restrictions, while 90 percent don’t even notice when security teams block AI apps.

In another interesting finding, 27 percent of workers now trust AI more than their managers or colleagues for reliable information – despite very public cases of AI hallucinations.

That trust has consequences, with UpGuard noting that employees who view AI tools as their most trusted source of information are far more likely to use shadow AI tools as part of their regular workflow.

The findings fly in the face of security leaders’ beliefs, with leaders tending to underestimate the percentage of their workforce using shadow AI.

“Fact: Most employees are using unapproved AI tools,” the report says. “For many organisations, this discovery is where the investigation ends, usually resulting in reactive bans.”

The realisation, however, should be just the beginning of inquiry, not the end.

“By exploring the motivations of the workforce, we can find safe methods to channel their curiosity for the net benefit of the business.”

UpGuard says unauthorised AI usage in the workplace will continue to rise unless reinforced governance is implemented.

“It is clear the problem cannot be solved by blocking applications, as 41 percent of employees find a way around it.”

Instead, UpGuard says companies keen on creating a transparent environment need to shift from a fear-based approach of restriction to one of ‘guided enablement’.

“This new pivot must address the next steps: Providing visibility, implementing intelligent guardrails and offering vetted tools to make the secure path the path of least resistance.”

Because until compliance feels as effortless as a chatbot prompt, expect employees – executives and security teams included – to keep leading the AI rebellion… one AI app at a time.
