
Nearly a third of employees surveyed in a global study admit to deliberately undermining their companies’ AI roll-outs – and while the research did not include South African respondents, a local AI adoption expert has told TechCentral that resistance is showing up here, too, though how widespread it is depends heavily on company culture.
The study of 2 400 knowledge workers across the US, UK and Europe, commissioned by enterprise AI vendor Writer and conducted by Workplace Intelligence, found that 29% of employees admit to actively sabotaging their company’s AI roll-out. Among Gen Z workers, the figure rises to 44%.
The reported sabotage takes several forms: feeding proprietary data into public AI tools, using unapproved platforms, deliberately producing low-quality AI output and, in some cases, tampering with performance metrics to make AI look ineffective. The most-cited motive, according to 30% of respondents who admitted to such behaviour, is fear of losing their job.
It is worth noting that Writer is a commercial AI platform vendor, and research commissioned by it that concludes executives should move faster on AI adoption should be read with that context in mind. What management calls sabotage, employees may describe as caution about tools that still hallucinate, misrepresent sources or expose sensitive data when used without adequate controls.
Much of what the Writer study describes as sabotage overlaps with a phenomenon the industry has taken to calling “shadow AI” – employees using consumer AI tools such as ChatGPT, Claude or Gemini outside their organisation’s sanctioned technology stack, often because the approved tools are slower, less capable or not yet deployed.
Shadow AI
Shadow AI is a genuine security and governance concern, particularly when proprietary or customer data is pasted into public models with uncertain data retention policies. But it is not straightforwardly sabotage.
In many cases, it reflects employees getting on with the job using the best tools available to them, ahead of their employers’ procurement and risk processes. Whether that is a problem to be stamped out, or a signal that official AI roll-outs are lagging user demand, depends heavily on which side of the desk one sits.
Read: South Africa’s draft AI policy is a bureaucrat’s dream
South African AI expert Dean Furman, CEO and founder of consulting and training firm 1064 Degrees and author of Exponential Potential, said resistance to AI tools by employees in South Africa is real but not uniform.
“It’s not usually widespread. It’s not like the default. It’s very much dependent on company culture,” Furman said in an interview with TechCentral.
A qualified actuary and former Discovery and Alex Forbes executive, Furman has been training South African corporates on AI adoption for several years and has previously featured as a guest on the TechCentral Show.

Culture, he argued, matters more than sector. “At some companies, there’s a lot of psychological safety. They know that it’s a company that puts their people first. But then there are other companies where they don’t have that safety, and they know that the leaders are a bit more ruthless. And there, a lot of the resistance comes, because in that situation, they don’t want to show that AI is so useful, because then it makes them seem less valuable.”
Resistance also plays out differently across seniority levels. Senior professionals – lawyers and actuaries among them – often resist out of identity, Furman said. “They almost got a mental block to be like, ‘Well, we have got so much training and so much know-how that AI could never be as good as us at certain things’, which is nonsense, because it exceeds their abilities in many ways.”
Further down the hierarchy, the calculation is more self-interested. “The leaders will be so excited, oh wow, something that previously took two days can now take 20 minutes. But from those individuals’ perspective, it’s a case of, ‘Oh well, I don’t really want my managers to know I can do this now in 20 minutes, because then I’m less valuable.’”
The business cost of that thinking is significant, Furman said. “If the process was previously taking a day and now it’s possible to do that process in an hour, those remaining seven hours of the workday is really that opportunity cost for a particular day. So then, if people aren’t embracing it, that is the cost.”
Leadership, he said, is often blind to the problem, and part of the responsibility lies with executives themselves. “It’s important for leaders to understand AI themselves. Because if not, they won’t know that a certain process should now be taking much less time. There’s no way of spotting it, because they don’t know what to benchmark it against.”
Underlying trend
The Writer and Workplace Intelligence research includes a number of findings that will sharpen the debate for employees and executives alike – though these figures, too, reflect the global survey rather than South African conditions.
According to the report, self-identified AI “super users” are about three times more likely than non-users to have received both a promotion and a pay rise. Employees who use AI tools reportedly save around six hours per week, while executives save nearly 12. The report also claims 60% of executives plan to lay off employees who cannot or will not use AI, that 69% of companies are already conducting AI-related layoffs, and that 76% of C-suite respondents consider employee sabotage a serious threat to their company’s future.
Read: Sage bets AI can save small business owners from admin hell
Those figures are self-reported survey data and should be read as such. The causal link between AI use and promotion, in particular, is one the report asserts rather than establishes.
Furman said the underlying trend in South Africa is nevertheless shifting, even if resistance persists. “I definitely feel that people are starting to realise this isn’t just a phase. This is just a new way of things working. AI is not going anywhere.” Pockets of deep resistance remain, he added: “There’s specific individuals that very much have a lot of fear.”

For South African executives, the useful question is not whether the global sabotage numbers apply here – they may or may not – but whether their own organisations have the psychological safety, leadership AI literacy and honest measurement to know the difference between genuine resistance and reasonable caution. – © 2026 NewsCentral Media
