
AI-powered phishing has become the most virulent security threat to the business world. Threat actors are now using advanced generative models to create highly personalised and convincing e-mails that are capable of bypassing traditional security measures.
The pervasiveness of the threat is recognised in the World Economic Forum 2025 global cybersecurity outlook, which found that 66% of companies expect AI and machine learning to be the root cause of vulnerabilities. 47% said that artificial intelligence was the likely driver of increasingly sophisticated attacks, particularly with regards to social engineering.
A high percentage (25%) of respondents in the State of AI and Security Survey Report believe that AI has the potential to be of more value to cybercriminals than to the business. It makes sense. Cybercriminals use the same technologies as companies use because they want the same benefits, and to find the same vulnerabilities.
They are weaponising the technology, using its increasingly capable features to write natural language phishing e-mails, evade e-mail filters, extract sensitive data and interact with victims in ways that appear legitimate.
Attackers are producing very clean e-mails that contain carefully embedded instructions designed to trigger actions by the organisation’s own AI assistants before the user ever sees the message.
Malicious e-mail
For example, a malicious e-mail could be first read by an AI assistant which will automatically interpret the contents and execute its instructions. It doesn’t even pass by a human. The AI created e-mail hits the AI-managed system and the attack takes place without anyone even clicking a button.
These hidden instructions are capable of requesting user lists, downloading malware or even forwarding sensitive credentials to an external party.
It’s easy to see why these attacks are difficult for companies to detect. The e-mail itself contains no obvious indicators of compromise. There are no dangerous attachments or suspicious links or any of the known malware signatures.
Read: Autonomous AI agents emerge as the next major cybersecurity risk
This makes it supremely easy for e-mail security tools to then misclassify these messages as safe and pass them through the security barrier. A human might notice inconsistencies, especially if the e-mail body copy didn’t follow logic – like talking about an attachment that doesn’t exist, or a website without a link – but an automated system frequently misses these contextual clues.
Unfortunately, this type of phishing which combines AI-written content with behavioural insights and identity spoofing, is gaining momentum. The Proofpoint 2025 report found that there has been a more than a 1Â 300% increase in attacks using AI or automation. Increasingly, attackers are using combined cloned voices, business e-mail compromise techniques and AI-generated instructions.
Security breach
The challenge for the business is twofold. First, companies need to stop thinking that they are secure. Cloud platforms do not offer inherent protection. High profile outages, including DNS-related downtime, have shown that cloud environments are vulnerable. Attackers have breached major global cloud providers and extracted large volumes of sensitive information. It isn’t wise to assume that data hosted in platforms such as Microsoft Azure or AWS is automatically secure.
Security protocols within these systems need to be bolstered by independent defence layers to ensure that the business has more than one level of protection in place.
Second, companies need to pay attention. Attackers frequently intercept ongoing e-mail threads between companies and their customers and then insert fraudulent instructions that appear legitimate.

There have been incidents where an attacker used a compromised customer mailbox to send a fake invoice requesting the remaining balance on a transaction while contacting the supplier to request a refund of the original deposit. The company hadn’t been breached, but both the supplier and the company were financially affected.
There is a pattern to modern attacks. Cybercriminals no longer rely on single layer techniques. They’re combining AI, behavioural mimicry, identity cloning and supply chain compromise to create multi-stage fraud that passes unnoticed through traditional defences.
Fortunately, there are ways to address these risks. Implement tools that expand beyond e-mail filtering and antivirus protection with behavioural analysis, anomaly detection and multi-layered controls to spot unusual communication patterns.
They also need to reassess the security tools they rely on. Some companies still use home user or small business solutions that perform poorly when tested against enterprise-grade benchmarks such as SE Labs or Mitre Att&ck evaluations. Price-driven procurement can leave companies more exposed.
Finally, awareness remains a foundational defence. People cannot identify threats they do not know exist. Simple, ongoing situational awareness training that helps users recognise subtle red flags in e-mails, invoices and online interactions is invaluable. Many victims fall for these scams while distracted, overloaded or rushing through daily tasks, which is exactly when attackers strike.
Read: South Africa faces ‘triple-edged sword’ as AI fuels next-gen cyber threats
Cybercrime is no longer defined by obvious malicious attachments or poorly written phishing e-mails. It is defined by precision, automation and an ability to adapt faster than most organisations can respond.
This new generation of AI-driven attacks is not a temporary trend. It is the emerging norm, and it demands the same strategic attention as any other board-level risk.  – © 2026 NewsCentral Media
- The author, Richard Frost, is head of technology solutions and consulting at Armata Cyber Security
Get breaking news from TechCentral on WhatsApp. Sign up here.
