The new face of fraud: Why we need to accept that seeing is no longer believing
By Industry Contributor 16 January 2026 | Categories: feature articles
Richard Ford, Group CTO, Integrity360
You answer the phone. It is your daughter. You know it is her because you recognise the panic in her voice. She has run out of petrol in an area she doesn’t know; her phone is on 1% battery, and she needs you to eWallet cash to a petrol attendant’s number immediately.
Your heart rate spikes. You act. You pay.
In that moment, almost nobody stops to consider the technical feasibility of voice cloning. You trust your ears. But we have reached a point where our senses are becoming unreliable witnesses. Generative AI has democratised the ability to steal not just a credit card number, but a likeness.
Fraud has shifted. It is no longer just about financial theft; it is about identity theft in the most visceral sense.
Scams 2.0
Emergency scams are not new. Criminals have phoned victims for decades, usually targeting older people and pretending to be a grandchild in trouble. They rely on fear and urgency to bypass critical thinking.
The difference now is realism.
Scammers no longer need to be vague. They can scrape audio from a TikTok video, an Instagram Story, or a Facebook clip and use inexpensive – often free – AI tools to generate a clone of a voice. They use this to invent accidents, arrests, or hijackings.
This creates a significant security paradox. For years, the security industry has pushed for a move from passwords to biometrics. We trust our faces and voices to unlock our banking apps, verify our identity with SARS, and secure our phones. Biometrics are safer than passwords because you cannot forget them, and they are unique to you.
But what happens when the "key" to your digital life can be copied? If an attacker can clone the very signals you use to prove you are you, the foundation of trust begins to crumble.
The business bleed: From family to finance
While the family emergency scam grabs headlines, the risk to South African business is arguably higher. Business leaders are highly visible. We appear in webinars, speak on podcasts, and post video updates on LinkedIn. This provides hours of high-quality training data for an attacker.
Consider a finance administrator at a mid-sized logistics firm. They receive a WhatsApp voice note from the Financial Director. It sounds exactly like them – the same cadence, the same tone. The message asks for an urgent payment to a new supplier to secure stock before the weekend.
The request does not trigger a cybersecurity protocol; it triggers a subservient reflex. The employee wants to be helpful. They recognise the boss’s voice. They make the payment.
Many of us are familiar with the extreme recent example from Hong Kong, where an employee was tricked into paying over R400 million to fraudsters after attending a video call on which every other participant was a deepfake recreation of a colleague. But South African SMEs do not need to lose millions to be crippled; a diversion of R50 000 is enough to ruin cash flow for the month.
Analogue defences for a digital problem
As these tools become cheaper and faster, technology cannot be our only defence. We need to reintroduce friction into our interactions.
The most effective control is often completely non-technical: the "pause and verify" rule.
For families, this means agreeing on a protocol while everyone is safe and calm. Agree on a "safe word" or a specific question that only a real family member would know the answer to. If a panicked call comes in, ask the question. If the voice on the other end cannot answer, hang up and call them back on their saved number.
For organisations, the principle is identical. No financial transaction should ever be approved on the basis of a single channel of communication. If the instruction arrives as a WhatsApp voice note, verify it with a phone call or an email. If it arrives by email, verify it with a call to a known internal extension. The cost of delaying a genuinely urgent payment will almost always be lower than the cost of acting on urgency that turns out to be invented and fraudulent.
Bringing identity risk into the mainstream
AI-driven identity fraud is not a futuristic sci-fi plot; it is a current risk management issue. It belongs in mainstream compliance discussions alongside POPIA and FICA.
If we accept that seeing is not believing and hearing is not enough, we can adapt. By normalising the act of verifying – by pausing before we pay – we make life significantly harder for criminals who rely on our reflex to trust what we hear.