At a weekend braai (barbecue) in South Africa, I found myself locked in a debate with a well-educated friend who was defending a thoroughly debunked piece of Russian propaganda. The discussion quickly turned tense, irrational, and deeply uncomfortable. That moment made something frighteningly clear: we are no longer entering the age of disinformation; we are already deep in it.
Thanks to AI tools such as deepfake generators and other generative models, falsehoods can now be created and shared at scale. Anyone with a motive can impersonate authority, distort facts, and manufacture convincing narratives that influence public opinion or corporate outcomes.
This is not a distant concern. It’s a present reality—and a growing threat to truth, democracy, and business.
From fake news to corporate sabotage
The 2024 KnowBe4 Political Disinformation in Africa Survey revealed a jarring contradiction: 84 percent of respondents use social media as their primary news source, yet 80 percent believe that’s where most fake news originates. Despite this, 58 percent said they’ve never received any training on spotting disinformation.
This gap between confidence and competence is echoed in the Africa Cybersecurity & Awareness 2025 Report. While 83 percent of participants claimed they could recognise a cybersecurity threat, 37 percent had fallen for fake news and 35 percent had lost money to scams.
The problem isn’t intelligence—it’s psychology.
Disinformation preys on how humans process information. We’re emotional, biased, and tend to believe content that feels familiar or reinforces our existing views. In this climate, repetition equals truth, and emotion trumps logic.
Even when deepfakes are publicly debunked, as in the case of the viral Hurricane Helene image, they continue to spread because they tap into people's frustrations or beliefs. This is the illusory truth effect and confirmation bias in action.
A new weapon: deepfakes and state-sponsored lies
Disinformation campaigns across Africa have nearly quadrupled since 2022, according to the Africa Center for Strategic Studies. Nearly 60 percent of these are state-sponsored, often used to destabilise democratic processes or sow distrust in institutions.
AI is amplifying this problem. Deepfakes, highly realistic synthetic audio and video, make it easier than ever to impersonate leaders, spread falsehoods, or trigger economic and social chaos.
But the implications go far beyond politics. Corporate disinformation is the next frontier.
Why businesses should be deeply concerned
Organisations are increasingly vulnerable to narrative-based attacks. These aren’t about hacking firewalls; they’re about hacking belief. Attackers use disinformation to damage reputations, trigger financial decisions, or exploit employee trust.
Examples include:
- A Hong Kong finance officer transferring US$25 million after a video call with deepfaked 'executives'.
- Fake press releases crashing stock prices.
- Viral social media rumours destroying brands before PR teams even get a chance to respond.
The World Economic Forum's Global Risks Report 2024 named misinformation and disinformation the most severe global risk over the next two years, ahead of geopolitical conflict and climate change. Businesses must take this seriously.
Building cognitive firewalls
Technological defences aren’t enough. In the age of AI deception, organisations must also build cognitive resilience—the human capacity to question, verify, and pause before reacting.
Here’s how:
- Adopt a zero trust mindset for information: Just as IT systems don't trust users by default, employees should apply the same scepticism to what they see, read, or hear. Encourage verifying sources, challenging urgency, and spotting emotional manipulation, even when messages seem familiar.
- Educate for digital mindfulness: Train teams to pause before clicking, sharing, or replying. Introduce awareness sessions on deepfakes, synthetic media, and how disinformation targets emotional responses. Understanding how psychological tricks work is as important as recognising phishing links.
- Treat disinformation like a cyber threat: Include disinformation in threat assessments and incident response plans. Monitor for fake news, brand impersonation, or synthetic media involving your executives. Reputation is now a target, and it's as vulnerable as your servers.
This battle is psychological
We must accept that truth has become a moving target. In a world where audio, video, and even "facts" can be manipulated, the most critical security tool may not be AI detection software but human discernment.
This requires cultural change, awareness, and training. It means treating clarity of thought and critical thinking as strategic assets. It means preparing your teams to defend not just against viruses, but against lies.
We’re in a world where anything can be faked. That makes clarity, doubt, and pause more valuable than ever.