The Liar's Dividend: How AI Is Compromising Humanitarian Space

By Thomas Byrnes

EXECUTIVE SUMMARY

In Sudan, AI-generated audio, video, and images are being weaponised by both sides of the conflict. A September 2025 CDAC Network report found that "false narratives have directly led to civilian deaths, blocked aid deliveries and justified attacks on humanitarian workers."
Perhaps most dangerously, AI is now being used to dismiss authentic evidence as fake. Both sides in Sudan exploit this "liar's dividend," claiming real recordings are AI-generated to escape accountability. This may be the more corrosive threat: not that we can't trust what's fake, but that we're losing the ability to trust what's real.
A 2025 HLA/Data Friendly Space survey of 2,539 humanitarian practitioners found 93% report using or having used AI tools, 70% use them weekly or daily, but only 8% say AI is widely integrated in their organisations, and just 22% report having a formal AI policy.
Voice-cloning tools now run on consumer-grade hardware. With a short audio sample, attackers can generate convincing fake statements fast, often without needing an enterprise setup.
The box is open. The question is no longer "should we use AI?" It's whether we understand the information environment well enough to operate safely in it.

INTRODUCTION

Earlier this week I was running an AI training session with humanitarian staff. We'd covered the usual ground: data safety, which tools are secure for confidential information, how to build workflow cards that force a pause before you paste sensitive data into a chatbot. Practical stuff.

Then I opened a different box.

I started talking about what's happening in Sudan. In DRC. Voice cloning. Fake videos. AI-generated content reaching hundreds of millions of views. Armed groups running disinformation operations that would have required a state intelligence agency ten years ago, now possible with open-source tools and a laptop.

The call went quiet.

I've spent the last two years training humanitarian teams to use AI tools responsibly, treating AI like a junior staff member that needs constant supervision. That work matters. But here's what I've come to understand: we can't just teach people how to use these tools safely. We have to teach them how these tools are being used against us.

TWO SCENARIOS YOUR ORGANISATION ISN'T READY FOR

Before I get into what's happening in the field, I want you to think about whether your organisation has a response plan for either of these situations.

Scenario 1: The Cloned Field Coordinator

Your field coordinator attends a routine coordination meeting. Someone in the room opens Voice Memos on their phone. Nobody notices. They now have 20 minutes of clear audio.

That evening, they run the recording through open-source voice cloning software. This isn't science fiction. These tools exist, they're free, and some can clone a voice from under 30 seconds of audio. They run locally on consumer-grade hardware. No cloud account needed. No trace.

By morning, an audio clip is circulating on local WhatsApp groups. It's your field coordinator's voice, apparently saying that your organisation is spying for a foreign government. Or prioritising one ethnic group over another. Or something worse.

A mob forms at your compound gate. Staff can't leave. Local authorities are demanding answers. Community leaders are furious.

Your country director calls you. "What did they actually say in that meeting?"

Here's the problem: you have no idea what happened. The fake audio sounds exactly like your colleague. They deny saying it, but how do you prove a negative? How do you convince a community that what they heard with their own ears isn't real?

Most organisations I've worked with don't have a response strategy for this. They have crisis communications plans for real incidents, not fabricated ones. No protocol for rapid audio verification. No pre-established relationships with technical partners who could analyse the file. No prepared messaging for communities about synthetic media.

By the time you figure it out, the damage is done. Trust takes years to build and minutes to destroy.

Scenario 2: The Synthetic Paper Trail

An allegation surfaces: procurement fraud in your logistics unit. Significant money. You launch an investigation.

The logistics officer cooperates fully. They produce a detailed email chain, 47 messages back and forth with the supplier over three months. Quotes, counter-offers, technical specifications, even a few jokes about delivery delays. Everything looks legitimate.

But something feels off. The investigation team can't identify the supplier through normal channels. The email domain was registered recently. The phone numbers in the signature block go to disconnected lines.

Here's the question: is this paper trail real, or did someone use AI to generate a convincing but entirely synthetic correspondence in a few hours?

Most investigators aren't trained to spot AI-generated text. They're looking for financial discrepancies, not linguistic patterns. Email metadata can help, but it can be manipulated. Writing style analysis requires expertise most organisations don't have in-house. And even if you suspect the documents are fake, proving it to a standard that would hold up in a disciplinary process is another matter.
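
Some of this triage can be scripted. Below is a minimal Python sketch, standard library only, that flags a few cheap signals in a saved .eml file: a thin Received chain (a fabricated thread pasted into a mailbox often has no plausible delivery path), a Message-ID domain that doesn't match the sender's, and a missing Authentication-Results header. The file path is illustrative, and none of these checks survive a sophisticated attacker who forges headers too; they raise the cost of lazy fabrication and tell you when to call in forensic help.

```python
import email
from email import policy

def triage_email(path):
    """Cheap authenticity signals for a saved .eml file. Hints, not proof."""
    with open(path, "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)

    flags = []

    # A genuinely delivered message usually carries several Received hops.
    received = msg.get_all("Received") or []
    if len(received) < 2:
        flags.append(f"only {len(received)} Received header(s): no plausible delivery path")

    # The Message-ID domain normally matches the sending infrastructure.
    from_addr = msg.get("From", "")
    msg_id = msg.get("Message-ID", "")
    from_domain = from_addr.rsplit("@", 1)[-1].strip(">") if "@" in from_addr else ""
    id_domain = msg_id.rsplit("@", 1)[-1].strip(">") if "@" in msg_id else ""
    if from_domain and id_domain and from_domain.lower() not in id_domain.lower():
        flags.append(f"Message-ID domain {id_domain!r} != From domain {from_domain!r}")

    # Authentication-Results records the receiving server's SPF/DKIM verdicts.
    if not msg.get("Authentication-Results"):
        flags.append("no Authentication-Results header (no SPF/DKIM verdict recorded)")

    return flags

if __name__ == "__main__":
    # Illustrative path: point this at the exported thread under investigation.
    for flag in triage_email("suspect_thread/message_01.eml"):
        print("FLAG:", flag)
```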

I'm not saying this is happening everywhere. I'm saying the capability exists, the tools are accessible, and the incentives for fraud are high. If your investigation protocols haven't been updated for the AI era, you're working with outdated assumptions.

THIS ISN'T HYPOTHETICAL: WHAT'S HAPPENING NOW

Let me be specific, because vague warnings about "AI risks" don't help anyone.

Sudan

The information war in Sudan has escalated dramatically. A February 2026 Small Wars Journal analysis documented how the RSF has "been quick to embrace the use of AI" since the conflict began, with AI-generated recordings of former President al-Bashir circulating as early as October 2023 and continuing through 2025.

In November 2025, Misbar fact-checkers identified an AI-generated video falsely claiming to show RSF forces dragging a Sudanese woman. The TikTok account spreading it had 21 AI-generated videos about the Sudan conflict, around 23,000 followers, and over 226,000 likes. Only one video was labelled as containing AI content. The rest circulated as if real.

A BBC investigation cited by Misbar in late 2025 exposed a coordinated disinformation campaign using stolen images of Somali women and AI-generated content to spread false narratives about Sudan. From January 2023 to September 2025, these accounts produced more than 47,000 posts reaching an audience of over 215 million users.

The consequences are measured in bodies. The CDAC Network found that warring parties spread claims that the Sudan Emergency Response Rooms, the volunteer networks providing lifesaving aid, were "collaborating with their enemies." The result: denial of humanitarian access, leaving millions without aid. Aid workers attacked at checkpoints. Operations suspended. In the first eight months of 2025, 265 aid workers were killed globally, continuing a trend that saw 383 killed in 2024, the worst year on record. As one humanitarian official told TRT World, the killing of aid workers "is too often justified as collateral damage, backed by disinformation suggesting they are aligned with one side or another."

The Liar's Dividend

And here's the twist that should concern every humanitarian professional, the insight that distinguishes this from a standard "deepfakes are scary" article.

Both sides in Sudan now exploit what researchers call the "liar's dividend." SAF supporters launched campaigns claiming authenticated recordings of RSF leader Hemedti were AI-generated fakes, even when independent analysis confirmed they were real.

AI doesn't just create false content. It creates a permission structure to deny real evidence.

Think about what that means for humanitarian operations. Your team documents an atrocity. You have video, audio, witness testimony. And the perpetrator's response? "That's a deepfake." Suddenly the burden of proof shifts. You're not just presenting evidence. You're defending its authenticity against a blanket presumption of manipulation.
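
There is one practical counter: fix the provenance of your own material at the moment of capture, so that "that's a deepfake" can be answered with a verifiable record. The sketch below is a minimal illustration, not a forensic standard, and the folder and file names are hypothetical. It hashes every file in an evidence folder and writes a timestamped manifest; send that manifest immediately to an independent third party, a lawyer or a partner agency, and you've fixed the earliest date the material could have existed in exactly that form.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def hash_file(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks so large videos don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(evidence_dir, manifest_path):
    """Record a hash for every file so later tampering (or denial) is detectable."""
    entries = [
        {"file": p.name, "bytes": p.stat().st_size, "sha256": hash_file(p)}
        for p in sorted(Path(evidence_dir).iterdir())
        if p.is_file()
    ]
    manifest = {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "files": entries,
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest

if __name__ == "__main__":
    # Hypothetical paths: adapt to your own evidence-handling procedure.
    write_manifest("incident_2025_11_14/", "incident_2025_11_14_manifest.json")
```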

This is the more corrosive threat: not that we can't trust what's fake, but that we're losing the ability to trust what's real.

DRC

In DRC, the UN's Deputy Emergency Relief Coordinator told the Security Council in April 2025 that "disinformation campaigns have undermined the UN's credibility, fueled public unrest, and strained its relationship with local communities."

WFP's Goma area office head said it directly: "There's another threat to humanitarian access in the region: disinformation." WFP staff now monitor social media to detect misinformation about their activities. Their sensitisation campaigns with youth groups and local authorities have helped recover trucks and transport food safely after coordinated disinformation spread.

The ICRC put it clearly: "A compromised security situation as a result of an information operation targeting humanitarian organizations can quickly halt humanitarian personnel from leaving their offices, distributing lifesaving assistance, visiting detainees, or bringing news to people who have lost contact with a family member."

The Scale

A December 2025 Reporters Without Borders analysis documented 100 journalists targeted by deepfakes across 27 countries over 24 months; 74% of them were women. Tech Policy Press's year-end review noted that 2025 saw "an unprecedented flood of AI-generated content", with detection tools frequently struggling; some research suggests failure rates of 35% or higher.

THE LITERACY GAP

Here's where it gets uncomfortable.

The Humanitarian Leadership Academy and Data Friendly Space surveyed over 2,500 humanitarian workers globally in 2025. The findings: 93% report using or having used AI tools, 70% use them weekly or daily, but only 8% say AI is widely integrated in their organisations, and just 22% report having a formal AI policy. TechSoup's 2025 benchmark tells a similar story: 85.6% of nonprofits are exploring AI tools, but just 24% have a formal AI strategy.

Individual adoption is running 12 to 18 months ahead of governance. In the trainings I've run over the past two years, the pattern is consistent: staff are experimenting with AI on personal devices, often without telling managers. They're solving real problems: drafting reports faster, translating documents, summarising meeting notes. But they're doing it outside any governance framework.

Most leadership teams would be surprised by what's happening in their own organisations. Not because staff are doing anything malicious, but because the gap between what's happening and what's been sanctioned is far wider than anyone assumes.

Meanwhile, some of the most sophisticated information operations in humanitarian contexts right now are being run by the actors we're trying to protect people from.

WHAT ACTUALLY NEEDS TO CHANGE

I'm not arguing against ethical AI frameworks. I helped write some of them. But ethics committees meeting quarterly won't help a programme officer in North Kivu who needs to recognise when a WhatsApp voice note from a "local authority" sounds wrong.

Defensive AI Literacy for Every Staff Member

Training can't just cover how to use ChatGPT safely. It needs to cover how to recognise synthetic media, how to verify sources in a contaminated information environment, and how to understand the tactics being deployed against humanitarian operations.

Here's one thing you can do tomorrow: sit down with your country director and record a verification phrase. Something specific and slightly absurd that only they would say. "The flamingo flies at midnight." Whatever. Store it securely. If a fake audio clip surfaces, you have a way to prove identity in 30 seconds. It sounds paranoid until the day it saves your operation.
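
If you'd rather the phrase be unguessable than memorable-but-predictable, generate it. Here's a minimal sketch; the word list is illustrative, and a real deployment would draw from a few thousand words:

```python
import secrets

# Illustrative word list; a real deployment would use a much longer one.
WORDS = [
    "flamingo", "midnight", "lantern", "compass", "harvest", "velvet",
    "granite", "monsoon", "saffron", "plywood", "tangent", "orchard",
]

def verification_phrase(n_words=4):
    """Cryptographically random phrase: hard to guess, easy to say aloud."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(verification_phrase())  # e.g. "granite saffron midnight orchard"
```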

Every staff member who attends an AI training should leave knowing not just how to prompt responsibly, but what a cloned voice sounds like, what AI-generated text tends to get wrong, and why a perfectly polished email chain might actually be a red flag.

Organisational Readiness

Most organisations have crisis communications plans for real incidents, not fabricated ones. That needs to change.

For voice cloning attacks: establish relationships now with technical partners who can analyse suspicious audio within hours, not weeks. Brief community leaders in advance that synthetic audio exists and is being used in conflicts. Develop template messaging in local languages explaining voice cloning, ready to deploy immediately.
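
Field staff can also preserve a suspicious clip properly before it ever reaches a technical partner. A minimal sketch, assuming ffmpeg's ffprobe is installed and using an illustrative file name: hash the file the moment you receive it, so there's a chain of custody, then dump its container metadata, since re-encoded or tool-generated audio sometimes carries telltale encoder tags.

```python
import hashlib
import json
import subprocess
import sys

def preserve_and_probe(path):
    # Hash first, before anything else touches the file (chain of custody).
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    print(f"sha256: {sha256}")

    # ffprobe reports container/stream metadata, including any encoder tags.
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    print(json.dumps(info.get("format", {}).get("tags", {}), indent=2))

if __name__ == "__main__":
    preserve_and_probe(sys.argv[1] if len(sys.argv) > 1 else "suspect_clip.ogg")
```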

For synthetic document fraud: update investigation protocols to include AI-generated content as a possibility. Train investigators on the basics. Accept that complex cases may need external forensic support. Budget for it.

For the Shadow AI gap: map your organisation's actual AI use. Not what the policy says, what's really happening on personal devices, in field offices, at coordination meetings. Build a pathway from Shadow AI to Safe AI to Governed AI. This isn't about banning tools. It's about creating approved alternatives that are easier and safer than the workarounds staff have already found.

Community Verification

Some of the most effective counter-disinformation work in Sudan has come from grassroots verification networks: local volunteers, teachers, and religious leaders who help communities distinguish real from fake. These efforts remain largely unsupported by international actors. The communities receiving AI-generated content on WhatsApp deserve more than to be passive recipients of whatever the algorithm serves them.

Does your organisation have a response protocol for synthetic media attacks? Who would you call at 6am if a fake audio clip of your country director went viral overnight? When was the last time your investigation unit updated its procedures to account for AI-generated evidence?

The box is open. The question is whether we're going to keep pretending it isn't.

This is what we're seeing in the field. At MarketImpact, we're building defensive AI literacy into our consultancy work with humanitarian teams, and through AidGPT we're developing training that goes beyond "how to use ChatGPT safely" to "how to recognise when these tools are being used against you." If your organisation is starting to ask these questions, you're not paranoid. You're paying attention.

#HumanitarianAI #Disinformation #AILiteracy #LiarsDividend

SOURCES

Small Wars Journal: "Violent Non-State Actors and Generative AI in Warfare: The RSF and the Sudanese Civil War" (February 2026)
Misbar: "AI-Generated Video Falsely Claims to Show RSF Dragging a Sudanese Woman" (November 2025)
Misbar: "Viral Misinformation About Sudan Fact-Checked by Misbar in 2025" (December 2025)
CDAC Network: "Sudan's information war: How weaponised online narratives shape the humanitarian crisis and response" (September 2025)
African Arguments: "The Deepfake is a powerful weapon in the war in Sudan" (October 2024)
UN Security Council: "Amid Record High Killing of Humanitarian Workers" (April 2025)
WFP: "Why disinformation threatens humanitarian operations in restive DRC" (October 2024)
Reporters Without Borders: "RSF analysis of 100 deepfakes shows mounting threat to journalists" (December 2025)
Tech Policy Press: "Five Things 2025 Taught Us About AI Deception and Detection" (December 2025)
HLA/Data Friendly Space: "How are humanitarians using artificial intelligence?" (August 2025)
TechSoup/Tapp Network: "State of AI in Nonprofits 2025" (January 2025)
ICRC: "Foghorns of war: IHL and information operations during armed conflict" (October 2023)
OCHA: "World Humanitarian Day: Attacks on aid workers hit another record" (August 2025)
TRT World: "World Humanitarian Day: Why aid workers are no longer safe in war zones" (August 2025)

Enjoyed this article?

This post is from Tom's Aid&Dev Dispatches — a weekly newsletter with insights on humanitarian & development trends. Join 7,900+ subscribers.

Subscribe on LinkedIn

About the Author

Thomas Byrnes is a Humanitarian & Digital Social Protection Expert and CEO of MarketImpact.