From deepfakes to digital trust: Combating digital misinformation

Global business leaders must adopt financial collateral models to protect operations from advanced AI-driven deepfake fraud and restore digital content trust

Early in 2024, an employee of global engineering firm Arup made a seemingly routine transfer of millions of company dollars, following a video call with senior management. Except, it turned out, the employee hadn't been talking to Arup managers at all, but to deepfakes created by fraudsters using artificial intelligence. The employee had been tricked into sending US$25 million to criminals after participating in what appeared to be a legitimate multi-person video conference with the CFO and other colleagues – all of whom were sophisticated AI-generated impostors.

What made the Arup case particularly instructive was not just the financial loss, but what it revealed about verification challenges in modern business. The attack didn't compromise Arup's digital systems through traditional hacking. Instead, it exploited human trust through a convincing “deepfake” – media which has been digitally manipulated or generated using artificial intelligence (AI), usually to realistically show a person doing or saying something they never did. Rob Greig, Arup's Chief Information Officer, later reflected on the incident: "It's freely available to someone with very little technical skill to copy a voice, image or even a video." His own experiment proved the point. It took him, with some open-source software, about 45 minutes to create a deepfake video of himself.

Arup’s experience is the tip of the deepfake iceberg, according to multiple industry surveys. A Sumsub Identity Fraud Report found that deepfake-driven fraud quadrupled globally in 2024, while the 2024 Global Economic Crime Survey found cybercrime (including impersonation fraud using deepfake technology) is the top-reported type of fraud worldwide. And PwC’s most recent Global Digital Trust Insights Report found that security executives say GenAI (67%) and cloud technologies (66%) have expanded cyberattack vectors over the past year, leaving companies more exposed to sophisticated threats such as AI-generated deepfakes.


This growing technical risk is amplified by a parallel crisis of information integrity. The erosion of trust in traditional media compounds the problem: audiences rely less on established news sources, while social media platforms prioritise engagement over accuracy. The result is a disjointed information ecosystem in which neither traditional nor social media consistently rewards truthfulness. Because engagement, not accuracy, drives the incentives, misinformation typically attracts more immediate attention and, with AI tools, can be produced faster and more cheaply than carefully researched content. And the problem is only set to escalate, with industry experts suggesting that AI systems could generate up to 90% of online content by 2026.

A market-based solution to content verification

At the heart of this issue lies trust, which is front and centre of research from UNSW Business School and PwC Australia that suggests digital misinformation can be combated through a framework that treats trust as a tradeable commodity and allows market forces to align incentives with truth-seeking behaviour.

The research team, led by Lucas Barbosa, Honours Researcher at UNSW Sydney and former Senior Associate at PwC Australia, UNSW Business School Associate Professors Sam Kirshner and Eric Lim, together with PwC Australia AI Partner Rob Kopel and former AI Lead Tom Pagram, developed their methodology through game theory analysis, mathematical modelling, and experimentation. Their approach examined how financial collateral could create self-reinforcing incentives for accurate content creation and verification, using smart contracts and digital identity systems to manage the process.

Learn more: When AI becomes a weapon in the cybersecurity arms race

In their paper, A New Incentive Model For Content Trust, the researchers tested their theoretical framework through formal mathematical proofs and numerical verification, demonstrating how the system could resist collusion attempts and scale effectively. They further build on these foundational ideas in their article, Toward trustworthy content: the role of challengers, juries and veracity bonds in digital media platforms, by providing a proof of concept that platforms built on veracity bonds can increase trust in content by signalling accountability through financial stakes.

Mr Barbosa recalled that the catalyst for the research was a conversation with former colleagues, Mr Pagram and Mr Kopel, about advances in AI and the futility of trying to spot deepfakes in the future. “We realised that detecting deepfakes was a fool’s errand, and that a solution focused on verifying authenticity would be more robust as these GenAI models became more intelligent,” he said. “When we raised the idea with Eric, he was immediately excited and encouraged us to explore how incentives and game theory could make truth-seeking sustainable.”

Similarly, A/Prof. Lim explained that he has been greatly disturbed by how little value present society places on truth, and by how the fast-paced social media space creates an incentive to churn out new stories rather than truthful ones. “Author and journalist Douglas Murray astutely remarked that ‘we used to have our own opinions but now we have our own facts.’ It has created a Tower of Babel situation in that we can’t communicate or work together anymore in such a fragmented environment,” he said.

A/Prof. Lim cited an AMA session by Charles Hoskinson (founder of the Cardano blockchain): "He mused about how it is possible to attach a value (aka, a veracity bond) to content being created online,” he said. “Ever since, I have been fascinated by how this model of ‘truth-seeking as a decentralised service’ could be realised. Working with Sam, Lucas, Rob, and Tom has allowed us to flesh out the idea a little more.”


How veracity bonds create “skin in the game”

The proposed system operates through "veracity bonds" – financial collateral that content creators stake on their published material to demonstrate confidence in its accuracy. When creators publish content, they can deposit a bond as a guarantee of truthfulness. This mechanism transforms content creation from a risk-free activity into one with meaningful financial consequences.

If readers believe content is inaccurate, they can become challengers by providing contrary evidence and staking an equal counter-veracity bond. This requirement prevents frivolous challenges while ensuring both parties have equivalent financial exposure. A jury of verified users then evaluates the dispute, with the losing party's bond redistributed to the winning party and participating jurors.

The researchers noted that the incentive framework is grounded in the distribution of a forfeited veracity bond or counter-veracity bond. This closed-loop mechanism means "rewards for accurate assessments are funded exclusively by penalties for inaccurate content, thereby reducing reliance on external subsidies."
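To make these mechanics concrete, the sketch below traces a single claim through the bond lifecycle. It is a minimal illustration, not the researchers' implementation: the names (Claim, VeracityLedger, challenge, resolve) and the juror payout share are hypothetical, and in the proposed framework smart contracts and digital identity systems would manage this process.

```python
# Minimal sketch of the veracity-bond lifecycle: stake, challenge, jury vote,
# and redistribution of the forfeited bond. Names and the 20% juror share are
# illustrative assumptions, not the paper's specification.
from dataclasses import dataclass

@dataclass
class Claim:
    creator: str
    bond: float                     # creator's veracity bond
    challenger: str | None = None
    counter_bond: float = 0.0

class VeracityLedger:
    def __init__(self) -> None:
        self.balances: dict[str, float] = {}

    def credit(self, party: str, amount: float) -> None:
        self.balances[party] = self.balances.get(party, 0.0) + amount

    def challenge(self, claim: Claim, challenger: str) -> None:
        # The counter-bond must match the creator's stake, deterring frivolous
        # challenges and giving both sides equal financial exposure.
        claim.challenger = challenger
        claim.counter_bond = claim.bond

    def resolve(self, claim: Claim, votes_for_creator: list[bool],
                jurors: list[str], juror_share: float = 0.2) -> None:
        # A majority vote decides; the loser's bond funds the winner and the
        # jury, so rewards come only from penalties (a closed loop).
        creator_wins = sum(votes_for_creator) * 2 > len(votes_for_creator)
        forfeited = claim.counter_bond if creator_wins else claim.bond
        winner = claim.creator if creator_wins else claim.challenger
        jury_pool = forfeited * juror_share
        self.credit(winner, forfeited - jury_pool)
        for juror in jurors:
            self.credit(juror, jury_pool / len(jurors))
```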

"One can think about this mechanism as a way for independent content creators to enhance their reputation as the new gold standard for news dissemination, or for traditional news media agencies to atone for past transgressions that have resulted in them losing the trust and their eroding audience numbers to social media influencers,” A/Prof. Lim explained.

UNSW Business School Associate Professor Eric Lim said he is concerned by how toxic, partisan, polarised and violent the current news and social media landscape has become. Photo: UNSW Sydney

“By participating in this arena as gladiators fighting over the truth of their content in a spectator sport, where defeat means financial and reputational loss, the wheat can be separated from the chaff. This continuous struggle between content creators and their challengers to seek out truths becomes a reflection of society’s negotiation over a platform of agreed-upon facts for our unity, and could spin off an entire industry by itself.”

In the insurance industry, for example, Mr Barbosa said veracity bonds could require claimants to stake collateral on the accuracy of their submissions, creating a strong disincentive for false claims. “Given that insurance fraud accounts for a significant share of losses, even a modest reduction through this mechanism could save companies billions while signalling greater trustworthiness to policyholders,” he said.

Building collusion-resistant community judgement

The system addresses potential manipulation through mathematical safeguards designed to prevent coordinated attacks. The research demonstrates that the probability of successful collusion decreases exponentially as jury size increases, with analysis showing that expanding the jury to 43 members "presses the figure below 0.00004%" when 10% of the juror pool is malicious.
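To illustrate why a larger jury suppresses collusion, the snippet below computes the chance of drawing a malicious majority under a simple binomial model in which each juror is independently malicious with 10% probability. This is a simplified stand-in for the paper's analysis (the researchers' exact model may differ), but it shows the same exponential decay as the jury grows.

```python
# Probability that a randomly selected jury contains a malicious majority,
# under a simplified independent-selection (binomial) model. Illustrative
# only; the paper's own collusion analysis may use a different model.
from math import comb

def majority_collusion_probability(jury_size: int, p_malicious: float) -> float:
    majority = jury_size // 2 + 1   # smallest number of malicious jurors that wins a vote
    return sum(
        comb(jury_size, k) * p_malicious**k * (1 - p_malicious)**(jury_size - k)
        for k in range(majority, jury_size + 1)
    )

for n in (5, 11, 21, 43):
    print(f"jury of {n}: {majority_collusion_probability(n, 0.10):.1e}")
```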

Jurors undergo verification through digital identity systems and are held accountable through peer evaluation. They receive compensation for quality assessments but risk reputational damage and reduced future selection if they perform poorly. The framework incorporates anonymous evaluation, with randomly selected viewers rating juror contributions to create an additional layer of quality control.

The researchers emphasised that "each juror is required to show active participation, including thorough examination of all evidence, clear justification of the perspectives, and casting votes that align with the available facts." This requirement ensures compensation is tied to assessment quality rather than mere participation.

A/Prof. Lim explained that the spirit of King Solomon was the driving force behind the design of the original jury system. “Beyond being paid a fee for their service to adjudicate these challenges, the incentive for these jurors should be to strive for a reputation of sound judgement, wisdom, and integrity demonstrated through their participation in helping society establish a platform of agreed-upon facts that can be used to unify instead of to divide,” he said.

“In a world dominated by AI-generated fakes, such a reputation will become a rare and valuable commodity and jurors who developed such a reputation can commodify their reputation using their digital credentials across different market domains where wisdom and integrity are important.”

Mr Barbosa gave the example of how juries could be used to verify sustainability claims in corporate supply chains by requiring firms to back their environmental disclosures with veracity bonds. “Independent jurors drawn from accredited experts would adjudicate disputes, ensuring that only companies with genuine practices retain both their bonds and their reputations,” he said.

The jury model could also apply in the academic world, where Mr Barbosa said researchers could stake veracity bonds on their papers, signalling confidence in their findings while giving reviewers a financial incentive to assess them quickly and thoroughly. He explained that this process would discourage weak or fraudulent studies and speed up the peer-review process while strengthening trust in scholarly research.

UNSW Business School Associate Professor Sam Kirshner said testing how design decisions influence creator behaviour and audience perception will help build the veracity bonds framework. Photo: UNSW Sydney

Scaling truth verification through economic incentives

The economic model creates multiple incentive layers that align participant interests with accuracy. Content backed by larger veracity bonds receives greater visibility, encouraging creators to stake meaningful amounts on verified information. Challengers earn portions of forfeited bonds when they successfully dispute false content, motivating them to identify and contest misinformation.

The visibility mechanism addresses concerns about creating "pay-to-win" scenarios by requiring that increased visibility come with proportionally higher financial risk if the content proves false. As the researchers explain, "creators are not purchasing visibility; they are staking a bond that can be forfeited if their claims are proven false."
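A toy payoff calculation helps show why bond-weighted visibility is not simply buying reach. The functional forms below (logarithmic visibility, forfeiture risk proportional to the stake) are assumptions made purely for illustration rather than the researchers' specification.

```python
# Toy model of a creator choosing a bond size: visibility grows with the
# stake, but so does the expected loss if the content is successfully
# challenged. All functional forms and numbers are illustrative assumptions.
import math

def expected_payoff(bond: float, p_successful_challenge: float,
                    value_per_unit_visibility: float = 1.0) -> float:
    visibility = math.log1p(bond)                      # bigger bonds surface content more
    expected_forfeit = p_successful_challenge * bond   # but the downside scales with the stake
    return value_per_unit_visibility * visibility - expected_forfeit

for p in (0.01, 0.30):
    best_bond = max(range(0, 501, 10), key=lambda b: expected_payoff(float(b), p))
    print(f"chance of a successful challenge {p:.0%}: best bond ≈ {best_bond}")
```

Under these assumed numbers, a creator confident their content will survive a challenge is pushed toward a sizeable bond, while one likely to lose is better off staking little or nothing, which is the alignment the researchers describe.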

“One of the things that excites me about this framework is that it doesn’t prescribe a single model,” said A/Prof. Kirshner. “There are many ways firms could adopt it, depending on the emphasis they want to place on trust. Some platforms might require every piece of content to carry a veracity bond, while others might allow creators to choose when to stake, leading audiences to weigh content differently depending on whether a bond is present.”

The size of the stake could also vary, A/Prof. Kirshner added: some systems might set a fixed bond for all creators (whether they are a multinational company or a freelance journalist) while others might scale the amount with reputation or reach. “We honestly don’t yet know how consumers will respond to these choices, or what level of stake will feel meaningful enough to shape trust. That uncertainty is part of the research frontier – testing how design decisions influence both creator behaviour and audience perception,” he said.


Mathematical analysis shows the system can handle substantial content volumes with reasonable juror pool sizes. For platforms processing millions of daily posts, the required juror pool remains below one-tenth of 1% of active users, indicating practical feasibility once appropriate incentives attract participation.
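A back-of-envelope version of that scaling argument is sketched below. Every parameter is a hypothetical assumption chosen only to show the shape of the calculation; the paper derives its own figures.

```python
# Rough juror-pool sizing for a large platform. All parameters are assumed
# for illustration; only the 43-member jury size comes from the article.
daily_posts = 2_000_000         # posts processed per day (assumed)
challenge_rate = 0.01           # fraction of posts that are challenged (assumed)
jury_size = 43                  # jurors per dispute
cases_per_juror_per_day = 5     # disputes one juror can review daily (assumed)
active_users = 500_000_000      # platform's active user base (assumed)

jurors_needed = daily_posts * challenge_rate * jury_size / cases_per_juror_per_day
print(f"Jurors needed: {jurors_needed:,.0f} "
      f"({jurors_needed / active_users:.3%} of active users)")
```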

Practical business applications and benefits

Business professionals can apply multiple insights from this research to improve information quality within their organisations. The veracity bond concept could, for example, enhance internal communications by requiring stakeholders to demonstrate confidence in their claims through meaningful stakes, whether financial or reputational.

The research suggests that organisations struggling with information quality should examine their incentive structures. Systems that reward engagement without considering accuracy may inadvertently promote misleading content. As such, leaders should consider implementing accountability mechanisms that create consequences for spreading unverified information.

The framework also highlights the importance of diverse verification processes. Rather than relying solely on hierarchical fact-checking, organisations could benefit from distributed verification systems that leverage collective intelligence while maintaining quality controls through reputation systems and peer evaluation.

There is a reason why the idea of veracity bonds was first mooted by a blockchain founder (Charles Hoskinson), according to A/Prof. Lim. He explained that the nature of the blockchain industry is to be wary of the inevitability of corruption or deterioration in any centralised systems over time. “Our current systems of fact checking or ensuring truthfulness are invariably a model of appeal to authority or the more colloquial 'trust me bro' system, which begs the question: 'Quis custodiet ipsos custodes?’ (Who will guard the guards themselves?)" he asked. 

Had the veracity bond framework been in place, the Arup deepfake fraudsters would have been forced to risk their own collateral while genuine managers could have proved their identities. Photo: Adobe Stock

"We are not claiming a foolproof system, but we believe that incentivising, through market forces, truth-seeking behaviours is a more robust system than what we currently have. As the late Charlie Munger said: 'Show me the incentive and I will show you the outcome.’”

Mr Barbosa reflected on the experience of Arup being defrauded as a result of an AI-generated deepfake. Had the veracity bond framework been in place, “the Arup fraud could likely have been prevented,” he said. “Deepfake imposters would have been forced to risk collateral they could not provide, while genuine managers could prove authenticity through bonded digital identities. This filter would have blocked deception, avoiding the $25 million loss and showing how truth-backed accountability protects business operations.”

For companies operating in information-sensitive environments, the research demonstrates how financial incentives can complement traditional verification methods. By requiring meaningful stakes from information providers and creating rewards for accurate verification, organisations can build more robust defences against misinformation while fostering a culture of accuracy and accountability.

“I am confident I am not the only one who cares and worries about how demoralising, toxic, partisan, polarised and violent the current news and social media landscape has become. Hybrid wars are being fought in the terrain of our minds, as the online media environment competes for our hearts and souls. This is a pressing problem for like-minded people and media platform owners who profess to love the truth,” said A/Prof. Lim. “For without it, the only options for ‘unity’ will be tyranny and domination by force.”

To learn more about the use of veracity bonds, combating deepfakes and improving content trust, please contact UNSW Business School Associate Professors Sam Kirshner and Eric Lim.
