Roundtable discussion: How can we manage the risk of AI being used to spread Information Disorder?
Overview
This summary report captures the insights from a two-day roundtable hosted by Cranfield University and the Network for Security Excellence and Collaboration (NSEC), exploring the intersection of Artificial Intelligence (AI) and Information Disorder (ID). The event brought together policymakers, academics, and industry experts to assess the risks posed by AI-driven disinformation and to propose actionable strategies for mitigation.
Key Findings
The Threat Landscape
- AI democratises disinformation: Tools once exclusive to state actors are now accessible to individuals and criminal groups.
- Deepfakes and generative content: AI can create highly realistic fake content (text, video, audio), making it harder to discern truth.
- Agentic AI: Capable of refining disinformation campaigns in real time, mimicking legitimate marketing strategies.
- Intent is hard to prove: Disinformation may stem from hostile states, pranksters, or even accidental sources.
- Harms are universal: Regardless of intent, ID can cause societal unrest, economic damage, and erosion of public trust.
Targets of AI-Driven ID
- Events: Elections, pandemics, and disasters are prime targets.
- Relationships: Domestic trust (e.g., in media, police) and international diplomacy can be undermined.
- Industry: AI-generated smear campaigns can damage reputations and disrupt supply chains.
- Finance: Stock markets and economic stability are vulnerable to AI-fuelled rumours.
- Public trust: The general population may become increasingly sceptical of all information sources.
Systemic Vulnerabilities
- Lack of media literacy: The UK population is underprepared to critically assess digital content.
- No central authority: There is no dedicated body to lead on AI-ID risk management.
- Measurement gap: There is no framework to quantify the harms of ID.
- Communication breakdown: Government struggles to maintain authoritative communication amidst growing “noise pollution”.
- Reactive posture: Current strategies are not proactive or pre-emptive enough to counter evolving threats.
Recommendations
R1: Establish an Office for Media Literacy
- Promote critical thinking and media literacy through education and public campaigns.
- Develop a Risk Management Framework for ID.
- Encourage community participation and peer review.
R2: Create a Policy Sandbox
- Experiment with technologies like blockchain and AI watermarking.
- Conduct game-theory and agent-based modelling research (a minimal modelling sketch follows this list).
- Funnel successful research into policy trials.
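To make the agent-based modelling strand concrete, the sketch below simulates how a false narrative might spread through a population and how a media-literacy ("prebunking") intervention could slow it. This is a minimal illustration, not a roundtable output: the network structure, share probability, and prebunking effect are all assumed parameters that a sandbox experiment would need to calibrate.

```python
import random

# Minimal agent-based sketch of disinformation spread. All parameters are
# illustrative assumptions, not findings from the roundtable.
random.seed(42)

N_AGENTS = 1000          # population size (assumed)
AVG_CONTACTS = 8         # contacts exposed per believer per step (assumed)
SHARE_PROB = 0.12        # chance an exposed agent believes and reshares (assumed)
PREBUNK_EFFECT = 0.5     # multiplier on SHARE_PROB for prebunked agents (assumed)
PREBUNK_COVERAGE = 0.4   # fraction reached by a media-literacy campaign (assumed)

prebunked = set(random.sample(range(N_AGENTS), int(PREBUNK_COVERAGE * N_AGENTS)))
believers = {0}  # patient zero for the false narrative

for step in range(20):
    newly_convinced = set()
    for agent in believers:
        # Each believer exposes a handful of random contacts.
        for contact in random.sample(range(N_AGENTS), AVG_CONTACTS):
            if contact in believers:
                continue
            p = SHARE_PROB * (PREBUNK_EFFECT if contact in prebunked else 1.0)
            if random.random() < p:
                newly_convinced.add(contact)
    believers |= newly_convinced
    print(f"step {step:2d}: {len(believers)} believers")
```

Even a toy model like this lets a sandbox compare interventions, for example by varying prebunking coverage, before funnelling the most promising options into policy trials.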
R3: Develop Harm Assessment Models
- Quantify potential harms using frameworks akin to the National Risk Register (see the scoring sketch after this list).
- Fund interdisciplinary research to explore socio-economic impacts.
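As a starting point for harm quantification, the sketch below scores hypothetical ID scenarios on a likelihood-times-impact matrix, the same basic structure the National Risk Register uses. The scales, the scenarios, and their ratings are illustrative assumptions, not assessments made at the roundtable.

```python
from dataclasses import dataclass

@dataclass
class IDScenario:
    """Hypothetical harm-assessment record on an assumed 1-5 scale."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain), assumed scale
    impact: int      # 1 (minor) .. 5 (catastrophic), assumed scale

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact product, as in a classic risk matrix.
        return self.likelihood * self.impact

# Example scenarios with assumed ratings, for illustration only.
scenarios = [
    IDScenario("Deepfake targeting an election", 4, 5),
    IDScenario("AI-generated smear of a supplier", 3, 3),
    IDScenario("Fabricated market rumour", 4, 4),
]

for s in sorted(scenarios, key=lambda s: s.risk_score, reverse=True):
    print(f"{s.risk_score:2d}  {s.name}")
```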
R4: Launch a National Resilience Programme
- Prepare likely targets for AI-ID attacks through scenario planning.
- Build a vetted pool of academic and technical experts.
- Develop “killchain” protocols for rapid response to disinformation, as sketched below.
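The report does not specify what a killchain protocol would contain. One plausible shape, sketched below, is an ordered escalation pipeline from detection through post-incident review; the stage names and their ordering are assumptions for illustration.

```python
from enum import Enum, auto

class KillchainStage(Enum):
    # Assumed stages of a rapid-response killchain for a disinformation
    # incident; the roundtable did not define the protocol's contents.
    DETECT = auto()    # monitoring flags a suspect narrative
    TRIAGE = auto()    # assess reach, harm potential, likely intent
    VERIFY = auto()    # vetted experts confirm or debunk the claim
    RESPOND = auto()   # coordinated correction via trusted channels
    REVIEW = auto()    # post-incident analysis feeds back into planning

def advance(stage: KillchainStage) -> KillchainStage | None:
    """Return the next stage, or None once the incident is closed."""
    stages = list(KillchainStage)
    idx = stages.index(stage)
    return stages[idx + 1] if idx + 1 < len(stages) else None

stage = KillchainStage.DETECT
while stage is not None:
    print(stage.name)
    stage = advance(stage)
```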
R5: Introduce a Public Confidence Framework
- Implement an “8-Tick Check” to help users assess the credibility of news (a scoring sketch follows the list):
1. Trustworthy source?
2. Trust in the reporter?
3. Clear source-to-reporter path?
4. Parallel reporting from trusted outlets?
5. Expert verification?
6. No harm to public interest?
7. No benefit to hostile actors?
8. No manipulation or AI distortion?
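A minimal sketch of how the 8-Tick Check could be operationalised as a simple tally is shown below. The eight questions are taken from the list above; the unweighted scoring, the caution threshold, and the example answers are assumptions.

```python
# The eight questions from the report's "8-Tick Check".
TICKS = [
    "Trustworthy source?",
    "Trust in the reporter?",
    "Clear source-to-reporter path?",
    "Parallel reporting from trusted outlets?",
    "Expert verification?",
    "No harm to public interest?",
    "No benefit to hostile actors?",
    "No manipulation or AI distortion?",
]

def credibility_score(answers: dict[str, bool]) -> int:
    """Count how many of the eight checks a news item passes."""
    return sum(answers.get(q, False) for q in TICKS)

# Example: an item passing six of eight checks (hypothetical answers).
example = {q: True for q in TICKS}
example["Expert verification?"] = False
example["Parallel reporting from trusted outlets?"] = False

score = credibility_score(example)
print(f"{score}/8 ticks", "(treat with caution)" if score < 8 else "(high confidence)")
```

In practice the ticks would likely carry different weights; an unweighted tally simply keeps the illustration easy to follow.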
R6: Structural and Strategic Preparedness
- Stress-test public services against disinformation.
- Design resilient supply chains and communication strategies.
- Equip the public with tools and knowledge to remain informed during crises.