The Liar's Dividend: A Legal Framework for Governing AI in Crisis
Disclaimer
This analysis is for educational purposes only and does not constitute legal advice. The information provided is general in nature and may not apply to your specific situation. Laws and regulations change frequently; verify current requirements with qualified legal counsel in your jurisdiction.
Last Updated: April 11, 2026
I. INTRODUCTION
The global information ecosystem is experiencing systemic instability as generative artificial intelligence (AI) has evolved from a theoretical threat into a catalyst for physical violence and democratic erosion.[^1] From the Southport riots of August 2024 to the Israel-Iran conflict of 2025–2026, the capacity of large language models (LLMs) and image/video generators to "add fuel to the fire" during critical events is evident.[^2] Some analysts project that by 2026 the majority of digital content may be synthetic, posing an existential challenge to the integrity of public discourse.[^3]
Scholarship has identified the phenomenon of the "liar's dividend," whereby the mere existence of AI tools allows malicious actors to cast doubt on real events—claiming they are fabricated—while simultaneously flooding the digital space with synthetic evidence of nonexistent atrocities.[^4] This collapse of authenticity is not uniform: deepfake attacks have a disproportionate gender bias, affecting female journalists in 74% of documented cases, often for purposes of dehumanization and delegitimization.[^5]
The current normative gap lies in the inadequacy of traditional security frameworks, designed for manual disinformation. Generative AI operates at a scale and speed that exploits data voids during crises.[^6] While the impact in electoral contexts has been contained by institutional resilience, in national security events—terrorist attacks or racial riots—AI has facilitated the coordination of "disinformation swarms" that are virtually impossible to detect by conventional means.[^7] This Article analyzes more than fifteen recent global crises to propose a framework of obligations that ensures the epistemic security of society.
II. GENERAL PRINCIPLES OF NORMATIVE RESPONSE
The governance of AI during crises must be guided by four cardinal principles that balance national security with fundamental rights.
2.1. Algorithmic Proportionality: "Freedom of Speech vs. Freedom of Reach"
Regulation should not censor content based on its synthetic origin but rather limit its viral amplification when it poses an imminent risk of harm. Doctrine distinguishes between freedom of speech and freedom of reach.[^8] In conflict situations, platforms have a duty to apply a graduated intervention standard: content identified as "High‑Risk AI" (for example, fake videos of attacks on infrastructure) must be de‑amplified or downranked even before exhaustive human verification is complete, provided there are clear technical signs of inauthenticity.[^9]
2.2. Necessity and Exceptionality in Protocol Activation
Restriction of information flows is lawful only under the activation of formalized crisis protocols (such as Meta's Crisis Policy Protocol or the United Kingdom's PRIMER framework).[^10] This activation must depend on dynamic severity indicators that measure the risk of actual physical violence or harm to national security, preventing the communicative state of exception from becoming a tool of political suppression.[^11]
2.3. Radical Transparency and Traceability (C2PA Standard)
Every citizen has a fundamental right to know the provenance of the information they consume during a crisis. Industry must consistently and scalably implement the standard of the Coalition for Content Provenance and Authenticity (C2PA).[^12] This requires the use of tamper‑evident metadata and invisible watermarks on all AI‑generated material, enabling users to verify the origin and modifications of a digital asset in real time.[^13]
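By way of technical illustration, the following Python sketch shows the simplest form such a provenance check could take. The side-car JSON manifest and its `content_sha256` field are hypothetical simplifications; actual C2PA verification validates cryptographically signed manifests embedded in the asset itself against a certificate chain.

```python
import hashlib
import json

def verify_provenance(asset_bytes: bytes, manifest_json: str) -> bool:
    """Check that a (hypothetical) provenance manifest matches the asset.

    Real C2PA verification validates a cryptographically signed manifest
    embedded in the file against a trust list; this sketch only compares
    a content hash recorded in a side-car JSON manifest.
    """
    manifest = json.loads(manifest_json)
    recorded = manifest.get("content_sha256")
    if recorded is None:
        return False  # no provenance claim at all
    actual = hashlib.sha256(asset_bytes).hexdigest()
    return actual == recorded
```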
2.4. Non‑Delegation of Sovereign Functions
Government cannot delegate critical decisions about factual truthfulness to black‑box algorithms. "Strategic decidability" requires human framing, narrative construction, and institutional coalition‑building.[^14] Although AI can process large volumes of social data for situational awareness, the final judgment about responding to an information threat must remain under human supervision (human‑in‑the‑loop) to ensure accountability.[^15]
Preliminary Rules Box (Sections I–II)
- Presumption of Inauthenticity in Crisis: In situations of imminent violence or declared civil unrest, unverified multimedia content that incites aggression and lacks C2PA provenance metadata shall be presumed synthetic, authorizing preventive de‑amplification.
- Duty to Label "High‑Risk AI": Platforms shall apply "High‑Risk AI" labels to content that simulates evidence from the scene of a crime or armed conflict until its origin is certified.
- Right to Informational Integrity: Governments shall ensure that official information is protected with immutable provenance records from its origin to prevent spoofing.
III. GOVERNMENTAL DUTIES: STRATEGIC PREPARATION (EX ANTE)
Democratic resilience against generative AI cannot be an improvised reactive response after a crisis erupts; it requires an architecture of legal and operational duties that the State must refine during periods of normalcy.
3.1. Scenario Planning and Organizational Red‑Teaming
Government shall have the obligation to institutionalize AI threat scenario planning programs. Drawing on the lessons of the inadequate manual responses to the 2024 riots, the Cabinet Office shall conduct tabletop exercises and organizational red‑teaming scenarios involving all relevant state departments.[^16] This duty takes concrete form in the creation of an AI Security Strategy Playbook, similar to that proposed in comparative legislation (H.R. 3919), to identify critical vulnerabilities in data infrastructures.[^17]
3.2. Threat Intelligence Integration and Liaison Channels
The State shall establish formal and permanent intelligence‑sharing channels between security agencies and frontier AI labs. The AI Security Institute (AISI) shall assume the legal function of technical point of contact, identifying key personnel at AI companies with whom to share signals of malicious behavior before the associated content goes viral.[^18] Additionally, the National Security Online Information Team (NSOIT) shall define dynamic severity indicators (low, medium, high, critical) that routinely feed the Commonly Recognised Information Picture (CRIP), ensuring executive decisions are based on a technically accurate picture.[^19]
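The following sketch illustrates, with entirely hypothetical signals and thresholds, how dynamic indicators might be collapsed into the four severity bands; a real CRIP feed would combine far more signals under analyst oversight.

```python
def severity_level(virality_per_hour: float,
                   incitement_score: float,
                   provenance_gap: float) -> str:
    """Map illustrative monitoring signals to a severity band.

    All three inputs and every weight and threshold below are assumed
    placeholders. incitement_score and provenance_gap are taken to lie
    in [0, 1]; virality is normalized against an arbitrary 10,000/hour.
    """
    composite = (0.4 * min(virality_per_hour / 10_000, 1.0)
                 + 0.4 * incitement_score
                 + 0.2 * provenance_gap)
    if composite >= 0.8:
        return "critical"
    if composite >= 0.6:
        return "high"
    if composite >= 0.3:
        return "medium"
    return "low"
```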
3.3. Multi‑Layer Crisis Communication and the End of Administrative Silence
The doctrine of the "Cost of Silence" holds that governmental ambiguity during a crisis creates information voids exploited by malicious actors using AI.[^20] Therefore, Government shall update its Emergency Planning Framework (PRIMER) to incorporate: (a) immediate publication of basic verified facts; (b) decentralization of narrative through religious centers, sports clubs, and local media; and (c) proactive saturation of data voids with verified content before propaganda networks position their synthetic narratives.[^21]
3.4. Hardening Governmental Authenticity (Immutable Provenance)
To protect citizens' epistemic integrity, Government shall implement a strategy of automatic provenance record embedding in all official digital content from its origin (C2PA standard). This measure has a dual purpose: first, it ensures that citizens can verify that a security directive comes from a legitimate source; second, it prevents spoofing attacks where AI is used to clone the voice or image of political leaders to instigate civil disorder.[^22]
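A minimal sketch of the signing discipline this duty implies, using Ed25519 signatures from the Python cryptography library; the key-distribution and manifest-embedding layers of a full C2PA deployment are omitted, and the sample directive text is invented.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# A government press office signs each official communication at origin.
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()  # published for citizens/platforms

directive = b"Evacuation order for district 4, issued 2026-04-11T09:00Z"
signature = signing_key.sign(directive)

# Anyone holding the published public key can verify authenticity.
try:
    public_key.verify(signature, directive)
    print("authentic official communication")
except InvalidSignature:
    print("possible spoof: signature does not match")
```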
IV. INDUSTRY OBLIGATIONS: ACTIVE DUTY OF CARE (IN ITINERE)
The advent of generative AI requires a redefinition of platform responsibility. In crisis contexts, the "reactive passivity" standard (acting only upon user reports) is insufficient. Industry must transition to an active duty of care, whereby AI companies and social networks assume proactive responsibility for mitigating risks that threaten physical security and democratic stability.[^23]
4.1. Graduated Intervention Standard
Industry shall implement a graduated intervention system that does not limit itself to content removal but manages content reach.[^24] During declared crises, platforms shall algorithmically downrank all multimedia content that lacks C2PA provenance credentials and is linked to crisis keywords. Unverified content shall be presumed synthetic until proven otherwise, prioritizing public safety over virality.[^25] Moreover, content that simulates evidence of conflicts or atrocities shall carry a "High‑Risk" label if it shows technical signs of manipulation.[^26]
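As an illustration only (not any platform's actual algorithm), the rule can be expressed as a simple ranking adjustment. The 0.2 multiplier mirrors the 80% visibility reduction proposed in Rule 6 of the Annex; everything else in the sketch is an assumption.

```python
def adjusted_rank(base_score: float,
                  has_provenance: bool,
                  crisis_related: bool,
                  crisis_declared: bool) -> float:
    """Toy reach-management rule for a declared crisis.

    Content that matches crisis keywords and carries no C2PA credential
    is downranked to 20% of its score (assumed multiplier, mirroring the
    Annex's proposed 80% visibility reduction); all other content ranks
    normally pending verification.
    """
    if crisis_declared and crisis_related and not has_provenance:
        return base_score * 0.2
    return base_score


# Example: an unverified "attack" video during a declared crisis.
print(adjusted_rank(0.9, has_provenance=False,
                    crisis_related=True, crisis_declared=True))  # 0.18
```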
4.2. Crisis Command Centers and Rapid Response
Frontier AI companies and social media platforms (especially Categories 1 and 2B under the Online Safety Act) shall be required to formalize Crisis Command Centers with multidisciplinary teams of AI security experts to detect jailbreaking techniques in real time.[^27] Smaller firms lacking dedicated Trust & Safety personnel shall designate, as a minimum standard, a Government Liaison Officer to receive threat intelligence alerts.[^28]
4.3. Chatbot Abuse Mitigation
The use of chatbots as primary information sources during crises poses a risk of epistemic intoxication.[^29] Therefore, AI companies shall modify their conversational user interfaces (CUIs) to: (a) deploy prominent pop‑up warnings when a user queries an ongoing crisis, explicitly informing about the model's limitations for real‑time fact‑checking;[^30] and (b) implement rapid security patches to prevent models from citing known disinformation sources (e.g., the Pravda network) in responses about national security events.[^31]
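A schematic sketch of both interventions follows; the keyword list, blocked domain, and warning text are placeholders, and production systems would rely on trained classifiers and curated red lists rather than simple keyword and domain matching.

```python
from urllib.parse import urlparse

CRISIS_KEYWORDS = {"attack", "riot", "explosion", "shooting"}  # assumed list
RED_LIST_DOMAINS = {"propaganda.example"}                      # placeholder

def crisis_warning(query: str) -> str | None:
    """Return a prominent warning for queries about active crises."""
    if any(word in query.lower().split() for word in CRISIS_KEYWORDS):
        return ("This event may still be unfolding. This model cannot "
                "fact-check in real time; consult official emergency "
                "channels for verified information.")
    return None

def filter_citations(urls: list[str]) -> list[str]:
    """Drop candidate citations hosted on red-listed domains."""
    return [u for u in urls if urlparse(u).netloc not in RED_LIST_DOMAINS]
```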
4.4. Transparency in Monetization
A significant portion of crisis disinformation (e.g., Southport 2024) originates from sites that use AI to generate sensationalist news solely for advertising revenue.[^32] The digital advertising industry and platforms shall apply strict demonetization policies to any domain identified by independent verifiers as an "inauthentic AI aggregator" during crisis periods.[^33]
4.5. Cross‑Sector Collaboration: The FMF‑GIFCT Model
Industry shall expand the Frontier Model Forum (FMF) to include a threat‑reporting mechanism analogous to the Global Internet Forum to Counter Terrorism's (GIFCT) Incident Response Framework.[^34] This mechanism will facilitate: (a) a shared hash database of harmful synthetic content already detected; (b) reporting of agentic bot swarms that mimic human interaction to create false consensus; and (c) the obligation to publish summaries of interventions performed to allow public scrutiny.[^35]
V. CIVIL SOCIETY RESPONSIBILITIES: EPISTEMIC RESILIENCE
Civil society organizations (CSOs), academia, and independent fact‑checkers constitute the third pillar of functional stability. Their role is not merely advisory but operational: they act as a legitimacy firewall where the State and industry often lack the trust or agility needed.
5.1. Hybrid Verification and Defensive Use of AI
CSOs must move beyond manual fact‑checking toward hybrid verification augmented by AI.[^36] Experimental debunking chatbots (e.g., DebunkBot) have been shown to reduce citizens' confidence in conspiracy theories by 20%, with effects lasting up to two months.[^37] Specialized CSOs shall explore these interfaces under strict ethical and human oversight, as well as develop swarm scanners capable of detecting coordinated AI agents that manufacture artificial consensus.[^38]
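One plausible core for such a swarm scanner is near-duplicate detection over post text, sketched below with word shingles and Jaccard similarity; the 0.6 threshold is assumed, and real detectors would add posting-time correlation, account-age patterns, and network features.

```python
def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """k-word shingles of a post, a cheap textual fingerprint."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_swarm(posts: list[str], threshold: float = 0.6) -> list[tuple[int, int]]:
    """Flag pairs of posts that are suspiciously similar.

    A toy heuristic: pairwise comparison is O(n^2) and would be replaced
    by locality-sensitive hashing at platform scale.
    """
    sigs = [shingles(p) for p in posts]
    return [(i, j)
            for i in range(len(sigs))
            for j in range(i + 1, len(sigs))
            if jaccard(sigs[i], sigs[j]) >= threshold]
```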
5.2. Amplification of Credible Voices and Protection of Minorities
CSOs possess a competitive legitimacy advantage: proximity to specific communities often targeted by synthetic disinformation tactics. During a national security incident, grassroots organizations shall act as amplifiers of verified truth, adapting facts to the cultural codes of minority groups or skeptical demographics that may distrust official government channels.[^39] Given that 74% of deepfake victims in journalistic contexts are women, human rights CSOs shall have the duty to document and report these dehumanization campaigns.[^40]
5.3. "Red List" Mechanism and Chatbot Auditing
To prevent data poisoning of language models by hostile state actors, academia and news‑auditing CSOs shall collect and share links of inauthentic sites (e.g., the Pravda network or Storm‑1516) with organizations such as NewsGuard.[^41] Consequently, AI developers shall use this information to filter their training datasets and prevent their models from citing propaganda as an authoritative source in national security queries.[^42]
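Operationally, the same red list that gates chatbot citations (Section 4.3) here gates the training corpus, as in the following sketch; the domains and the documents' `url` field are illustrative assumptions.

```python
from urllib.parse import urlparse

RED_LIST = {"pravda.example", "storm1516.example"}  # placeholder domains

def clean_corpus(documents: list[dict]) -> list[dict]:
    """Drop training documents sourced from red-listed domains.

    Each document is assumed to carry a 'url' key; documents without
    one pass through unchanged in this simplified sketch.
    """
    return [doc for doc in documents
            if urlparse(doc.get("url", "")).netloc not in RED_LIST]
```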
5.4. Functional Independence and Data Access Under the DSA
For CSO intervention to be legally valid and credible, CSOs shall maintain functional independence from government, preserving a balance between public safety and the right to peaceful protest and free expression.[^43] However, to fulfill this role, researchers must have guaranteed access to social media platform data. Following the model of the EU's Digital Services Act (DSA), government shall facilitate accredited researchers' access to data tools to systematically examine how AI tools were exploited during the crisis.[^44]
VI. POST‑CRISIS MEASURES: REVIEW AND TECHNICAL HARDENING (EX POST)
The conclusion of an information crisis does not terminate legal obligations. The doctrine of "Epistemic Security" holds that the period immediately after restoration of public order is critical to prevent consolidation of the liar's dividend and to patch exploited technical vulnerabilities.[^45]
6.1. Institutionalization of Post‑Incident Review Processes
Government shall have the legal duty to formalize a Post‑Incident Review (PIR) process following each crisis event in which significant AI‑enabled information threats are identified.[^46] This mechanism, coordinated by the Cabinet Office, shall integrate feedback from law enforcement, regulators, and intelligence teams. NSOIT shall periodically review its monitoring indicators; the need for updating shall be presumed when new threat vectors (autonomous AI agents or "vibe coding" techniques) emerge that were not effectively detected during the crisis.[^47]
6.2. Industrial Hash Databases and Threat Repositories
To prevent harmful synthetic content from migrating from one platform to another after detection, industry shall implement a shared technical memory system. Following the FMF model, AI companies shall develop and maintain a hash repository of the most prominent information threats identified during the crisis.[^48] Participating platforms shall have the obligation to cross‑reference newly uploaded content against the repository; any match with a "High‑Risk AI" hash shall be immediately removed or downranked.[^49]
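A minimal sketch of the cross-referencing duty follows; exact SHA-256 matching is used for simplicity, whereas deployed systems rely on perceptual hashes (such as PDQ) that survive re-encoding and cropping.

```python
import hashlib

# Shared repository of hashes of known harmful synthetic content.
HIGH_RISK_HASHES: set[str] = set()

def register_threat(content: bytes) -> None:
    """Add a confirmed High-Risk AI asset to the shared repository."""
    HIGH_RISK_HASHES.add(hashlib.sha256(content).hexdigest())

def check_upload(content: bytes) -> str:
    """Return the moderation action for a newly uploaded asset."""
    if hashlib.sha256(content).hexdigest() in HIGH_RISK_HASHES:
        return "remove_or_downrank"
    return "allow"
```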
6.3. Sanitization of Language Models: Red Lists
After the identification of foreign disinformation networks that have flooded the ecosystem with fake articles to poison chatbots, civil society and developers shall activate an "Epistemic Sanitization" protocol.[^50] AI developers shall be required to use NewsGuard's data to remove inauthentic sources from their training datasets and program their models so that, when queried about the past crisis, they replace the false narrative with the verified version.[^51]
6.4. Accountability and Legal Sanctions
This Article proposes that national legal frameworks incorporate specific sanctions for the creation and dissemination of malicious deepfakes that incite physical violence or gender‑based hatred.[^52] Platforms that fail to preserve provenance records or allow users to strip AI metadata without warning of the risk shall be held liable for "technical facilitating negligence."[^53] Pursuant to Article 20(2) of the ICCPR, States shall criminalize the use of synthetic media that function as risk factors for genocide or mass atrocities, considering the scale and speed of AI as an aggravating factor.[^54]
VII. PERMANENT STRUCTURAL PRIORITIES
The dynamic and cross‑border nature of AI threats requires that the normative response rest on permanent structural priorities.
7.1. AI Incident Preparedness Framework
The State shall formally adopt a systemic disinformation response framework based on four pillars: (i) generation restriction (technical safeguards); (ii) dissemination limitation (de‑amplification and labels); (iii) countering interaction (verification tools at the point of consumption); and (iv) social empowerment (strengthening citizens' capacity to expose disinformation campaigns).[^55] This framework must be supported by an AI Security Preparedness Act (inspired by the Advanced AI Security Readiness Act – H.R. 3919) that obliges intelligence agencies to identify vulnerabilities in data centers and critical model developers.[^56]
7.2. Support for AI Startups
Small AI companies and startups often lack resources to implement robust Trust & Safety departments, making them the weakest link.[^57] AISI shall have the duty to maintain an up‑to‑date registry of dual‑use AI startups that could be exploited by hostile actors. The State shall be obligated to provide information toolkits, zero‑cost access to open‑source safety tools (e.g., ROOST), and mentorship programs led by the FMF and GIFCT.[^58]
7.3. Academic and Research Collaboration
Following the EU DSA model, Government shall legislate to require social media platforms to provide data access tools to accredited researchers studying systemic risks to public security.[^59] This access will enable post‑crisis audits of how recommendation systems prioritized or devalued synthetic content. Academic institutions shall be strategic partners in defining dynamic severity indicators, providing a scientific perspective to prevent government intervention thresholds from being arbitrary or politically biased.[^60]
7.4. Toward an International Standard for Synthetic Truth
This Article proposes the creation of a Global Coalition for Digital Integrity, analogous in ambition to the Paris Agreement, to standardize transparency and provenance norms worldwide.[^61] International priorities shall include: (a) establishment of C2PA as a global requirement for all generative AI models;[^62] (b) adoption of a UN Global Digital Integrity Charter obligating States not to use AI to interfere in the democratic processes of other countries;[^63] and (c) integration of advanced media literacy into secondary and university curricula.[^64]
VIII. CONCLUSION AND NORMATIVE ANNEX
Academic research and institutional practice developed between 2024 and 2026 confirm that generative AI has transformed from a technical curiosity into a systemic threat vector to democratic stability.[^65] The capacity of frontier models to manufacture synthetic evidence indistinguishable from reality has inaugurated the era of the "liar's dividend," where truth itself is degraded to a subjective option under the weight of information saturation.[^66]
The central thesis is that social resilience against disinformation crises requires an epistemic security architecture grounded in industry's active duty of care, the State's strategic proactivity, and civil society's technical vigilance.[^67] The cost of governmental silence and algorithmic passivity during high‑sensitivity events acts as an accelerant of actual physical violence.[^68] Looking toward the 2026–2028 horizon, international standardization of provenance norms (C2PA) and constant auditing of training datasets against "infection" by foreign state propaganda are categorical imperatives to preserve the social contract in the digital age.[^69]
NORMATIVE ANNEX: 10 CONSOLIDATED RULES FOR AI GOVERNANCE IN CRISIS
1. High‑Risk AI Labeling: Platforms shall automatically apply a "High‑Risk AI" label to any synthetic multimedia content that simulates evidence from conflict zones, atrocities, or national security incidents, regardless of user self‑disclosure.[^70]
2. Presumption of Inauthenticity in Crisis: In situations of imminent violence or declared civil unrest, unverified content lacking C2PA provenance metadata and identified as viral shall be presumed to be manipulated synthetic material, authorizing immediate algorithmic degradation.[^71]
3. Epistemic Sanitization: LLM developers shall integrate updated "Red Lists" of inauthentic disinformation domains (e.g., the Pravda network) to filter their training datasets and prevent their chatbots from reproducing state propaganda as an authoritative source.[^72]
4. Traceability of Official Information: Government shall implement digital signature and immutable provenance systems for all emergency communications from their origin, ensuring citizens can verify the authenticity of state security directives against possible spoofing.[^73]
5. Mandatory Chatbot Warnings: AI conversational user interfaces (CUIs) shall deploy prominent pop‑up warnings when detecting queries about active crises, explicitly informing users about the model's limitations for real‑time fact‑checking.[^74]
6. Graduated De‑amplification Protocol: The technology industry shall modify its recommendation algorithms so that, upon signals of "Critical Severity" defined by the State, the visibility of unverified content is reduced by 80% preventively, prioritizing public safety over viral reach.[^75]
7. Forensic Access for Researchers: Category 1 platforms shall provide accredited academic researchers with access to algorithmic moderation data and hash archives within a maximum of 60 days after a crisis to audit potential biases or technical failures.[^76]
8. Liability for Technical Facilitating Negligence: An AI company shall be held liable if it systematically allows the removal of provenance metadata in its editing tools without implementing sufficient technical safeguards to prevent the creation of deepfakes for criminal purposes.[^77]
9. Criminalization of Synthetic Incitement: States shall criminalize as a standalone offense the creation and distribution of synthetic media specifically designed to incite genocide, mass physical violence, or systematic dehumanization on the basis of gender or race.[^78]
10. Institutionalization of Post‑Incident Review (PIR): After the resolution of an information crisis, Government and industry shall conduct and publish an incident review report auditing the effectiveness of severity indicators and the intelligence flow between situation centers and frontier AI labs.[^79]
BIBLIOGRAPHY
Reports and Academic Articles
- Albader, F., Synthetic Media as a Risk Factor for Genocide, n.p. (2025).
- Bailey, M., New report on AI information threats following crisis events, n.p. (2026).
- Gilbert, D., Schroeder, D. T., & Kunst, J. R., AI-Powered Disinformation Swarms Are Coming for Democracy, Wired (Jan. 22, 2026).
- McDonald, B., & Stockwell, S., AI Information Threats and Crisis Response: Practitioners' Handbook, CETaS (Apr. 2026).
- NewsGuard, Two Data Filters Appear Able to Protect LLMs from Russian 'Infection', n.p. (Dec. 2025).
- Oversight Board, Board Calls for New Rules on Deceptive AI During Conflicts, n.p. (Mar. 10, 2026).
- RSF (Reporters Without Borders), RSF analysis of 100 deepfakes shows mounting threat to journalists — especially women, n.p. (Apr. 2026).
- Seger, E. et al., Epistemic Security for Crisis Resilience, n.p. (2026).
- Shah, P. J., Exploring Current Trends in Journalism, n.p. (2024).
- Stockwell, S., AI-Enabled Influence Operations: Threat Analysis of the 2024 UK and European Elections, CETaS (Sep. 2024).
- Stockwell, S., & Baker, A., The Cost of Silence: Crisis Communication, n.p. (2025).
- Stockwell, S., Janjeva, A., & McDonald, B., Adding Fuel to the Fire: AI Information Threats and Crisis Events, CETaS (Feb. 2026).
- UC Berkeley, Liar's dividend in the Charlie Kirk shooting, n.p. (2025).
Legislation and Official Documents
- Alan Turing Institute, Written evidence (SMH0007), UK Parliament (2024).
- Codify Updates, Advanced AI Security Readiness Act (H.R. 3919), U.S. Congress (June 2025).
- European Commission, AI in emergency and crisis management, n.p. (2025).
- Olawunmi, K., Computational Propaganda, Disinformation, and Democracy, n.p. (2025).
- United Nations, Global Digital Integrity Charter, n.p. (2025).
- White House, National Policy Framework for Artificial Intelligence, n.p. (Mar. 2026).
Footnotes
[^1]: Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 3.
[^2]: Id. at 7.
[^3]: Shah, Exploring Current Trends in Journalism, 2024, n.p.
[^4]: UC Berkeley, Liar's dividend, 2025, n.p.; Oversight Board, Deceptive AI During Conflicts, 2026, n.p.
[^5]: RSF, Analysis of 100 deepfakes, 2026, n.p.
[^6]: Stockwell, AI-Enabled Influence Operations, 2024, at 5.
[^7]: Gilbert, Schroeder & Kunst, AI-Powered Disinformation Swarms, 2026, abstract.
[^8]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 5 (citing DiResta, 2018).
[^9]: Oversight Board, New Rules on Deceptive AI, 2026, n.p.
[^10]: Id.; Shane, Preparing for AI security incidents, 2025, n.p.
[^11]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 11.
[^12]: Oversight Board, New Rules on Deceptive AI, 2026, n.p.
[^13]: Id.; IT Rules (India), Amendments on Synthetically Generated Information, 2026, n.p.
[^14]: Olawunmi, Computational Propaganda, 2025, n.p. (citing Dameron, 2026).
[^15]: European Commission, AI in emergency and crisis management, 2025, n.p.; GCSA, Recommendations on AI performance in crisis, 2025, n.p.
[^16]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 3.
[^17]: Codify Updates, Advanced AI Security Readiness Act, 2025, n.p.
[^18]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 3–4.
[^19]: Id. at 12.
[^20]: Stockwell & Baker, Cost of Silence, 2025, n.p.
[^21]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 4.
[^22]: Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 32.
[^23]: Oversight Board, New Rules on Deceptive AI, 2026, n.p.
[^24]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 4.
[^25]: Oversight Board, Current mechanisms for labeling, 2026, n.p.
[^26]: Oversight Board, Ruling on Israel-Iran conflict video, 2026, n.p.
[^27]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 4; Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 40.
[^28]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 4, 11.
[^29]: Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 24.
[^30]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 4; Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 41.
[^31]: NewsGuard, Two Data Filters, 2025, n.p.
[^32]: Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 5.
[^33]: Id. at 23.
[^34]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 4–5.
[^35]: Id. at 5; Gilbert, Schroeder & Kunst, AI-Powered Disinformation Swarms, 2026, n.p.
[^36]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 7.
[^37]: Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 36.
[^38]: Gilbert, Schroeder & Kunst, AI-Powered Disinformation Swarms, 2026, n.p.
[^39]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 7–8.
[^40]: RSF, Analysis of 100 deepfakes, 2026, n.p.
[^41]: NewsGuard, Two Data Filters, 2025, n.p.
[^42]: Id.; McDonald & Stockwell, Practitioners' Handbook, 2026, at 10.
[^43]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 8.
[^44]: Id. at 9.
[^45]: Seger et al., Epistemic Security, 2026, n.p.
[^46]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 9.
[^47]: Id.; Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 31.
[^48]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 9.
[^49]: Oversight Board, New Rules on Deceptive AI, 2026, n.p.
[^50]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 10.
[^51]: NewsGuard, Two Data Filters, 2025, n.p.
[^52]: Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 23; RSF, Analysis of 100 deepfakes, 2026, n.p.
[^53]: Oversight Board, New Rules on Deceptive AI, 2026, n.p.
[^54]: Albader, Synthetic Media as a Risk Factor, 2025, n.p.
[^55]: Alan Turing Institute, Written evidence (SMH0007), 2024, n.p.
[^56]: Codify Updates, Advanced AI Security Readiness Act, 2025, n.p.
[^57]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 11.
[^58]: Id.
[^59]: Id. at 9.
[^60]: Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 31.
[^61]: Shah, Exploring Current Trends in Journalism, 2024, n.p.
[^62]: White House, National AI Framework, 2026, n.p.; Oversight Board, Open Letter to Tech Platforms, 2026, n.p.
[^63]: Olawunmi, Computational Propaganda, 2025, n.p.; United Nations, Global Digital Integrity Charter, 2025, n.p.
[^64]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 11; Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 44.
[^65]: Bailey, New report on AI information threats, 2026, n.p.
[^66]: Oversight Board, Growth of Deceptive AI Videos, 2026, n.p.; Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 5.
[^67]: Seger et al., Epistemic Security, 2026, n.p.
[^68]: Stockwell & Baker, Cost of Silence, 2025, n.p.
[^69]: Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 29–30; NewsGuard, Two Data Filters, 2025, n.p.
[^70]: Oversight Board, Ruling on Israel-Iran conflict video, 2026, n.p.
[^71]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 4; Oversight Board, Current mechanisms for labeling, 2026, n.p.
[^72]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 10; NewsGuard, Two Data Filters, 2025, n.p.
[^73]: Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 32; Alan Turing Institute, Written evidence (SMH0007), 2024, n.p.
[^74]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 4; Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 41.
[^75]: Oversight Board, New Rules on Deceptive AI, 2026, n.p.; Codify Updates, Advanced AI Security Readiness Act, 2025, n.p.
[^76]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 9; Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 31.
[^77]: Oversight Board, New Rules on Deceptive AI, 2026, n.p.; Oversight Board, Nuanced approaches to labeling, 2026, n.p.
[^78]: Albader, Synthetic Media as a Risk Factor, 2025, n.p.
[^79]: McDonald & Stockwell, Practitioners' Handbook, 2026, at 9; Stockwell, Janjeva & McDonald, Adding Fuel to the Fire, 2026, at 31.