Decision&Law | AI Legal Intelligence
regulatory-analysis · platform-regulation

Algorithms of Disorder: Social Media, Misinformation, and the Failure of the Online Safety Act in the 2024 Riots

April 16, 2026
18 min read
3,800 words
Online Safety Act · recommendation algorithms · misinformation · generative AI regulation · programmatic advertising

Disclaimer

This analysis is for educational purposes only and does not constitute legal advice. The information provided is general in nature and may not apply to your specific situation. Laws and regulations change frequently; verify current requirements with qualified legal counsel in your jurisdiction.

Last Updated: April 16, 2026

Abstract

This article analyzes the report of the United Kingdom House of Commons Science, Innovation and Technology Committee on the role of social media in the summer 2024 riots following the Southport attack. It examines how platform architecture—driven by attention‑based business models and recommendation algorithms—facilitated the viral spread of misinformation and hate speech that led to real‑world violence. Despite the recent enactment of the Online Safety Act (OSA), the parliamentary inquiry concludes that this legal framework is insufficient to address "legal but harmful" misinformation and is already outdated relative to technologies such as generative artificial intelligence. Finally, the article sets out the Committee's five fundamental principles for effective regulation, intended to ensure public safety, free expression, platform responsibility, user control, and technological transparency.

Keywords: misinformation, recommendation algorithms, social media, Online Safety Act, platform regulation, generative artificial intelligence, programmatic advertising.

📄 Full report available for download: Algorithms of Disorder — Full Analysis (PDF)


1. Introduction

1.1. Context of the Report: The Southport Murders and the Escalation of Viral Misinformation

On July 29, 2024, a deadly attack in the town of Southport triggered a wave of violent disorder across the United Kingdom. Within hours, false and unfounded claims began to circulate online, identifying the suspect as a Muslim asylum seeker named "Ali Al‑Shakati." Notwithstanding police denials, the initial lack of official information created a "vacuum" in which misinformation was able to grow (Science, Innovation and Technology Committee, 2025, ¶7). As a result, anti‑immigrant and anti‑Muslim narratives achieved viral reach, propelled by a digital infrastructure that allowed such messages to spread massively before any effective intervention could occur.

1.2. The "Online Environment" as a Determining Factor in Inciting Real‑World Violence

The United Kingdom Home Office cited the "online environment" as a significant factor in inciting physical violence during this period of unrest (Science, Innovation and Technology Committee, 2025, ¶1). The investigation highlights that social media and encrypted private messaging platforms were not merely communication channels but active tools for organizing violent protests and attacks against mosques and migrant communities. Recommendation algorithms played a critical role by amplifying calls to violence and promoting false terms in high‑visibility sections, such as X's trending topics or TikTok's search suggestions (Science, Innovation and Technology Committee, 2025, ¶8).

1.3. Objectives and Scope of This Article

This article aims to unpack the Committee's findings regarding how the business models and recommendation technologies of major tech companies contribute to social instability. It critically analyzes why the Online Safety Act (OSA), despite years of deliberation, failed to protect citizens from such a central and foreseeable harm as algorithmically accelerated misinformation. Finally, it presents the Committee's recommendations for moving toward a "safe by design" model that holds platforms accountable for the content they choose to amplify.


2. Methodology of the Parliamentary Inquiry

2.1. Evidence Sessions, Witnesses, and Review of Documentary Submissions

The investigation conducted by the Science, Innovation and Technology Committee was characterized by a multi‑stakeholder approach and rigorous collection of evidence following the Southport events. The Committee held four oral evidence sessions, one private roundtable, and received an expert briefing specifically focused on social media algorithms (Science, Innovation and Technology Committee, 2025, ¶5). The process included testimony from a wide range of strategic sectors:

  • Civil society and affected groups: organizations directly impacted by the riots, representatives of local government, and experts in online narratives and disinformation.
  • Technology sector: senior executives of major global platforms, including Google, Meta, TikTok, and X.
  • Technical experts: specialists in the digital advertising market, fact‑checking organizations, and online safety advocates.
  • Regulators and the executive branch: representatives of Ofcom, the Information Commissioner's Office (ICO), and the Department for Science, Innovation and Technology (DSIT).

In addition to the oral testimony, the Committee analyzed more than eighty pieces of written evidence submitted by academics, independent researchers, campaign groups, and private citizens (Science, Innovation and Technology Committee, 2025, ¶5). This documentary basis allowed the Committee to compare the official statements of technology companies with the reality observed by "black box" researchers who analyze the external outputs of algorithmic systems.

2.2. Focus on Technological Aspects, Algorithms, and Business Models

Unlike purely sociological analyses, this parliamentary inquiry placed a deliberate emphasis on "the technological aspects of online services and markets that can lead to the amplification of false, unfounded or harmful information" (Science, Innovation and Technology Committee, 2025, ¶3). The Committee did not limit itself to examining the content of messages but investigated the architecture that sustains them.

A key methodological milestone was the attempt to audit algorithmic opacity. The Committee requested that the technology companies provide high‑level representations of their recommendation algorithms; however, the companies declined to provide them, citing trade secrecy and the risk that malicious actors might circumvent their protections (Science, Innovation and Technology Committee, 2025, ¶23). This methodological obstacle—the lack of transparency due to intellectual property claims—is identified in the report as a critical barrier to accountability, forcing the Committee to base part of its findings on independent studies that analyze system outputs in the absence of access to internal operations.


3. Analysis of Findings I: The Misinformation and Harm Ecosystem

3.1. The Southport Case: A Timeline of Amplified Falsehood

The Committee's investigation identifies the events following July 29, 2024, as a case study of how viral misinformation can catalyze physical violence. Within two hours of the attack, by 13:49, posts claiming the suspect was a "Muslim immigrant" were already circulating on X. At 16:49, the fictitious name "Ali Al‑Shakati" was disseminated, shortly thereafter picked up by the website Channel3Now and amplified by accounts with millions of followers (Science, Innovation and Technology Committee, 2025, ¶7‑8).

This phenomenon was facilitated by an initial information vacuum: the police's legal inability to release details about the suspect (who was a minor at the time) allowed unfounded narratives to occupy the digital public square. The impact was massive: false claims about the attacker reached 155 million impressions on X in just ten days, and the false name had a potential reach of 1.7 billion people across various platforms (Science, Innovation and Technology Committee, 2025, ¶8).

3.2. The Role of Recommendation Algorithms in Spreading Extremist Narratives

The report concludes emphatically that harmful content was not merely "present" but was actively propelled by the companies' algorithmic tools. The Committee found evidence that:

  • On X, the attacker's false name appeared in the "Trending in the UK" section and in the "What's happening" sidebar.
  • On TikTok, the system suggested "Ali Al‑Shakati arrested in Southport" as a recommended search query in the "Others searched for" feature (Science, Innovation and Technology Committee, 2025, ¶8).

These systems are designed to maximize engagement and, by design, prioritize content that generates strong emotional reactions—such as outrage or hatred—regardless of its truthfulness. This technical architecture creates "echo chambers" that normalize extremist rhetoric by making it appear more widespread than it actually is (Science, Innovation and Technology Committee, 2025, ¶20).
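
To make the amplification mechanism concrete, the toy ranking function below scores posts purely on predicted engagement signals, with no weight given to accuracy. It is a minimal sketch for illustration only (the Post fields, weights, and example numbers are all invented, and the code is not drawn from any platform's actual system), but it shows how an outrage‑driven false post can outrank a sober correction under an engagement‑only objective.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float      # model estimates of user reactions (illustrative)
    predicted_shares: float
    predicted_comments: float
    verified_accurate: bool     # known to the platform, but unused below

def engagement_score(post: Post) -> float:
    """Toy objective: maximize predicted interactions only.

    Weights are invented for illustration; truthfulness carries zero weight,
    which is the structural problem the Committee identifies (¶20).
    """
    return (1.0 * post.predicted_likes
            + 3.0 * post.predicted_shares     # shares drive further reach
            + 2.0 * post.predicted_comments)  # arguments generate comments

feed = [
    Post("False rumour naming the suspect", 900, 700, 500, verified_accurate=False),
    Post("Police statement correcting the rumour", 400, 80, 60, verified_accurate=True),
]

# Rank purely by engagement: the false, outrage-driven post comes first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>7.0f}  {post.text}")
```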

3.3. Platform Responses: Crisis Protocols and Operational Inconsistencies

Although companies such as Meta, TikTok, Google, and X activated crisis protocols and removed thousands of posts, the regulator Ofcom characterized these responses as "insufficient, inconsistent, and uneven" (Science, Innovation and Technology Committee, 2025, ¶12).

  • Meta removed 24,000 posts for violence and incitement, but its Oversight Board expressed serious concerns about the company's ability to accurately moderate violent images.
  • TikTok admitted that, although it removed the false name from suggested searches, its response could have been faster, as its primary focus was on video content moderation (Science, Innovation and Technology Committee, 2025, ¶11).
  • X was criticized for the ineffectiveness of its 'Community Notes' system; during the riots, most harmful posts from high‑profile accounts displayed no contextual note (Science, Innovation and Technology Committee, 2025, ¶12).

3.4. The Attention Economy: Profiting from Polarizing Content

One of the Committee's most critical conclusions is that the advertising‑based business model incentivizes the spread of dangerous content. Because platforms depend on ad views, there is an intrinsic disincentive to moderate content that generates high volumes of interaction.

Evidence presented by the Center for Countering Digital Hate (CCDH) estimated that far‑right figures generated nearly 39 million ad impressions on X in the week following the attack, which could have generated daily advertising revenue for the platform of approximately £27,976 (Science, Innovation and Technology Committee, 2025, ¶13). The Committee maintains that as long as public disorder remains profitable for technology companies, safety measures will continue to be reactive and deficient (Science, Innovation and Technology Committee, 2025, ¶14).
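
The revenue figure above can be sanity‑checked with simple arithmetic. In the sketch below, the weekly impressions count comes from the evidence cited in the report, while the effective CPM (revenue per thousand impressions) is our own assumption chosen purely for illustration; under a CPM of roughly £5, the implied daily revenue lands close to the CCDH estimate.

```python
# Order-of-magnitude check on the CCDH estimate cited above.
# The impressions figure comes from the report; the CPM is an assumption
# chosen for illustration, not a number from the evidence.

weekly_impressions = 39_000_000   # ad impressions in the week after the attack
assumed_cpm_gbp = 5.0             # assumed revenue per 1,000 impressions (illustrative)

daily_impressions = weekly_impressions / 7
daily_revenue_gbp = daily_impressions / 1_000 * assumed_cpm_gbp

print(f"Daily impressions: {daily_impressions:,.0f}")
print(f"Implied daily ad revenue: £{daily_revenue_gbp:,.0f}")
# ≈ £27,900 per day, close to the ~£27,976 figure cited by the CCDH.
```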


4. Analysis of Findings II: The Online Safety Act Under Scrutiny

4.1. The Insufficiency of the OSA Regarding "Legal but Harmful" Content

A central conclusion of the Committee is that the Online Safety Act (OSA), despite its lengthy legislative process, was not designed to address misinformation that is not strictly illegal or directed at children. The report highlights that much of the misleading content that drove the 2024 unrest falls into the category of "legal but harmful" content—a category that was deliberately removed from the Act's scope by the previous government. For example, the Committee notes that immigration status is not a protected characteristic under the OSA's illegal content obligations, leaving many of the anti‑immigrant narratives circulating after Southport outside the regulatory framework. Consequently, the Committee concludes that "the Act fails to keep UK citizens safe from a core and pervasive online harm" (Science, Innovation and Technology Committee, 2025, ¶18).

4.2. The Gap Between Existing Legislation and Current Technological Reality

The report argues that the OSA is already "out of date" relative to the speed of technological development. The legislation focuses on regulating at the level of specific content or technology categories (such as social media or search engines), rather than being based on universal principles or safety objectives. This structure creates significant gaps:

  • Legal indeterminacy of new tools: the legal status of generative AI chatbots is not entirely clear under the OSA's current categories (Science, Innovation and Technology Committee, 2025, ¶70).
  • The failure of the "False Communications" offense: although Section 179 of the OSA introduced this offense, the Committee describes it as "vaguely worded" and difficult for platforms to implement (Science, Innovation and Technology Committee, 2025, ¶42).
  • Exclusion of small platforms: the Act's focus on user base size leaves the public unprotected against "small but risky" services that act as incubators of extremist narratives before those narratives jump to major platforms (Science, Innovation and Technology Committee, 2025, ¶52‑53).

4.3. The Need for a "Safe by Design" Approach Versus Reactive Moderation

The Committee criticizes the OSA and Ofcom's codes of practice for focusing excessively on reactive moderation measures (removing content once posted) rather than imposing proactive design measures. While content‑level moderation places responsibility on the individual user, system‑level regulation would "shift responsibility to the platforms that approve, host, and algorithmically recommend and spread harmful content" (Science, Innovation and Technology Committee, 2025, ¶51).

Among the recommendations for achieving a "safe by design" environment, the report proposes:

  • Mandatory de‑amplification: requiring platforms to embed tools that identify and algorithmically deprioritize fact‑checked misinformation, reducing its reach without censoring lawful speech (Science, Innovation and Technology Committee, 2025, ¶31); a minimal illustrative sketch of this idea follows the list.
  • Design audits: subjecting recommendation algorithms to independent audits to assess whether their design adjustments are amplifying systemic risks.
  • Crisis protocols: establishing clear mechanisms requiring platforms, in emergency situations such as that of 2024, to actively slow the spread of false information with the potential to cause serious harm (Science, Innovation and Technology Committee, 2025, ¶19).
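
As a minimal sketch of the de‑amplification idea in the first bullet, the snippet below applies a demotion multiplier to items flagged by independent fact‑checkers, so that flagged content remains available but is ranked far lower. The RankedItem structure, the multiplier values, and the flag labels are assumptions made for illustration; the report prescribes the outcome, not this particular mechanism.

```python
from typing import NamedTuple, Optional

class RankedItem(NamedTuple):
    item_id: str
    base_score: float                     # output of the ordinary engagement ranker
    fact_check_rating: Optional[str]      # e.g. "false", "misleading", or None if unreviewed

# Assumed demotion multipliers for fact-checked content (illustrative values).
DEMOTION = {"false": 0.1, "misleading": 0.4}

def adjusted_score(item: RankedItem) -> float:
    """Demote rather than delete: the item stays available but is
    algorithmically deprioritized, in the spirit of the Committee's
    de-amplification recommendation (¶31)."""
    multiplier = DEMOTION.get(item.fact_check_rating or "", 1.0)
    return item.base_score * multiplier

candidates = [
    RankedItem("rumour-clip", base_score=98.0, fact_check_rating="false"),
    RankedItem("local-news-report", base_score=55.0, fact_check_rating=None),
]

for item in sorted(candidates, key=adjusted_score, reverse=True):
    print(item.item_id, round(adjusted_score(item), 1))
# The fact-checked rumour drops below the unreviewed news report (9.8 vs 55.0).
```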

5. Proposal for a New Regulatory Framework: The Committee's Five Principles

Given the deficiencies identified in the Online Safety Act, the Parliamentary Committee proposes that regulation of the digital ecosystem should not rely solely on specific technology rules but on five guiding principles to ensure the system's resilience against future crises (Science, Innovation and Technology Committee, 2025, ¶6).

5.1. Public Safety and Algorithmic Accountability

The first principle states that algorithmically accelerated misinformation is a danger that requires coordinated action among companies, government, and law enforcement. The Committee maintains that platforms must be proactive: a key recommendation is that systems algorithmically demote misinformation that has been verified as such by fact‑checkers. Likewise, the principle calls on platforms to assume systemic responsibility for the impact of amplifying content that, while not strictly illegal, has the potential to cause massive public harm (Science, Innovation and Technology Committee, 2025, ¶6, principle 1).

5.2. Balancing Free Expression and Safety

The second principle emphasizes that neither government nor private companies should act as "arbiters of truth." The report stresses that any measures to mitigate misinformation must be consistent with the fundamental right to free expression (incorporated into UK law through Article 10 of the European Convention on Human Rights). However, the Committee notes that this is a qualified right, and restrictions are justified and proportionate when their purpose is to protect national security, public safety, health, or to prevent disorder and crime (Science, Innovation and Technology Committee, 2025, ¶6, principle 2).

5.3. User Control and Transparency of Recommendation Systems

The fourth and fifth principles focus on citizen empowerment and technical accountability.

  • Control (Principle 4): users should have a "right to reset" their personal data. This would allow citizens to delete the data history that feeds recommendation algorithms, thereby breaking potential "echo chambers" or radicalization loops (Science, Innovation and Technology Committee, 2025, ¶32).
  • Transparency (Principle 5): the Committee is emphatic that platform technology must be transparent, accessible, and explainable to public authorities and independent researchers. It criticizes the current treatment of algorithms as protected "intellectual property", which acts as a barrier to assessing the true extent of the social harm they cause (Science, Innovation and Technology Committee, 2025, ¶27).

6. Generative Artificial Intelligence: The Next Frontier of Misinformation

6.1. Findings on "Hallucinations," Synthetic Content, and "Deepfakes"

The large‑scale integration of generative artificial intelligence (AI) and large language models (LLMs) by companies such as Google, Meta, and X has introduced new critical risks to information integrity. The Committee warns of AI "hallucinations", in which the model produces false information in a convincing manner; estimates of the frequency of such errors across different tasks range from 0.7% to 79% (Science, Innovation and Technology Committee, 2025, ¶64). A notable example cited in the report is Google's 'AI Overview' feature, which in its experimental phase produced erroneous responses reflecting racist stereotypes (Science, Innovation and Technology Committee, 2025, ¶64).

Beyond unintentional errors, generative AI has drastically reduced the cost and difficulty of creating large‑scale deceptive content. The Committee received evidence that a "perpetually improving disinformation machine" could be built for as little as $400, capable of generating thousands of iterations of false messages tailored to specific audiences (Science, Innovation and Technology Committee, 2025, ¶66). During the 2024 riots, the Committee found that the site Channel3Now likely used AI to harvest data from social media and generate its deceptive posts. Likewise, AI‑generated hate images were detected circulating on Meta's platforms, and its moderation systems did not initially remove them (Science, Innovation and Technology Committee, 2025, ¶68).

6.2. Regulatory Gaps Regarding AI Under the OSA

The report concludes that the Online Safety Act significantly fails to specifically address the risks of generative AI. Because the Act regulates rigid technology categories, the legal status of chatbots and other AI services is unclear, creating a regulatory gray area (Science, Innovation and Technology Committee, 2025, ¶70). The Committee highlights the following deficiencies:

  • No mandatory labeling: the OSA contains no measures to visually identify AI‑generated content or "deepfakes."
  • Lack of training transparency: there are no requirements for companies to disclose the datasets used to train their models or their internal safety mechanisms.
  • Deployment of experimental features: the Act does not protect users against the release of "experimental" AI tools that feed false information to mass audiences (Science, Innovation and Technology Committee, 2025, ¶74).

To close this gap, the Committee recommends that future legislation require AI platforms to conduct risk assessments of their outputs and to implement non‑removable digital watermarks and metadata, ensuring that all synthetic content is clearly identifiable to the public (Science, Innovation and Technology Committee, 2025, ¶78).
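
To illustrate what machine‑readable labeling of synthetic content could look like, the sketch below attaches a minimal provenance manifest (a content hash plus a "synthetic" flag) to a generated file using only the Python standard library. The field names and workflow are assumptions for illustration; real provenance and watermarking schemes are cryptographically signed and far more robust, and this is not the mechanism mandated by any existing standard or by the report.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_manifest(content: bytes, generator: str) -> dict:
    """Minimal provenance record for AI-generated content.

    Real schemes (signed manifests or pixel-level watermarks) are far more
    robust; this illustrates only the idea that synthetic content ships
    with machine-readable origin metadata.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "synthetic": True,
        "generator": generator,                       # assumed field names
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

image_bytes = b"...raw bytes of a generated image..."
manifest = provenance_manifest(image_bytes, generator="example-image-model")

# A platform receiving the content can verify the hash and display a label.
received_hash = hashlib.sha256(image_bytes).hexdigest()
if manifest["synthetic"] and manifest["content_sha256"] == received_hash:
    print("Label: AI-generated content")
print(json.dumps(manifest, indent=2))
```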


7. The Digital Advertising Market: Financing Harmful Content

7.1. Opacity in Programmatic Advertising and Google's Dominant Role

The Committee's report underscores that any serious attempt to tackle misinformation must address the digital advertising market, valued at $790 billion worldwide in 2024 (Science, Innovation and Technology Committee, 2025, ¶79). The core of the problem lies in "programmatic advertising," an automated real‑time bidding system that the Committee describes as "excessively complex and opaque." On average, a single advertising campaign may involve 9,000 websites, making it impossible for brands to track precisely where their money goes. As the coalition UK Stop Ad Funded Crime (UKSAFC) states, in this environment "people literally do not know who is being paid" (Science, Innovation and Technology Committee, 2025, ¶86).
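
A toy model of the programmatic supply chain helps show why spend is so hard to trace. In the sketch below, the intermediaries, their fee rates, and the budget are all invented for illustration; the point is simply that an advertiser's pound passes through several opaque hops before reaching a publisher the brand may never have chosen or even be able to identify.

```python
# Toy programmatic supply chain: each hop takes a fee before the remainder
# reaches whichever publisher won the real-time auction. All names and fee
# rates are invented for illustration.

supply_chain = [
    ("demand-side platform", 0.15),
    ("ad exchange", 0.10),
    ("supply-side platform", 0.12),
    ("unknown reseller", 0.08),
]

def trace_spend(budget_gbp: float, chain: list) -> float:
    remaining = budget_gbp
    for intermediary, fee_rate in chain:
        fee = remaining * fee_rate
        remaining -= fee
        print(f"{intermediary:<22} takes £{fee:,.2f}, £{remaining:,.2f} continues")
    return remaining

publisher_revenue = trace_spend(1_000.00, supply_chain)
print(f"Winning publisher (identity often unknown to the brand): £{publisher_revenue:,.2f}")
```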

This ecosystem is overwhelmingly dominated by Google, which controls 90% of the sell‑side market share, between 40% and 80% of the buy‑side, and approximately 50% of the exchange connecting the two (Science, Innovation and Technology Committee, 2025, ¶81). The Committee notes with concern that in April 2025, a U.S. district court ruled that Google had monopolized key digital advertising technologies, harming both publishers and consumers of information (Science, Innovation and Technology Committee, 2025, ¶81). This dominant position, combined with the fact that 78% of Alphabet's revenue comes from advertising, creates a structural conflict of interest in which engagement volume takes precedence over information safety.

7.2. Monetization of Hate: The Channel3Now Case and the Failure of Self‑Regulation

The parliamentary inquiry identified a direct link between advertising profit and the 2024 violence. The website Channel3Now, which published the false name of the Southport suspect, is cited as an example of how misinformation is incentivized by the market. According to the organization CheckMyAds, it is "likely" that Google facilitated the monetization of this false information (Science, Innovation and Technology Committee, 2025, ¶94). Although Google claimed to have demonetized the site two days after the events, it did not provide the Committee with data on how much revenue either Google or Channel3Now earned from that deceptive content. The Committee describes it as "unacceptable" that Google appeared unaware of this chain of events and failed to offer assurances that it would prevent a recurrence (Science, Innovation and Technology Committee, 2025, ¶96).

The report concludes that industry self‑regulation has failed. Current tools such as keyword blocklists are described as "highly defective," often harming legitimate journalism or diverse communities while failing to stop the funding of malicious actors. Moreover, industry‑led initiatives such as the Global Alliance for Responsible Media (GARM) have dissolved following legal challenges, leaving a "regulatory gap" (Science, Innovation and Technology Committee, 2025, ¶97). Accordingly, the Committee strongly recommends the creation of a new independent body, not funded by industry, to oversee digital advertising processes and to establish "Know Your Customer" (KYC) standards similar to those in financial markets (Science, Innovation and Technology Committee, 2025, ¶98‑100).


8. Discussion and Future Perspectives

8.1. The Role of Platforms as Content Curators, Not Mere Intermediaries

One of the Committee's most profound reflections concerns the legal status of social media companies. Historically, these companies have argued that they are "platforms" and not "publishers," thereby abdicating responsibility for the content they host. However, the report maintains that this model is "deeply unsatisfactory" today. Because they employ sophisticated recommendation algorithms that actively amplify and push specific content to users, the Committee concludes that these services act, in fact, as content curators (Science, Innovation and Technology Committee, 2025, ¶28). The report urges the government to state its position on whether these companies should begin to be treated legally as publishers, recognizing that although this is a complex area of law, the current situation of non‑accountability for algorithmic amplification is unsustainable.

8.2. Implications for Future Regulation: The Systemic Risks Model

The parliamentary analysis suggests a paradigm shift toward a "systemic risks" approach, similar to that adopted by the European Union in its Digital Services Act (DSA). While the OSA focuses on mitigating specific content (illegal or harmful to children), the systemic risks model requires platforms to assess how the design of their algorithms influences large‑scale social dangers such as misinformation (Science, Innovation and Technology Committee, 2025, ¶43).

The Committee argues that regulating solely at the content level shifts responsibility to the individual user, whereas regulation at the systemic design level would place responsibility on the platforms that approve, host, and algorithmically recommend harmful content (Science, Innovation and Technology Committee, 2025, ¶51). The investigation warns that the OSA is already an "insufficient first step" and that the online safety regime must be based on universal principles that remain sound in the face of technological development that evolves faster than governments can legislate (Science, Innovation and Technology Committee, 2025, ¶6). The future of regulation, therefore, should not depend on "prescriptive methods" that become obsolete, but on specifying safety outcomes that platforms must proactively guarantee.


9. Conclusions and Key Recommendations

9.1. Synthesis of the Committee's Five Fundamental Conclusions

After an exhaustive analysis of the digital environment and the 2024 events, the Committee reaches five conclusions that define the current state of online safety in the United Kingdom (Science, Innovation and Technology Committee, 2025, ¶14‑45):

  1. Accountability of business models: advertising‑based models incentivize the spread of harmful and deceptive content because they prioritize emotional engagement to maximize time spent on the platform.
  2. Legislative insufficiency of the OSA: the Online Safety Act, in its current form, fails to protect citizens against algorithmically accelerated misinformation by not including measures against "legal but harmful" content for adults.
  3. Platforms as curators: technology companies can no longer be considered mere intermediaries; the use of sophisticated algorithms to push content makes them de facto content curators.
  4. Obsolescence relative to AI: the regulatory framework is outdated relative to generative AI, which enables the massive, low‑cost creation of synthetic misinformation, eroding public information integrity.
  5. Failure of the advertising market: the digital advertising ecosystem is opaque and lacks effective regulation, allowing the inadvertent monetization of hate speech and misinformation.

9.2. Roadmap for the Government and the Regulator (Ofcom)

To correct these deficiencies, the Committee sets forth a series of strategic recommendations aimed at transforming digital safety in the United Kingdom:

  1. Proportionate algorithmic intervention: require platforms to embed tools that algorithmically demote content that has been verified as false by independent organizations, without resorting to censorship (Science, Innovation and Technology Committee, 2025, ¶31).
  2. Radical transparency and auditing: the government should commission a large‑scale research project allowing independent researchers access to the "black boxes" of recommendation algorithms to assess their impact on public safety (Science, Innovation and Technology Committee, 2025, ¶29‑30).
  3. User empowerment: mandate a "right to reset," allowing users to delete the data that feeds their recommendation profiles, thereby breaking radicalization loops (Science, Innovation and Technology Committee, 2025, ¶32).
  4. New regulation for AI and advertising:
    • Enact specific legislation on generative AI, including mandatory labeling of all synthetic content with non‑removable digital watermarks (Science, Innovation and Technology Committee, 2025, ¶78).
    • Create an independent advertising oversight body, not funded by industry, that imposes "Know Your Customer" (KYC) controls in the programmatic supply chain (Science, Innovation and Technology Committee, 2025, ¶98‑100).
  5. Dissuasive sanctions regime: Ofcom should be empowered to impose significant fines—up to 10% of the company's worldwide revenue or £18 million, whichever is higher—on companies that fail to comply with risk assessments regarding the spread of systemic harms (Science, Innovation and Technology Committee, 2025, ¶30); a brief numerical illustration of the "whichever is higher" rule follows this list.
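
As a brief numerical illustration of the "whichever is higher" rule in point 5, the calculation below compares the two thresholds for two hypothetical revenue figures; the company revenues are made‑up examples, not data from the report.

```python
def max_osa_fine_gbp(worldwide_revenue_gbp: float) -> float:
    """Maximum penalty: 10% of worldwide revenue or £18 million, whichever is higher."""
    return max(0.10 * worldwide_revenue_gbp, 18_000_000)

# Hypothetical companies for illustration only.
print(f"£{max_osa_fine_gbp(50_000_000):,.0f}")       # small firm  -> £18,000,000 floor applies
print(f"£{max_osa_fine_gbp(200_000_000_000):,.0f}")  # large firm  -> £20,000,000,000 (10% of revenue)
```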

10. Bibliography

This bibliography has been compiled exclusively from the sources, testimony, and documents cited in the report Social media, misinformation and harmful algorithms of the United Kingdom House of Commons Science, Innovation and Technology Committee.

United Kingdom Legislation and Parliamentary Documents

  • Children and Young Persons Act 1933.
  • Contempt of Court Act 1981.
  • Culture, Media and Sport Committee. (2024). Trusted voices. Sixth Report of Session 2023–24, HC 175.
  • Data (Use and Access) Act 2025.
  • Home Affairs Committee. (2025). Police response to the 2024 summer disorder. Second Report of Session 2024–25, HC 381.
  • House of Commons Science, Innovation and Technology Committee. (2025). Social media, misinformation and harmful algorithms. Second Report of Session 2024–25, HC 441.
  • Human Rights Act 1998.
  • Intelligence and Security Committee of Parliament. (2020). Russia. HC 632.
  • National Security Act 2023.
  • Online Safety Act 2023.
  • Public Order Act 1986.

Regulatory and Governmental Bodies

  • Department for Science, Innovation and Technology (DSIT). (2023). UK children and adults to be safer online as world‑leading bill becomes law.
  • Government Communication Service. RESIST 2 Counter Disinformation Toolkit.
  • Ofcom. (2024). Ofcom's three‑year media literacy strategy.
  • Ofcom. (2024). Statement: Protecting people from illegal harms online.
  • Ofcom. (2025). Statement: Protecting children from harms online.

Witnesses (Oral Evidence) — Selected

  • Ahmed, I. (Center for Countering Digital Hate). Session of January 21, 2025.
  • Baroness Jones of Whitchurch (Minister for the Future Digital Economy and Online Safety). Session of April 29, 2025.
  • Bunting, M. (Ofcom). Session of April 29, 2025.
  • Fernandez, W. (X). Session of February 25, 2025.
  • Jain, L. (Logically). Session of March 18, 2025.
  • Law, A. (TikTok). Session of February 25, 2025.
  • Middleton, K. (University of Portsmouth). Session of March 18, 2025.
  • Mohammed, Z. (Muslim Council of Britain). Session of January 21, 2025.
  • Muthiah, R. (Joint Council for the Welfare of Immigrants). Session of January 21, 2025.
  • Smith, P. (Incorporated Society of British Advertisers, ISBA). Session of March 18, 2025.
  • Spring, M. (BBC). Session of January 21, 2025.
  • Storey, A. (Google). Session of February 25, 2025.
  • Yiu, C. (Meta). Session of February 25, 2025.

Written Evidence (Referenced by SMH Number)

  • 5Rights Foundation (SMH0024).
  • Antisemitism Policy Trust (SMH0005).
  • Big Brother Watch (SMH0043).
  • Center for Countering Digital Hate (SMH0009).
  • Clean up the Internet (SMH0023).
  • Full Fact (SMH0070).
  • Global Witness (SMH0048).
  • Institute for Strategic Dialogue (SMH0062).
  • Logically (SMH0049).
  • Molly Rose Foundation (SMH0016).
  • Online Safety Act Network (SMH0031).
  • UK Stop Ad Funded Crime (UKSAFC) (SMH0004).

International Reports and Other Documents

  • CheckMyAds. (2025). Digital Advertising and Its Role in the 2024 Southport Riots.
  • European Union. (2022). Digital Services Act (DSA).
  • Meta Oversight Board. (2025). Posts supporting UK riots.
  • United Nations. (2024). United Nations Global Principles for Information Integrity.
