
Don’t Shoot the Messenger: Section 230 and Platform Accountability

Luis Carvajal Picott

Edited by Keerthi Chalamalasetty, Josiah Jones, Judge Baskin, and Sahith Mocharla


Meta is (once again) back in the ring. A lawsuit filed last Friday in the U.S. District Court in San Francisco alleges that Meta misled users about what “end-to-end encryption” means: a protection framework in which only the sender and receiver can read messages. Plaintiffs from Australia, Brazil, India, Mexico, and South Africa allege that Meta not only stores user chats and messages but can readily access them [1]. Meta has been exchanging punches with groups like this for the past decade, dating back to when end-to-end encryption was first introduced to Messenger in 2016 and later became the default for all users in 2023 [2]. Meta is not alone in this fight; lawsuits involving Google, Apple, Zoom, and others reflect a rapidly changing internet and an enduring debate over digital communication: whether tech companies should bear ultimate accountability and liability for the content shared on their platforms. Section 230 of the Communications Decency Act (CDA) is the law that governs the norms of the internet and protects platforms that host user-created content [3]. Under Section 230, they cannot be held liable for what their users post, such as messages, stories, or highlights [4]. However, with the emergence of encryption and algorithmic recommendation systems, this shield, designed to protect and empower companies, has instead begun to harm users. Given these developments since Section 230’s inception, its evolution becomes increasingly vital to understand as courts, lawmakers, and international actors begin to reevaluate its role in shaping the future of digital safety and accountability [5]. 


I. CONTEXT 

The controversy surrounding Section 230 emerged from a legal paradox of the early 1990s, beginning with Cubby v. CompuServe (1991) [6]. CompuServe ran online forums but did not review or edit posts. When allegedly defamatory statements appeared in one of those forums, Cubby sued. The district court held that CompuServe was a distributor, like a library, rather than a publisher, because it did not moderate its user forums. This key distinction became the legal precedent for what constitutes a distributor: under Cubby, a platform was not liable for defamatory content it hosted when it had no knowledge of the defamatory statements [7]. Cubby did not stand alone. Prodigy, an online content-hosting platform, featured a forum called Money Talk where anonymous users could post financial advice [8]. One user posted defamatory and accusatory comments aimed at an investment firm’s president, Danny Porush. In Stratton Oakmont v. Prodigy (1995), the New York Supreme Court held that Prodigy’s efforts to moderate content made it a publisher, and it was therefore held liable for users’ defamatory statements [9]. While Cubby v. CompuServe set the standard for when a platform would escape liability, Stratton Oakmont v. Prodigy gave courts a tangible basis for imposing it. The decision in Stratton, only a few years after Cubby, created a chilling effect in which platforms that sought to keep their spaces civil risked greater legal exposure than those that did not [10]. This gave platforms an incentive to avoid accountability for the content they hosted, and many simply stopped moderating [11]. These shifts in corporate incentives opened the floodgates to a deluge of unrestricted media in which platforms willingly avoided moderation, creating a slippery slope where illegal and heinous activities could take place anonymously and without oversight [12]. In an effort to resolve the courts’ conflicting approaches, U.S. Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR) drafted Section 230 in 1996 [13]. The central provision of the law is simple: Section 230(c)(1) declares that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” [14]. Much as the Constitution and the Bill of Rights are considered foundational documents of the United States, Section 230 is often considered the foundational document that spurred the unimaginable growth of the internet. Rooted in protection for good-faith moderation, the statute galvanized companies to continuously develop their platforms without the fear of relentless lawsuits [15]. Yet this legal framework was intended for small bulletin boards and chatrooms, not the trillion-dollar social media empires that rule the world today. 


II. SHIFTING PRECEDENTS AND TENSION

The Fourth Circuit’s decision in Zeran v. America Online, Inc. (1997) entrenched Section 230’s broad immunity. In this case, Kenneth Zeran sued AOL after anonymous defamatory posts falsely tied him to offensive jokes about the Oklahoma City bombing [16]. The court ruled that holding AOL liable would impose tort-based publication duties, meaning liability for civil wrongs that cause harm to another, inconsistent with the Communications Decency Act’s text and spirit [17]. This case laid the foundation for the future of digital immunity and has been cited thousands of times in later cases. Similarly, in the Ninth Circuit’s Carafano v. Metrosplash.com (2003), actress Christianne Carafano sued after someone created a fake dating profile of her [18]. The court held that Metrosplash’s questionnaire system did not make it an “information content provider,” because the user, not the site, supplied the profile’s content [19]. The decision expanded the reach of immunity by clarifying the direction set in Section 230. This not only broadened companies’ protections, but gave them greater certainty that they would not be held liable and need not fear litigation. 

Further expanding the influence and importance of proper guardrails for these growing digital communities and forums was the advent of the social media age. As the 2000s and 2010s progressed, the online communities we know today began to take shape. With that technological maturity came new legal issues for the courts to resolve. In Fair Housing Council v. Roommates.com (2008), Section 230 immunity was limited where the platform “materially contributed” to unlawful content [20]. In that case, Roommates.com required users to answer prompts regarding race and sexual orientation. The Ninth Circuit found the company partially responsible for violating the Fair Housing Act because the site’s design actively elicited discriminatory preferences that the statute expressly forbids landlords from making or publishing [21]. Fair Housing Council was one of the first major cases to cut against Section 230; instead of widening protections, the decision set clearer standards for where companies must draw the line between hosting content and helping to create it. 

Although this decision was limited to the Ninth Circuit, it served as a catalyst for later cases such as Barnes v. Yahoo! (2009), where another exception was carved out of Section 230 [22]. In this case, Barnes sued Yahoo! not for hosting content, but for failing to keep a promise. An ex-boyfriend had posted explicit photographs of Barnes on Yahoo! without her consent, and she asked Yahoo! to take them down. When Yahoo! failed to remove the content it had promised to delete, the court allowed a claim based on promissory estoppel [23]. This is a legal doctrine that can enforce a promise even without formal consideration, preventing injustice when one party reasonably relies on that promise. The decision in Barnes marked a watershed moment for the interpretation and use of Section 230. While platforms were immune from being treated as publishers, they could still be liable for breaching specific commitments they had made [24]. This newly established precedent held companies to a higher standard than before: when a reasonable request was made, the company had to follow through and could no longer rely on its sheer size and non-publisher status to escape responsibility. Today, this is reflected in the ‘report’ features built into most applications. While these systems are increasingly automated, they still serve the purpose of removing content that users flag for deletion. While the aforementioned cases primarily dealt with the online spread of misinformation and defamatory comments, other cases during the same period tested these platforms’ spillover effects. In Doe v. MySpace (2008), plaintiffs argued that MySpace was negligent in failing to protect minors after a sexual assault was facilitated through the platform. The Fifth Circuit dismissed the case, ruling that imposing such duties would treat MySpace as a publisher, which is exactly what Section 230 protects against [25]. While cases such as Barnes v. Yahoo! successfully pushed for reform, Doe v. MySpace once again limited the scope of where and how that reform could take place. The line separating publisher from platform has grown increasingly thin, and firms struggle to decipher which side of it they are on. 

With the dominance of social media, the boundaries of neutrality were tested. Notably, in Jones v. Dirty World Entertainment (2014), the Sixth Circuit upheld immunity for a gossip site that selectively published salacious posts, reasoning that encouragement did not amount to creation [26]. This precedent was reinforced in Force v. Facebook, Inc. (2019), when victims of Hamas attacks claimed that Facebook’s recommendation algorithms promoted terrorist content [27]. The court instead emphasized the “neutral tools” doctrine, reasoning that the content’s popularity, not deliberate selection by Facebook, drove its visibility [28]. These cases show how consistently courts extended immunity even as platforms transformed between 2014 and 2019. In a similar case, Herrick v. Grindr LLC (2019), the court rejected product liability theories after an ex-boyfriend used Grindr to impersonate the plaintiff [29]. The Second Circuit reasoned that treating software as a defective product would undermine Section 230’s intent and flood the courts with litigation. Moreover, Grindr did not uniquely facilitate the harassment; it merely provided a platform for all users alike. This case expanded precedent beyond protecting platforms from being labeled publishers: now each company’s proprietary software was protected as well, in theory extending to algorithms and the very content that people consume. Recently, however, two Ninth Circuit decisions have caused a stir around Section 230. Lemmon v. Snap (2021) and Doe v. Reddit (2022) both signaled that change was possible. In Lemmon, the court allowed a claim that Snapchat’s “speed filter” encouraged reckless driving to proceed, noting that the alleged harm arose from app design and not user speech [30]. In Doe v. Reddit, plaintiffs accused the platform of knowingly hosting child sexual abuse material (CSAM), and the court held that Section 230 did not bar claims alleging intentional facilitation [31]. These cases were the first fractures in Section 230’s foundation. Instead of applying the statute broadly, the Ninth Circuit found specific openings to erode the previously near-total immunity that Section 230 provided platforms [32]. 

Moreover, with the advent of globalized media, these threats are no longer solely the concern of the massive American companies that functionally control the internet. The Supreme Court case of Gonzalez v. Google (2023) illustrates this best, as families of ISIS victims argued that YouTube’s algorithms recommended extremist content [33]. The Court ultimately declined to reinterpret Section 230, but not without backlash. Justice Clarence Thomas has specifically suggested that “neutral tools” no longer capture the complexity of modern algorithms and has advocated revisiting Section 230 [34]. Those arguments trace back to Enigma Software v. Malwarebytes (2020), which demonstrated that fractures were beginning to form even in historically pro-immunity circuits. In that case, the Ninth Circuit held that Malwarebytes could not claim Section 230’s protections for “anticompetitive conduct” [35]. Specifically, Malwarebytes had labeled a competitor’s software as malicious, a relatively common tactic by which bigger firms knock down their competition. Although the case did not directly change the precedents built on Section 230, it marked a rare boundary around what the statute could and could not cover, and highlighted how it could be abused for corporate advantage [36].

Through these decisions, courts have transformed Section 230 from a narrow liability defense into a quasi-constitutional principle of governance. As a whole, algorithmic amplification blurs the line between publisher and platform, largely because an algorithm does not exist passively; it is actively managed by the platform. Platforms predict engagement, rewarding outrage and eye-grabbing headlines in ways that shape public discourse. This new role, no longer merely hosting content but determining what content gets viewed, has continued to garner attention from lawmakers. These algorithms have been treated as “neutral” engagement and recommendation engines, but critics argue that they are active curators. A 2021 Brookings Institution study found that algorithmic ranking, not user intent, determines exposure to extremism [37]. Algorithmic liability also complicates questions of risk and causation. The issue implicates public opinion, popular sovereignty, and individual agency, particularly as they relate to learned behaviors. A key concern is whether a platform’s proprietary design can give rise to liability when it fosters harmful content, and what legal or normative standards determine when such conduct is deemed harmful. Each party has its own interpretation of these questions, which makes policymaking a difficult endeavor. Does user agency break the causal chain? 


III. REFORM AND GLOBAL STANDARDS 

Proponents of reform argue that Section 230’s sweeping immunity is an anachronism. The law was created in an age when individuals knew each other personally by their usernames; today, entire national populations can be found within the follower count of a single influencer. To address this, Danielle Citron, a University of Virginia law professor and leading scholar on cyber civil rights, has proposed a “reasonable care” standard that would preserve immunity for good-faith moderation but withdraw it when companies ignore foreseeable risks [38]. Opponents counter that reform risks too much at once: it may erode free speech, stifle innovation, and perhaps even cause investors to pull out or companies to relocate abroad. Many scholars contend that scaling back immunity would not only lead to excessive censorship, but would also deter small platforms from entering the market, stifling competition and growth [39]. Once companies fear liability, they will remove lawful but controversial speech, silencing vulnerable voices [40]. Critics also note that legislative overreach could backfire, concentrating power in the large corporations that alone can afford compliance at scale [41]. Both sides agree, however, that the status quo is untenable. The law must evolve to reflect the architecture of modern communication, but what will be sacrificed in that process is still very much up for debate. 

The United States is home to some of the largest media corporations in the world, and Section 230’s sweeping protections have given American platforms a unique, near-monopolistic reach over media and outreach. Other democracies, however, have adopted a far more nuanced approach. The European Union’s Digital Services Act (DSA) became fully applicable in 2024 [42]. With the DSA, Europe has essentially rejected blanket immunity, opting instead for a proportional standard of responsibility [43]. Large platforms must assess systemic risks by virtue of having the resources to do so. By disclosing how their algorithms work and swiftly removing illegal content, they are held accountable while still enjoying some protection from unwarranted or frivolous lawsuits. Other countries, such as the United Kingdom, have enacted their own frameworks, like the Online Safety Act of 2023. That act takes a “duty of care” approach, requiring companies to protect minors and mitigate harm while preserving privacy [44]. The Online Safety Act’s efficacy is often debated, but any attempt at reform is better than the status quo. Crucially, unlike Section 230, it explicitly addresses encryption, mandating a level of transparency without requiring backdoors [45]. Similarly, in 2021, Australia became one of the first countries to institute an online safety act. The Australian model empowers an “eSafety Commissioner” to order the removal of abusive or non-consensual content across platforms, even those hosted abroad [46]. This central and sweeping authority has been relatively successful in demonstrating how intermediary liability can coexist with strong enforcement, although some critics warn of a risk of overreach [47]. These frameworks reflect a global trend: transparency and accountability are becoming more important to consumers and governments alike, and the goal is usually to impose safety measures without sacrificing innovation. That balance is difficult, and progress around the globe has revealed a divergence in regulatory strategy between the U.S. and its allies. While some contend that the U.S. still drives the bulk of online innovation because of Section 230, it has also faced the most scrutiny. When American companies go abroad, the regulatory bar is higher. Regulations force them to adapt, often at a cost that small companies are unwilling to accept or unable to afford. Conflicting legal expectations can upset the careful balance many of these companies have tried to establish, and compliance with E.U. and U.K. standards can create a de facto global norm [48]. But so long as the U.S. dominates the industry, those norms remain effectively second-class. 

Conversely, Section 230’s influence remains a powerful force. Nations such as India, Brazil, and Japan have all adopted similar systems, and their adaptations reflect the law’s role in nurturing innovation by protecting the companies that host content. But as digital threats multiply, many are now revising their laws in light of Section 230’s deficiencies. For example, India’s 2021 IT Rules impose due-diligence obligations resembling the European model [49]. The shift toward European norms demonstrates a global convergence that is replacing the standard Section 230 originally established [50]. Scholars from Cambridge warn that divergent liability regimes may fragment the internet into “regulatory zones,” complicating cross-border communication and enforcement [51]. They contend that the cross-pollination of ideas and discourse would suffer in such a fragmented landscape, and argue that regulatory harmonization is essential to preserving an open and accountable global network [52].

The debate over Section 230 underscores a changing global truth: digital governance cannot remain static or limited by borders. Instead, it must constantly adapt to evolving norms. As encryption and algorithms redefine communication, the distinction between ‘publisher’ and ‘platform’ collapses. Courts have thus far improvised through analogy, treating apps as products and algorithms as editors, but improvisation is no substitute for legislation. A coherent reform framework should incorporate lessons from abroad, such as transparency, duty of care, and narrowly tailored obligations. The E.U.’s proportional model and the U.K.’s accountability approach show that regulation does not require the sacrifice of innovation. Congress could clarify that Section 230 immunity applies to user content, but not to negligent design, fraud, or manipulation. Section 230’s future depends not on whether it is repealed, but on how it adapts. The law’s next iteration must secure public trust while preserving the innovation and commercial freedom that made it the foundation of the modern internet.


IV. CONCLUSION

After 30 years, Section 230 stands at the center of the digital world we know today. It once shielded free speech; it now guards trillion-dollar systems. Ubiquitous encryption and algorithmic feeds were scarcely imaginable in 1996, and the blind spots they created have since been exposed. The binary logic of Section 230 is no longer sufficient; an ever-changing system needs a regulatory framework designed to change with it. Meta’s constant squabbles in the legal ring are not isolated examples, but a mirror reflecting Section 230’s outdated assumptions. The United States must reform the cornerstone of the modern internet to protect its citizens and the world. Learning from partners like the European Union while keeping true to America’s innovative spirit is the best path forward. Our future depends on sustaining, for generations to come, the same free and entrepreneurial mindset that created the internet. 


[1] Nathaniel Lacsina, Meta Sued Over WhatsApp Privacy Claims as Users Question Encryption, GULF NEWS (Jan. 25, 2026), https://gulfnews.com/technology/media/meta-sued-over-whatsapp-privacy-claims-as-users-question-encryption-1.500420108.

[2] Spencer Feingold, Why Meta’s Encryption Push Is Raising Digital Safety Concerns, WORLD ECONOMIC FORUM (Dec. 11, 2023), https://www.weforum.org/stories/2023/12/meta-facebook-encryption-digital-safety/.

[3] Valerie Brannon, Section 230: An Overview, U.S. CONGRESS (Jan 4, 2024), https://www.congress.gov/crs-product/R46751

[4] See [3]. 

[5] Alan Z. Rozenshtein, Interpreting the Ambiguities of Section 230, BROOKINGS INSTITUTE. (Oct 26, 2023), https://www.brookings.edu/articles/interpreting-the-ambiguities-of-section-230/

[6] Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135 (S.D.N.Y. 1991).

[7] See [6]. 

[8] LegalClarity Team, How Stratton Oakmont v. Prodigy Led to Section 230, LEGALCLARITY (Jul 21, 2025), https://legalclarity.org/how-stratton-oakmont-v-prodigy-led-to-section-230/

[9] See [8].

[10] Tyler Dillon, Leash the Big Dogs, Let the Small Dogs Roam Free: Preserve Section 230 for Smaller Platforms, 74 Fed. Comm. L.J. 171, LEXISNEXIS (May 3, 2024), https://www.wbklaw.com/wp-content/uploads/2022/09/NOTE_-Leash-the-Big-Dogs_-Let-the-Small-Dogs-Roam-Free_-Preserve-Section-230-for-Smaller-Platform.pdf

[11] Mark Stepanyuk, Stratton Oakmont v. Prodigy Services: The Case that Spawned Section 230, WASHINGTON JOURNAL OF LAW, TECHNOLOGY & ARTS (Feb 18, 2022), https://wjlta.com/2022/02/18/stratton-oakmont-v-prodigy-services-the-case-that-spawned-section-230/

[12] See [11]. 

[13] See [3]. 

[14] See [3]. 

[15] Gregory M. Dickinson, Section 230: A Juridical History, STANFORD TECHNOLOGY LAW REVIEW (Fall 2024).

[16] Zeran v. America Online, Inc., 958 F. Supp. 1124 (E.D. Va. 1997).

[17] Jonathon W. Penney, The Chilling Effect Claims in ‘Zeran v. AOL’, OSGOODE HALL LAW SCHOOL OF YORK UNIVERSITY (2020), https://digitalcommons.osgoode.yorku.ca/scholarly_works/3078/

[18] Carafano v. Metrosplash.com, Inc., 339 F.3d 1119 (9th Cir. 2003). 

[19] See [18].

[20] Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008). 

[21] See [20]. 

[22] Barnes v. Yahoo!, Inc., No. 05-36189, 570 F.3d 1096 (9th Cir. 2009).

[23] See [22]. 

[24] Eric Rosen, Section 230 Immunity Changes, DYNAMIS LLP (2025), https://www.dynamisllp.com/knowledge/section-230-immunity-changes

[25] Doe v. MySpace, Inc., No. 07-50345 (5th Cir. May 16, 2008).

[26] Jones v. Dirty World Entertainment Recordings LLC, 755 F.3d 398 (6th Cir. 2014).

[27] Force v. Facebook, Inc., No. 18-397, 934 F.3d 53 (2d Cir. 2019). 

[28] See [27]. 

[29] Herrick v. Grindr LLC, No. 17-cv-00932, 2018 WL 447189 (S.D.N.Y. Jan. 17, 2018).

[30] Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. 2021). 

[31] Jane Does 1–11 v. Reddit, Inc., 40 F.4th 1150 (9th Cir. 2022). 

[32] Alan Z. Rozenshtein, Interpreting the Ambiguities of Section 230, YALE JOURNAL ON REGULATION (Apr 17, 2024) https://www.yalejreg.com/bulletin/interpreting-the-ambiguities-of-section-230/

[33] Gonzalez v. Google LLC, 598 U.S. 617 (2023). 

[34] Scott Nover, Why the US Supreme Court is struggling with a case about YouTube's algorithms, QUARTZ (Mar 01, 2023), https://qz.com/why-the-us-supreme-court-is-struggling-with-a-case-abou-1850174033

[35] Enigma Software Group USA, LLC v. Malwarebytes, Inc., 64 F.4th 1184 (9th Cir. 2023). 

[36] See [35]. 

[37] Paul Barrett, How Tech Platforms Fuel U.S. Political Polarization and What Government Can Do About It, BROOKINGS INSTITUTE (Jan. 13, 2021), https://www.brookings.edu/articles/how-tech-platforms-fuel-u-s-political-polarization-and-what-government-can-do-about-it/

[38] Danielle Keats Citron, Section 230’s Challenge to Civil Rights, BOSTON UNIVERSITY LAW REVIEW (2023), https://www.bu.edu/bulawreview/files/2023/10/CITRON.pdf

[39] Michael Daly Hawkins, Uproot or Upgrade? Revisiting Section 230 Immunity in the Digital Age, THE UNIVERSITY OF CHICAGO LAW REVIEW (2023), https://lawreview.uchicago.edu/online-archive/uproot-or-upgrade-revisiting-section-230-immunity-digital-age

[40] Steven R. Disharoon, Section 230 of the Communications Decency Act Under Fire Once Again, WOOD SMITH HENNING BERMAN (May 13, 2024), https://www.wshblaw.com/publication-section-230-of-the-communications-decency-act-under-fire-once-again

[41] Repealing Section 230 Would Cost Americans Over $2.2 Trillion, COMPUTER & COMMUNICATIONS INDUSTRY ASSOCIATION (Jan 12, 2026), https://ccianet.org/research/stats/repealing-section-230-would-cost-americans-over-2-2-trillion/

[42] Digital Services Act (DSA), EUROPEAN COMMISSION (last visited Feb. 11, 2026), https://digital-strategy.ec.europa.eu/en/policies/digital-services-act

[43] See [42].

[44] Online Safety Act Collection, UNITED KINGDOM DEPARTMENT FOR SCIENCE, INNOVATION AND TECHNOLOGY (last visited Feb. 11, 2026), https://www.gov.uk/government/collections/online-safety-act

[45] Anna Richards, The Online Safety Act: Privacy Threats and Free Speech Risks, THE CONSTITUTION SOCIETY (Nov. 15, 2024), https://consoc.org.uk/the-online-safety-act-privacy-threats-and-free-speech-risks/

[46] New Powers to Remove Harmful Online Content Beyond Australia, AUSTRALIAN DEPARTMENT OF INFRASTRUCTURE, TRANSPORT, REGIONAL DEVELOPMENT, COMMUNICATIONS, SPORT, AND THE ARTS (last visited Feb. 11, 2026), https://www.infrastructure.gov.au/department/media/news/new-powers-remove-harmful-online-content-beyond-australia

[47] Australia Strengthens Online Safety Laws to Compel Social Platforms to Remove Abusive Content, AUSTRALIA TIMES (Apr. 14, 2022), https://australiatimes.com/australia-strengthens-online-safety-laws-to-compel-social-platforms-to-remove-abusive-content

[48] Marie-Therese Sekwenz, EU Digital Regulation as Industry Shaping Policy: The DSA, Brussels Effect, and Global Competitiveness, NETWORK LAW REVIEW (Fall 2025), https://www.networklawreview.org/sekwenz-digital-regulation/

[49] India: Massive overhaul of digital regulation, with strict rules for take-down of illegal content and Automated scanning of online content, FUTURE OF PRIVACY FORUM (Mar 11, 2021), https://fpf.org/blog/india-massive-overhaul-of-digital-regulation-with-strict-rules-for-take-down-of-illegal-content-and-automated-scanning-of-online-content/

[50] Dawn Carla Nunziato, The Digital Services Act and the Brussels Effect on Platform Content Moderation, THE UNIVERSITY OF CHICAGO JOURNAL OF INTERNATIONAL LAW (2023), https://cjil.uchicago.edu/print-archive/digital-services-act-and-brussels-effect-platform-content-moderation

[51] Felicity Deane et al., Trade in the Digital Age: Agreements to Mitigate Fragmentation, CAMBRIDGE UNIVERSITY PRESS (Aug 14, 2023), https://www.cambridge.org/core/journals/asian-journal-of-international-law/article/trade-in-the-digital-age-agreements-to-mitigate-fragmentation/3B6C315E0F4ACC040F7014D0712EE5D8

[52] See [51].


 
 
 

© 2026 Texas Undergraduate Law Journal