
Censorship or Responsible Governance? Free Speech in the Misinformation Era

Katherine Manz

Edited by Keerthi Chalamalasetty, Emily Mandel, Mac Kang, and Sahith Mocharla


On January 20, 2025, President Donald Trump issued an executive order “restoring freedom of speech and ending federal censorship,” aimed at ensuring absolute free speech on social media platforms [1]. In it, he accused the preceding Biden administration of: 


“...Censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve. Under the guise of combatting “misinformation,” “disinformation,” and “malinformation,” the Federal Government infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate. Government censorship of speech is intolerable in a free society” [2]. 


This concentration of censorship claims upon social media reflects its increasing use as a public forum. Today’s social networking platforms host an estimated 5.66 billion ‘user identities,’ including 253 million in the United States alone [3]. Reporters, politicians, media personalities, and even athletes use social media to break news and engage with their audiences. For better or for worse, social media has become a vital, and often primary, channel of personal and political expression—a marginally regulated, open-access, international public forum. This conception of a public forum is far removed from what it would have meant in 1791, when the First Amendment was ratified. Without the benefit of established legal precedent or substantial case law, the novel capabilities of these technologies have raised increasingly difficult legal questions regarding private censorship by social media companies, as well as the state’s ability to regulate online speech. 

President Trump’s statement targets Democratic government censorship and overreach as the primary issue in the modern digital space, while the Biden administration and the Democratic Party argue that right-wing misinformation and conspiracy theories pose the greater threat. This conflict is part of a wider trend: as the spread of false information online has become increasingly politically relevant, a 2025 Pew Research Center report found that 70% of Americans see misinformation as a major threat to the country, and 55% support its regulation by the federal government [4] [5]. However, most misinformation has long been protected by the First Amendment, as its regulation would necessarily allow the government to determine the veracity and acceptability of claims. As a result, its handling has been left largely in the hands of the social media platforms themselves. Following the rampant misinformation surrounding the COVID-19 pandemic and the 2020 presidential election, however, the debate over government regulation has intensified. The question President Trump’s statement poses is this: is it acceptable for the government to influence the internal censorship policies of social media companies, and if so, to what extent and effect? 



Social Media and the Modern Free Speech Landscape


Standards for censorship differ across media. Radio and broadcast television, for instance, are more heavily regulated than cable television because airwaves are classified as a scarce public resource, supporting a more compelling public interest in regulation. The early days of broadcast media saw mandates like the Fairness Doctrine and the Equal Time Rule, which imposed requirements upon private broadcast stations intended to foster free, multidirectional discourse. The Fairness Doctrine mandated that a broadcast station presenting a controversial public issue “must afford reasonable opportunity for the presentation of opposing viewpoints,” while the Equal Time Rule required stations to allow all political candidates the same access to air time [6] [7]. In upholding the Equal Time Rule in CBS, Inc. v. Federal Communications Commission (FCC) (1981), the Supreme Court held that the public had an affirmative right to hear the opinions of all those considered political candidates by the FCC [8]. Both mandates thus saw the federal government impose necessarily subjective judgments of fairness and candidacy onto private speech platforms, justified as ensuring free and equitable speech rights. The regulations played a central role in US media until the 1980s, when the Reagan administration took aim at what it saw as overt government censorship. Enforcement of the Fairness Doctrine ended in 1987, though it was not formally revoked until 2011, and the Equal Time Rule has been essentially defunct for decades, as part of a trend of deregulation tied to a growing breadth of new media options beyond mainstream news outlets [9]. The American audience now has access to a massive variety of print, radio, television, and digital news channels—including social media. 

Social media, which encompasses a broad range of internet-based social communication, has historically been self-regulated [10]. Users agree to a site’s terms of service in order to create an account, and are therefore subject to the guidelines of that particular site, where violations may result in suspension or termination. Private social media platforms thus have discretion over the information shared on their sites, while the government is typically unable to intervene directly to prevent the proliferation of taboo speech such as hate speech, which is protected by the First Amendment [11]. However, the federal government does exert regulatory control over these platforms, including leading investigations into data privacy compliance—an area that can determine platforms’ ability to operate profitably [12]. If the government leverages this power to influence internal content regulation, as President Trump alleges of the Biden administration, it may constitute indirect government censorship. 

While legal precedent regarding social media is still relatively new, the few landmark cases have firmly defended free speech. In Packingham v. North Carolina (2017), one of the first cases addressing the modern Internet, the Supreme Court struck down a North Carolina law that banned registered sex offenders from accessing social media sites where minors were permitted to maintain accounts [13]. In condemning the complete foreclosure of social platforms, the Court noted that these sites were, for many, “the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge” [14]. The designation of social media as a “modern public square” set the stage for its treatment as a public forum where individuals are entitled to be heard. In 2019, the Second Circuit went so far as to establish a right to engage with politicians’ social media accounts, holding in Knight First Amendment Institute v. Trump that because the President’s Twitter account constituted a public forum, blocking individuals from that account amounted to unconstitutional viewpoint discrimination [15]. The Supreme Court clarified this matter in Lindke v. Freed (2024), holding that preventing someone from commenting under a social media post is unconstitutional only if the official both possessed authority to speak for the government and purported to do so in the relevant post(s) [16]. As the bulk of public conversation moves to private platforms, freedom of speech has thus become an increasingly affirmative right, entitling individuals to the access offered by social media. 

On the other hand, the modern social media landscape has become thoroughly politically charged as attempts to combat COVID-19 misinformation clash with a right-wing campaign of free speech absolutism. Rather than attempt to censor content on social media, states have recently attempted to forbid internal censorship by platforms, arguing that they constitute common carriers—commercial enterprises “open to the general public” [17]. Right-wing cultural commentators and politicians increasingly allege that social media platforms are politically biased toward the left, even in fact-checking; in 2021, Florida’s governor Ron DeSantis blamed “nameless, faceless boards of censors” for limiting consumers’ rights to access information [18]. Moody v. NetChoice (2024) concerned two attempts to secure social media consumer rights: Florida’s Senate Bill 7072 and Texas’ House Bill 20 [19]. Florida’s SB 7072, aimed at combating alleged Big Tech censorship of right-wing political candidates, would have fined social media platforms that banned candidates $100,000 per day until access was restored, while Texas’ HB 20 more broadly forbade censorship based on “viewpoint” [20] [21]. The bills promptly faced lawsuits from the trade association NetChoice, which requested preliminary injunctions against both, citing violations of the First Amendment. The appellate courts came to different conclusions: the Eleventh Circuit upheld an injunction against the Florida bill, while the Fifth Circuit reversed one against the Texas bill, claiming that platforms’ content moderation activities were “not speech” [22]. The Supreme Court ultimately vacated both judgments and remanded the cases for lack of “proper analysis,” specifically noting that “the government cannot get its way just by asserting an interest in better balancing the marketplace of ideas” [23]. While there is no conclusive judgment at this time, it appears unlikely that government intervention will prevail over social platforms’ editorial rights. 

Legal precedent surrounding social media, albeit limited, has therefore been largely against any kind of government intervention, whether pro- or anti-content removal. Nevertheless, the specifics of legislating social media remain in their infancy, leaving the door open for future diverging precedent. 


The Biden Administration’s Actions


The primary driver of the Biden administration’s actions to reduce misinformation was the COVID-19 pandemic. In March 2021, the Johns Hopkins Center for Health Security released a report urging the Biden administration to develop a strategy to combat rampant misinformation surrounding the virus, especially the vaccine [24]. The report posited disruption of health-related misinformation as a compelling public interest, claiming that rampant misinformation had “hindered public health response efforts and contributed to an increase in fear and social discord in the population” [25]. Social media networks contended with the COVID-19 concerns laid out in the Johns Hopkins report simultaneously with the political misinformation that had fueled the January 6th attack on the Capitol just two months earlier. Another March 2021 report, this one by the Harvard Kennedy School, flagged both disinformation and increasing hate speech as concerns, claiming that they had “eroded crucial democratic institutions and discourse” [26]. As a result of both institutional and popular concerns, the Biden administration began taking action against misinformation within its first year.

In a December 2021 fact sheet, the Biden administration announced three initiatives regarding misinformation: the creation of an interagency Information Integrity Research and Development Working Group, whose purpose was to design a strategy for tackling misinformation; the launch of a Digital Literacy Accelerator targeting students’ capacity to recognize misinformation; and the release of a toolkit by the Surgeon General targeting health misinformation specifically [27]. In April of 2022, the Department of Homeland Security created a ‘Disinformation Governance Board’ in order to address human trafficking and threats to election security, although it was swiftly decried by Republicans as a “Ministry of Truth” and officially shut down by August [28]. In private, the administration communicated extensively with several social media companies, urging them to remove or limit posts containing misinformation regarding COVID-19 or the 2020 election [29]. When platforms declined these recommendations, officials, including President Biden, publicly criticized them—claiming, for instance, that Facebook was “killing people” by allowing COVID-19 misinformation—and proposed regulatory reforms surrounding permissible content [30] [31]. This direct attempt to influence social media sites’ internal misinformation policies is the subject of Murthy v. Missouri (2024), as well as President Trump’s January 2025 executive order. 


Murthy v. Missouri (2024) and Free Speech Precedent


Murthy v. Missouri, initially Missouri v. Biden, was filed in May 2022 by the states of Missouri and Louisiana. It accused the Biden administration of violating the First Amendment right to freedom of expression by demanding that social media sites censor conservative views, alleging harm to the plaintiffs through suppression of content and to the populations of both states [32]. In April 2023, Judge Terry Doughty of the Western District of Louisiana granted an injunction to the plaintiffs, temporarily enjoining a number of government entities and individuals from contacting social media platforms regarding “protected free speech” [33]. In September, the Fifth Circuit narrowed the injunction on appeal, decreasing the number of federal entities affected [34]. Finally, in June of 2024, the Supreme Court dismissed the case on the grounds that the petitioners lacked standing, given their inability to prove a direct link between the government’s actions, the restriction of their posted content, and likely future injury [35]. The Court split 6-3, reversing the Fifth Circuit and vacating the injunction on government action. Because the case was decided on standing, however, the Supreme Court did not answer the central question of the permissibility of government action to encourage censorship of dangerous or misleading information. 

Despite the demise of Murthy, the constitutionality of the Biden administration’s communications remains politically relevant. In May of 2024, House Republicans compiled an interim report on the Biden administration’s actions entitled ‘The Censorship Industrial Complex,’ claiming to have proven that the administration took actions that “suppress[ed] free speech and intentionally distort[ed] public debate” [36]. The Biden administration concedes that it communicated with social media platforms, but maintains that its actions were requests rather than demands and therefore did not substantially influence the platforms. It additionally claims that any exerted influence served the compelling interests of public health and national security. 

In order to evaluate the constitutionality of the Biden administration’s actions, the boundaries of freedom of speech and censorship in the United States must be clearly defined. Freedom of speech, as protected by the First Amendment, is “the right to speak, write, and share ideas and opinions without facing punishment from the government” [37]. The key phrase here is “without facing punishment from the government,” which is broad enough to allow for both private regulation of speech (including by social media platforms) and a certain level of government influence, provided that influence does not become coercive or punitive. When the government does overreach and violate the protections of the First Amendment, its actions constitute censorship: hence the accusation against the Biden administration [38]. Freedom of speech, however, is not absolute. Over the centuries, the courts have carved out certain exceptions to free speech protection, including obscenity, defamation, and incitement [39]. Most notably, the Supreme Court has historically upheld the government’s power to silence wartime speech that could potentially undermine national security. Two cases decided during World War I, Abrams v. United States (1919) and Schenck v. United States (1919), determined that the First Amendment did not apply to speech that incited sedition, disorder, or crime [40] [41]. In the century since the World War I rulings, though, the Supreme Court has reversed course, advancing and broadening free speech protections—including the protection of false information. 

Any restriction of speech based solely on its expressive content is subject to strict scrutiny, which requires the government to prove that the law serves a compelling governmental interest and is the least restrictive means of doing so [42]. As a result, the circulation of most false ideas and statements is protected by the First Amendment. As the Supreme Court held in New York Times Co. v. Sullivan (1964), charges such as libel and defamation require proof that the disseminator knew the statement was false or acted with reckless disregard for its veracity; the simple dissemination of false information is not enough [43]. In fact, as Judge Henry Edgerton explained in Sweeney v. Patterson (1942), errors of fact are a natural product of free public discussion, and “whatever is added to the field of libel is taken from the field of free debate” [44]. In line with this principle, the Supreme Court has continually defended speech challenged solely on grounds of veracity.

It is therefore not enough for the Biden administration to prove that it took actions aimed at preventing false speech. There are two avenues to constitutionality: either the Biden administration did not exert sufficient pressure on social media companies to substantially influence their content moderation policies, or if such pressure existed, the action was undertaken in such a way that it does not violate the First Amendment under strict scrutiny.


Evaluation of Constitutionality: Coercion


On the first count, there is likely insufficient evidence to prove that the Biden administration’s actions rose to the level of coercion. The border between permissible governmental recommendation and undue influence is best defined by the 2024 case National Rifle Association v. Vullo, wherein the National Rifle Association (NRA) accused Maria Vullo, former superintendent of the New York Department of Financial Services (DFS), of pressuring DFS-regulated entities to disassociate from the NRA [45]. The Supreme Court unanimously found that the NRA’s allegations, if true, would constitute a violation of the NRA’s First Amendment rights, holding that “government officials cannot attempt to coerce private parties in order to punish or suppress views that the government disfavors” [46]. The key word is coerce, legally understood as the use of threats, explicit or implicit, to instill fear and force a person or entity to act contrary to their wishes [47]. In NRA v. Vullo, for instance, the NRA contended that Ms. Vullo confronted multiple NRA-affiliated insurance brokerages with regulatory infractions, then told the firms that the DFS would be “less interested” in pursuing the infractions if they stopped providing services to the NRA [48]. These communications caused the insurance brokerage Lockton to disaffiliate from the NRA, explicitly citing the fear of “losing [its] license to do business in New York” [49]. The Vullo case follows a long precedent forbidding indirect suppression of speech through coercion, dating back to Bantam Books, Inc. v. Sullivan (1963), where Rhode Island’s government indirectly threatened a book distributor with legal prosecution [50]. In short, both Bantam Books and NRA v. Vullo concern attempts to terminate speech based on its contents, undertaken through significant and specific threats.

Evidence on the Biden administration’s actions is more ambiguous. From a results-based perspective, Facebook, YouTube, and Amazon all changed their content moderation policies in 2021; the documents retrieved by the House Subcommittee on the Weaponization of the Federal Government include email requests that specific posts be “removed ASAP”; and internal communications from each of the companies directly attribute policy changes to government pressure [51]. An August 2021 internal Facebook email prompted a discussion of how to combat misinformation “stemming from continued criticism of our approach from the [Biden] administration,” while YouTube similarly sought direct White House feedback on its policy changes [52]. These communications seem to imply substantial government influence over the platforms’ moderation policies. However, the Supreme Court’s findings in Murthy cut against the conclusion that this influence was unconstitutional; Justice Barrett observed in the majority opinion that platforms have “independent incentives to moderate content and often exercise [] their own judgment,” and that they began moderating COVID-19 theories even before White House officials began communications in mid-2021 [53]. Additionally, the Court found that the “frequent intense communications” between the White House and social media platforms “had considerably subsided by 2022,” contradicting the plaintiffs’ narrative that the Biden administration had undertaken a years-long censorship campaign [54]. Facebook, in fact, ceased automatic censorship of the popular theory that the COVID-19 virus was man-made in May 2021 [55]. There is therefore insufficient evidence that the Biden administration caused social media companies to act uncharacteristically through threats or intimidation.


Strict Scrutiny and Conclusions


If the Biden administration’s efforts to limit certain speech were deemed to infringe on the First Amendment, the government would then have to satisfy strict scrutiny—that is, prove that its actions were “narrowly tailored” to advance a compelling government interest through the least restrictive means possible [56]. As the administration’s conduct was not deemed coercive, this test is not strictly necessary. Nevertheless, there exists a sufficiently compelling government interest to justify more expansive governmental actions meant to combat misinformation.

One of the earliest articulations of the compelling interest standard lies in Sweezy v. New Hampshire (1957), where the Supreme Court ruled in favor of an academic who had been jailed for refusing to answer questions about lectures he had given [57]. In a concurring opinion, Justice Frankfurter cited the necessity of an “overriding judgment” to abridge constitutional rights, based on his conception of the case as a balance between “the right of a citizen to political privacy…and the right of the State to self-protection” [58]. As Dr. Stephen A. Siegel argues in “The Origin of the Compelling State Interest Test and Strict Scrutiny,” the conception of balance that defines compelling interest is based on the assumption that “no constitutional right [is] beyond limitation, and none [can] prevail over an appropriate subordinating governmental interest” [59]. This argument is further corroborated by case law from periods of crisis and wartime—see Schenck v. United States (1919) and Korematsu v. United States (1944)—where individual rights were deemed subordinate to the compelling interest of national security. This is not to say that speech rights should be routinely infringed upon, but that restriction of protected speech and other limitations on core constitutional rights are built into the compelling interest test.

As the Supreme Court held in Thomas v. Collins (1945), any attempt to limit free expression “must be justified by clear public interest, threatened not doubtfully or remotely, but by clear and present danger” [60]. This test, first articulated in Schenck, weighs danger to the state or its citizens, often involving threats to life or to the tenability of the state itself, against individual expressive rights. While it has most often been used to justify wartime censorship, elements of the clear and present danger test can be found in today’s misinformation landscape. A 2018 study of Twitter found that true information took six times as long as falsehood to reach 1,500 people, and that false political information in particular reached more people than any other category [61]. This false information, which proliferates because of its congruence with existing political beliefs, contributes to the formation of online echo chambers. The continual consumption of information that justifies existing beliefs has unsurprisingly led to stark political polarization—a 2022 poll found that 72% of Republicans and 63% of Democrats viewed the opposing political party as immoral [62]. While false statements are generally protected by the First Amendment on the grounds that falsehoods are unavoidable in free debate, their scale and impact have transformed the nature of public discourse. Rather than aiming to resolve a political issue based on agreed-upon facts, modern discourse works backwards, generating facts at will to support ideology. “False statements of fact…interfere with the truth-seeking function of the marketplace of ideas,” as the Supreme Court noted in Hustler Magazine, Inc. v. Falwell (1988); today, they go so far as to create divergent political realities [63]. The results have been grave and pose a demonstrable threat to human life. The United States has recorded over 1.2 million deaths attributed to COVID-19, and an estimated third of those deaths could have been prevented through compliance with public health recommendations [64]. Meanwhile, a starkly polarized political climate fueled by misinformation has led to escalating political violence, with at least 300 cases identified between the January 6th insurrection and the 2024 election [65]. With generative AI amplifying the speed and sophistication of fabricated content, is it any wonder that 94% of American adults consider online misinformation a threat to their country [66]? 

The question, then, lies in what can be done. Social media algorithms prioritize engagement, often promoting content that draws moral outrage, whether true or false [67]. In the fight against misinformation, these algorithms could be regulated to rely less indiscriminately on user engagement, or could themselves become a check on misinformation. X’s newly instituted “Community Notes,” a user-run fact-checking service that notifies users when a post they have interacted with contains false content, helps combat misinformation while minimizing government regulatory overreach, though it remains dependent on public consensus to define ‘objective truth.’ Sites could also be stricter with accounts that frequently spread misinformation. The Center for Countering Digital Hate found that 65% of anti-vaccine misinformation across Facebook, Instagram, and Twitter from 2020 to 2021 was spread by just twelve people [68]. Earlier removal of the ‘Disinformation Dozen’ accounts could have saved lives. 

Ironically, this type of high-volume, long-term misinformation is one of the key reasons why regulation is so controversial. As extremist attitudes creep into the mainstream, mainstream actors and politicians begin to defend them on anti-censorship grounds. It is hard to say whether government intervention would effectively combat misinformation and political polarization, but two preliminary conclusions emerge. One: this high-volume, long-term spread of misinformation is a detriment to the marketplace of ideas that our democracy is founded upon. And two: our current decisions about how online misinformation fits within the First Amendment will set an enduring precedent that guides how we confront the misinformation era. 


[1] Exec. Order No. 14,149 90 Fed. Reg. 8243 (Jan. 28, 2025). 

[2] See [1] 

[3] Digital 2025: The United States of America, DataReportal – Global Digital Insights (Feb. 25, 2025), https://datareportal.com/reports/digital-2025-united-states-of-america.

[4] Jacob Poushter, Moira Fagan, Maria Smerkovich & Andrew Prozorovsky, False Information Online as a Threat, Pew Research Center (Aug. 19, 2025), https://www.pewresearch.org/2025/08/19/false-information-online-as-a-threat/.

[6] Adrian Cronauer, The Fairness Doctrine: A Solution in Search of a Problem, 47 FCLJ 51 (1994).

[7] Communications Act of 1934, 47 U.S. Code § 315 (1934) 

[8] CBS, Inc. v. FCC, 453 U.S. 367 (1981)

[9] See [6] 

[10] Deborah Fisher, Social Media and the First Amendment, The Free Speech Center, https://firstamendment.mtsu.edu/article/social-media/.

[11] Kenneth Ward, Hate Speech and Hate Crime, American Library Association, https://www.ala.org/advocacy/intfreedom/hate

[12] Social Media: Regulatory, Legal, and Policy Considerations for the 119th Congress (2025), https://www.congress.gov/crs-product/IF12904.

[13] Packingham v. North Carolina, 582 U.S. 98 (2017)

[14] See [13] 

[15] Knight First Amendment Inst. at Columbia Univ. v. Trump, 928 F.3d 226 (2d Cir. 2019)

[16] Lindke v. Freed, 601 U.S. 187 (2024)

[17] Common Carrier, LII / Legal Information Institute, https://www.law.cornell.edu/wex/common_carrier.

[18] Gov. DeSantis Proposes Law That Would Fine Big Tech Companies That ‘Deplatform’ Political Candidates, WFLA (Feb. 2, 2021), https://www.wfla.com/news/florida/gov-desantis-holds-press-conference-from-florida-state-capitol/.

[19] Moody v. NetChoice, LLC, 603 U.S. 707 (2024)

[20] See [18]

[21] Tex. H.B. 20 § 120-001 (2021)

[22] NetChoice, Computer Communications Industry Association v. Paxton, 134 F. 4th 799 (5th Cir. 2024)

[23] See [19] 

[24] Tara Kirk Sell, et al., National Priorities to Combat Misinformation and Disinformation for COVID-19 and Future Public Health Threats: A Call for a National Strategy (2021)

[25] See [24] 

[26] Caroline Atkins, et al., Recommendations to the Biden Administration: On Regulating Disinformation and Other Harmful Content on Social Media § 1 (2021)

[27] The White House, FACT SHEET: The Biden-Harris Administration Is Taking Action to Restore and Strengthen American Democracy, The White House (Dec. 8, 2021), https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2021/12/08/fact-sheet-the-biden-harris-administration-is-taking-action-to-restore-and-strengthen-american-democracy/.

[28] Aaron Blake, Analysis | The Tempest over DHS’s Disinformation Governance Board, The Washington Post, Apr. 29, 2022, https://www.washingtonpost.com/politics/2022/04/29/disinformation-governance-board-dhs/.

[29] Nicole Ezeh, Can Government Coerce Removal of Content on Social Media?, National Conference of State Legislatures, https://www.ncsl.org/events/details/can-government-coerce-removal-of-content-on-social-media.

[30] Betsy Klein, et al., Biden Backs Away from his Claim that Facebook is ‘Killing People’ by Allowing Covid Misinformation, CNN, July 19, 2021. 

[31] See [29] 

[32] Murthy v. Missouri, 603 U.S. (2024)

[33] Missouri v. Biden, 680 F. Supp. 3d 630 (W.D. La. 2023)

[34] Jacob Sullum, The 5th Circuit Agrees That Federal Officials Unconstitutionally 'Coerced' or 'Encouraged' Online Censorship, Reason, Sept. 11, 2023

[35] See [32]

[36] H.R. Rep. No. 119-4, 1 (2024) 

[37] Freedom of Speech, LII / Legal Information Institute, https://www.law.cornell.edu/wex/freedom_of_speech.

[38] Elizabeth R. Purdy, Censorship, The Free Speech Center, https://firstamendment.mtsu.edu/article/censorship/.

[39] See [37] 

[40] Abrams v. United States, 250 U.S. 616 (1919)

[41] Schenck v. United States, 249 U.S. 47 (1919)

[42] False Speech and the First Amendment: Constitutional Limits on Regulating Misinformation (2025), https://www.congress.gov/crs-product/IF12180

[43] New York Times Co. v. Sullivan, 376 U.S. 254 (1964)

[44] Sweeney v. Patterson, 128 F.2d 457 (D.C. Cir. 1942)

[45] National Rifle Association of America v. Vullo, 602 U.S. 175 (2024)

[46] See [45]

[47] Coercion Definition, Meaning & Usage | Justia Legal Dictionary, https://dictionary.justia.com/coercion

[48] See [45]

[49] See [45] 

[50] Bantam Books, Inc. v. Sullivan, 372 U.S. 58 (1963)

[51] See [36]

[52] See [36] 

[53] See [32]

[54] See [32]

[55] See [36] 

[56] Strict Scrutiny, LII / Legal Information Institute, https://www.law.cornell.edu/wex/strict_scrutiny

[57] Sweezy v. New Hampshire, 354 U.S. 234 (1957)

[58] See [57] 

[59] Stephen A. Siegel, The Origin of the Compelling State Interest Test and Strict Scrutiny (2006). 

[60] Thomas v. Collins, 323 U.S. 516, 530 (1945)

[61] Soroush Vosoughi, Deb Roy & Sinan Aral, The Spread of True and False News Online, 359 Science 1146 (2018).

[62] Johanna Dunaway, The ‘Great Divide’: Understanding US Political Polarization, Syracuse University Today (Oct. 23, 2025), https://news.syr.edu/2025/10/23/the-great-divide-understanding-us-political-polarization/

[63] Hustler Magazine, Inc. v. Falwell, 485 U.S. 46, 52 (1988)

[64] Sahana Sule, et al., Communication of COVID-19 Misinformation on Social Media by Physicians in the US, JAMA Network Open (2023), https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2808358

[65] Ned Parker & Peter Eisler, New Cases of Political Violence Roil US Ahead of Contentious Election, Reuters, https://www.reuters.com/world/us/new-cases-political-violence-roil-us-ahead-contentious-election-2024-10-21/

[66] See [4] 

[67] Killian L. McLoughlin & William J. Brady, Human-Algorithm Interactions Help Explain the Spread of Misinformation, 56 Current Opinion in Psychology 101770 (2024). 

[68] Shannon Bond, Just 12 People Are Behind Most Vaccine Hoaxes On Social Media, Research Shows, NPR, May 14, 2021, https://www.npr.org/2021/05/13/996570855/disinformation-dozen-test-facebooks-twitters-ability-to-curb-vaccine-hoaxes.


 
 
 

© 2025 Texas Undergraduate Law Journal
