
Section 230: Walking the Tightrope of Tech Tolerance

Farhan Buvvaji

Edited by Nikita Nair, Colin Crawford, and Vedanth Ramabhadran.


Introduction

We live in a time when terrorist recruitment content is just as easy to find as a video of a cat chasing a laser. At the same time that a tablet can provide all of human knowledge for students in remote regions of the world, an adult can be convinced that the Earth is flat. All of this is largely possible through the global soapbox that continues to expand and connect each of us daily: the Internet. Ultimately, the Internet is a platform for any idea we wish to convey. However, there is little legal evaluation of those ideas, despite the potential harm they may cause. 

Often, the consequences of online content are chaotic. An infamous example was Pizzagate from 2016, an expansive conspiracy theory that convinced some Internet users that a pizza restaurant was a political front for human trafficking [4]. Despite the baseless nature of the rumor, a 28-year-old man threatened the restaurant with an assault rifle and was later arrested. These occurrences have become increasingly common, especially as online content influences users' beliefs, actions, and thoughts. 

Yet despite the volatile nature of online activity, the legal repercussions are minimal. The defining legal protection for websites is Section 230 of the Communications Decency Act of 1996. During the early boom of the Internet in the 1990s, questions about liability for websites and other online platforms began to surface, especially after early court cases dealt with online policing and moderation. Notably, Cubby, Inc. v. CompuServe Inc. and Stratton Oakmont, Inc. v. Prodigy Services Co. laid the first prominent legal foundations for online censorship and moderation [6].


History

In the 1991 case Cubby, Inc. v. CompuServe Inc., the United States District Court for the Southern District of New York evaluated whether the internet service provider CompuServe should be held liable for inflammatory content distributed on its platform. Although the content at issue was allegedly defamatory, the court held that internet service providers are not liable for their users' content [6]. Not only was this one of the first cases to grapple with online defamation, but more importantly, it established that online websites are distributive platforms, not content publishers. In essence, the court compared websites to libraries: platforms where content is stored and distributed, but not the authors or publishers of the "books" inside [6]. The court reasoned that websites shouldn't be liable for the content on their sites if they lacked prior knowledge that such inflammatory material existed. However, this immunity rested on one primary condition: that the platform not engage in content moderation. After all, if it did moderate, it could be charged with prior knowledge of illegal content on its site. An immediate critique stood out against this reasoning: what was the legal incentive for online sites to moderate? If companies attempted to censor, they risked being held liable for any missed illicit content, so why bother? With little incentive to review content, Cubby, Inc. v. CompuServe Inc. was a major victory for online platforms, as it paved the way for the leniency later given to websites.

Unlike CompuServe, what happens if a website does moderate the content on its platform? In 1995, the New York Supreme Court tested this exact question in Stratton Oakmont, Inc. v. Prodigy Services Co. Prodigy Services was an online network offering a wide range of computer services to consumers and companies. In this case, Prodigy's bulletin board feature included defamatory posts from users targeting the brokerage firm Stratton Oakmont [6]. It did not take long for Stratton Oakmont to sue Prodigy. While CompuServe never pursued moderation or censorship, Prodigy had active measures, such as reviewing user content against posting guidelines and employing "Board Leaders" who enforced that moderation [6]. That degree of moderation was enough for the New York Supreme Court. It ruled in favor of Stratton Oakmont, controversially reasoning that Prodigy's content review process designated it as a publisher rather than a distributor of user content [6]. Returning to the earlier library analogy, Prodigy was viewed as the author of the "books" sitting on the shelves. The logic was simple: since the content faced editorial approval and community safety guidelines, Prodigy was ultimately treated as the publisher of that content. The decision's ramifications further cemented the lack of legal incentives for online moderation. Both Cubby, Inc. v. CompuServe Inc. and Stratton Oakmont, Inc. v. Prodigy Services Co. reflected the legal system's first reactions to the Internet, and in both cases the courts signaled that First Amendment considerations came before content moderation.

Although the reasoning was consistent across both cases, they created an apparent conundrum regarding incentives to moderate content on the Internet. If an online platform made any effort to moderate, it could be held liable. However, complete inaction from platforms could be just as detrimental, because public exposure to illicit content would go unchecked. Compromise was crucial. In 1996, Congress passed the Communications Decency Act with that in mind. Specifically, the compromise was Section 230, which addressed content moderation and online platform liability. The two provisions of Section 230 that receive the most attention are Sections 230(c)(1) and 230(c)(2), the primary rules dictating online content screening and liability.

First, Section 230(c)(1) establishes that no computer service provider or user will be treated as the publisher of any content they did not create [1]. Thus, Section 230(c)(1) reconfigures the library analogy that courts had used. With this statutory change, the library is not treated as the publisher of any of its books; only the "author" of each book is. Section 230(c)(1) has two main goals: define what publishing means and alleviate liability concerns for online platforms. Before Section 230, a recurring question was what "publishing" meant and whether moderating content makes a platform a publisher. Section 230(c)(1) confines publishing to content directly created by a user or an online platform and does not sweep content moderation into that definition. Ultimately, Section 230(c)(1) clarifies that online platforms are not liable for content made by their users, ensuring that websites can safely host content without the fear of lawsuits.

Section 230(c)(2) further protects online platforms by ensuring they can moderate without facing civil liability. Under this provision, no online platform will be held liable for moderation efforts that restrict access to content deemed "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected" [1]. In short, Section 230(c)(2) gives online platforms broad discretion to screen offensive material and moderate content on their sites. While Section 230(c)(1) protects the content hosted on websites, Section 230(c)(2) supports online moderation by ensuring platforms don't incur lawsuits for screening content in good faith. Together, the two provisions answered the questions raised by Cubby, Inc. v. CompuServe Inc. and Stratton Oakmont, Inc. v. Prodigy Services Co.

However, that was more than a quarter century ago. The rigid text of Section 230 has since been outpaced by the prolific, exponential advancement of the Internet. From attention-driven algorithms to groundbreaking applications, the modern Internet has easily surpassed the expectations of 1990s legislators. Unfortunately, that has come at a cost.

The reach and widespread use of online platforms have made them powerful tools for malicious ends, whether misinformation, polarization, human trafficking, or even terrorist recruitment, and many have accused the platforms themselves of being the cause. Take misinformation, for example, which has received widespread attention in the United States. Since online platforms such as social media rely on constant attention from users, algorithms have advanced enough to pre-select content that keeps users engaged. The result is that millions find themselves trapped in echo chambers, fed a steady stream of misinformation that perpetuates confirmation bias. In the wrong hands, this becomes lethal. Between 2005 and 2016 alone, social media played a role in the radicalization of more than half of the U.S. extremists studied [12]. Moreover, aided by bots and the novelty of inflammatory content, falsehoods spread quickly: one study found false news was roughly 70 percent more likely to be retweeted than truthful news [14]. With online content becoming the predominant source of information, these trends could continue to have disastrous consequences, from polarization to direct violence.
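To make the mechanism concrete, the short Python sketch below is a purely illustrative toy, not any platform's actual recommendation system; the post topics, engagement scores, and user history are invented. It shows how ranking a feed solely by predicted engagement tends to resurface whatever a user has already clicked on, which is the feedback loop behind the echo chambers described above.

```python
# Minimal, hypothetical sketch of an engagement-ranked feed (not any
# platform's real algorithm). It illustrates how optimizing purely for
# predicted engagement can narrow what a user sees.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: int
    topic: str              # e.g., "news", "sports", "conspiracy"
    base_engagement: float  # average engagement the post draws on its own


def predicted_engagement(post: Post, history: dict[str, int]) -> float:
    """Score a post higher when it is inherently engaging AND matches
    topics the user has clicked on before (the confirmation-bias loop)."""
    affinity = 1.0 + history.get(post.topic, 0)  # grows with each past click
    return post.base_engagement * affinity


def rank_feed(posts: list[Post], history: dict[str, int], k: int = 3) -> list[Post]:
    """Return the top-k posts by predicted engagement for this user."""
    return sorted(posts, key=lambda p: predicted_engagement(p, history), reverse=True)[:k]


if __name__ == "__main__":
    posts = [
        Post(1, "conspiracy", 0.9),
        Post(2, "sports", 0.7),
        Post(3, "news", 0.6),
        Post(4, "conspiracy", 0.8),
    ]
    history = {"conspiracy": 2}  # a user who has clicked conspiracy posts twice
    for post in rank_feed(posts, history):
        print(post.post_id, post.topic)
    # The output skews toward the topic already in the user's history,
    # illustrating how engagement optimization can deepen echo chambers.
```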

The government has taken notice of this growing issue, surprisingly from both sides of the aisle. In Congress, U.S. Senators Lindsey Graham (R-South Carolina) and Richard Blumenthal (D-Connecticut) introduced the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act), aiming to amend Section 230 to hold tech companies liable for online child sexual abuse content. The executive branch has taken action as well. During his tenure, President Trump issued an executive order that would have added a "good faith" prerequisite for an online platform to receive liability protection, meaning a computer service provider must have a good faith motive for moderating content [8]. However, the order was revoked under President Biden's administration because the prerequisite never specified what would count as adequate, prompting multiple lawsuits challenging its constitutionality [15]. President Biden has also publicly criticized Section 230 and called for reform, but has yet to propose specific changes.

The Supreme Court, however, has maintained the longstanding interpretation of Section 230, continuing the pattern of leniency toward online platforms. During the 2022-2023 term, the Court heard two related cases: Twitter, Inc. v. Taamneh and Gonzalez v. Google LLC. In 2015, an ISIS terrorist attack in Paris killed a U.S. citizen, Nohemi Gonzalez. Her family filed an action against Google, and relatives of a victim of a separate ISIS attack sued Twitter, both arguing that these sites enabled international terrorism by not acting against terrorist groups that used their platforms for algorithmic recruitment, propaganda, and coordinating attacks [16]. In Gonzalez, the district court and the U.S. Court of Appeals for the Ninth Circuit dismissed the claim by citing Section 230, but surprisingly, the Supreme Court took a different approach. In a unanimous opinion in favor of Twitter in Twitter, Inc. v. Taamneh, the Court reasoned that Twitter must have knowingly assisted ISIS in the attack to be guilty of aiding and abetting international terrorism, which it had not [2]. Unlike the lower courts, the Supreme Court avoided evaluating Section 230 entirely, writing in Gonzalez that it would "decline to address the application of Section 230" given how little evidence tied either tech giant to the attack [16]. The Court's reluctance was readily apparent during oral argument, when Justice Kagan observed that the justices are a court, not the "nine greatest experts on the internet," and thus ill-suited to rework Section 230's provisions [10].

Although tech giants will keep their liability immunity after the Supreme Court's decisions, the discourse around reforming Section 230 has not stopped. With few updates since 1996, Section 230's provisions simply have not kept pace with the ever-changing online environment. As algorithms advance and malicious exploitation continues online, Section 230 will keep facing public scrutiny. A solution, however, will not come easy. Section 230 has earned the title of "the twenty-six words that created the Internet," and not without reason. It was an early compromise that provided the legal protection necessary for online platforms to thrive and expand into what the Internet is today. Thus, any change would require walking a shaky tightrope, balancing the need to address modern concerns against the risk of disrupting the entire online environment as we know it. Fortunately, two proposed solutions have gained traction.


Solution #1: Affirmative Duty

An affirmative duty requires an entity to take some reasonable action to meet a specific obligation. This form of statutory language has also been called the "duty of care" method, since it inherently creates a legal obligation to "care"; otherwise, entities violate their duty [11]. For online platforms, Section 230 could be amended to add an affirmative duty to enforce content-moderation policies as a prerequisite for liability immunity. If a platform failed to meet that legal obligation, it would not receive the liability immunity described under Section 230(c)(1). Rather than universally defining online platforms as non-publishers, Section 230(c)(1) would require "reasonable steps" to address unlawful content before a website could be considered a non-publisher [11]. There is precedent for such a solution; legal scholars note that an affirmative duty already applies to traditional businesses, which have "a common law duty to take reasonable steps not to cause harm to their customers, [and] take reasonable steps to prevent harm" in order to maintain a safe environment [3]. Similarly, if websites foster an unsafe platform for users without putting forth reasonable preventive measures, they would fail their affirmative duty.

Although such direct statutory change has yet to occur, some state courts have already begun to fold a degree of affirmative duty into their interpretation of Section 230. For instance, the Texas Supreme Court ruled against Facebook in 2021 in a human trafficking case. The plaintiffs were victims of sex trafficking who had been exposed to traffickers through Facebook and Instagram. While Facebook cited Section 230 as liability protection for the trafficking content on its platforms, the plaintiffs argued that Facebook enabled trafficking and lacked adequate preventive measures, and the Texas Supreme Court rejected the notion that Section 230 protects websites from their "own misdeeds" [9]. Justice Jimmy Blacklock delivered the opinion, writing that Section 230 does not "create a lawless no-man's-land on the Internet" where all content is immune from liability [9]. Instead, the court interpreted Section 230 as shielding online platforms only from liability for users' content, not as universal protection for inaction and misdeeds that enable human trafficking. Establishing an affirmative duty would continue to hold websites liable for such inaction and poor moderation, making it a viable incentive for online platforms to implement measures against malicious user content.

Like any promising solution, however, an affirmative duty to moderate comes with stringent critiques. Online content qualifies as free speech under the First Amendment, making content moderation highly sensitive. Thus, the main fear of requiring more moderation is its potential impact on the First Amendment. For instance, in Stratton Oakmont v. Prodigy, the court faced the argument that moderation could produce a First Amendment chilling effect, otherwise known as collateral censorship, in which online platforms censor users to avoid being held liable for the results of users' speech [6]. The United States Court of Appeals for the Fourth Circuit confronted the same issue in the 1997 case Zeran v. America Online, Inc. There, Chief Judge Wilkinson explained how tying liability to moderation efforts would incentivize the removal of any flagged message, whether or not it was actually defamatory [6]. If any inflammatory content were missed, websites would become open to lawsuits. Not only does this collateral censorship harm online platforms, but it also has the potential to "freeze" users' freedom of expression entirely [6].

Another critique of applying an affirmative duty to Section 230 concerns how courts determine "cause-in-fact" in these cases. Cause-in-fact refers to the factual causation that must be proven to show an entity actually caused harm, and it was a significant point of contention during oral argument in Twitter, Inc. v. Taamneh [2]. Twitter was accused of failing to take down accounts and information that helped coordinate the terrorist attack. However, what would adequate action have looked like? Would Twitter have needed only to remove accounts? Should it also have had to remove retweets, suspend accounts that "liked" recruitment posts, and flag users who engaged with such content for longer durations? The ambiguity around reasonable moderation often raises more questions than answers. Without clear parameters or a bright line for appropriate action, online platforms argue that it is difficult to determine safe levels of content moderation. If they moderate too aggressively, platforms risk collateral censorship and First Amendment violations; if they moderate too little, they could breach their affirmative duty and lose liability immunity. In summary, an affirmative duty could be an effective way to incentivize moderation, but the solution raises valid logistical concerns; if these concerns aren't addressed, the government risks rampant collateral censorship and muddled court battles over liability.


Solution #2: Good Faith 

Good faith is a legal standard often used in contracts involving honest dealing, and its meaning can vary between sectors and deals. Generally, good faith entails an honest purpose, performance of one's duties, and no malicious intent toward the other party. Because good faith is rather abstract, any statutory framework built on it must be crafted as clearly as possible [3]. As previously discussed, President Trump pursued a good faith prerequisite for Section 230 through an executive order, which was later revoked in part because of its ambiguity. However, the Department of Justice (D.O.J.) determined that a good faith prerequisite could be workable if defined appropriately. By including a statutory definition of good faith in the context of content moderation, the D.O.J. reasoned, liability immunity could be confined to platforms moderating with reasonable measures.

Under the D.O.J.'s proposed guidelines, good faith would have four requirements. First, the online platform must make its terms of service and corresponding content moderation guidelines publicly available [17]. Second, any moderation efforts must be consistent with those publicly released policies [17]. Third, any content removal must have an objective justification that aligns with one of the categories outlined in Section 230(c)(2) [17]. Lastly, the online platform must provide a notice explaining the reasoning for the moderation to the publisher of the flagged content [17]. These guidelines ensure consistent and transparent content moderation while providing an achievable prerequisite for online platforms. Such a change would be a significant improvement over the current statutory language of Section 230(c)(2)(A), which simply refers to "any action voluntarily taken in good faith" without laying out criteria for good faith [1]. Additionally, the D.O.J. proposal ensures moderation isn't merely "voluntary"; it incentivizes content screening by making it a clear prerequisite rather than a recommendation.
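Read as a compliance checklist, the D.O.J.'s four proposed requirements lend themselves to a simple structured record. The Python sketch below is a hypothetical illustration only; the field names and the meets_good_faith check are the author's assumptions about how a platform might log a moderation decision so that each requirement is auditable, not language drawn from the proposal itself.

```python
# Hypothetical compliance record for the D.O.J.'s proposed good faith
# requirements. Field names and checks are illustrative assumptions only.

from dataclasses import dataclass

# Content categories enumerated in Section 230(c)(2)
OBJECTIONABLE_CATEGORIES = {
    "obscene", "lewd", "lascivious", "filthy",
    "excessively violent", "harassing", "otherwise objectionable",
}


@dataclass
class ModerationDecision:
    policy_url: str            # 1) publicly available terms / moderation policy
    policy_rule_applied: str   # 2) the published rule the action relied on
    category: str              # 3) objective justification tied to 230(c)(2)
    notice_sent_to_user: bool  # 4) explanation provided to the content's author


def meets_good_faith(decision: ModerationDecision) -> bool:
    """Check the four proposed prerequisites for a single moderation action."""
    return (
        bool(decision.policy_url)                           # policy is public
        and bool(decision.policy_rule_applied)              # action tied to policy
        and decision.category in OBJECTIONABLE_CATEGORIES   # objective justification
        and decision.notice_sent_to_user                    # notice to the poster
    )


if __name__ == "__main__":
    decision = ModerationDecision(
        policy_url="https://example.com/community-guidelines",
        policy_rule_applied="No harassment or targeted abuse",
        category="harassing",
        notice_sent_to_user=True,
    )
    print(meets_good_faith(decision))  # True
```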

Like the affirmative duty approach, the good faith approach has faced opposition. Primarily, critics argue that a good faith requirement would have little impact on tech giants' decision-making while crushing smaller platforms under constant litigation pressure. Given the vague nature of good faith, courts would be forced into complex inquiries to verify that online platforms meet each and every requirement. For tech giants with ample legal resources, this would be bearable. For smaller online services, however, the expense and length of such cases could be devastating.

More importantly, critics argue that there are more clearly definable statutory changes that would be effective against bad actors. Under the "carve-out" method, specific statutory exemptions could be added to Section 230 that explicitly define which types of content will not receive liability immunity [17]. The D.O.J. itself has recognized the effectiveness of carve-outs, suggesting liability exemptions for content related to child exploitation, terrorism, and cyberstalking [17]. Compared to enforcing good faith, courts can interpret carve-outs far more concretely, making liability decisions easier to reach. For instance, the 2008 case Fair Housing Council of San Fernando Valley v. Roommates.com involved online discrimination in processing roommate pairings [5]. Because the site required users to answer a series of questions about personal information and preferences, the United States Court of Appeals for the Ninth Circuit determined that Roommates.com itself enabled online discrimination. Despite Section 230 being on the books in 2008, Roommates.com did not receive its protection; the court effectively carved the site's own discriminatory conduct out of the statute's immunity [5]. Given the preexisting success of carve-outs, many critics have unsurprisingly argued that case-specific exemptions would be a far more effective amendment to Section 230(c)(2) than experimenting with good faith guidelines.


Conclusion 

These solutions may be imperfect, but their existence underscores the dire nature of the Section 230 dilemma. The legislation that created the Internet as we know it has also been blamed for many modern online harms, ranging from cyberbullying to terrorist recruitment. Statutory change may be necessary, but prudence can't be emphasized enough; any amendment to Section 230 could completely change the incentive landscape, for better or worse. Fortunately, there is precedent for change.

In 2018, Congress passed the Stop Enabling Sex Traffickers Act (S.E.S.T.A.) and the Fight Online Sex Trafficking Act (F.O.S.T.A.), which amended Section 230 to require online platforms to moderate content related to trafficking [13]. With their passage, enablement of or inaction toward sex trafficking by online platforms would violate federal sex trafficking laws. Although the progress made by these bills has been controversial, they signified a strong stand against universal Section 230 protection and demonstrated that statutory amendments can be achieved. Additionally, while the Supreme Court has been slow to reevaluate Section 230, state courts have already begun to examine online platforms' legal obligations and accountability. Currently, 40 states have filed suit against Meta, formerly known as Facebook, for worsening the youth mental health crisis, and Meta plans on invoking Section 230 once again [7]. To bypass Section 230 protections, the states hope to frame Meta's own features, rather than its users, as being at fault [7]. Yet, without a firm interpretation in place, the result remains unpredictable. In the best interest of millions, such discourse and progress should continue as our slow-moving legal system works to keep up with the dynamic online world. Meanwhile, the government can only continue walking the tightrope of tech tolerance.

 

[1] 47 U.S. Code § 230 - protection for private blocking and screening of offensive material, Legal Information Institute (2018), https://www.law.cornell.edu/uscode/text/47/230 (last visited Oct 25, 2023).

[2] Clarence Thomas, Twitter, Inc. v. Taamneh, Supreme Court of the United States (2023), https://www.supremecourt.gov/opinions/22pdf/21-1496_d18f.pdf (last visited Oct 25, 2023).

[3] Danielle Keats Citron, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, Fordham Law Review (2017), https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5435&context=flr (last visited Oct 25, 2023).

[4] Eli Rosenberg, Roberta’s, popular Brooklyn Restaurant, is pulled into “Pizzagate” hoax The New York Times (2016), https://www.nytimes.com/2016/12/07/nyregion/robertas-restaurant-brooklyn-threatened-fake-news-pizzagate-conspiracy.html (last visited Jan 31, 2024). 

[5] Fair Hous. Council v. Roommates.com, LLC, Global Freedom of Expression, Columbia University (2018), https://globalfreedomofexpression.columbia.edu/cases/fair-hous-council-v-roommates-com-llc/ (last visited Oct 25, 2023).

[6] Section 230 as First Amendment Rule, Harvard Law Review (2023), https://harvardlawreview.org/print/vol-131/section-230-as-first-amendment-rule/ (last visited Oct 25, 2023).

[7] Isaiah Poritz, Social Media Addiction Suits Take Aim at Big Tech's Legal Shield, Bloomberg Law News (2023), https://news.bloomberglaw.com/tech-and-telecom-law/hundreds-of-social-media-addiction-suits-face-first-legal-hurdle (last visited Oct 26, 2023).

[8] James Rosenfeld, Executive Order "Clarifies" (Rewrites) Online Speech Protections, Media Law Monitor, Davis Wright Tremaine (2020), https://www.dwt.com/blogs/media-law-monitor/2020/10/executive-order-online-speech-protections (last visited Oct 25, 2023).

[9] Jimmy Blacklock, In re Facebook, Inc. and Facebook, Inc. d/b/a Instagram, Relators, Supreme Court of Texas (2021), https://www.txcourts.gov/media/1452449/200434.pdf (last visited Oct 23, 2023).

[10] Kagan: Justices not “greatest experts” on the internet, The Washington Post (2023), https://www.washingtonpost.com/video/politics/kagan-justices-not-greatest-experts-on-the-internet/2023/02/21/cd28bdfc-9201-4d3c-a9f7-7c36e7b73c15_video.html (last visited Oct 25, 2023).

[11] Michael D. Smith & Marshall Van Alstyne, It’s Time to Update Section 230, Harvard Business Review (2021), https://hbr.org/2021/08/its-time-to-update-section-230 (last visited Oct 25, 2023).

[12] Michael Jensen, The Use of Social Media by United States Extremists, START, University of Maryland (2018), https://www.start.umd.edu/pubs/START_PIRUS_UseOfSocialMediaByUSExtremists_ResearchBrief_July2018.pdf (last visited Oct 25, 2023).

[13] Mike Wacker, How Congress Really Works: Section 230 and FOSTA, American Affairs Journal (2023), https://americanaffairsjournal.org/2023/05/how-congress-really-works-section-230-and-fosta/ (last visited Oct 23, 2023).

[14] Peter Dizikes, Study: On Twitter, False News Travels Faster than True Stories, MIT News, Massachusetts Institute of Technology (2018), https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308 (last visited Oct 25, 2023).

[15] Rebecca Klar, Biden revokes Trump-era Order Targeting Shield for website operators The Hill (2021), https://thehill.com/policy/technology/553895-biden-revokes-trump-era-order-targeting-shield-for-website-operators/ (last visited Jan 31, 2024). 

[16] Supreme Court of the United States, Reynaldo Gonzalez, et al., Petitioners v. Google LLC (2023), https://www.supremecourt.gov/opinions/22pdf/21-1333_6j7a.pdf (last visited Oct 25, 2023).

[17] U.S. Department of Justice, Section 230 — Nurturing Innovation or Fostering Unaccountability? (2020), https://www.justice.gov/file/1286331/download (last visited Oct 25, 2023).


