What is Section 230?
Code:
Section 230 is a provision of the 1996 Communications Decency Act, itself Title V of the Telecommunications Act of 1996. The Telecommunications Act of 1996 amended wide portions of the Communications Act of 1934 in an effort to update the old law and bring emerging Internet communications under the purview of the Federal Communications Commission. By and large, the 1996 law moved the Internet under regulations similar to those governing television, radio, or printed communications, replacing older platform-based policies with regulations on the content of speech rather than the medium of speech.
Section 230 itself is rather brief, particularly in the portions that matter. The key protections are contained in subsection (c), the portion dealing with liability. The relevant text for this discussion is (c)(1) and (c)(2)(A):
"(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected"
In more basic terms, a website may not be held liable for content hosted on its site that it did not directly publish. Ergo, the Washington Post may be sued or charged over its own published articles, but not over its comments section. Many take this to mean that moderation is not necessary, or that Section 230's liability shield is the cause of lax moderation. However, this is not the case.
A Tale of Two Publishers - History of Section 230
Code:
To understand Section 230's role, it's important to consider the context of its passage. Two major cases set the stage for Section 230: Cubby Inc v CompuServe Inc and Stratton Oakmont Inc v Prodigy Services Co. The facts of both cases were remarkably similar. CompuServe was an ISP that hosted its own online news forum, on which a newsletter called Rumorville USA published defamatory content about Cubby Inc and its competing newsletter. In court, CompuServe conceded that it had hosted the content and did not dispute that the content was defamatory, but demonstrated that it had contracted out responsibility for moderating the forum. Because it had no knowledge of the defamatory content, the District Court for the Southern District of New York did not hold it liable for the defamation. This was consistent with the 1959 Supreme Court case Smith v California, which held that a law prohibiting possession of "obscene" books could not be enforced against Smith, a bookstore owner, because Smith had no direct knowledge of the obscene content. The majority reasoned that holding otherwise would require bookstores to have direct knowledge of every book they sold, which would invariably lead to a massive reduction in the books available for sale, and that even where obscene content may be regulated, that regulation applies to publishers, not platforms.
A few years later, the New York Supreme Court took up Stratton Oakmont v Prodigy Services, which on its face had similar facts: Prodigy hosted an online forum, Money Talk, on which an anonymous user posted allegations that Stratton Oakmont and its president had engaged in fraud during an initial public offering. Stratton Oakmont sued Prodigy for defamation, even though Prodigy had not directly published the material. Here, however, the court ruled against Prodigy and held it liable. The basis was that Money Talk was directly moderated by Prodigy and had defined rules about content. This, according to the court, removed the shield: "Prodigy's conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice." In essence, a forum that moderated content -- perhaps by removing or flagging misinformation -- could be held liable for defamatory user-generated content, while a site with no moderation could not.
In response, Representatives Chris Cox (R-CA) and Ron Wyden (D-OR) introduced Section 230 as an amendment to the pending Telecommunications Act, which already contained language directing websites to control the transmission of "obscene" content to minors. Under the existing precedent, a site could either comply by limiting minors' access to "obscene" content or retain its shield from defamation and libel suits, but not both. Without Section 230, any site with user-generated content could have faced civil suits for defamation or libel contained solely within user-generated content. Section 230 was thus introduced to provide this shield and allow web forums to be moderated without opening the door to wider suits.
Section 230 had pretty immediate ramifications. Zeran v AOL in 1997 held that AOL could not be held liable for its failure to "timely remove" libelous content, with the court explaining: "Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted." This was not a blanket immunity, however. Fair Housing Council v Roommates.com held that Roommates.com was not protected by Section 230 against claims arising under the Fair Housing Act, because Roommates.com had expressly built the ability to filter listings by protected characteristics and thus helped create the allegedly discriminatory content. A similar argument drove the passage of FOSTA/SESTA, after which websites are no longer shielded from liability for "facilitating" sex trafficking. Further, Section 230 does not shield sites from every claim tied to harms they knew about: in Doe v Internet Brands, the Ninth Circuit held that Section 230 did not bar a negligent failure-to-warn claim against Internet Brands, which allegedly knew that users of its site were being targeted by a rape scheme and failed to warn the plaintiff.
Criticisms and Alternatives to Section 230
Code:
Recently, there's been a movement to revisit Section 230 from two directions. From the right, many push to require "neutrality" by publishers and to prohibit them from discriminating based on "political views". This is an odd and unfeasible path forward for numerous reasons, not least because a great deal of speech is protected as "political speech" yet would be tremendously unappealing for companies to host. Imagine, for example, a forum prohibited from removing content that calls for genocide or violence on the grounds that those views may be "protected political speech"; beyond that, even user-created and user-moderated forums would be held to those standards so long as the website was responsible for providing the moderation tools. From the left, there has been a push to require sites to be more active in removing hate speech or misinformation. However, merely removing Section 230's protections would largely return us to the pre-230 era, in which companies were encouraged not to moderate as a means of avoiding liability. Stronger proposals would likely cause sites to opt for no user-generated content at all, as monitoring and moderating every post to avoid any false content would be a massive and implausible undertaking.
While simply repealing Section 230's protections, or directing sites to remove all misleading or defamatory content, is implausible, US law does offer a model: the DMCA takedown notice. DMCA notices are notoriously misused and subject to their own set of problems, but they handled a similar issue concerning copyrighted material posted on platforms by users. Under a DMCA takedown notice, a publisher or platform is notified of copyrighted material on its site and (typically) given a summary of the reasons the copyright owner believes the material is infringing. Filers are also required to give some explanation of why fair use doctrine would not apply, and they can face civil suits for filing a takedown without a good-faith argument against fair use. A site faces liability for user-posted copyrighted material only if a takedown notice is issued and the site chooses to ignore it. Further, users may file counter-notices, presenting their own evidence of fair use, and the material can be reinstated without liability if the copyright holder fails to pursue the claim. This is a very imperfect system, but it could at least serve as a guide for revising Section 230: a way to compel sites to address misinformation without unduly burdening them. It is roughly the approach taken in the EU's Directive 2000/31/EC (the broad E-Commerce Directive), which grants liability protection provided the host has no knowledge of the illegal content. The obvious rejoinder is the Streisand effect: there's little reason to think that if, for example, Twitter removed defamatory tweets, the misinformation would end there.
As to matters of hate speech, there's really nothing to be done. Sites cannot be compelled to remove such speech any more than the US government can prohibit hate speech directly, and that power is practically nonexistent. Sites are under no obligation to remove it, nor any obligation to keep it, and proposals in either direction are unlikely to change that, except that a complete elimination of Section 230 would give sites strong incentives to revert to no moderation at all. Other attempts to force sites to moderate extremism, or to curb radicalization, are likely to be as fruitless as attempting to do so in any public forum.
Finally, as to charges of political bias, there's really no case to be made here either. Removing protections for sites that demonstrate "bias" would run into a number of very difficult legal challenges. The first is that legislation requiring moderation to be "politically neutral," or forbidding discrimination based on "political views," would have difficulty avoiding being struck down for vagueness. Under US constitutional law, a law must provide "explicit standards" for interpreting what is and is not a violation. If courts must decide and explain what counts as demonstrated political bias, or similar language, the law will be difficult to enforce or uphold, and finding a workable definition of political bias is pretty implausible.
Conclusions (TL;DR)
Code:
Section 230 was passed with the explicit intent of shielding websites from liability for defamation claims should they moderate. Without Section 230, liability shielding for defamation would exist only for sites that do not moderate, so a site that takes efforts to stop misinformation would be more liable, not less, for user-generated content. There are plausible alternatives to Section 230, and ways to change it to make companies more responsive to taking down false or misleading content, but these still require Section 230's basic framework. Removing it wholesale would lead either to the elimination of forums or to the elimination of moderating efforts within them, far more than it would address misinformation.