AI and Speech

Supreme Court jurisprudence suggests the First Amendment would protect generated content

Historically, the Supreme Court has taken an expansive view of the First Amendment, ruling in a series of cases that constitutionally protected speech includes content as varied as video games and the creation and transmission of cable TV programming.

With automated systems now curating — and even generating — much of the content presented online and in social media feeds, the Court has begun to address whether editorial decisions made by algorithms constitute speech.

Stuart M. Benjamin, the William Van Alstyne Distinguished Professor of Law and a scholar of the First Amendment, administrative law, and telecommunications, examined how Supreme Court jurisprudence might treat algorithmic outputs in an article published more than a decade ago. In “Algorithms and Speech,” 161 University of Pennsylvania Law Review 1445 (2013), he argues that, with a few exceptions, automated decisions about what content appears on web pages and in search engine results are constitutionally protected speech.

“In every case in which the Court has applied the First Amendment, abridgement of substantive communication has been the issue,” he writes. “So long as humans are making substantive editorial decisions, inserting computers into the process does not eliminate the communication via that editing.”

Benjamin says his argument that algorithmic output is analogous to previously affirmed forms of speech still stands, and that the output of AI tools like ChatGPT is no different from what humans could produce on their own, given enough time.

“This is not artificial general intelligence. Large language models are not reasoning on their own. Fundamentally, these are still programs that are created by humans,” Benjamin says. “I think that if there are attempts at regulating large language models like ChatGPT, then you’re going to have the very same First Amendment questions that would apply if you’re regulating the algorithms that Google uses. So I think that there is going to be a hurdle to government regulation.” 

In a pair of cases heard in the October 2022 term, Gonzalez v. Google and Twitter v. Taamneh, the Court rejected claims that the technology companies were liable for aiding and abetting deadly terrorist attacks that the plaintiffs alleged stemmed from content posted on the companies’ platforms by third parties and amplified through their algorithms. Despite the cases’ First Amendment implications, the Court declined to address whether the content was protected speech, focusing instead on culpability under anti-terrorism laws. But its unanimous opinion did signal a reluctance to impose restrictions in an area that is, as yet, unregulated.

“Defendants’ mere creation of their media platforms is no more culpable than the creation of email, cell phones, or the internet generally,” Associate Justice Clarence Thomas wrote in the unanimous opinion. “And defendants’ recommendation algorithms are merely part of the infrastructure through which all the content on their platforms is filtered.”

In the October 2023 term, the Court will hear two cases in which lower courts split on whether public officials who block critics from their personal social media accounts are engaging in state action subject to the First Amendment. In O’Connor-Ratcliff v. Garnier, members of a California school board who used their personal social media accounts to communicate about their work blocked parents from viewing and commenting on their pages. In Lindke v. Freed, a city manager in Michigan deleted a Facebook user’s comments and blocked him from commenting further after he criticized the official’s policy decisions.

The Court will also hear two cases involving state regulation of social media. NetChoice v. Paxton and Moody v. NetChoice involve 2021 laws passed in Texas and Florida, respectively, that aim to prevent “deplatforming” of individuals by curtailing the editorial discretion of large platforms like Facebook and X (formerly Twitter). The Texas law, which was upheld by the Fifth U.S. Circuit Court of Appeals, prohibits platforms from discriminating based on viewpoint, with a few exceptions, and imposes transparency requirements. The Florida statute, blocked by the Eleventh Circuit, requires an extensive explanation of any decision to censor a user and prohibits platforms from banning political candidates.

While both laws have political undertones — they were passed by states with strongly conservative legislatures — Benjamin predicts that the Court will agree with the Eleventh Circuit that they are unconstitutional, in keeping with its historically broad view of protected speech and its reluctance to regulate editorial decision-making, whether by human or machine.

“This will be the most grappling with these First Amendment issues involving editing that the justices have done in a while, and it will give us a sense as to where their jurisprudence may be heading,” Benjamin says. “More specifically, although this is not a case about AI, it will give us some insights about how they would understand regulation of AI.” 
