Can States Control Your Social Media Scroll? A Supreme Court Win for the First Amendment

Moody v. NetChoice, LLC, 144 S. Ct. 2383 (July 1, 2024)

In Moody v. NetChoice, LLC, the U.S. Supreme Court held that private social media platforms are protected by the First Amendment when they publish third-party content, including when they exercise discretion to promote, demote, block, or assign labels to third-party posts. This “editorial process” is entitled to First Amendment protection even when it is deployed through algorithms, so long as the algorithm is an expression of a human-designed standard.

The Supreme Court in Moody addressed laws from two states, Florida and Texas, that each sought to limit social media companies’ ability to moderate content by filtering, prioritizing, and labeling posts on their platforms. The laws were enacted in response to legislators’ claims that social media outlets were favoring liberal posts over conservative ones. NetChoice, an internet trade association, challenged both laws on their face on First Amendment grounds, and the district court in each state issued a preliminary injunction.

But the intermediate appellate courts handled those injunctions differently. Reviewing the Florida law, the U.S. Court of Appeals for the Eleventh Circuit upheld the injunction, relying on First Amendment standards and precedent. By contrast, the U.S. Court of Appeals for the Fifth Circuit reversed the Texas injunction, holding that the social media platforms’ actions are not speech and thus do not implicate the First Amendment at all. The Supreme Court granted certiorari to resolve this circuit split.

Justice Kagan, writing for a 6-3 majority, first addressed the procedural impact of NetChoice’s decision to assert facial challenges against both laws, emphasizing the high standard a plaintiff must meet to succeed on a facial constitutional challenge to a state law. The majority left the injunctions against the two state laws in place and remanded the cases for further analysis under the proper standard for such a facial challenge.

The Supreme Court explained that facial challenges rooted in the First Amendment require the trial court to determine whether the law’s unconstitutional applications are substantial compared to its constitutional applications. To make that determination, the lower courts should have evaluated the full scope of the laws and their respective applications. But the lower courts in Moody focused solely on the application of the laws to the largest social media platforms and wrongly “treated these cases more like as-applied claims than like facial ones.” So, the lower courts will now be required to analyze the full scope of the two laws on remand.

Significantly, and despite words of caution in separate concurring opinions by Justices Jackson, Thomas, and Alito, the Supreme Court did not end its analysis there. Rather, the majority opinion went on to provide important guidance regarding the application of the First Amendment to social media content moderation. The majority was also sharply critical of the Fifth Circuit’s approach, expressly noting that the court of appeals erred both in ruling that the content choices made by the major platforms did not constitute speech at all and in ruling that states were free to regulate social media content curation without any First Amendment constraints. This guidance, Justice Kagan explained, was necessary to ensure the Fifth Circuit did not repeat the same errors on remand.

Justice Kagan’s First Amendment analysis relied on a line of cases beginning with Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974). In that case, the Supreme Court analyzed a Florida law requiring newspapers to give a political candidate a right to reply when a newspaper criticized that candidate. The Supreme Court invalidated the Florida law because forcing a paper to print what it would otherwise not print intruded on the First Amendment right to editorial discretion. Following the same reasoning, Justice Kagan concluded that a state cannot force private social media companies to feed third-party digital content to their users by mandating changes to their content-moderation standards.

To guide the lower courts, Justice Kagan identified three First Amendment principles that must govern the analysis: (1) when an entity is involved in selecting and curating speech for publication, the First Amendment protects that entity from regulations that require it to accommodate speech it would prefer to exclude; (2) this governing principle does not change just because the entity includes most forms of speech and excludes only a few; and (3) the government cannot compel speech by asserting an interest in improving or better balancing the marketplace of ideas. In other words, “a State may not interfere with private actors’ speech to advance its own vision of ideological balance.”

The Supreme Court left open the possibility that some types of automated content moderation may lack First Amendment protection if the process is not guided by human instruction or standards. In her concurring opinion, Justice Barrett stated that AI-generated curation, or an algorithm that simply feeds a user what it wants without reliance upon human-devised standards, may lack First Amendment protection. Nonetheless, the Court explained that the editorial practices in the record before it fell well within the kind of “expressive” decision-making that deserves First Amendment protection.

Key Takeaways

  • The First Amendment protects private social media platforms’ editorial discretion to select and moderate the third-party posts viewed by their users.
  • The First Amendment protects social media platforms’ curation of content even when that curation is guided by algorithms, so long as the algorithm represents an expression of independent standards designed by humans.
  • States will likely continue passing laws attempting to regulate online platform content curation, particularly given the high bar established for facial challenges.
  • The Supreme Court left open the question whether AI content moderation or other digital content curation that is not guided by human standards is protected by the First Amendment.
  • Future challenges to laws regulating social media content moderation will likely target the law as applied, rather than through a facial challenge.
  • While the Supreme Court seemingly closed the door on social media content moderation laws that are designed to suppress or compel speech, the door remains open for laws to be passed that regulate platforms based upon other interests, such as consumer safety.

Frost Brown Todd’s appellate advocates have a proven track record of success in appeals involving questions of first impression, bet-the-company judgments, and decisions that shape the rules under which our clients will operate well into the future. For more information, please contact the authors or any attorney with the firm’s Appellate Practice Group.


Want more U.S. Supreme Court coverage?

Explore our full wrap-up and analysis of the most consequential rulings for businesses and industries during the 2023-24 U.S. Supreme Court term.
