The Business & Technology Network
Helping Business Interpret and Use Technology
Missouri AG Thinks Supreme Court Ruling Lets Him Control Social Media Moderation (It Doesn’t)

DATE POSTED:May 15, 2025

Missouri Attorney General Andrew Bailey apparently thinks he gets to be editor-in-chief of every social media platform. In his latest attack on free speech rights, Bailey has announced a “first-in-the-nation rule” that would force social media companies to let users choose third-party content moderators rather than using the platforms’ own moderation systems.

There’s just one tiny problem: this completely ignores what the Supreme Court explicitly said about government control of content moderation just months ago in Moody v. NetChoice. Even crazier, Bailey claims that his new rule is based on the ruling in Moody.

As a reminder, in last year’s Moody v. NetChoice ruling, the Justices made it quite clear that the First Amendment protects social media content moderation decisions, and that the state has no business telling companies how to moderate. Justice Kagan’s majority opinion, joined by Chief Justice Roberts and Justices Kavanaugh and Barrett, made clear that what social media companies do in content moderation is quintessential protected First Amendment activity, no different from newspaper editors choosing what to publish:

To the extent that social media platforms create expressive products, they receive the First Amendment’s protection. And although these cases are here in a preliminary posture, the current record suggests that some platforms, in at least some functions, are indeed engaged in expression. In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. And while much about social media is new, the essence of that project is something this Court has seen before. Traditional publishers and editors also select and shape other parties’ expression into their own curated speech products. And we have repeatedly held that laws curtailing their editorial choices must meet the First Amendment’s requirements. The principle does not change because the curated compilation has gone from the physical to the virtual world. In the latter, as in the former, government efforts to alter an edited compilation of third-party expression are subject to judicial review for compliance with the First Amendment.

In short, content moderation is protected by the First Amendment, and states are not able to simply ignore that. Indeed, the ruling had even more explicit words for the Fifth Circuit, which had ruled earlier (in an absolutely nutty ruling) that states could easily pass laws that told websites how to moderate. The Supreme Court made it clear that such a claim was utter nonsense:

But it is necessary to say more about how the First Amendment relates to the laws’ content-moderation provisions, to ensure that the facial analysis proceeds on the right path in the courts below. That need is especially stark for the Fifth Circuit. Recall that it held that the content choices the major platforms make for their main feeds are “not speech” at all, so States may regulate them free of the First Amendment’s restraints. 49 F. 4th, at 494; see supra, at 8. And even if those activities were expressive, the court held, Texas’s interest in better balancing the marketplace of ideas would satisfy First Amendment scrutiny. See 49 F. 4th, at 482. If we said nothing about those views, the court presumably would repeat them when it next considers NetChoice’s challenge. It would thus find that significant applications of the Texas law—and so significant inputs into the appropriate facial analysis—raise no First Amendment difficulties. But that conclusion would rest on a serious misunderstanding of First Amendment precedent and principle. The Fifth Circuit was wrong in concluding that Texas’s restrictions on the platforms’ selection, ordering, and labeling of third-party posts do not interfere with expression. And the court was wrong to treat as valid Texas’s interest in changing the content of the platforms’ feeds. Explaining why that is so will prevent the Fifth Circuit from repeating its errors as to Facebook’s and YouTube’s main feeds. (And our analysis of Texas’s law may also aid the Eleventh Circuit, which saw the First Amendment issues much as we do, when next considering NetChoice’s facial challenge.) But a caveat: Nothing said here addresses any of the laws’ other applications, which may or may not share the First Amendment problems described below.

Indeed, the Supreme Court said that the Fifth Circuit’s attempt to block social media companies from moderating in a particular way directly would violate the First Amendment:

Contrary to what the Fifth Circuit thought, the current record indicates that the Texas law does regulate speech when applied in the way the parties focused on below—when applied, that is, to prevent Facebook (or YouTube) from using its content-moderation standards to remove, alter, organize, prioritize, or disclaim posts in its News Feed (or homepage). The law then prevents exactly the kind of editorial judgments this Court has previously held to receive First Amendment protection. It prevents a platform from compiling the third-party speech it wants in the way it wants, and thus from offering the expressive product that most reflects its own views and priorities. Still more, the law—again, in that specific application—is unlikely to withstand First Amendment scrutiny. Texas has thus far justified the law as necessary to balance the mix of speech on Facebook’s News Feed and similar platforms; and the record reflects that Texas officials passed it because they thought those feeds skewed against politically conservative voices. But this Court has many times held, in many contexts, that it is no job for government to decide what counts as the right balance of private expression—to “un-bias” what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others.

So, you’d have to be pretty fucking bad at reading to think that this case somehow blesses the idea that the government can decide how social media companies can moderate.

Enter Missouri Attorney General Andrew Bailey. Bailey is no stranger to attacking the free speech rights of those he disagrees with.

Last week, Bailey announced a new rule, based on his reading of the Moody decision, that would effectively make him Missouri’s Chief Content Moderation Officer. Under the guise of “protecting free speech,” Bailey is attempting to use Missouri’s consumer protection laws to force social media companies to let users bypass their moderation systems entirely:

Missouri Attorney General Andrew Bailey today announced the filing of a first-in-the-nation rule under the Missouri Merchandising Practices Act that targets corporate censorship and secures freedom of expression for social media users. The rule requires Big Tech platforms to allow Missouri users to choose their own content moderators rather than being forced to rely on the biased algorithms of monopolistic tech giants.

“Big Tech oligarchs have manipulated the content Missourians see online and silenced voices they don’t like. That ends now,” said Attorney General Bailey. “With this rule, Missouri becomes the first state in America to take real, enforceable action against corporate censorship. I’m using every tool to ensure Missourians—not Silicon Valley—control what they see on social media.”

The rule—codified as 15 CSR 60-19—clarifies that it is an unfair, deceptive, or otherwise unlawful practice for social media platforms to deny users the ability to choose an independent content moderator. Platforms must now provide a choice screen upon account activation and at regular intervals, must not favor their own moderation tools, and must allow full interoperability for outside moderators chosen by users.

If this sounds familiar, it should. It’s exactly the kind of government interference in content moderation that the Supreme Court just said states can’t do.

Here’s the truly ironic part: third-party content moderation is actually a great idea. I should know — I wrote a pretty well-known paper advocating for exactly that approach, and I now serve on the board of Bluesky, currently the only major social platform embracing this model.

But there’s a world of difference between believing companies should adopt better moderation practices and claiming they’re breaking the law by not doing so. Bailey’s attempt to force this change through government mandate is not just legally backwards — it’s exactly the kind of state interference in editorial decisions that the First Amendment was designed to prevent.

Bailey’s claim that Moody somehow supports his position is particularly brazen. He specifically cites the Court’s mention of “competition laws” as justification:

This regulation is grounded in the Supreme Court’s guidance from Moody v. NetChoice, which recognized the authority of state governments to enforce competition laws in the interest of free expression.

But Bailey either didn’t read or deliberately ignored the actual context. The Court only mentioned competition laws to explicitly contrast them with content moderation mandates. Here’s what they actually said:

the government cannot get its way just by asserting an interest in improving, or better balancing, the marketplace of ideas. Of course, it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is the whole project of the First Amendment. And the government can take varied measures, like enforcing competition laws, to protect that access. Cf., e.g., Turner I, 512 U. S., at 647 (protecting local broadcasting); Hurley, 515 U. S., at 577 (discussing Turner I ). But in case after case, the Court has barred the government from forcing a private speaker to present views it wished to spurn in order to rejigger the expressive realm. The regulations in Tornillo, PG&E, and Hurley all were thought to promote greater diversity of expression. See supra, at 14–16. They also were thought to counteract advantages some private parties possessed in controlling “enviable vehicle[s]” for speech. Hurley, 515 U. S., at 577. Indeed, the Tornillo Court devoted six pages of its opinion to recounting a critique of the then-current media environment—in particular, the disproportionate “influen[ce]” of a few speakers—similar to one heard today (except about different entities). 418 U. S., at 249; see id., at 248–254; supra, at 14–15. It made no difference. However imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others.

The Court couldn’t be more clear: while states can enforce genuine competition laws, they absolutely cannot use that power as a backdoor to control content moderation decisions. They even added a footnote specifically addressing attempts to twist competition law precedent (like Turner, which conservatives have long despised) to justify content moderation mandates:

Texas claims Turner as a counter-example, but that decision offers no help to speak of. Turner did indeed hold that the FCC’s must-carry provisions, requiring cable operators to give some of their channel space to local broadcast stations, passed First Amendment muster. See supra, at 15. But the interest there advanced was not to balance expressive content; rather, the interest was to save the local-broadcast industry, so that it could continue to serve households without cable. That interest, the Court explained, was “unrelated to the content of expression” disseminated by either cable or broadcast speakers. Turner I, 512 U. S. 622, 647 (1994). And later, the Hurley Court again noted the difference. It understood the Government interest in Turner as one relating to competition policy: The FCC needed to limit the cable operators’ “monopolistic,” gatekeeping position “in order to allow for the survival of broadcasters.” 515 U. S., at 577. Unlike in regulating the parade—or here in regulating Facebook’s News Feed or YouTube’s homepage—the Government’s interest was “not the alteration of speech.” Ibid. And when that is so, the prospects of permissible regulation are entirely different.

So either Missouri AG Andrew Bailey cannot read a basic Supreme Court decision, or he assumes no one else can.

Perhaps the most telling irony in all of this? If Bailey succeeds, his rule would force his good friend Elon Musk — for whom Bailey has enthusiastically conducted censorial investigations designed to chill speech — to allow third-party moderation on ExTwitter. Something tells me neither Bailey nor Musk has thought through the implications of trying to become Missouri’s content moderation czar.