The Business & Technology Network
Helping Business Interpret and Use Technology

Elon Musk Wins An Actual First Amendment Fight, Blocks Bad California Transparency Law

DATE POSTED: September 5, 2024

We’ve explored multiple times now why Elon Musk is no friend to free speech. He has regularly threatened and sued others for their free speech, and indeed, he has an ever-growing list of such lawsuits. But every once in a while, he gets one right, and this time, he’s helped get a bad California law declared unconstitutional. Sometimes, the worst person you know really does make a good point.

For the past two years, we’ve been among the very few commentators calling out the problems of California’s AB 587. It was positioned as a “social media transparency” bill, and many people insist that it should be fine because “transparency is good.” But as we pointed out, it would make it nearly impossible to deal with bad actors. Among other things, it would limit the ability of websites to adjust their trust & safety practices, because each change would open the site up to legal risk for “not living up to” the trust & safety policies they had sent the Attorney General earlier.

Also, mandated transparency is generally unconstitutional, save for a few limited circumstances. As we’ve discussed, most people point to the Zauderer case from 1985, which held that the government could mandate certain disclosures of purely factual, uncontroversial information about advertising. But states have been desperately trying to stretch that holding to mean that transparency laws can mandate the disclosure of almost anything.

Indeed, our one big complaint with the 11th Circuit’s ruling that found nearly all of Florida’s social media content moderation law unconstitutional was the bit where they said “except for the transparency part, which is fine under Zauderer.”

After California passed AB 587 (ignoring folks like myself and Eric Goldman pointing out its problems), a bunch of random folks, including the Babylon Bee and Tim Pool, sued over the law. But the suit was filed by a lawyer without much understanding of the issues, and the complaint was bad. That lawsuit was easily dismissed, with the court noting that the lawyer failed to show how the plaintiffs had actual standing to bring the lawsuit.

It was clear that a much better plaintiff would be an actual social media platform that was covered by the law. However, most tech companies are increasingly afraid to challenge these kinds of laws, as many of the larger ones realize they can comply while the laws make things harder for upstart competitors.

In the meantime, though, we’ve already seen groups using 587 to try to pressure companies into changing their moderation practices. This kind of effort reveals that the true intent of 587 is not transparency, but rather using that transparency as a weapon to pressure companies to moderate categories of content that the state of California doesn’t like.

And thus, this was one rare case where Elon Musk came to the rescue of the First Amendment. I’m guessing that the folks at the Babylon Bee probably asked him to challenge the law, after their own lawsuit failed. Musk brought in some significant First Amendment firepower in Floyd Abrams and actually filed what seemed like a very strong lawsuit challenging the law.

Unfortunately, the district court didn’t buy it and dismissed the challenge. However, ExTwitter appealed, and this week the Ninth Circuit (with the same panel that threw out California’s Age Appropriate Design code, since both cases were heard the same day, one after another) has now similarly overturned the lower court’s ruling and noted that 587 appears to be unconstitutional.

For the reasons below, we hold that the Content Category Report provisions likely compel non-commercial speech and are subject to strict scrutiny, under which they do not survive. We reverse the district court on that basis

The court admits that laws regulating commercial speech are more likely to be allowed, but that doesn’t mean that lawmakers can go crazy. Here, they seemed to go a little crazy (as we predicted).

State legislatures do not have “freewheeling authority to declare new categories of speech outside the scope of the First Amendment.”

And here, they point out that the requirements of AB 587 aren’t even really about commercial speech at all. They’re really about moderating everyone’s speech, but were framed in a way to make courts think they’re about commercial speech. But this panel sees through the ruse:

Here, the Content Category Reports are not commercial speech. They require a company to recast its content-moderation practices in language prescribed by the State, implicitly opining on whether and how certain controversial categories of content should be moderated. As a result, few indicia of commercial speech are present in the Content Category Reports.

First, the Content Category Reports do not satisfy the “usual[] defin[ition]” of commercial speech—i.e., “speech that does no more than propose a commercial transaction.” See United Foods, Inc., 533 U.S. at 409; see also IMDb.com Inc. v. Becerra, 962 F.3d 1111, 1122 (2020) (“Because IMDb’s public profiles do not ‘propose a commercial transaction,’ we need not reach the Bolger factors.”). The State appears to concede as much in its answering brief.

To the extent our circuit has recognized exceptions to that general rule, those exceptions are limited and are inapplicable to the Content Category Reports here. For example, as identified by the First Amendment and Internet Law Scholars amici, we have characterized the following speech as commercial even if not a clear fit with the Supreme Court’s above articulation: (i) targeted, individualized solicitations, see Nationwide Biweekly Admin., Inc. v. Owen, 873 F.3d 716, 731–32 (9th Cir. 2017); (ii) contract negotiations, see S.F. Apartment Ass’n v. San Francisco, 881 F.3d 1169, 1177–78 (9th Cir. 2018); and (iii) retail product warnings, see CTIA II, 928 F.3d at 845. Though it does not directly or exclusively propose a commercial transaction, all of this speech communicates the terms of an actual or potential transaction. But the Content Category Reports go further: they express a view about those terms by conveying whether a company believes certain categories should be defined and proscribed.

It’s that last bit that really catches the attention of the judges. They point out that while it’s true that social media companies have terms of service that might be commercial speech, the transparency mandates of 587 require them to take a stand on specific types of content, some of which may be politically sensitive.

The Content Category Report provisions would require a social media company to convey the company’s policy views on intensely debated and politically fraught topics, including hate speech, racism, misinformation, and radicalization, and also convey how the company has applied its policies. The State suggests that this requirement is subject to lower scrutiny because “it is only a transparency measure” about the product. But even if the Content Category Report provisions concern only transparency, the relevant question here is: transparency into what? Even a pure “transparency” measure, if it compels non-commercial speech, is subject to strict scrutiny… That is true of the Content Category Report provisions. Insight into whether a social media company considers, for example, (1) a post citing rhetoric from on-campus protests to constitute hate speech; (2) reports about a seized laptop to constitute foreign political interference; or (3) posts about election fraud to constitute misinformation is sensitive, constitutionally protected speech that the State could not otherwise compel a social media company to disclose without satisfying strict scrutiny. The mere fact that those beliefs are memorialized in the company’s content moderation policy does not, by itself, convert expression about those beliefs into commercial speech. As X Corp. argues in its reply brief, such a rule would be untenable. It would mean that basically any compelled disclosure by any business about its activities would be commercial and subject to a lower tier of scrutiny, no matter how political in nature. Protection under the First Amendment cannot be vitiated so easily.

The Ninth Circuit calls out the lower court for basically skipping the hard work of this analysis:

The district court performed, essentially, no analysis on this question. In fact, the district court acknowledged that the Content Category Reports “do not so easily fit the traditional definition of commercial speech” as they “are not advertisements, and social media companies have no particular economic motivation to provide them.”

The court then points out that the Zauderer test here clearly does not apply. It does note that both the Fifth and the Eleventh Circuits found the Zauderer test useful in determining that transparency provisions were fine in the Texas and Florida social media laws, but the Ninth Circuit is unimpressed.

But neither the Fifth nor Eleventh Circuit dealt with speech similar to the Content Category Reports. Unlike Texas HB 20 or Florida SB 7072, the Content Category Report provisions compel social media companies to report whether and how they believe particular, controversial categories of content should be defined and regulated on their platforms. Neither the Texas nor Florida provisions at issue in the NetChoice cases require a company to disclose the existence or substance of its policies addressing such categories

I actually think the transparency provisions of both the Texas and Florida laws have real problems that the courts ignored, so it’s a little disappointing that the Ninth Circuit here is trying to distinguish them rather than saying that, actually, those other courts got it wrong.

The end result, though, is that strict scrutiny applies to this law, and it can’t pass it.

At minimum, the Content Category Report provisions likely fail under strict scrutiny because they are not narrowly tailored. They are more extensive than necessary to serve the State’s purported goal of “requiring social media companies to be transparent about their content-moderation policies and practices so that consumers can make informed decisions about where they consume and disseminate news and information.” Consumers would still be meaningfully informed if, for example, a company disclosed whether it was moderating certain categories of speech without having to define those categories in a public report. Or, perhaps, a company could be compelled to disclose a sample of posts that have been removed without requiring the company to explain why or on what grounds

And thus the law is put on hold. The case is remanded back to the lower court to determine if other aspects of the law are severable and can survive.

And, yes, this is a rare case where Elon Musk actually stood up for free speech and won (at least so far). Hopefully this leads to other courts looking much more skeptically at transparency mandates as well.