The Business & Technology Network
Helping Business Interpret and Use Technology

How To Misrepresent A Supreme Court Hearing: National Review Edition

DATE POSTED: March 21, 2024

I wrote a long post on Monday about the oral arguments in the Murthy v. Missouri case. I highlighted how skeptical most of the Justices seemed regarding the arguments from the states, especially given the extensive problems in the record, which multiple Justices picked up on. Most other legal commentators reached a similar conclusion: the Supreme Court seemed skeptical of the states’ argument.

So I was kind of surprised to see Brent Skorup from the Cato Institute writing a piece for the National Review that suggested the Supreme Court was poised to dismantle the “censorship-industrial complex” and claimed that the Justice Department defended said “censorship” in the oral arguments.

Because none of that actually happened.

I had certainly been curious to see how those who had been triumphantly trumpeting this case as proof of a grand “censorship industrial complex” would respond to how the hearing went. Most seem to be in varying states of denial. Many seem angry, insisting that the Justices just didn’t understand or didn’t look at the details (when the reality appears to be the opposite).

Some, instead, focused on the very problematic comments from Justice Ketanji Brown Jackson that seemed to suggest she was leaning way too far in the other direction. These comments suggested that maybe the government should have more leeway in pressuring private companies to take down speech. As we called out in our original writeup, this line of questioning did seem extremely problematic. However, there is a more generous interpretation: that she was noting that the determining factor is if it can pass strict scrutiny or not, and the argument from the states didn’t even leave room for that possibility. That is, it wasn’t necessarily support for coercive behavior, but rather pointing out that there could, in theory, be cases where coercive power is allowed if it passes strict scrutiny (I have problems with that theory, but if she’s just pointing out that Missouri’s test doesn’t leave that open, it’s a fair point).

But Skorup’s NRO piece is just bizarrely disconnected from reality. It reads like something written by someone who had not actually read any of the briefing in the case, nor listened to the oral arguments, but had simply imagined what might have happened based on a very distorted, and not very factual, understanding of the case.

First of all, the framing is simply incorrect. It starts out like this:

In oral arguments on Monday, the U.S. Department of Justice urged the Supreme Court to let government officials, including federal law-enforcement agencies, tell social-media company officials, in secret, what content to delete.

Except… that’s not even close to true. The DOJ’s position was actually that they had not told social media companies what to delete. They expressly admitted that if they had done that, it would be a First Amendment violation. Like, literally, here is what the Principal Deputy Solicitor General said in the oral arguments:

…we don’t say that the government can coerce private speakers. That is prohibited by the First Amendment.

The DOJ explicitly admitted that if it was trying to coerce private speakers, that would violate the First Amendment. They repeatedly pointed out that there was no actual evidence presented in the case that it had coerced anyone. So it’s both bizarre, and wrong, to claim that the DOJ “urged the Supreme Court to let government officials… tell social-media company officials, in secret, what content to delete.”

No one made that argument at all. Skorup and the National Review are lying to their readers.

And it gets worse.

The plaintiffs presented damning evidence, including internal government emails and testimony from government officials. They documented federal officials’ immense pressure on social-media companies, including profane emails and vague threats from White House officials to Facebook officials to remove vaccine “disinformation,” as well as messages from the FBI to several social-media companies with spreadsheets of accounts and content that the agency wanted removed. The FBI followed up on its requests at quarterly meetings with companies, keeping internal notes of which companies were complying with FBI demands. Perhaps the messages were innocent — we may never know because the FBI used encrypted communications and has not revealed their contents.

This is not what happened at all. Again, we’ve gone through pages and pages of evidence presented in this case and, as we’ve highlighted over and over again, there was no “damning evidence.” Instead, there were situations where the plaintiffs took things out of context, or completely misrepresented the context.

The whole thing about the FBI sending “spreadsheets of accounts and content that the agency wanted removed” did not happen in the way presented. That would be clear to anyone who had looked at the evidence or listened to the oral arguments. Principal Deputy Solicitor General Brian Fletcher explained the spreadsheet situation during the arguments:

… for example, when the FBI would send communications to the platforms saying, for your information, it has come to our attention that the following URLs or email addresses or other selectors are being used by malign foreign actors like Russian intelligence operatives to spread disinformation on your platforms, do with it what you will.

Indeed, as Yoel Roth later described in writing about this, this kind of information sharing was exactly what it appeared to be: “we’ve found these things, do what you want with it if you find it useful.” It was not seen as even remotely coercive, nor as a list of “what accounts to remove.” Efforts at dealing with large-scale foreign intelligence operations frequently meant tracking the content to identify the source of a foreign influence campaign, not just taking content down upon receipt.

In several others, the FBI passes lists of accounts that they “believe are violating your terms of service” or “may be subject any actions [sic] deemed appropriate by Twitter.” The FBI fastidiously—and I would argue conspicuously, in the evidence presented—avoids both assertions that they’ve found platform policy violations, and requests that Twitter do anything other than assess the reported content under the platform’s applicable policies.

Receiving and acting on external reports is a core function of platform content moderation teams, and the essential nature of this work is an independent evaluation of reported content under the platform’s own policies. The fact, cited in Missouri v. Biden, that platforms only acted on approximately half of reports from the FBI shows clearly that the standards platforms applied were not wholly, or even mostly, the government’s.

Finally, it does not withstand factual scrutiny that platforms were so petrified of adverse consequences from the FBI that they uncritically accepted and acted on information sent to them by the government. The Twitter Files themselves document clearly at least two instances in which, presented with low-quality information or questionable demands, Twitter pushed back on the FBI’s requests. In one case, the FBI passes on a request—seemingly from the NSA—that Twitter “revis[e] its terms of service” to allow an open-source intelligence vendor to collect data from the Twitter APIs to inform the NSA’s activities. This request is arguably as close to jawboning as any interaction between Twitter and the FBI gets; yet, in response, I summarily dismissed not only the request for a meeting to discuss the topic, but the entire premise of the request, writing, “The best path for NSA, or any part of government, to request information about Twitter users or their content is in accordance with valid legal process.” The question was not raised again.

If you look at the actual emails from the FBI (which have been released), you see that Roth is exactly correct. They are clear that this is just information sharing, and they all involve accounts that were claimed to be part of a Russian disinformation campaign. The FBI is explicit: “For your review and action as deemed appropriate.” Not “take it down.” Just “here’s what we found, do what you want with it.”


And the report in which those emails were released, from Jim Jordan’s committee in the House, admits that “Meta did not immediately take noticeable action against these accounts.” This again highlights that nothing in these communications was treated by either side as a demand for removal.

Other emails from the FBI, including ones to Twitter, follow the same pattern. In one exchange highlighted in the report, the FBI emailed Roth a list of potential Russian disinformation spreaders, and Roth pointed out that some appeared not to be Russian at all, but American and Canadian. That is not what you’d expect him to do if he were being told to simply pull those accounts down and feared retaliation for pushing back. Roth asked for more context, and the FBI responded that it had nothing else to provide and noted, again, that it was entirely up to Twitter how to handle the information:


During the oral arguments, the Justices seemed reasonably confused as to how this bit of information sharing was problematic. Justice Barrett seemed surprised when asking Louisiana’s Solicitor General why the FBI shouldn’t be able to share such information. This led him to admit that yes, he thinks in retrospect that the FBI “absolutely can identify certain troubling situations like that for the platforms and let the platforms take action.”


You would think that an article talking about the oral arguments would… maybe point that out? Instead, it insists that the FBI’s actions must have been censorial, when even the states admitted to the Justices that maybe it wasn’t that bad.

Another example of a misrepresentation of the record, that we highlighted, was where the plaintiffs took an email from Francis Collins to Anthony Fauci, in which Collins suggested that they needed to address some misleading information about COVID by responding to it. Collins said “there needs to be a quick and devastating published take down of its premises.”

In the hands of the states and the district court, the word “published” disappeared. The quote became Collins demanding “there needs to be a… take down of its premises,” which the court treated as proof that Collins demanded the information be taken down. That was false.

Skorup and the National Review engage in similarly misleading selective quoting.

Take this paragraph:

However, there are clear signs many U.S. government officials want to censor topics far beyond just vaccines, and that they view American minds as a theater over which their legal authority extends. For instance, the director of a federal cybersecurity and infrastructure agency noted at a 2021 event that the agency was expanding beyond protecting dams and electric substations from internet hackers to exerting “rumor control” during elections, saying, “We are in the business of critical infrastructure. . . . And the most critical infrastructure is our cognitive infrastructure.” A White House national climate adviser stated at an Axios event: “We need the tech companies to really jump in” and remove green energy “disinformation.”

Notice how carefully the quote marks are used here to imply that government officials were pushing for websites to “remove” content, even though that’s not stated in any of the actual quotes. If you look at the actual event, the “national climate adviser” (who has no authority to regulate or punish companies in the first place) was saying that disinformation about climate change is a real threat to the planet, and that she hoped tech companies wouldn’t let it spread as far. She wasn’t talking to the companies. She wasn’t threatening the companies. This is classic bully-pulpit talk that falls on the “persuasion” side of the line.

As for the quote above it, again, when put back into context, it shows the exact opposite of what Skorup falsely implies. CISA director Jen Easterly did talk about “cognitive infrastructure,” but in context she was talking about “resiliency” to disinformation, including making sure people have more access to accurate information. Literally nothing in the discussion suggests content should be removed:

“One could argue we’re in the business of critical infrastructure, and the most critical infrastructure is our cognitive infrastructure, so building that resilience to misinformation and disinformation, I think, is incredibly important,” Easterly said. 

“We are going to work with our partners in the private sector and throughout the rest of the government and at the department to continue to ensure that the American people have the facts that they need to help protect our critical infrastructure,” she added. 

As for the whole “rumor control” effort by CISA, Skorup doesn’t seem to realize that it was set up in 2020, under the Trump administration. It was about providing more information (more speech), not removing speech. Everything about how it is presented in the article is misleading.


Each time Skorup presents some of the evidence, he uses selective quotation to hide what was actually being talked about:

Department of Homeland Security documents obtained and released by U.S. Senator Chuck Grassley show a 2022 plan to “operationaliz[e] public-private partnerships between DHS and Twitter” regarding content takedowns. Further, red flags are present at the social-media companies: Many hire former federal officials to their “trust and safety” teams, and others have created online portals to fast-track government agencies’ content-takedown requests.

Again, it helps to look at the source documents to understand what’s actually being discussed. The out-of-context line about “operationalizing public private partnerships” was entirely about the (yes, stupidly named and poorly explained) Disinformation Governance Board, which never actually did anything before being disbanded. And from the notes, the “operationalize” bit is clearly about figuring out what information (again, more speech!) Twitter would find useful in dealing with mis- and disinformation, not about what content should be taken down. Furthermore, these were prep notes for a meeting a DHS official was having with Twitter, with no evidence that Twitter ever seriously considered working with DHS in this manner.

Facts matter, and Skorup is misrepresenting nearly all of them.

But, what’s really perplexing is that Skorup’s version of what happened at the Supreme Court does not come even remotely close to what actually happened at the Supreme Court. Justices from Amy Coney Barrett to Sonia Sotomayor to Brett Kavanaugh to Elena Kagan all called out these kinds of errors in the states’ arguments.

Skorup mentions none of that.

Instead, he falsely claims that the DOJ “urged the Supreme Court to let government officials, including federal law-enforcement agencies, tell social-media company officials, in secret, what content to delete.” That simply did not happen. They repeatedly agreed that if that had happened it would be a problem, but focused much of the discussion on how that had not actually happened.

Honestly, reading Skorup’s piece, it felt as if it had been written prior to the oral arguments and without reading any of the relevant briefs in the case. And maybe that’s because it had been. In researching this piece, I came across a surprisingly similar piece, also written by Skorup, that made many of the same claims… over a year ago. That was before the case had even been decided by the district or appeals courts, and before the problems with all the evidence were widely documented. It’s almost as if he took that piece and rewrote it for the National Review without bothering to check anything.

This seems like a form of journalistic malpractice that you’d think the National Review would not support. But, alas, these days the National Review apparently doesn’t much care about facts or accuracy so long as a piece agrees with the narrative it wishes to push.