The Business & Technology Network
Helping Business Interpret and Use Technology
Congress Wants A Magic Pony: Get Rid Of Section 230, Perfect Moderation, And Only Nice People Allowed Online

DATE POSTED: May 23, 2024

The internet is the wild west! Kids are dying! AI is scary and bad! Algorithms! Addiction! If only there was more liability and we could sue more often, internet companies would easily fix everything. Once, an AI read my mind, and it’s scary. No one would ever bring a vexatious lawsuit ever. Wild west! The “like” button is addictive and we should be able to sue over it.

Okay, you’re basically now caught up with the key points raised in yesterday’s House Energy & Commerce hearing on sunsetting Section 230. If you want to watch the nearly three hours of testimony, you can do so here, though I wouldn’t recommend it.

It went like most hearings about the internet, where members of Congress spend all their time publicly displaying their ignorance and confusion about how basically everything works.

But the basic summary is that people are mad about “bad stuff” on the internet, and lots of people seem to falsely think that if there were more lawsuits, internet companies would magically make bad stuff disappear. That, of course, elides all sorts of important details, nuances, tradeoffs, and more.

First of all, bad stuff did not begin with the internet. Blaming internet companies for not magically making bad stuff disappear is an easy out for moralizing politicians.

The two witnesses pushing for sunsetting Section 230 talked about how some people end up in harmful scenarios over and over again. They argued that this showed companies were negligent and clearly “not doing enough.” They falsely insisted that companies have no other incentives to invest in the tools and people needed to improve safety on their platforms, ignoring the simple reality that if your platform becomes synonymous with bad stuff happening, it’s bad for business.

User growth slows, advertisers go away. If you’re an app, Apple or Google may ban you. The media trashes you. There are tons of incentives out there for companies to deal with dangerous things on their platforms, which neither the “pro-sunset” witnesses nor the congressional reps seemed willing to acknowledge.

But the simple reality is that no matter how many resources and tools are put towards protecting people, some people are going to do bad things or be put in unsafe positions. That’s humanity. That’s society. Thinking that if we magically threaten to sue companies that it will fix things is not just silly, it’s wrong.

The witnesses in favor of sunsetting 230 also tried to have it both ways. They insisted that frivolous lawsuits would never be filed, because that would violate legal ethics rules (ha!), while also insisting that they need discovery from companies to prove their cases aren’t frivolous. This, of course, ignores the fact that the mere threat of litigation can lead companies to fold. And if the threat includes the extraordinarily expensive, time-consuming (and soul-destroying) process of discovery, it can be absolutely ruinous for companies.

Thankfully, this time, there was one witness who was there who could speak up about that: Kate Tummarello from Engine (disclosure: we’ve worked with Kate and Engine in the past to create our Startup Trail startup policy simulation game and Moderator Mayhem, detailing the challenges of content moderation, both of which demonstrate why the arguments from those pushing for sunsetting 230 are disconnected from reality).

Kate’s written testimony is incredibly thorough. Her spoken testimony (not found in her written testimony, but viewable in the video at around 34:45) was deeply moving. She spoke from the heart about losing a pregnancy at 22 weeks and relying on online forums and groups to survive the “emotional trauma” of that experience. And, especially at a time when there is a very strong effort to criminalize aspects of women’s health care, simply hosting such communities online can become a real legal risk and liability.

The other witnesses and the reps asking questions just kept prattling on about online “harms” that had to be stopped, without acknowledging that roughly half the panel would consider the very groups Kate relied on through one of the most difficult moments of her life to be a “harm” that should carry liability, allowing people to sue whoever hosts or runs such groups.

It’s clear that the general narrative of the “techlash” has taken all of the oxygen out of the room, disallowing thoughtful or nuanced conversations on the matter.

But what became clear at this hearing, yet again, is that Democrats think (falsely) that removing Section 230 will lead to some magic wonderland where internet companies remove “bad” information, like election denials, disinformation, and eating disorder content, but leave up “good” information, like information about abortions, voting info, and news. While Republicans think (falsely) that removing Section 230 will let their supporters post racial slurs without consequence, but encourage social media companies to remove “pro-terrorist” content and sex trafficking.

Oh, and also, AI is bad and scary and will kill us all. Also, big tech is evil.

The reality is a lot more complicated. AI tools are actually incredibly important in enabling good trust & safety practices that limit access to truly damaging content and surface more useful and important content. Removing Section 230 won’t make companies any better at stopping bad people from being bad, or at stopping things like “cyberbullying.” This came up a lot in the discussion, even as at least one rep got the kid-safety witness on the panel to finally admit that most cyberbullying doesn’t violate any law and is protected under the First Amendment.

Removing Section 230 would give people a kind of litigator’s veto: threaten a lawsuit over a feature, some content, or an algorithmic recommendation you don’t like, and smaller companies will feel pressured to remove it to avoid the risk of costly, endless litigation.

It wouldn’t do much to harm “big tech,” though, since those companies have buildings full of lawyers and large trust & safety teams empowered by tools they spend hundreds of millions of dollars developing. They can handle the litigation. It’s everyone else who suffers. The smaller sites. The decentralized social media sites. The small forums. The communities that are so necessary to folks like Kate when she faced her own tragic situation.

But none of that seemed to matter much to Congress, which just wants to enable ambulance-chasing lawyers to sue Google and Meta. They heard a story about a kid who had an eating disorder, and they’re sure it’s because Instagram told them to. That’s not how reality works.

The real victims of this rush to sunset Section 230 will be all the people like Kate, along with the tons of kids looking for their communities or using the internet to deal with various challenges in their lives.

Congress wants a magic pony. And, in the process, they’re going to do a ton of harm. Magic ponies don’t exist. Congress should deal in the land of reality.