In YOLO Ruling, Ninth Circuit Cracks Open Pandora’s Box For Section 230

DATE POSTED: August 27, 2024

The Ninth Circuit appeals court seems to have figured out the best way to “reform” Section 230: by pretending it doesn’t apply to some stuff that the judges there just randomly decide it doesn’t apply to anymore. At least that’s my reading of the recent ruling against YOLO Technologies.

Now, let’s start by making something clear: YOLO Technologies appears to be a horrible company, making a horrible service, run by horrible people. We’ll get more into the details of that below. I completely understand the instinctual desire that YOLO should lose. That said, there are elements of this ruling that could lead to dangerous results for other services that aren’t horrible. And that’s what always worries me.

First, a quick history lesson: over fifteen years ago, we wrote about the Ninth Circuit's ruling in Barnes v. Yahoo. At the time, and in the years since, that ruling has seemed potentially problematic. The case revolved around another horrible situation, in which an ex-boyfriend posted fake profiles of Barnes. Barnes contacted Yahoo and reached a Director of Communications, who promised to "take care of" the fake profiles.

However, the profiles remained up. Barnes sued, and Yahoo invoked Section 230 to try to get out of the case. Much of the Barnes decision is very good. It's an early ruling that makes clear Section 230 protects websites for their publishing activity around third-party content. It clearly debunks the completely backwards notion that you are "either a platform or a publisher" and that only "platforms" get 230 protections. In Barnes, the court is quite clear that what Yahoo did was publishing activity, but since Yahoo is an interactive computer service and the underlying content came from a third party, it cannot be held liable as the publisher for that publishing activity under Section 230.

And yet, the court still sided with Barnes, holding that the Yahoo employee's direct promise to take care of the content went outside of traditional publishing activity: it created an enforceable promise, and with it a duty to live up to that promise.

In the fifteen years since that ruling, there have been various attempts to use Barnes to get around Section 230, but most have failed because they lacked the kind of clear promise Barnes received. However, in the last couple of months, the Ninth Circuit seems to have decided that the "promise" part of Barnes can be applied more broadly, and that could create a mess.

In the YOLO case, the company made an add-on to Snapchat that let users post questions and polls on the app. Other users could respond anonymously (they also had the option to reveal who they were). The app was very popular, but it shouldn't be a huge surprise that some users used it to harass and abuse others.

However, YOLO claimed, both publicly and in how it represented the service to users who signed up, that one way it would deal with harassment and abuse was by revealing the identities of abusive users. As the Ninth Circuit explains:

As a hedge against these potential problems, YOLO added two “statements” to its application: a notification to new users promising that they would be “banned for any inappropriate usage,” and another promising to unmask the identity of any user who “sen[t] harassing messages” to others.

But it appears that YOLO never actually intended to live up to this, or simply became overwhelmed, because it seems never to have done so.

Now, this is always a bit tricky, because what some users consider abuse and harassment, a service (or other users!) might not. But in this case, it seems pretty clear that whatever trust & safety practices YOLO had fell far short of the notification it gave to users:

All four were inundated with harassing, obscene, and bullying messages including “physical threats, obscene sexual messages and propositions, and other humiliating comments.” Users messaged A.C. suggesting that she kill herself, just as her brother had done. A.O. was sent a sexual message, and her friend was told she was a “whore” and “boy-obsessed.” A.K. received death threats, was falsely accused of drug use, mocked for donating her hair to a cancer charity, and exhorted to “go kill [her]self,” which she seriously considered. She suffered for years thereafter. Carson Bride was subjected to constant humiliating messages, many sexually explicit and highly disturbing.

These users, and their families, sought to unmask the abusers. Considering that YOLO told users that’s how abuse and harassment would be dealt with, it wasn’t crazy for them to think that might work. But it did not. At all.

A.K. attempted to utilize YOLO’s promised unmasking feature but received no response. Carson searched the internet diligently for ways to unmask the individuals sending him harassing messages, with no success. Carson’s parents continued his efforts after his death, first using YOLO’s “Contact Us” form on its Customer Support page approximately two weeks after his death. There was no answer. Approximately three months later, his mother Kristin Bride sent another message, this time to YOLO’s law enforcement email, detailing what happened to Carson and the messages he received in the days before his death. The email message bounced back as undeliverable because the email address was invalid. She sent the same to the customer service email and received an automated response promising an answer that never came. Approximately three months later, Kristin reached out to a professional friend who contacted YOLO’s CEO on LinkedIn, a professional networking site, with no success. She also reached out again to YOLO’s law enforcement email, with the same result as before.

So, uh, yeah. Not great! Pretty terrible. And so there's every reason to want YOLO to be in trouble here. The court determines that YOLO's statements about unmasking harassers constituted a promise, a la Barnes, and that by failing to follow through, YOLO violated an obligation separate from the publishing activities protected by Section 230.

Turning first to Plaintiffs’ misrepresentation claims, we find that Barnes controls. YOLO’s representation to its users that it would unmask and ban abusive users is sufficiently analogous to Yahoo’s promise to remove an offensive profile. Plaintiffs seek to hold YOLO accountable for a promise or representation, and not for failure to take certain moderation actions. Specifically, Plaintiffs allege that YOLO represented to anyone who downloaded its app that it would not tolerate “objectionable content or abusive users” and would reveal the identities of anyone violating these terms. They further allege that all Plaintiffs relied on this statement when they elected to use YOLO’s app, but that YOLO never took any action, even when directly requested to by A.K. In fact, considering YOLO’s staff size compared to its user body, it is doubtful that YOLO ever intended to act on its own representation.

And, again, given all the details, this outcome feels understandable. But I still worry about where the boundaries are. We've seen plenty of other cases that test them. For example, six years ago, when the white supremacist Jared Taylor sued Twitter for banning him, he argued that Twitter could not ban users because it had said that it "believe[s] in free expression and believe[s] every voice has the power to impact the world."

So it seems there needs to be some clear line. In Barnes, there was direct, individualized communication between the person and the company, in which an executive personally promised Barnes that the content would be taken care of. Nothing like that happened in the YOLO case; the "promise" there was a generic notification shown to every user.

And when we combine the YOLO ruling with the Ninth Circuit's ruling in the Calise case back in June, things get even more worrisome. I didn't get a chance to cover that ruling when it came out, but Eric Goldman did a deep dive on it and on why it's scary. That case also uses Barnes' idea of a company "promise" to find a "duty" to act that sits outside of Section 230.

That case involved scammy ads from Chinese advertisers. The court held that, based on its public comments, Meta had a "duty" to somehow police advertisements, and that this duty fell outside its Section 230 protections. That ruling also contained a separate concurrence (oddly, written by the same judge who wrote the opinion, who apparently couldn't get the others to sign on) that flat-out trashed Section 230 and made it clear he hates the law.

And thus, as Eric Goldman eloquently puts it, you have the Ninth Circuit "swiss-cheesing" Section 230: punching all kinds of holes in it and enabling more questionable lawsuits arguing that this or that statement by a company or a company employee amounts to some form of promise under Barnes, and therefore a "duty" outside of Section 230.

In summary, Barnes is on all fours with Plaintiffs’ misrepresentation claims here. YOLO repeatedly informed users that it would unmask and ban users who violated the terms of service. Yet it never did so, and may have never intended to. Plaintiffs seek to enforce that promise—made multiple times to them and upon which they relied—to unmask their tormentors. While yes, online content is involved in these facts, and content moderation is one possible solution for YOLO to fulfill its promise, the underlying duty being invoked by the Plaintiffs, according to Calise, is the promise itself. See Barnes, 570 F.3d at 1106–09. Therefore, the misrepresentation claims survive.

And maybe that feels right in this case, where YOLO's behavior is so egregious. But it's unclear where this theory ends, and that leaves it wide open for abuse. For example, how would this case have turned out if the messages sent to the kids weren't actually "abusive" or "harassing"? I'm not saying that happened here; it seems pretty clear that they were. But imagine a hypothetical where most people would not consider the behavior abusive, yet the recipient insisted that it was. Perhaps the recipient even claimed abuse as a way to strike back at the anonymous sender.

Under this ruling, would YOLO still need to reveal who the anonymous user was to avoid liability?

That seems… problematic?

However, the real lesson here is that anyone who runs a website now needs to be way more careful about what they say regarding how they moderate, or about anything else, because anything they say could be used in court as an argument for why Section 230 doesn't apply. Indeed, I could see this conflicting with laws that require websites to be more transparent about their moderation practices, since the very disclosures those laws demand could now strip away 230 protections.

And I really worry about how this plays out when a platform changes its trust & safety policies mid-stream. I have no idea how that works out. What if, when you signed up, the platform had a policy saying it would remove certain kinds of content, but later changed that policy because it proved ineffective? Could someone who signed up under the old policy regime now claim that the new regime violates the original promise that got them to sign up?

On top of that, I fear that this will lead companies to be way less transparent about their moderation policies and practices. Because now, being transparent about moderation policies means that anyone who thinks you didn’t enforce them properly might be able to sue and get around Section 230 by arguing you didn’t fulfill the duty you promised.

All that said, there is some other good language in this decision. The plaintiffs also tried a “product liability” claim, which has become a hipster legal strategy for many plaintiffs’ lawyers to try to get around Section 230. It has worked in some cases, but it fails here.

At root, all Plaintiffs’ product liability theories attempt to hold YOLO responsible for users’ speech or YOLO’s decision to publish it. For example, the negligent design claim faults YOLO for creating an app with an “unreasonable risk of harm.” What is that harm but the harassing and bullying posts of others? Similarly, the failure to warn claim faults YOLO for not mitigating, in some way, the harmful effects of the harassing and bullying content. This is essentially faulting YOLO for not moderating content in some way, whether through deletion, change, or suppression.

The court also makes clear, contrary to claims we keep hearing, that having anonymous messaging as a feature is not an obvious source of liability. People have asserted this in many cases, but the court squarely rejects the idea:

Here, Plaintiffs allege that anonymity itself creates an unreasonable risk of harm. But we refuse to endorse a theory that would classify anonymity as a per se inherently unreasonable risk to sustain a theory of product liability. First, unlike in Lemmon, where the dangerous activity the alleged defective design incentivized was the dangerous behavior of speeding, here, the activity encouraged is the sharing of messages between users. See id. Second, anonymity is not only a cornerstone of much internet speech, but it is also easily achieved. After all, verification of a user’s information through government-issued ID is rare on the internet. Thus we cannot say that this feature was uniquely or unreasonably dangerous.

So, this decision is not the worst in the world, and it does seem targeted at a truly awful company. But poking a hole like this in Section 230 so frequently leads to others piling through that hole and widening it.

And one legitimate fear of a ruling like this is that it will actually harm efforts to get transparency in moderation practices, because the more companies say, the more liability they may face.