The Evidence Just Doesn’t Support Any Of The Narratives About The Harms Of Social Media

DATE POSTED: June 13, 2022

A whole bunch of people over the last month have sent me Jonathan Haidt’s essay in The Atlantic, “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” and asked for my thoughts. Haidt’s basic premise is that the problem is social media. The reality is more complex and nuanced than that, and there are important points within those complexities and nuances, but his takeaway remains that social media is the problem. I’ve half-written three different responses to it, and am still working on a more complete article explaining what I think it gets wrong. So this article is not that. This article is, however, about an excellent piece in The New Yorker by Gideon Lewis-Kraus that is, itself, something of a response to Haidt, with the title: “How Harmful Is Social Media?”

It’s absolutely worth reading — especially if you eagerly bought into Haidt’s argument. Right now it’s easy to blame social media for basically every social problem in the world, even if all social media is doing is shining a light on problems that always existed, but were hidden from wider view. We’ve seen the blame-social-media crowd use that framing to deflect from dealing with the larger problems behind recent mass murders. We’ve seen how states like California are pushing forward with laws that take the “blame social media” claims as fact. Hell, much of the AB 2408 bill that is likely to be passed into law in a matter of weeks includes language that treats the “harms” of social media as established fact.

But the problem, as Lewis-Kraus’s piece makes clear, is that the evidence doesn’t actually show that. At all. He notes that Haidt’s piece was based, in part, on an analysis of a large set of studies about the impact of social media: a collaborative document put together by researchers interested in these questions, which was at some point made public. It’s also worth reading. But, as Lewis-Kraus found as he dug into it, the evidence is strikingly inconclusive if you’re trying to argue that social media is bad for society. It’s also inconclusive if you’re trying to argue the opposite. Basically, the evidence is… inconclusive.

The document runs to more than a hundred and fifty pages, and for each question there are affirmative and dissenting studies, as well as some that indicate mixed results. According to one paper, “Political expressions on social media and the online forum were found to (a) reinforce the expressers’ partisan thought process and (b) harden their pre-existing political preferences,” but, according to another, which used data collected during the 2016 election, “Over the course of the campaign, we found media use and attitudes remained relatively stable. Our results also showed that Facebook news use was related to a modest over-time spiral of depolarization. Furthermore, we found that people who use Facebook for news were more likely to view both pro- and counter-attitudinal news in each wave. Our results indicated that counter-attitudinal exposure increased over time, which resulted in depolarization.” If results like these seem incompatible, a perplexed reader is given recourse to a study that says, “Our findings indicate that political polarization on social media cannot be conceptualized as a unified phenomenon, as there are significant cross-platform differences.”

There are some areas where the research does seem to clearly suggest that the popular narrative is just flat-out wrong, even as the narrative lives on. Last fall, we wrote about pretty compelling research debunking the whole “social media creates echo chambers” thinking, and that research makes an appearance in this New Yorker piece as well. Of course, without echo chambers as an excuse to fall back on, some are wondering if the lack of echo chambers is actually more of a problem than their presence would be. That is, if everyone is confronted with conflicting ideas all the time, a natural reaction is to pick a side and dig in, creating more of an “us vs. them” mentality. As Lewis-Kraus writes, quoting the Duke sociologist Chris Bail, who compiled the research document along with Haidt:

“A lot of the stories out there are just wrong,” he told me. “The political echo chamber has been massively overstated. Maybe it’s three to five per cent of people who are properly in an echo chamber.” Echo chambers, as hotboxes of confirmation bias, are counterproductive for democracy. But research indicates that most of us are actually exposed to a wider range of views on social media than we are in real life, where our social networks—in the original use of the term—are rarely heterogeneous. (Haidt told me that this was an issue on which the Google Doc changed his mind; he became convinced that echo chambers probably aren’t as widespread a problem as he’d once imagined.) And too much of a focus on our intuitions about social media’s echo-chamber effect could obscure the relevant counterfactual: a conservative might abandon Twitter only to watch more Fox News. “Stepping outside your echo chamber is supposed to make you moderate, but maybe it makes you more extreme,” Bail said. The research is inchoate and ongoing, and it’s difficult to say anything on the topic with absolute certainty. But this was, in part, Bail’s point: we ought to be less sure about the particular impacts of social media.

What about the idea that social media is dangerous because it’s a giant vector of misinformation polluting our minds? Again, the research has suggested it’s not as much of a problem as people make it out to be:

But, at least so far, very few Americans seem to suffer from consistent exposure to fake news—“probably less than two per cent of Twitter users, maybe fewer now, and for those who were it didn’t change their opinions,” Bail said. This was probably because the people likeliest to consume such spectacles were the sort of people primed to believe them in the first place. “In fact,” he said, “echo chambers might have done something to quarantine that misinformation.”

This is also stuff that we’ve pushed back on in the past. The “disinformation” story is pleasing for the media to repeat, because it basically puts them in the position of saviors: if you can’t trust the riff-raff and their disinformation spewers, then clearly the “answer” is more traditional media. So they have every incentive to play up that angle, even if the data doesn’t much agree with it.

And then we come to everyone’s favorite passion of late: the algorithms are radicalizing us all. The theory generally goes that social media companies want more engagement, that they learned early on that ever more extreme content keeps people engaged longer, and that they therefore started taking normal, everyday people and turning them into radical extremists. Hell, this is a major theme of the popular documentary The Social Dilemma, which is chock full of disinformation itself, including a fictionalized family in which a teenager goes from a normal, everyday kid to a raging 4chan radical in about a week. Except, again, as we’ve reported, the evidence says this just isn’t true.

And Lewis-Kraus found the same thing:

The final story that Bail wanted to discuss was the “proverbial rabbit hole, the path to algorithmic radicalization,” by which YouTube might serve a viewer increasingly extreme videos. There is some anecdotal evidence to suggest that this does happen, at least on occasion, and such anecdotes are alarming to hear. But a new working paper led by Brendan Nyhan, a political scientist at Dartmouth, found that almost all extremist content is either consumed by subscribers to the relevant channels—a sign of actual demand rather than manipulation or preference falsification—or encountered via links from external sites. It’s easy to see why we might prefer if this were not the case: algorithmic radicalization is presumably a simpler problem to solve than the fact that there are people who deliberately seek out vile content. “These are the three stories—echo chambers, foreign influence campaigns, and radicalizing recommendation algorithms—but, when you look at the literature, they’ve all been overstated.” He thought that these findings were crucial for us to assimilate, if only to help us understand that our problems may lie beyond technocratic tinkering. He explained, “Part of my interest in getting this research out there is to demonstrate that everybody is waiting for an Elon Musk to ride in and save us with an algorithm”—or, presumably, the reverse—“and it’s just not going to happen.”

As the article notes, many of the early studies on which these narratives are built don’t actually hold up to much scrutiny:

When I spoke with Nyhan, he told me much the same thing: “The most credible research is way out of line with the takes.” He noted, of extremist content and misinformation, that reliable research that “measures exposure to these things finds that the people consuming this content are small minorities who have extreme views already.” The problem with the bulk of the earlier research, Nyhan told me, is that it’s almost all correlational. “Many of these studies will find polarization on social media,” he said. “But that might just be the society we live in reflected on social media!” He hastened to add, “Not that this is untroubling, and none of this is to let these companies, which are exercising a lot of power with very little scrutiny, off the hook. But a lot of the criticisms of them are very poorly founded. . . . The expansion of Internet access coincides with fifteen other trends over time, and separating them is very difficult. The lack of good data is a huge problem insofar as it lets people project their own fears into this area.” He told me, “It’s hard to weigh in on the side of ‘We don’t know, the evidence is weak,’ because those points are always going to be drowned out in our discourse. But these arguments are systematically underprovided in the public domain.”

What the giant collection of studies and Lewis-Kraus’s article seem to make clear is that this shit is complicated. The reality, as with so many things, is that there are many, many different factors and many, many different variables. And, as I keep saying over and over again of late: some of it may be exacerbated by social media, but some of it may also be made better by social media. An awful lot of it may just be social media shining a light on parts of human nature and society that we’ve long swept under the rug. We shouldn’t be oversimplifying the problems, or simply blaming the messenger for them. But so many people are.

Another point made in the article — and one that I find myself repeatedly arguing about with people on Twitter — is that social media is dynamic, not static. Much of the narrative about how awful social media is rests on the idea that the people who run these platforms don’t care, and don’t do anything to fix potential problems. If that was ever true (and it arguably never fully was), it was only marginally true in the early days, and it hasn’t been true in over a decade.

Nyhan argued that, at least in wealthy Western countries, we might be too heavily discounting the degree to which platforms have responded to criticism: “Everyone is still operating under the view that algorithms simply maximize engagement in a short-term way” with minimal attention to potential externalities. “That might’ve been true when Zuckerberg had seven people working for him, but there are a lot of considerations that go into these rankings now.” He added, “There’s some evidence that, with reverse-chronological feeds”—streams of unwashed content, which some critics argue are less manipulative than algorithmic curation—“people get exposed to more low-quality content, so it’s another case where a very simple notion of ‘algorithms are bad’ doesn’t stand up to scrutiny. It doesn’t mean they’re good, it’s just that we don’t know.”

The article is correct that no one is saying there’s no problem at all; it’s just that we haven’t accurately figured out what the real issues are, or the real impact of just about anything. And when you don’t understand a problem, you’re certainly not going to be able to fix it.

The reality, again, is that it’s complicated. It’s really complicated. And part of that is because we’re dealing with people, not machines. This isn’t physics. Humanity and society are messy, and there are tons of confounding variables. Anyone selling easy answers is selling snake oil. Anyone insisting they have humanity fully figured out is lying.

But, really, so much of the narrative, based as it is on nonsense and myths, needs to change.