If You’re Trying To Stop Scammers From Using Your Site, Firing The Trust & Safety Team Might Not Be The Brightest Idea

DATE POSTED: March 25, 2024

I know that some people, including Elon Musk recently, have claimed that “trust & safety” is some sort of “euphemism for censorship.”

[Image: screenshot of the claim that "trust & safety" is a euphemism for censorship]

That is not true, and has never been true. The role of trust & safety has always been about building trust in the platform and making sure that users feel safe on the platform. This goes way, way beyond “ensuring compliance with the laws that already exist.”

Let’s give just one example that might highlight why you shouldn’t fire or drive out your entire trust & safety team and falsely claim that they were only there to “censor” political speech.

Just as a purely hypothetical example, let’s say that your CEO decides on a whim that showing the destination of URLs to news articles is aesthetically unpleasing when people post them. He then orders the remaining team to remove the URLs and headlines. After that fails miserably, he modifies the design to make it a little clearer where the links are going. Sort of.

But, because there’s no real trust & safety team, and whoever is left is told only to focus on “compliance with the laws that already exist,” you have no one to red team how all of this might be abused.

The end result? You have set up a profoundly stupid way of handling links that is like a fucking goldmine for scammers:

Users of the social media platform X (formerly Twitter) have often been left puzzled when they click on a post with an external link but arrive at an entirely unexpected website from the one displayed in the post.

A Twitter ad spotted below by a security researcher shows forbes.com as its destination but instead takes you to a Telegram account purportedly promoting crypto scams.

As Bleeping Computer describes (and as Will Dormann originally called out), a “verified” (lol) account posts something as an ad with a link that looks like it’s going to a reputable source (in this case, Forbes):

[Screenshot: the ad from a “verified” account displaying forbes.com as the link’s destination]

The fact that it was boosted as an ad suggests why it has over a million views and only 191 likes. Mucho organic. But, if you click on it, it takes you to some sketchy Telegram crypto scam.

[Screenshot: the Telegram crypto scam channel the link actually leads to]

How does that happen? Well, as Bleeping Computer notes, the link actually takes you to another site, “joinchannelnow” (not sharing the TLD). That site then checks your user-agent and decides where to send you. If it thinks you’re a human, it sends you to the crypto shit on Telegram. If it thinks you’re a bot from Twitter trying to figure out the ultimate destination to display… it sends you to a random Forbes article.
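To make the mechanism concrete, here’s a minimal sketch of the kind of check a trust & safety red team might run: fetch the same link once with a crawler-style User-Agent and once with a browser-style one, and flag it if the two requests land somewhere different. The URL, User-Agent strings, and function names here are illustrative assumptions, not the actual ones from this incident.

```python
# Hypothetical sketch of a cloaking check, not the actual system involved here.
import requests

# Twitter's link-preview crawler identifies itself as Twitterbot; the second
# string is a generic desktop browser UA. Both are stand-ins for illustration.
CRAWLER_UA = "Twitterbot/1.0"
BROWSER_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0 Safari/537.36"
)

def final_destination(url: str, user_agent: str) -> str:
    """Follow redirects and return the URL the request ultimately lands on."""
    resp = requests.get(
        url,
        headers={"User-Agent": user_agent},
        allow_redirects=True,
        timeout=10,
    )
    return resp.url

def looks_cloaked(url: str) -> bool:
    """Flag links that send crawlers and humans to different final destinations."""
    return final_destination(url, CRAWLER_UA) != final_destination(url, BROWSER_UA)

if __name__ == "__main__":
    suspect = "https://example.com/suspect-link"  # placeholder, not a real scam URL
    if looks_cloaked(suspect):
        print("Possible cloaking: crawler and browser see different destinations")
```

A check like this only catches the crudest user-agent tricks, of course, which is exactly why you want people whose whole job is probing how a link-preview system can be gamed.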

And, of course, this makes it all very, very ripe for scamming, whether phishing or otherwise. Much trust. Very safety.

Of course, since this is all entirely hypothetical, I’m just using it as an example of the kind of thing a trust & safety team would likely red team, exploring how such a system might be abused, and as a demonstration that the role covers a hell of a lot of things that have nothing to do with “political censorship.”

After all, we’re talking about a site where the “new owner” insisted he had to buy the place in order to stomp out scams. Given that, it would be absolutely ridiculous to fire all your trust & safety people. It would also be ridiculous to claim that they were just there for censorship, and that you’re only enforcing the laws now… while then enabling scammers to take advantage of gullible people by (1) making it easy for any old scam account to get “verified,” (2) allowing them to post shit links to scam groups, and (3) enabling them to trick your system into telling people the link goes to a more reputable site.

I mean, that would never happen, right? Not if you understood how trust & safety works. Especially when you have a (hypothetically) genius, visionary CEO who really is knocking down those censorship walls, and surely knows exactly what he’s doing.

And, really, if it did happen, I guess you’d deserve Lifehacker telling its users “the best way to stay safe on X is to stop using X.” But, of course, it wouldn’t happen. Because anyone with more than a few working brain cells would easily know that you need to actually have a trust & safety team that pays attention to this stuff. Right?