Controlling actual disinformation without injecting bias or stifling free speech isn’t easy. It’s amazing what a couple of days and $44 billion can do. When I saw that an anti-gun post on Twitter had been fact-checked, it was clear that they were trying something new that might actually be fair.
The Challenge: Control Actual Disinformation While Respecting Freedom of Speech
It would be wildly inaccurate to say that Twitter’s approach to fact-checking has been fair or remotely even-handed. In my experience, they primarily did whatever they could to defend the establishment’s “approved” narratives and, whenever possible, the mainstream authoritarian left.
Meanwhile, big anti-gun accounts sharing disinformation largely got to continue making baseless claims against firearms ownership. Even if every one of Twitter’s fact-checks was true and accurate (I’ll leave that one up to readers’ imaginations), the fact that they were unevenly applied is what indisputably made them biased.
On the other hand, one of my concerns with Elon Musk’s recent purchase of Twitter is that he’d create a free-for-all environment with nothing to stop the flow of intentional disinformation. Combined with the mute and block functions, we’d pretty quickly divide many Twitter users into neat echo chambers where they get a steady diet of disinformation from people with an axe to grind instead of gaining better knowledge through enlightened free speech.
At best, that would have a negative effect on politics and discourse. At worst, it could lead to IRL violence as people disconnect further from reality and start acting on imaginary threats and internet rumors, much like the Pizzagate shooter.
Perhaps worse, rampant disinformation would allow agents of authoritarian states (such as the People’s Republic of China and their two-million-strong army of “wumao”/網評猿 propagandists) to inject bad information into our political discourse. This could put our country at a disadvantage to them at the worst possible time, and is a very real risk to national security.
Given the risks, this is a problem any responsible social media operator must consider and deal with. But fair and even-handed moderation and fact-checking can’t really happen in cubicles at some corporate headquarters.
For one, there just isn’t enough manpower to address the volume of mis- and disinformation that bad actors like the Chinese government and anti-gun organizations are pumping into the platform. The other issue is that corporate culture tends to create its own sets of biases, leading to eager fact-checkers who push leftward to impress their like-minded bosses.
Flagging An Anti-Gun Lie
OK, maybe not a lie. You can probably chalk up the following example to a Brit’s blind ignorance. Anyway, yesterday Twitter selected me for a trial view of Birdwatch. I found this out when I saw a notice below an anti-gun tweet that set the facts straight:
Instead of “the unquestionable fact-checkers thus sayeth,” it was clear that this new fact-checking system is driven by average users. Instead of taking a hard position, the notice says “Readers added context they thought people might want to know.”
The simple notice under the tweet above states that the guns pictured are, in fact, BB guns, and aren’t regulated like firearms because they have a low risk of injury. Something most Americans would know, but apparently not a hapless BBC “journalist.” Links to back up both assertions were included, too.
Unlike past fact-checking efforts, Birdwatch doesn’t rely on the word and opinion of people working at Twitter HQ. Instead, it allows normal users to flag disinformation and provide context. Then, their “note” goes into a moderation system that’s also driven by users.
Here’s what that system looks like:
It shows you tweets that someone has flagged with a note and asks you to rate the notes.
It asks you whether each note is helpful, and you can answer “Yes”, “No”, or “Somewhat”.
It then asks you to explain why you thought the fact-checking or context note was or wasn’t helpful. In this case, I chose “No” because it provides no sources, uses biased language, and is poorly worded.
To deal with people who approve or disapprove of notes based on their own ideology or tribal affiliation rather than objective criteria, Twitter doesn’t rely on the word of one reviewer. To actually appear below a tweet, a note needs a number of positive votes from people across the ideological spectrum (presumably based on past tweets). So, good tweets aren’t going to get a note unless there’s a broader consensus that something is wrong.
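To make that consensus requirement concrete, here’s a minimal sketch of how a cross-spectrum rating check could work. This is not Birdwatch’s actual algorithm, and the thresholds, the “left”/“right” leaning labels, and the function names are all illustrative assumptions, but it captures the basic idea that no single ideological bloc can push a note onto a tweet by itself:

```python
# Hypothetical sketch only -- NOT Twitter's real Birdwatch scoring logic.
# Thresholds and "leaning" buckets are assumptions for illustration.

MIN_HELPFUL = 5                        # assumed minimum "helpful" ratings
REQUIRED_LEANINGS = {"left", "right"}  # assumed buckets inferred from past tweets

def note_should_display(ratings):
    """ratings: list of (rater_leaning, is_helpful) tuples.

    A note displays only if it has enough "helpful" ratings AND those
    ratings come from raters on every side of the spectrum.
    """
    helpful = [leaning for leaning, is_helpful in ratings if is_helpful]
    if len(helpful) < MIN_HELPFUL:
        return False
    # One ideological bloc alone can't get the note shown.
    return REQUIRED_LEANINGS <= set(helpful)

ratings = [
    ("left", True), ("left", True), ("right", True),
    ("right", True), ("left", True), ("right", False),
]
print(note_should_display(ratings))  # → True: 5 helpful votes, from both sides
```

The point of the design is that a pile of “helpful” votes from only one camp still fails the check, which is exactly why good tweets shouldn’t accumulate notes without broad agreement.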
Perhaps more importantly, the offending tweet doesn’t get removed. The person sharing disinformation gets their freedom of speech, while their followers get to decide for themselves whether the person was sharing faulty information or trying to deceive people, and they get information broadly found useful to help them make their decision.
While no system is perfect, I think this puts the power of fact-checking back where it belongs…in the hands of normal platform users. It also gives us a fighting chance against all kinds of disinformation and not just things the left objects to.
How To Help Fact-Check Anti-Gun Propaganda
This system predates Musk’s purchase of Twitter, but it does appear that they’re letting more people into it since the purchase closed. So, we can’t be sure whether he’ll keep it or ask for further improvements. But if they do decide to keep it, we could end up with a system that puts us all on a level free speech playing field rather than one that’s tilted to one side.
If you want to learn more about it and contribute to rating these context notes, you can check it out yourself here.