Epistemic status of this post: I am not an expert in the relevant fields, and this is not a principled suggestion after careful review of all possible research. Rather, this is me attempting to articulate an idea that seems good to the best of my understanding, in the hopes that by doing so I’ll make it easier for someone to point out what I’m missing.
I’d like to talk about fact-checking on social media. But before I do, in typical fashion, let me pose a more philosophical question.
What does it mean, in the real world, to know something is true?
Well, one’s first quibble might be that you can never know anything with certainty. You can expect that something is true with high probability, but can never actually know it for certain. This is why scientists max out at “theory” rather than “certainty.”
But let’s acknowledge that the English language is allowed to be a bit loose, and assume that when we say “I know X is true,” we really mean “I assign very high probability to X being true.”
How would we come to such a belief in a reliable way?
The ideal answer would be to say “by experiment.” Any belief should imply something testable about the world, so you note down that hypothesis, and then observe what’s actually happening in the world and whether or not it aligns with what you expected. If it does, you should increase your probability that the belief is true (assuming you properly control things to avoid your biases leaking through, but that’s a whole other conversation).
This is the fundamental principle of science – that any belief can be tested, and that the results reality hands us must be accepted, and the belief thrown out, if the two contradict.
The problem is that, in everyday life, the things we want to know about are incredibly complicated. This means that it’s hard to make clear predictions about what you expect to happen. Consider attempting to test the theory of gravity by observing the behavior of cars. You might predict a few things correctly – that the car can move downhill more easily, say – but overall a car would seem to disprove the claim that gravity constantly pulls things towards the Earth. In this case, we can test gravity by using simpler systems than cars – but what if we want to test beliefs about things like macroeconomics or political science?
The very clever people who work in complex analytical domains have spent years devising partial answers to that question. They look for ways they can test their beliefs in the lab, or for natural experimental setups which occurred ad hoc in the real world, or other tricks that allow them to confirm or update their beliefs. But most of us are not as sophisticated as these scientists are, and none of us have the expertise to form our beliefs experimentally in every possible field!
So we’re left with a problem. If we can’t come to beliefs about important questions experimentally…how the heck are we supposed to come to any beliefs about those things at all?
There are bad answers to this question – for instance, “Life would be more convenient if this were true, so I’m going to insist it is true.” Reality cannot be persuaded or tricked – it just is what it is. The only one you’re fooling in cases like these is yourself.
But there is hope! Say, hypothetically, I could identify an economist who had consistently made predictions over the past 10 years and been proven right again and again. This is still hard – I need the expertise to judge whether those predictions were reasonable tests of the theory the economist believes in. But that’s a much lower barrier to entry than having the bandwidth and expertise to create and conduct those experiments myself! Further, if I can validate that the economist’s track record is as impressive as they claim, I’ve formed a new belief – not a belief about economics (yet), but a belief that this particular economist is believable with respect to their focus within economic theory. This recognition of believability shouldn’t make me more likely to trust, say, their claims about how quantum mechanics works, but I can use knowledge of their track record to assign high probability to whatever they say next about their specialty within economics, even if I don’t understand exactly how they came to that particular conclusion.
Of course, most of us don’t have the time to develop deep familiarity with all the possible economists out there – but some folks do. And those folks have their own friends, who they relay their discoveries to. In this case, these people have second-degree believability – they build their own track record in the field of “being good at figuring out who is believable with respect to economics (or whatever other field)”. And this process can go on and on, ad infinitum.
What’s important is that, at each step, more work is done. The original scientist needs to devise and conduct experiments in a testable manner. The first-level observer needs to judge whether the scientist’s track record comes from honest experiments or from manipulation of data or other deceptions, intentional or otherwise. The second-level observer needs to determine whether the first-level observer’s judgment was made in an unbiased way. Etc. At each step of the process, even though we’re getting further from the original experiment, we’re actually adding more information, which should give us higher confidence in the plausibility of the experiment. (Hence the name “believability cascade” rather than simply calling it a chain or something.)
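To make the intuition concrete, here’s a deliberately simple toy model (the numbers are made up purely for illustration, not drawn from any real data): treat each layer of the cascade as an independent check that catches a bogus claim with some probability. Even if each individual check is weak, stacking genuinely independent checks shrinks the odds that a bogus claim survives all of them – which is the sense in which each layer adds information.

```python
# Toy model of a believability cascade: each layer is an independent
# check that catches a bogus claim with some probability. The chance a
# bogus claim slips past every check is the product of the miss rates.
# All catch rates here are invented illustration values.

def survival_odds(catch_rates):
    """Probability that a bogus claim slips past every independent check."""
    p = 1.0
    for rate in catch_rates:
        p *= (1.0 - rate)
    return p

# One strong direct reviewer vs. a three-layer cascade of weaker checks:
print(round(survival_odds([0.8]), 3))            # 0.2
print(round(survival_odds([0.5, 0.5, 0.5]), 3))  # 0.125
```

Note the crucial assumption baked into the multiplication: the checks must be independent. If the layers collude, the model collapses – which is exactly the failure mode discussed next.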
Of course, this only works if all the participants in this chain are responsible and reliable. If a group of scientists agrees to affirm each other’s work regardless of whether it makes sense, they can claim that the support of the others in the group ought to give them believability. And someone without enough knowledge might simply see a bunch of other scientists supporting this work and assume that means it was reliable. This highlights two things to be careful with when judging believability. Firstly, you need the layers of the cascade to be genuinely independent, because it’s a lot harder for a deceitful scientist to arrange a quid pro quo with an impartial observer than with another scientist. And secondly, we need some way of tracing back along the chain, so that anyone can jump in at any point along it, increasing the odds of unbiased evaluation. (It’s much harder to pass off bad science as real if you have to make your actual experiment available for others to review, as compared to if all you share is the fact that 10 other scientists agree.)
Now, we can finally return to the question of fact-checking in social media. I’m always wary about wading into politics on this blog, but I think in this case it makes sense to address the recent hubbub around Twitter “fact checking” certain tweets (for instance, from President Trump). The ferocity of the debate seems to hinge on the basic question of “Who the heck is Twitter to be the sole arbiter of what’s true?” (At the very least, Mark Zuckerberg claims this concern is why he refuses to address false or misleading content on Facebook.)
For the record, I agree that Twitter, and all other social media platforms, should not be the arbiter of truth. They are not specialized as scientists in the highly complex domains which many posts discuss, they have mixed incentives given their monetary goals, and the vast majority of claims made on social media would be insanely hard to test directly for their veracity in the first place. But rather than giving up on the concept of truth entirely, it seems like believability cascades could be a powerful answer.
What if Twitter’s only job was to categorize, not moderate, tweets? Each tweet could be marked as claiming something about, say, macroeconomics, or political science in the US, or European history, or whatever else. Then, any Twitter user could identify those individuals who they, at their own discretion, uninfluenced by Twitter or any other platform, consider to be believable in those domains. But don’t stop there! Say I mark a politician – let’s name them “Martha,” which I don’t think has any strong connotations in US politics today – as believable with respect to epidemiology. Martha, being a humble and responsible politician, knows that she is not actually a scientific expert in epidemiology – but she has a high degree of faith in institutions like the CDC. The CDC’s main Twitter account is perhaps maintained by a marketer, but that marketer knows that the senior scientists there actually have boots on the ground and run a separate Twitter account focused on pandemic response, so the marketer makes sure to mark the pandemic response account as believable. And finally, the pandemic response account knows they are the experts, so they mark themselves as believable on this subject.
By following this chain, I would actually see tweets only from the pandemic response account – it is the bottom of this chain. I wouldn’t ever see tweets from the CDC’s official marketing account, or from Martha, relating to epidemiology. However, whenever I see tweets from the pandemic response account – or any other account that, say, Martha may have indicated as believable, after following those chains to the bottom as well – there could be a button I could click to “trace” the believability chain. I could see that the reason I’m seeing these tweets is that Martha trusts the CDC, which in turn trusts its pandemic response unit.
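The chain-following step above is mechanically simple, which is part of why it seems feasible for a platform to build. Here’s a minimal sketch (the account names and the `marks` mapping are hypothetical illustration data, not any real Twitter API): follow each account’s believability mark for a topic until you reach an account that marks itself, and keep the whole path so the “trace” button can display it.

```python
# Sketch of resolving a believability chain to its self-marked "root":
# `marks` maps each account to the account it considers believable on
# a given topic (an account marking itself is the bottom of the chain).

def trace_chain(marks, start):
    """Follow believability marks from `start` down to the self-marked
    expert, returning the full chain for display to the user."""
    chain = [start]
    seen = {start}
    current = start
    while marks.get(current) is not None and marks[current] != current:
        current = marks[current]
        if current in seen:  # guard against circular mutual endorsement
            raise ValueError("believability loop detected")
        chain.append(current)
        seen.add(current)
    return chain

# Hypothetical marks for the topic "epidemiology":
marks = {
    "me": "Martha",
    "Martha": "CDC",
    "CDC": "CDC_pandemic_response",
    "CDC_pandemic_response": "CDC_pandemic_response",  # self-marked root
}
print(trace_chain(marks, "me"))
# ['me', 'Martha', 'CDC', 'CDC_pandemic_response']
```

The loop guard matters: a ring of accounts all marking each other never terminates in a self-marked root, so the platform can surface it as exactly the kind of mutual-endorsement circle to be suspicious of.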
This seems to me to be an incredibly powerful method for detecting veracity in the modern era. It would mean multiple believability checks at various levels – you can’t have loops of scientists all adding to each other’s believability, because someone has to mark themselves as the “root” of believability, and they’ll be fact-checked directly. Further, everyone along the believability chain sees the same root claim, providing much broader opportunity for consistent fact-checking. (This also avoids a huge bias in human thinking, which is that we often “double-count” evidence. For instance, if News Channel A reports something, and then News Channel B reports based only on having seen News Channel A’s report, rather than having independently verified what is happening, our brains will naively count this as “two independent news channels are both claiming this is true.” But by tracing the believability cascades down, we can see that there was actually only one root source of this information, and update our overall confidence in that claim accordingly.) What’s more, this is exactly the sort of technical solution which is perfectly within the wheelhouse of social networking sites, as opposed to asking them to become master fact-checkers of all things. (It bears an obvious resemblance to things like follows and retweets which already exist, though it would require some important tweaks to create a reliable believability cascade out of those elements.)
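The double-counting fix can also be sketched in a few lines (again, the channel names and the `source_of` mapping are invented for illustration): trace each report back to its root source and count distinct roots, rather than counting distinct reporters.

```python
# Sketch of de-duplicating evidence by root source: `source_of` maps
# each reporter to where they got the claim (None = original reporting).
# Counting distinct roots, not distinct reporters, avoids treating a
# relayed report as independent confirmation.

def independent_sources(source_of, reporters):
    """Count the genuinely independent root sources behind a claim."""
    roots = set()
    for r in reporters:
        while source_of.get(r) is not None:
            r = source_of[r]
        roots.add(r)
    return len(roots)

source_of = {
    "ChannelA": None,        # did original reporting
    "ChannelB": "ChannelA",  # merely relayed ChannelA's report
}
print(independent_sources(source_of, ["ChannelA", "ChannelB"]))  # 1
```

Two reporters, one root – so the trace tells you this claim has exactly one independent source, not two.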
I’ll admit, I have another reason for liking this idea, which is that philosophically, I dislike the attitude of asking someone else to do your thinking for you. So I find it much more attractive to ask sites like Twitter to provide us the tools to quickly gather all purported evidence so we can review it, rather than asking them to tell us what conclusion we’re supposed to draw from that evidence.
Now, I’m sure you have many examples in your head of large groups of people continuously reinforcing a shared “delusion” with each other – maybe a political perspective you don’t like, or a scientific claim you feel has been debunked, or whatever else. And I’ll confess, believability cascades don’t remove this possibility. Again, this isn’t a tool that lets you abdicate your responsibility to be a critical thinker. All these believability cascades do is make it obvious where each claim comes from, so that you can meaningfully draw your own conclusions based on the real evidence available – rather than having to sift through near-infinite echoes, misinterpretations, baseless assertions, and other noise that gets in the way of forming clear conclusions.
Do I really believe this will fix all the world’s problems? Heck no. But do I believe it would be significantly better than both our current state, as well as the world where we ask random social media companies to dictate what gets to count as true? Absolutely.