Making predictions is fun. It satisfies something deep for many people to speculate about the significance of certain actions, the hidden goals and values of different people, and often, the future itself.
Usually, these predictions are plausible – after all, making completely wild and indefensible predictions is usually a pretty boring activity. The thrill of making predictions comes from the feeling that you’re discovering something – that you might be ahead of the curve on realizing some new important truth.
But plausibility is not probability.
Plausibility does not require probability.
Indeed, plausibility doesn’t even necessarily correlate with probability.
The difference between these two terms comes from the fact that the human brain is terrible at intuitive math. We develop feelings about likelihood from built-in heuristics and gut reactions, not Bayesian statistics. This means that plausibility depends not on whether the prediction is good, but on how the listener feels about it. More concretely:
Plausibility comes from your ability to make a person nod and say “hmm, that seems like it could happen.”
Probability comes from your ability to make testable predictions which later are revealed to have been right.
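One way to make that distinction concrete: probability claims, unlike plausible-sounding stories, can be scored against outcomes after the fact. Below is a minimal sketch using the Brier score (the mean squared error of probability forecasts); the forecasts and outcomes are invented purely for illustration.

```python
def brier_score(forecasts, outcomes):
    """Brier score: mean squared error between forecast probabilities
    (0..1) and observed outcomes (0 or 1). Lower is better; always
    guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records over the same four events:
# a calibrated forecaster vs. a confident storyteller.
calibrated = brier_score([0.8, 0.2, 0.7, 0.1], [1, 0, 1, 0])    # 0.045
storyteller = brier_score([0.95, 0.9, 0.95, 0.9], [1, 0, 1, 0])  # 0.40625
```

The particular scoring rule doesn’t matter much – the point is that probability leaves a paper trail you can check, while plausibility leaves only a feeling.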
Now, when you think of a prediction, it’s easy to ask yourself whether the reasoning behind it is statistically sound – but it’s hard to actually assess that clearly. I see this confusion all the time – and I fall victim to it as often as not – and one of the most conspicuous places I see it is in predictions about the future of technology. Questions of the form “what will AI be able to achieve in 5 years?”, “what technology should we be most excited about, or scared of?”, etc, all fundamentally share this problem – they’re fun to ask and fun to speculate about, but extremely difficult to answer in a principled way.
Now, I don’t want to come off as saying it’s bad for people to speculate like this. It’s fun, and often it’s also educational, inspiring people to think and learn more deeply about something than they otherwise would have. But I do have concerns when people start treating so-called expert predictions as guidance for policy, or as a roadmap for what’s actually going to happen.
The predictions which make the rounds on social media, which are the most likely for you to come across, are the most powerful stories. The ones with emotional weight to them. The ones that strike listeners as inherently plausible, and dramatic besides.
But plausibility is not probability, and I think we all benefit by building a bit of a resistance to over-indexing on these kinds of stories.
Some other things I try to keep in mind when assessing predictions:
- Remember where the kernel of the idea is coming from. For instance, consider “are we going to be attacked by a swarm of killer humanoid robots?” Where does this idea come from? We barely have humanoid robots that can open doors in real life, much less pose a threat to an adult human. Nor would any entity – human or otherwise – set on destroying humanity find human-shaped robots an efficient way to do it. The idea comes purely from science fiction, which is optimized to be entertaining while maintaining plausibility, but which typically (with a few exceptions) discards probability. Even if the described scenario sounds bad, the fact that it comes from fiction rather than evidence means there are almost certainly more likely risks that better deserve our attention.
- Beware predictions which are costless to make! I knew someone in high school who prided themselves on seeing every relationship coming months in advance. Their secret? They made loud, public predictions that a relationship would form every single time any man and woman – or sometimes two people of the same gender – interacted with each other more than the norm. They weren’t seeing anything others weren’t, and relying on their predictions wouldn’t help you get ahead – they were simply taking the shotgun approach and milking the times they happened to be right. You may not be able to see all the failed predictions, but if you notice a pattern of predictions which were easy, and which required neither work nor reputational risk, you should expect that they are also less likely to be statistically reasoned.
- Last and most importantly, assess the idea itself! This far and away supersedes everything else I’ve said – at the end of the day, what actually matters is the quality of the prediction. The points above are tools for sifting through a large number of predictions to find those most likely to be valuable – but nothing I’ve said should be taken as an argument to dismiss a prediction you don’t like based on where it came from! (And, to invert it, no matter the pedigree of someone making a bold prediction, you still need to assess the merit of what they’re saying, not assume they must be right!) The only true merit of a prediction is whether or not it actually holds up – no cause or vision is aided by preferring predictions which are more pleasant but don’t actually help us prepare for the future.
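The shotgun strategy in the second point above is easy to simulate. A small sketch, with the base rate and prediction count made up for illustration: a predictor with zero insight still racks up “successes” simply by predicting everything.

```python
import random

def shotgun_hits(n_predictions, base_rate, seed=0):
    """Count how many zero-skill predictions come true by chance alone.

    Each prediction ("those two will get together") is treated as an
    independent event that occurs at the base rate, regardless of any
    insight on the predictor's part.
    """
    rng = random.Random(seed)
    return sum(rng.random() < base_rate for _ in range(n_predictions))

# Predict a relationship for every pair that talks: at a 5% base rate,
# 200 costless predictions still yield roughly ten "I called it!"s.
hits = shotgun_hits(200, 0.05)
```

The hits are real, but they carry no information – which is exactly why a track record of cheap predictions tells you so little.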