The limits of the ‘rational’ founder

Anyone who knows me knows that I have a strong preference for explicit, formal logic over vague intuition or tradition. The mantra is fairly simple: if you can’t explain something clearly and logically, you don’t truly understand it. If you don’t understand it, you can’t make accurate predictions about the real world using it. And if you can’t make accurate predictions, then any strategy that relies upon it is worthless.

This drive to understand ideas at the root level served me well in undergrad, when I studied math and physics, but logic only applies to concrete objects with well-defined attributes, like numbers or electrons. How do you apply logic to the complex, ever-evolving, and fundamentally uncertain world of human interaction and business?

This skill of ‘accurately reducing the real world to a logic problem’ has many names in different communities, but I tend to call it “rationality.” And when done well, rationality is extraordinarily valuable. When a trader poring over their notes realizes that a certain option is underpriced; when a CEO sheds a low-performing division to save the larger company, despite the division’s storied history; when a consumer successfully ignores a ‘great deal’ on something they never needed in the first place – all of these people are leveraging the art of rationality. More straightforwardly, I could frame it as follows: would you rather know exactly which actions will satisfy your goals most effectively, or not? Framed that way, it hardly seems like a choice – rationality is obviously good.

And if rationality were a switch – turn it off or turn it on – then that would be true. But realistically, rationality is a sliding scale. At the bottom end of the scale, you never question yourself or others, and simply go with your gut and do whatever you feel at the time. At the top end, you always choose the very best option available to you, given your goals. But no human is at either end of the scale – we’re all in the middle. And the middle is complicated, because partial rationality is often worse than no rationality at all.

In my mind, there are two major, common pitfalls folks fall into when they are trying to be “rational.” The first is a failure to understand that rationality can only apply to strategies, not to goals themselves – which causes many so-called ‘rationalists’ to deem others foolish without realizing those others are simply optimizing for something different. This topic deserves a blog post of its own, but for now, I’d like to focus on the second pitfall, which is perhaps best encapsulated by the heuristic of Chesterton’s Fence. This issue occurs when you identify a practice or idea which, at first glance, appears to be inaccurate or silly – say, a fence in the middle of nowhere, serving no clear purpose. The pitfall of the aspiring rationalist is to say ‘well, I can’t figure out why this is here, so the odds are that whoever put this fence up is just a fool.’ Such an individual then tears down the fence and, shortly thereafter, is gored to death by the bull that had been somewhere on the other side.

This idea – that just because you don’t see why something is a good idea, that doesn’t mean it’s actually not a good idea – is the sort of thing that everyone pays lip service to. I certainly would never have denied it. But most people leave themselves a little escape hatch in the back of their mind. Surely, they think, there must be some sort of absurdity heuristic, where a sufficiently stupid behavior can be readily dismissed?

I recently came across a powerful example of why you can’t think this way, in Joseph Henrich’s The Secret of Our Success. Henrich’s focus throughout the book is on cultural evolution – the ways cleverness manifests itself within our societies without any specific individual necessarily ‘solving’ their way there directly. He discusses a variety of interesting studies showing how humans evolved with biases towards certain kinds of social learning, in ways that either bypass or explicitly prohibit rationality. By far my favorite example was his discussion of using divination methods to predict the location of prey to hunt. He describes a tribe of foragers in Canada who are attempting to track caribou, and notes that, if they consistently used the strategy of ‘return to where we saw many caribou last time’, this would only serve them well for a short while. After all, the caribou would quickly learn – partly through direct intelligence, but also through the selection pressure of the losers being hunted – not to return to an area after humans had been there.

Henrich notes that this is essentially a version of a classic game-theory setup known as Matching Pennies. And that game has exactly one Nash equilibrium – a mixed strategy in which both sides randomize their choices.

This sheds new light on the use of divination. We know, with modern science, that the way a bone cracks has no correlation with where caribou are – indeed, it has virtually no correlation with anything meaningful in the world. But what if that was exactly the point? What if divination of this type was effectively being used as a primitive random number generator, allowing these civilizations to thrive far more successfully than they would have by simply chasing the caribou directly?
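To make the intuition concrete, here is a toy simulation. This is my own sketch, not a model from Henrich’s book – the five-site world and the caribou’s ‘avoid the last hunted spot’ behavior are invented assumptions. It pits the seemingly rational rule ‘return to where you last found caribou’ against a bone-crack oracle that just picks a site at random:

```python
import random

def simulate(hunter_strategy, n_sites=5, rounds=100_000, seed=0):
    """Run the hunt; caribou always avoid the site the hunter visited last round."""
    rng = random.Random(seed)
    last_visit = None      # where the hunter went last round
    last_success = None    # whether that visit found caribou
    successes = 0
    for _ in range(rounds):
        # Caribou pick any site except the one the hunter just visited.
        options = [s for s in range(n_sites) if s != last_visit]
        caribou = rng.choice(options)
        visit = hunter_strategy(rng, n_sites, last_visit, last_success)
        found = (visit == caribou)
        successes += found
        last_visit, last_success = visit, found
    return successes / rounds

def revisit(rng, n, last_visit, last_success):
    # The 'rational'-seeming rule: go back to where you last found caribou.
    if last_success:
        return last_visit
    return rng.randrange(n)

def divination(rng, n, last_visit, last_success):
    # The bone-cracking oracle: a pure random number generator.
    return rng.randrange(n)

print(f"revisit:    {simulate(revisit):.3f}")
print(f"divination: {simulate(divination):.3f}")
```

Over many rounds, the ‘divination’ hunter converges on a success rate of 1/n (0.2 with five sites), while the revisit strategy does measurably worse (roughly 1/(n+1)), because every success is immediately followed by a guaranteed failure at the now-avoided site.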

On its own, this is a powerful depiction of Chesterton’s Fence – imagine how close to perfectly rational you would have to be, if you were teleported into this community, NOT to try to stop them from using a divination practice you ‘know’ to be useless. Very few people in the world have the requisite balance of intelligence, curiosity, and humility to avoid breaking something important in a situation like this. But what struck me most deeply was actually a comment Henrich made afterward.

What Henrich noted is that, in cases like this, it’s entirely expected that every single member of the community understands divination incorrectly. They all believe it tells them where caribou will be, not that it’s part of a game-theoretically optimal strategy for a repeated matching-pennies game. (Henrich offers other examples of this too – in Fiji, pregnant women are banned from eating sharks, which we now know to be a good idea due to chemicals in the sharks that increase the odds of birth defects. But when pressed to explain the taboo, these women ultimately declared that eating shark would cause their babies to be born with shark skin.) You might be tempted to laugh at these individuals, but I challenge you to really think about the last time someone asked you why you believed a certain politician was the right one, why a particular execution strategy would be more successful, or to justify anything else. Had you really thought through exactly why you believed those things, from first principles, before anyone ever asked? Or is the real reason you believe them something like ‘vague intuition based on lots of learned experience’, while the pressure to give a more legible answer caused you to grab at some plausible-sounding specific claims when asked?

This, finally, leads me to talk about my own experience as a founder. Even before I started Modulate, I would become immensely frustrated with colleagues or bosses who could not explain their thinking to me. I was, and am, eager to learn about what I don’t know – and given my technical background, that especially meant asking lots of questions about marketing, sales, product design, and business strategy. I was extremely fortunate to find a few mentors with a gift for sharing their wisdom in a way that made sense to me, but these folks were a small fraction of the frankly immense talent I had the good fortune to surround myself with. And this has largely been the same at Modulate – we have an incredible team, and I’m learning a great deal from them, but I’m also painfully aware of how much more wisdom each of my colleagues has, which I’m still struggling to truly understand for myself.

For a long time, my strategy here was, quite frankly, to just pile on the pressure and ask even more questions. “But why do you think that’s the right thing to do?” “What if we did this instead, what specifically do you expect to go wrong there?” “Last week you told me to do X instead of Y, now it seems like you’re suggesting Y instead. What exactly is different about this situation?” Occasionally, I would hit upon the right way of framing a question such that the person I was speaking to – invariably displaying a level of patience with me that I’m immensely grateful for – would discover a way to phrase their answer which I could truly grok. But usually, this process failed – for a reason that, until recently, I didn’t quite have a model for. But now I do – it failed because they didn’t know the answer. The best I was going to get was “because I don’t want my baby to have shark skin.” And getting that answer is arguably more dangerous than getting no answer at all – because if I were to declare “you’re a fool, babies can’t get shark skin, so I guess I can just ignore your suggestion”, I would end up choosing the wrong strategy!

I suppose the best way to wrap this up into a clear takeaway is this: understanding why something is the right answer is far less correlated with knowing what the best answer is than I would have naively expected. Just as the art of divination evolved through cultural evolution – with no single practitioner understanding why it was a good idea – many of the most opaque business practices may well have evolved the same way. That an experienced salesperson doesn’t know why they prefer to explain the product a certain way isn’t evidence that they are wrong. Even if their explanation sounds obviously wrong, it doesn’t mean they are wrong. The only arbiter of right and wrong is outcomes – which strategy is actually successful.

And what you can say for sure about the traditional way is that it’s been at least successful enough to last this long. In many cases, that’s a matter of centuries in one of the most hotly competitive arenas nature has ever devised – human capitalistic society. Compared to your next clever idea, I’d say that deserves more than a little weight.

Before I end this post, though, I should acknowledge – this essay might come across as a bit depressing. There’s a reading of it that says “improving things is hopeless, just give up and do whatever is traditionally done.” But not only is this the exact opposite of entrepreneurship, it’s also obviously wrong in the real world (by far the more important of the two). After all, we have improved on traditions, over and over again, for centuries. That, too, is a longstanding tradition. So, while I could probably write a whole other blog post just on specific tactics to continue to learn and innovate without violating Chesterton’s Fence, I think it’s important to at least briefly mention a few of them here.

  1. Default to the traditional way. You are allowed to defy tradition, but not without a point of comparison. Start out doing it the ‘normal’ way even if you feel like that’s silly – and keep it going until you start to viscerally feel real pain, manifested in the form of business outcomes that are distinctly worse than you feel should be possible. Only then can you start to design new techniques with any expectation of them working out.
  2. Hire experts, empower them, and then ignore them. The job of the established expert is to know what the traditional approach is and to execute it reliably. But as discussed above, that does not make them reliable advocates for the traditional approach – in fact, no single person in the world may fully understand why it makes sense. Use experts as a backstop, to ensure that your company does no worse than average; and by all means, pick their brains to understand what the traditional way is. But when it comes to innovating better ways, preach all you want, but the shamans are never going to purchase your random number generation computer to implement their caribou-hunting strategy directly. They don’t want to – they want to use divination, because internally, they each believe that’s the right strategy. This is where the necessary arrogance of the entrepreneur comes in – if you’re getting traditional experts in a non-scientific field to endorse your revolutionary way of doing things, it’s probably much less revolutionary than you thought.
  3. Experiment, experiment, experiment. Gleefully take every single disagreement or question, especially from the traditional experts, as an opportunity to actually test your beliefs. Think carefully about the tolerance your company has for imperfect approaches, and then take advantage of that slack. Your goal is not to get four people into a room to reason their way to the ‘true’ answer – none of you will be able to introspect on your own beliefs well enough to really balance them all together. Rather, your goal is to get as many distinct, potentially good strategies on the table as you can get, and then to test them and figure out which ones actually work.

Rationality is still, frankly, an essential art to hone if you hope to become a successful entrepreneur. But entrepreneurs are arrogant creatures by nature, and rationality is among the most dangerous tools to attempt to wield beyond your true level of expertise. So stay aware of ways that your insistence on explicit logic may be holding you back, and remember – you get no points for the beauty of your explanations. The only points that matter are the ones you get when you actually succeed, whatever sorts of reasoning you may have relied upon to do so.
