I was recently having a sweaty conversation about the construction and use of models while sitting in a sauna with a friend. We were talking about how the aggregate polling forecasts turned out to be so wrong about the 2016 Presidential Election. I took the position that models are important to have (or else we would just be making random guesses) but can be dangerous if you become overconfident in them. Let’s hash this out.
A Poker Model
An example should help you wrap your mind around the definition of “model” I’m working with.
Let’s say you want to start playing poker. Without any model in place, you would play the game entirely randomly. Sometimes you might fold without looking at your hand. Sometimes you might go all-in without looking at your hand. Sometimes you might fold a pair of aces pre-flop. And on and on. Every action would be completely random. There would be no rhyme or reason to your actions.
You play this way for a while and you realize you are a terrible poker player. So you decide to learn a bit more about the game. You learn that some hands are better than others. With this newfound insight, you form a model: you decide that you will only play a hand if it has at least 2 face cards; everything else you will aggressively fold.
With this basic model in place, you find that your play improves over your no-model, random play.
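Crude as it is, the face-card rule is concrete enough to write down. Here is a minimal sketch of that rule as a decision function (the names and hand representation are my own illustration, not from any poker library):

```python
# A sketch of the naive "two face-cards" poker model described above.
# A hand is a list of two rank strings, e.g. ["K", "Q"].

FACE_CARDS = {"J", "Q", "K"}  # jacks, queens, kings

def should_play(hand):
    """Play only if both hole cards are face cards; otherwise fold."""
    return all(card in FACE_CARDS for card in hand)

print(should_play(["K", "Q"]))  # plays: both cards are face cards
print(should_play(["A", "A"]))  # folds: the crude rule even folds aces
```

Note that this rule happily folds a pair of aces, since an ace is not a face card. That is the point: the model is an improvement over random play, but it is still a bad model with obvious room to grow.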
Great, you have realized the power of having a model in place. You further realize that you can tinker with and improve your model. You start memorizing the odds of winning certain hands based on table position and number of players. You start scouting your competition and memorizing their playing style, their tells, and what makes them tick. You incorporate all of this data into your vastly improved model for playing poker. Soon, you are the World Poker Tour champion. Great success. Your adoption of a poker model has propelled you to the pinnacle of poker.
The poker example is meant to pin down, in general terms, what I am talking about when I use the word “model.” I think we would all agree that we all have many, many models running in our heads. Some are good and some are bad. Some were built purposefully, while others are just things we casually picked up from others. Some forecast better than others.
We talk a lot about mental models around here and the Munger concept of “mental models” most definitely is apt for what we are talking about. You do want as many useful models in your head in order to synthesize, interpret, and see reality as clearly as possible.
What brought about my initial hesitation in the sauna was the Munger concept of the “man with a hammer syndrome” where you essentially believe your expertise in one area is applicable in all areas. For example, if you were a heart surgeon, you have a fantastic model in place through years and years of education and training on how to successfully perform heart surgery. However, this does not mean that you can then turn around and fix all the ills of the body through heart surgery.
What is highly useful and relevant in one area is completely useless in another. Therefore, while models are a necessity, they can be a double-edged sword if not thought through properly.
Another downside of models is the temptation to make extremely confident predictions in highly complex systems. I’m reminded of the spectacular implosion of Long Term Capital Management in the 1990s. This was a team comprised of some of the brightest and most decorated economists, mathematicians, and traders on Wall Street. This was like the All-Star team of finance. They built an incredibly sophisticated trading model that worked… until it didn’t. Oops.
The models, as built, were utterly inadequate at describing the world, as they did not take liquidity into account, at least not to the appropriate degree. More important, the events surrounding LTCM showed that the liquidity of an asset was a tenacious variable: hard to define and prone to large swings.
The problem was even more complex than that. LTCM itself, by owning certain positions, affected the eventual liquidity of those securities. It created a self-reinforcing problem that the models could not address. The models needed to consider that others were using, well, models.
Wall Street learned very little following September 1998. By the time Lehman defaulted and triggered the next financial crisis (and my loss on the Brazilian rates), the same model-based methodology had again taken primacy in markets, culminating in massive investments in complex mortgage securities.
Again, it’s not that we shouldn’t build and use models to guide us through the world, because we must. The danger lies in the hubris we can fall into with the models we build, especially elegant and complex ones that check out in theory but may not in the real world. And when such elegant and complex models are being used to make consistent, accurate predictions in a highly complex system, it may not work out long term, because there are too many “unknown unknown” inputs and outputs in the system; the model is susceptible to breakdown when unknown and unknowable variables enter the picture.
Models guide us from birth on how to navigate life. These are fairly simple, crude models we pick up from observation or lessons from our peers and elders. There isn’t much complexity to these models and they usually serve us well enough in the environment in which they are useful.
Once we become more “educated” we are able to take skills in mathematics, and other disciplines, to create much more complex models. In the pure sciences, these models – for example, the theory of gravity – stand up very well because they adhere to a set of consistent rules and laws, allowing the model(s) to make consistent and accurate predictions.
However, when it comes to human beings, we are not mere atoms and molecules following the same consistent rules and laws in all places at all times. We are complex, combustible, and at times highly irrational beings. That’s just one human being: put many together and you have created a highly complex system that probably cannot be modeled accurately over a long enough time period.
Well, actually, let me rephrase that. I think you could collect enough data on humans, individually and as a group, to make highly accurate predictions on what someone might do next. I think the big danger lies in the fact that this won’t be consistent at all times and all places. An action that might be predictable 95% of the time still leaves open a 5% chance that the prediction will turn out false. No harm, no foul when the predictions are not staked on anything of importance.
But when you have high stakes in the game (your assets) in a highly complex system (the markets), falling into hubris and bad incentive structures with your models can be deadly because not all datapoints are known (or perhaps even knowable), participants do not at all times and all places follow the rules and laws of the game, and incentives could cause you to reach for just a little bit more for a lot more risk.
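To put a rough number on that lingering 5%: a prediction that holds 95% of the time fails with near certainty once you lean on it repeatedly. A quick back-of-envelope calculation (assuming each use is independent, which real markets violate, since failures tend to cluster):

```python
# Chance that a 95%-reliable prediction fails at least once over
# n repeated, independent uses. Independence is an assumption here;
# correlated failures (as in 1998 and 2008) make things worse.

def prob_at_least_one_failure(p_correct, n):
    return 1 - p_correct ** n

for n in (1, 20, 100):
    print(n, round(prob_at_least_one_failure(0.95, n), 3))
# After 20 uses the chance of at least one miss is already about 64%;
# after 100 uses it is a near certainty.
```

If each of those misses can wipe you out, as it nearly did for LTCM, a 95% hit rate is not reassuring at all.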
*After having written this up, I’m still a bit hazy about where exactly I was going with this — in terms of thinking of models, their uses, and their limitations — so if you want to open up critical discussion on this, I would much appreciate it! Also, I jotted down some additional thoughts here.