How to Think

Shane Parrish is the entrepreneur and wisdom seeker behind Farnam Street and the host of The Knowledge Project Podcast, where he focuses on turning timeless insights into action. His new book is Clear Thinking: Turning Ordinary Moments Into Extraordinary Results.

I recently wrote a response on Quora to the question "How do I become a better thinker?" that generated a lot of attention and feedback, so I thought I'd build on it a little and post it here too.

***

Thinking is not IQ. When people talk about thinking, they often assume that people with high IQs think better. That's not what I'm talking about. I hate to break it to you, but unless you're trying to get into Mensa, IQ tests don't matter as much as we think they do. After a certain point, that's not the type of knowledge or brainpower that makes you better at life, happier, or more successful. It's a measure, sure, but a relatively useless one.

If you want to outsmart people who are smarter than you, temperament and lifelong learning are more important than IQ.

Two of the guiding principles that I follow on my path towards seeking wisdom are: (1) Go to bed smarter than when you woke up; and (2) I’m not smart enough to figure everything out myself, so I want to ‘master the best of what other people have already figured out.’

Acquiring wisdom is hard. Learning how to think is hard. It means sifting through information, filtering out the bunk, and connecting it to a framework you can use. A lot of people want to get their opinions from someone else. I know this because whenever anyone blurts out an opinion and I ask why, I get some hastily rephrased sound bite that doesn't contextualize the problem, identify the forces at play, demonstrate differences or similarities with previous situations, consider base rates, or … anything else that would demonstrate some level of thinking. (One of my favorite questions to probe thinking is to ask what information would cause someone to change their mind. Immediately stop listening and leave if they say "I can't think of anything.")

Thinking is hard work. I get it. You don't have time to think, but that doesn't mean you get a pass from me. I want to think for myself, thank you.

***

So one effective thing you can do if you want to think better is to become better at probing other people's thinking. Ask questions. Simple ones are better. "Why?" is the best. If you ask it three or four times, you get to a place where you'll understand more and be able to tell who really knows what they're talking about. Shortcuts in thinking are easy, and this is how you tease them out. Not to make the other person look bad – don't do this maliciously – but to avoid mistakes, air assumptions, and discuss conclusions.

Another thing you can do is slow down. Make sure you give yourself time to think. I know, it's a fast-paced internet world where we get cultural machismo points for answering on the spot, but unless something has to be decided at that very moment, simply say, "Let me think about that for a bit and get back to you." The world will not end while you think about it.

You should also probe yourself. Try to understand whether you're talking about something you really know about or just regurgitating some talking head you heard on the news last night. Your life will become instantly better and your mind clearer if you simply stop the latter. You're only fooling yourself, and if you don't understand the limits of what you know, you're going to get in trouble.

***

Learning how to think really means continuously learning.

How can we do that?

First we need a framework to put things on so we can remember, integrate, and make them available for use.

A Latticework of Mental Models, if you will.

Acquiring knowledge may seem like a daunting task. There is so much to know and time is precious. Luckily, we don’t have to master everything. To get the biggest bang for the buck we can study the big ideas from physics, biology, psychology, philosophy, literature, and sociology.

Our aim is not to remember facts and try to repeat them when asked. We're going to try to hang these ideas on a latticework of mental models. Doing this puts them in a usable form and enables us to make better decisions.

A mental model is simply a representation of an external reality inside your head. Mental models are concerned with understanding knowledge about the world.

Decisions are more likely to be correct when ideas from multiple disciplines all point towards the same conclusion.

It’s like the old saying, “To the man with only a hammer, every problem looks like a nail.” Let’s make every attempt not to be the man with only a hammer.

Charlie Munger further elaborates:

And the models have to come from multiple disciplines because all the wisdom of the world is not to be found in one little academic department. That’s why poetry professors, by and large, are so unwise in a worldly sense. They don’t have enough models in their heads. So you’ve got to have models across a fair array of disciplines.

You may say, “My God, this is already getting way too tough.” But, fortunately, it isn’t that tough because 80 or 90 important models will carry about 90% of the freight in making you a worldly wise person. And, of those, only a mere handful really carry very heavy freight.

These models generally fall into two categories: (1) ones that help us simulate time (and predict the future) and better understand how the world works (e.g., understanding a useful idea like autocatalysis), and (2) ones that help us better understand how our mental processes lead us astray (e.g., availability bias).

When our mental models line up with reality, they help us avoid problems. However, they cause problems when they don't line up with reality, because then we believe something that isn't true. So beware.

In Peter Bevelin’s masterful book Seeking Wisdom, he highlights Munger talking about autocatalysis:

If you get a certain kind of process going in chemistry, it speeds up on its own. So you get this marvellous boost in what you’re trying to do that runs on and on. Now, the laws of physics are such that it doesn’t run on forever. But it runs on for a goodly while. So you get a huge boost. You accomplish A – and, all of a sudden, you’re getting A + B + C for awhile.

But knowing is not enough. You need to know how to apply this to other problems outside of the domain in which you learned it.

Munger continues:

Disney is an amazing example of autocatalysis … They had those movies in the can. They owned the copyright. And just as Coke could prosper when refrigeration came, when the videocassette was invented, Disney didn’t have to invent anything or do anything except take the thing out of the can and stick it on the cassette.

***

What models do we need?

I keep a running list that I'm filling in over time, but how we store and sort these models is really an individual preference. The framework is not a one-stop shop; what matters is how it fits into your brain.

How can we acquire these models?

There are several ways to acquire the models, the first and probably best source is reading. Even Warren Buffett says reading is one of the best ways to get wiser.

But sadly, if your goal is wisdom acquisition, you can't just pick up a book and read it. You need to Learn How To Read A Book all over again. Most people look at my reading habits (What I'm Reading) and think that I speed read. I don't. I think that's a bunch of hot air. If you think you can pick up a book on a subject you're unfamiliar with and in 30 minutes become an expert … well, good luck to you. Please go back to getting your opinions from Twitter.

Focus on the big, simple ideas.

Focus on deeply understanding the simple ideas (see Five Elements of Effective Thinking). These simple ideas, not the cutting-edge ones, are the ones you want to hang on your latticework. The latticework is important because it makes the knowledge usable – you not only recall it but internalize it.

But the world is always changing … what should we learn first?

One of the biggest mistakes I see people making is to try and learn the cutting-edge research first. The way we prioritize learning has huge implications beyond the day-to-day. When we chase the latest thing, we’re really jumping into an arms race (see: The Red Queen Effect). We have to spend more and more of our time and energy to stay in the same place.

Despite our intentions, learning in this way fails to take advantage of cumulative knowledge. We’re not adding, we’re only maintaining.

If we are to prioritize learning, we should focus on ideas that change slowly – these tend to be the ones from the hard sciences. (see Adding Mental Models to Your Toolbox)

The models that come from hard science and engineering are the most reliable models on this Earth. And engineering quality control – at least the guts of it that matters to you and me and people who are not professional engineers – is very much based on the elementary mathematics of Fermat and Pascal: It costs so much and you get so much less likelihood of it breaking if you spend this much… And, of course, the engineering idea of a backup system is a very powerful idea. The engineering idea of breakpoints – that’s a very powerful model, too. The notion of a critical mass – that comes out of physics – is a very powerful model.
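The "elementary mathematics of Fermat and Pascal" behind the backup-system idea Munger mentions can be made concrete with a few lines (an illustrative sketch, not from the original essay; the failure probabilities are assumed for the example):

```python
# Backup-system arithmetic: if each of two independent components fails
# with probability p, the whole system fails only when BOTH fail at once.
p = 0.01                 # assumed failure probability of a single component
single = p               # no backup: 1% chance the system breaks
with_backup = p * p      # independent backup: roughly a 0.01% chance

print(f"single: {single:.4%}, with backup: {with_backup:.4%}")
```

That hundredfold drop in failure probability, bought with one redundant component, is why the backup-system idea carries so much freight.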

To help further prioritize learning, from What Should I Read?:

Knowledge has a half-life. The most useful knowledge is a broad-based multidisciplinary education in the basics. These ideas are ones that have lasted, and thus will last, for a long time. And by last, I mean in the sense of mathematical expectation: I know what will happen in general but not in each individual case.
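The idea of mathematical expectation, knowing the general outcome without knowing any individual case, can be sketched briefly (purely illustrative; not part of the original essay):

```python
import random

random.seed(42)  # make the illustration reproducible

# The expectation of a fair six-sided die is fixed and knowable: 3.5.
expected = sum(range(1, 7)) / 6

# Individual rolls are unpredictable, but their average converges
# toward the expectation as the number of rolls grows.
rolls = [random.randint(1, 6) for _ in range(100_000)]
average = sum(rolls) / len(rolls)

print(expected)           # 3.5
print(round(average, 2))  # close to 3.5, though any one roll is 1 through 6
```

No single roll can be forecast, yet the long-run average is known in advance; that is the sense in which durable ideas "last" in expectation.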

Integrating Knowledge

(Source: Adding Mental Models to Your Toolbox)

Our world is multi-dimensional and our problems are complicated. Most problems cannot be solved using one model alone. The more models we have, the better able we are to rationally solve problems. But if we don't have the models, we become the proverbial man with a hammer.

To the man with a hammer everything looks like a nail. If you only have one model you will fit whatever problem you face to the model you have. If you have more than one model, however, you can look at the problem from a variety of perspectives and increase the odds you come to a better solution.

No one discipline has all the answers; only by looking across them all can we grow worldly wisdom.

Charles Munger illustrates the importance of this:

Suppose you want to be good at declarer play in contract bridge. Well, you know the contract – you know what you have to achieve. And you can count up the sure winners you have by laying down your high cards and your invincible trumps.

But if you’re a trick or two short, how are you going to get the other needed tricks? Well, there are only six or so different, standard methods: You’ve got long-suit establishment. You’ve got finesses. You’ve got throw-in plays.

You’ve got cross-ruffs. You’ve got squeezes. And you’ve got various ways of misleading the defense into making errors. So it’s a very limited number of models. But if you only know one or two of those models, then you’re going to be a horse’s patoot in declarer play…

If you don’t have the full repertoire, I guarantee you that you’ll over-utilize the limited repertoire you have – including use of models that are inappropriate just because they’re available to you in the limited stock you have in mind.

As for how we can use different ideas, Munger again shows the way …

Have a full kit of tools … go through them in your mind checklist-style. … [Y]ou can never make any explanation that can be made in a more fundamental way in any other way than the most fundamental way.

When you combine things you get lollapalooza effects — the integration of more than one effect to create a non-linear response.

A two-step process for making effective decisions

There is no point in being wiser unless you use it for good. You know, as Uncle Ben put it to Peter Parker, "With great power comes great responsibility."

(Source: A Two-step Process for Making Effective Decisions)

Personally, I’ve gotten so that I now use a kind of two-track analysis. First, what are the factors that really govern the interests involved, rationally considered? And second, what are the subconscious influences, where the brain at a subconscious level is automatically doing these things – which by and large are useful, but which often misfunction?

One approach is rationality – the way you’d work out a bridge problem: by evaluating the real interests, the real probabilities and so forth. And the other is to evaluate the psychological factors that cause subconscious conclusions – many of which are wrong.

This is the path, the rest is up to you.

This piece originally appeared on Farnam Street.



TIME Ideas hosts the world's leading voices, providing commentary on events in news, society, and culture. We welcome outside contributions. Opinions expressed do not necessarily reflect the views of TIME editors.