Book Review: The Great Mental Models (General Thinking Concepts)
This article was originally published on my website — https://kislayverma.com/books/book-review-the-great-mental-models-general-thinking-concepts/
The old saying goes, “To the man with a hammer, everything looks like a nail.” But anyone who has done any kind of project knows a hammer often isn’t enough.
The more tools you have at your disposal, the more likely you’ll use the right tool for the job — and get it done right.
The same is true when it comes to your thinking. The quality of your outcomes depends on the mental models in your head. And most people are going through life with little more than a hammer.
The Great Mental Models: General Thinking Concepts is the first book in The Great Mental Models series designed to upgrade your thinking with the best, most useful and powerful tools so you always have the right one on hand.
This volume details nine of the most versatile, all-purpose mental models you can use right away to improve your decision making, productivity, and how clearly you see the world. You will discover what forces govern the universe and how to focus your efforts so you can harness them to your advantage, rather than fight with them or worse yet — ignore them.
This first volume in the Great Mental Models series was, in all honesty, a complete letdown for me. I have been an avid reader of Shane Parrish and his FS blog over the last few months. It is a fantastic place to learn things, and to learn about how to learn things. The blog brings an extremely rich and nuanced perspective to everything it covers.
This book, on the other hand, felt like those abridged versions of great classics that are published for young children. While it covers some extremely important and versatile modes of thinking, the material is so shallow as to be suitable only for those who have never heard these words before. The deep analysis and application of simple concepts to hard problems which make reading the FS blog such a delight are entirely missing from the book.
The following mental models are covered:
- The map is not the territory
- Circle of competence
- First principles thinking
- Thought experiment
- Second-order thinking
- Probabilistic thinking
- Occam’s razor
- Hanlon’s razor
If you google these terms, you will find articles with deeper insights into each of them than the book provides. I was hoping there would at least be some examples demonstrating the fundamental nature of these models and how they can be applied to seemingly unrelated issues. But I did not find any.
All this said, there is nothing wrong with the book. It is written simply and well, and if you have not heard of the models mentioned above (or have heard the names but never thought of them explicitly as mental models), then this is a good place to start. At the very least, it gives you a great starting point for googling widely-applicable thought patterns. And unlike some other books (like Alain de Botton’s The Consolations of Philosophy), it does not trivialize the concepts it covers. It sets them up nicely and encourages the reader to explore further.
Reading books like this makes me wonder where the problem actually lies when it comes to critical thinking. My feeling is that a lot of people know about these models but fail to identify their applicability in real scenarios, so the whole point of knowing them is lost. Perhaps a better book would be an exercise book of sorts, with real-life problems to which you could try applying the models and then debate your solutions with a community? That might help build the muscle memory needed to apply these concepts properly. The problem is not a lack of knowledge, I think, but the clear-headed application of existing knowledge.
I have included my highlights of the book below. This should give you a taste of what the book talks about and reads like.
- A mental model is simply a representation of how something works. We cannot keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks.
- In life and business, the person with the fewest blind spots wins. Removing blind spots means we see, interact with, and move closer to understanding reality.
- The fundamentals of knowledge are available to everyone. There is no discipline that is off limits — the core ideas from all fields of study contain principles that reveal how the universe works, and are therefore essential to navigating it.
- Being able to draw on a repertoire of mental models can help us minimize risk by understanding the forces that are at play.
- To see a problem for what it is, we must first break it down into its substantive parts so the interconnections can reveal themselves.
- Most problems are multidimensional, and thus having more lenses often offers significant help with the problems we are facing.
- Understanding must constantly be tested against reality and updated accordingly. This isn’t a box we can tick, a task with a definite beginning and end, but a continuous process.
- Our failures to update from interacting with reality spring primarily from three things: not having the right perspective or vantage point, ego-induced denial, and distance from the consequences of our decisions.
- Many of us tend to have too much invested in our opinions of ourselves to see the world’s feedback — the feedback we need to update our beliefs about reality.
- We’re so afraid about what others will say about us that we fail to put our ideas out there and subject them to criticism.
- If we do put our ideas out there and they are criticized, our ego steps in to protect us. We become invested in defending instead of upgrading our ideas.
- In the real world you will either understand and adapt to find success or you will fail.
- The world does not act on us as much as it reveals itself to us and we respond.
- Better models mean better thinking.
- The degree to which our models accurately explain reality is the degree to which they improve our thinking.
- What successful people do is file away a massive, but finite, amount of fundamental, established, essentially unchanging knowledge that can be used in evaluating the infinite number of unique scenarios which show up in the real world.
- If a model works, we must invest the time and energy into understanding why it worked.
- The Map is not the Territory — the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted.
- A map may have a structure similar or dissimilar to the structure of the territory.
- An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly. We may call this characteristic self-reflexiveness.
- We run into problems when our knowledge becomes of the map, rather than the actual underlying territory it describes.
- Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.
- In a true science, as opposed to a pseudo-science, the following statement can be easily made: “If x happens, it would show demonstrably that theory y is not true.”
- A theory is part of empirical science if and only if it conflicts with possible experiences and is therefore in principle falsifiable by experience.
- First principles thinking identifies the elements that are, in the context of any given situation, non-reducible.
- everything that is not a law of nature is just a shared belief.
- First principles thinking helps us avoid the problem of relying on someone else’s tactics without understanding the rationale behind them.
- As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.
- Thought experiments are more than daydreaming. They require the same rigor as a traditional experiment in order to be useful.
- One of the real powers of the thought experiment is that there is no limit to the number of times you can change a variable to see if it influences the outcome.
- Experimenting to discover the full spectrum of possible outcomes gives you a better appreciation for what you can influence and what you can reasonably expect to happen.
- The rigor of the scientific method is indispensable if we want to draw conclusions that are actually useful.
- Thought experiments tell you about the limits of what you know and the limits of what you should attempt.
- Very often, the second level of effects is not considered until it’s too late. This concept is often referred to as the “Law of Unintended Consequences” for this very reason.
- the UC Santa Barbara ecologist and economist Garrett Hardin proposed his First Law of Ecology: “You can never merely do one thing.”
- High degrees of connections make second-order thinking all the more critical, because denser webs of relationships make it easier for actions to have far-reaching consequences.
- Being aware of second-order consequences and using them to guide your decision-making may mean the short term is less spectacular, but the payoffs for the long term can be enormous.
- Second-order thinking needs to evaluate the most likely effects and their most likely consequences, checking our understanding of what the typical results of our actions will be. If we worried about all possible effects of effects of our actions, we would likely never do anything.
- The theory of probability is the only mathematical tool available to help map the unknown and the uncontrollable.
- Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass.
- Our lack of perfect information about the world gives rise to all of probability theory.
- The core of Bayesian thinking (or Bayesian updating, as it can be called) is this: given that we have limited but useful information about the world, and are constantly encountering new information, we should probably take into account what we already know when we learn something new.
- For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You’re assigning it a probability of being true. Therefore, you can’t let your priors get in the way of processing new knowledge. In Bayesian terms, this is called the likelihood ratio or the Bayes factor.
- In a bell curve the extremes are predictable. There can only be so much deviation from the mean. In a fat-tailed curve there is no real cap on extreme events.
- We can think about three categories of objects: Ones that are harmed by volatility and unpredictability, ones that are neutral to volatility and unpredictability, and finally, ones that benefit from it.
- We notice two things happening at the same time (correlation) and mistakenly conclude that one causes the other (causation).
- in any situation where change is desired, successful management of that change requires applied inversion.
- Florence Nightingale is often remembered as the founder of modern nursing, but she was also an excellent statistician and was the first woman elected to the Royal Statistical Society in 1858.
- Simpler explanations are more likely to be true than complicated ones. This is the essence of Occam’s Razor, a classic principle of logic and problem-solving
- Occam’s Razor is a great tool for avoiding unnecessary complexity by helping you identify and commit to the simplest explanation possible.
- Writing about the truth or untruth of miracles, Hume stated that we should default to skepticism about them.
- With limited time and resources, it is not possible to track down every theory with a plausible explanation of a complex, uncertain event. Without the filter of Occam’s Razor, we are stuck chasing down dead ends.
- Sometimes unnecessary complexity just papers over the systemic flaws that will eventually choke us.
- irreducible complexity, like simplicity, is a part of our reality. Therefore, we can’t use this Razor to create artificial simplicity.
- Hanlon’s Razor states that we should not attribute to malice that which is more easily explained by stupidity.
- The explanation most likely to be right is the one that contains the least amount of intent.
- we’re deeply affected by vivid, available evidence, to such a degree that we’re willing to make judgments that violate simple logic. We over-conclude based on the available information. We have no trouble packaging in unrelated factors if they happen to occur in proximity to what we already believe.
- Hanlon’s Razor, when practiced diligently as a counter to confirmation bias, empowers us, and gives us far more realistic and effective options for remedying bad situations.
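Of the highlights above, Bayesian updating is concrete enough to sketch in code. Here is a minimal illustration of the idea that priors should shape how we interpret new evidence; the numbers (a 1% base rate, a 90%-sensitive test with a 5% false-positive rate) are my own illustrative assumptions, not figures from the book.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) via Bayes' rule, given a prior P(H)
    and the probability of the evidence under H and under not-H."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# A condition with a 1% base rate; the test flags 90% of real cases
# but also 5% of healthy people.
posterior = bayes_update(
    prior=0.01,
    p_evidence_given_h=0.90,
    p_evidence_given_not_h=0.05,
)
print(round(posterior, 3))  # ~0.154
```

Despite the positive test, the posterior is only about 15%, because the low base rate (the prior) still carries most of the weight. This is exactly the point of the highlight: new evidence should update, not replace, what we already know.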
Read Next: Learn some popular technology concepts from first principles
If you like technology, organizations, books, and learning new things, I cover them all in my weekly newsletter. Sign up to receive the next articles right in your inbox. Check out the archive — https://kislayverma.com/newsletter-archive/