Mental Models (Part 1)

Use these mental models to improve your problem-solving and decision-making skills and overcome common [smoov.bra.in2023] reasoning errors. 

As an economist by way of education (not profession), I was introduced in grad school to Charlie Munger’s now famous speech, The Psychology of Human Misjudgment (arguably the magnum opus on why we behave the way we do). That speech introduced me to behavioral economics, which in turn led me to discover mental models and how to use them to dominate the competition.

Or, in more formal (less hostile) terms: how to apply the models to business, investing, and personal growth 🙂

It’s been a while since grad school, but the more I read and learn, the more I’ve come to realize that these models have near-infinite possibilities and can be applied in a variety of areas, from basic life choices to solving the unanswered complex questions of the universe.

But before we get into the meat, what exactly is a mental model?

A mental model is just a fancy word for a principle that you can use to try to explain things and/or make better decisions.

It’s a simplified “cognitive framework” that can help you understand and interact with the world by providing a structured way to organize information and make good choices.

There are tens of thousands of mental models out there (every discipline has its own set), but you don’t need to know all of them.

You just need to know the basics.

The meat.

As Munger says, “80 or 90 important models will carry about 90% of the freight in making you a worldly‑wise person.”

This post will discuss a few of the mental models that have been repeatedly useful to me over the years.

I use quite a few of them every day at the office (from spaceflight probability to hiring decisions to general strategy) and others outside the office (e.g. investing, personal life, and, of course, fantasy football).

Note: as you read through this post you will come across various links to books that I personally recommend reading/studying to increase your knowledge of the topic under discussion.

Disclaimer: this is an incomplete list, biased by my own experience and knowledge. It is not financial or life advice; use at your own risk 🙂

Ready?

Ok, let’s dive into it.

Mental Models to Help You Think Like A Genius:

1. Probabilistic Thinking

The modern world is facing a severe crisis of critical thinking and cognitive reasoning.

We exist in a culture of irrationality, fear, doubt, and uncertainty.

Nobody knows how to think anymore.

People are lost.

Confused.

The confusion leads to anxiety and stress.

And stress leads to bad decisions.

Bad decisions are everywhere.

To make things worse, there are scammers, fraudsters, and hucksters lurking in the shadows.

Lurking and waiting to prey on the insecurity and fear that arises from a world full of uncertainty and unknowns.

Many times they come across as nice, helpful, and innocent.

They’ll offer you interesting remedies to “fix” your anxiety and stress.

They offer the answers to all of your problems and confusion.

Often by promoting courses or “mantras” or selling supplements, potions and elixirs.

Or by telling you to simply “think positive” and that everything will be OK.

“Have faith, little one.”

You can sell anything to fools if you market it properly.

They are wolves in sheep’s clothing.

And now “uncertainty” has become a dirty word.

We’re only allowed to think “positive”.

Everything else is “negative”.

So we default to the Binary.

What is the Binary, you ask?

Binary thinking is the tendency to see things in terms of absolutes (good versus bad or right versus wrong).

The Binary is a simplified mindset in which a person sees issues in terms of two options that are often mutually exclusive.

In the Binary, everything is either black or white, left or right, yes or no, on or off, hot or cold.

And using Binary Thinking to make sense of the world and solve hard problems often leads to bad outcomes.

Especially in a stressful world full of irrationality, anxiety, fear, doubt, and uncertainty.

Will you get struck by lightning today?

Hit by a car?

Robbed by bandits?

Win the Powerball lottery?

Maybe you will, maybe you won’t.

We’ll never totally know the outcome until the day is over, which is not particularly useful when we have to make our decisions in the morning.

Sure, Binary Thinking works in certain situations (sometimes), but for the most part, especially in the cold reality of our dynamic, complex, uncertain world, Binary Thinking is often subpar.

Using the Binary is a common [smoov.bra.in2023] runtime error. This happens when you use your brain on default settings.

Using default settings makes it easier to fall into the trap of optimism bias (more on this later), risk miscalculation, and self-destruction.

You see, in the real world, the future is inherently unpredictable because not all variables can be known — and even the tiniest error imaginable in the data can quickly throw off your predictions.

And because the future is not 100% deterministic, we need to figure out a way to navigate uncertainty and complexity so we can have a better understanding of the events that could impact us in a positive or negative way (and their likelihood).

This is where Probabilistic Thinking comes in.

In the office, we use probability theory just about every day to solve hard problems.

We’re a team of nerds who are deep in the mud with predictive modeling/analysis every day (building predictive algorithms, tinkering with AI and machine learning and the like).

And from being knee deep in it on a day-to-day basis, I know firsthand just how difficult it is to predict the future.

Even if you’re a data science genius, predicting the future is incredibly difficult.

Most of the time the best you can do is just create “estimates” by coming up with simple, realistic, useful probabilities.

“So, how do I do that?”

And what is probabilistic thinking?

Probabilistic Thinking is when you try to estimate (using logic and math) the likelihood of any specific outcome.

This can help improve decision making accuracy.

If you live on Earth (Solar System 1) and are not some alien reading this from some faraway planet, you should be aware that every moment in our world is determined by an infinitely complex set of factors.

This complexity can lead to uncertainty, anxiety, and fear.

Feelings that most humans experience every day (to a certain degree).

Probabilistic thinking, on the other hand, can reduce uncertainty, anxiety, and fear, and help you solve some very hard problems — because probabilistic thinking can give you the ability to see through the dense fog of madness and uncertainty and show you the most likely outcomes.

When you know the likely outcomes (holy sh*t can you predict the future now?) not only will your decisions be more accurate and effective, but you’ll also be able to avoid pitfalls like recency bias and emotional decision-making.

“You see, the right answer is like two people. Quant is the nice one. Logic causes all the trouble. They fight.” —Jamin Thompson

Probabilistic Thinking is a much more scientific and quantitative approach (as opposed to Binary Thinking) when it comes to making difficult decisions in uncertain or unpredictable situations because it considers the odds, probabilities, and likelihoods of a multitude of various outcomes.

It acknowledges that many real-world situations are characterized by inherent variability and unknowns — and it seeks to quantify and manage this uncertainty through the use of probability theory.

Are you lost in the woods?

Decision forest too thick?

How do you get out of here?

Do you take this path or that one?

Each choice could lead to a different outcome.

How do you figure out the probability of each one?

Are there complex formulas?

What does probability even mean anyway?

That’s a great question. I’m glad you asked that question.

Probability means POSSIBILITY.

Probability is a measure of the likelihood that an event will occur in a random experiment.

The value is expressed as a number between 0 and 1, where 0 means the event is an impossible one and 1 indicates a certain event.

You see, many events cannot be predicted with absolute certainty. We can only predict the chance that an event will occur (or how likely it is to happen).

The higher the probability of an event, the more likely it is that the event will occur.

Take a coin toss, for example. When you toss a coin, you will either get heads or tails. These are the only two possible outcomes (H, T). But, if you toss two coins, then there will be four possible outcomes [(H, H), (H, T), (T, H), (T, T)].

Note: this is basic probability theory (the same machinery behind probability distributions), where you learn the possible outcomes of a random experiment. To find the probability of a single event, we first need to know the total number of possible outcomes.
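To make the coin-toss arithmetic concrete, here’s a minimal sketch in Python (my own illustration, not from the original post) that enumerates the outcome space and computes an event’s probability by counting:

```python
# Enumerate every possible outcome for n coin tosses, then compute the
# probability of an event by counting favorable outcomes over total outcomes.
from itertools import product

def outcome_space(n_coins):
    """All ordered outcomes for n coin tosses, e.g. ('H', 'T')."""
    return list(product("HT", repeat=n_coins))

outcomes = outcome_space(2)   # [('H','H'), ('H','T'), ('T','H'), ('T','T')]
event = [o for o in outcomes if o.count("H") == 1]

p = len(event) / len(outcomes)
print(f"P(exactly one head in two tosses) = {p}")  # 0.5
```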

That said, when it comes to Probabilistic Thinking, there are two critical factors to consider: 

  • The probability of a certain outcome.
  • The magnitude of a certain outcome.
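Multiply the two together across all outcomes and you get an expected value, which is how probabilistic thinkers compare options. A minimal sketch, with made-up numbers of my own:

```python
# Expected value: sum of probability x magnitude across all outcomes.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# A bet that wins $100 10% of the time and loses $5 the other 90%:
ev = expected_value([(0.10, 100), (0.90, -5)])
print(f"Expected value per play: ${ev:.2f}")  # $5.50: worth taking, despite losing often
```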

But, how do I calculate the probability of an outcome?

Without turning this into a very boring math essay full of complex formulas that make no sense, here is all you really need to know.

Probabilistic thinking pretty much boils down to the following three critical aspects:

  • Bayesian thinking
  • Fat-tailed curves
  • Asymmetries

Let us begin with Bayesian Thinking.

(aka The Bayesian view of probability or Bayes’ Theorem)

Bayes’ Theorem is a fundamental concept in probability theory and statistics.

Named after 18th-century statistician, philosopher and minister Thomas Bayes, Bayes’ theorem describes the probability of an event, based on prior knowledge or conditions that may be related to the event.

It works by giving you a way to update and improve your probability estimates when new evidence becomes available.

In simple terms: Bayes’ Theorem measures the plausibility of an event when you have incomplete information.

It starts with a statement of prior knowledge (usually this comes in the form of a prediction).

*yes, this can really come in handy if you are trying to win your fantasy football league*

I digress.

For example: to improve the state of knowledge, an experiment is designed and executed to “get new evidence”.

Both the prior and the experiment’s results have a joint distribution (the probability of two events happening together) that leads to a new and improved belief.

This is demonstrated in the following equation:

P(A|B) = [P(B|A) P(A)] / P(B)

Where P = probability, A = prior knowledge, and B = new evidence.

And where P(A) = probability the prior belief is true.

P(B) = probability that the new evidence is true.

P(A|B) = probability of the prior belief if the new evidence is true.

P(B|A) = probability of the evidence if the prior belief is true.

It’s a deceptively simple calculation, yet it can be used to calculate the conditional probability of events where intuition often fails.
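Here is that equation translated directly into code. The fantasy-football numbers are hypothetical, purely to show the mechanics of an update:

```python
# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
def bayes(p_a, p_b_given_a, p_b):
    return p_b_given_a * p_a / p_b

# A: your star running back plays this week. Prior: 30%.
# B: he's listed as a full participant in practice. Assume (hypothetically)
# that happens 90% of the time when a player plays, and 40% of the time overall.
posterior = bayes(p_a=0.30, p_b_given_a=0.90, p_b=0.40)
print(f"P(plays | full practice) = {posterior:.3f}")  # 0.675, up from 0.30
```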

Unrelated, intuition usually fails.

Side note for the AI enthusiasts: Bayes’ Theorem is not only a staple in the field of probability, it is also heavily used in machine learning.

The core concept of Bayes is simple: when we learn something new, we should take into account what we already know.

This helps us use all relevant prior information as we make decisions.

It’s particularly useful when dealing with conditional probability, where the likelihood of an event is dependent on the occurrence of another event.

“Poker players and entrepreneurs both embrace the probabilistic nature of decisions. When you make a decision, you’ve defined the set of possible outcomes, but you can’t guarantee that you’ll get a particular outcome.” —Annie Duke (Poker Champion)

As mentioned previously, we live in a world that is full of uncertainty.

And uncertainty leads to stress.

Stress leads to bad results.

These are critical points to understand if you truly want to make better decisions.

A huge part of Probabilistic Thinking involves befriending uncertainty, which can be incredibly hard.

To overcome this, we need to acknowledge that uncertainty exists, that it is everywhere, and make it our friend.

To do this you must learn to live in “chaos” and embrace the unknown.

Let go of your ego and your biases and accept the conditions of the situation.

Let your mind be free.

Be ok with saying “I’m not sure”.

Accept that you will never (even if you are a data science genius or fantasy football expert) know all the facts in any given situation and that there are no guarantees of a specific outcome.

Next, after you figure out what you think may happen, ask yourself, “what else might happen?” and then decouple your notions of ‘good’ and ‘bad’ decisions from those outcomes.

It’s also important to keep in mind that when you are swimming in the deep waters of uncertainty and complexity, there is also (very likely) a degree of luck involved, so it’s possible for you to make a ‘bad’ decision that leads to a positive outcome.

And that’s OK.

It’s MUCH easier to accept randomness when things don’t go our way.

So, with that in mind, instead of focusing on outcomes/results, learn to trust the process and try to reflect on your previous decisions from a probabilistic perspective.

Got all that?

Great.

Now, the next thing to remember is to never (for any reason) express 100% certainty.

This is the ultimate rookie move, and if you have made it this far, you definitely aren’t a rookie anymore.

So, start doing this today: if you want to make a prediction (fantasy football, the weather, the stock market, the economy, World War 3, whatever), get in the habit of assigning levels of certainty to your predictions, rather than boldly claiming something will simply happen.

Always do your due diligence and estimate the percentage chance it will happen based on your available data/facts.

Note: when you’re confronted with the possibility that you’re wrong, it’s important not to engage in any sort of cognitive gymnastics to hold onto the false belief (even though you want to). To become a true mental master, you’ll have to come to terms with the fact that there are countless things you’re not right about in our present moment in spacetime.

If you’ve got all that, the next step is to update your probabilities.

You have been presented with new information. What should you do?

You must be open to it and consider any emerging facts.

Then, use the data to update your predictions.

Note: this process can be painful and difficult and involves challenging and disrupting your biases. 

And then, FINALLY, after you have made your predictions and updated them, THEN you must run the numbers, understand all the probable outcomes, and find the confidence to act (based on your current knowledge), while also accepting the fact that you could always be wrong.
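To make that update loop concrete, here’s a hedged sketch (toy likelihood numbers of my own invention) of repeated Bayesian updating, where each posterior becomes the next prior:

```python
# One Bayesian update, computing P(evidence) via the law of total probability.
def update(prior, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

belief = 0.50  # starting prediction: a coin flip
# Each tuple: (P(evidence | hypothesis true), P(evidence | hypothesis false))
for likelihoods in [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]:
    belief = update(belief, *likelihoods)
    print(f"updated belief: {belief:.3f}")  # climbs toward ~0.95 as evidence stacks up
```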

Artist depiction of a Deimos-One Data Scientist Modeling Simplicity in a Stochastic Environment

And that’s (mostly) it in a nutshell.

Probabilistic Thinking (anchored by Bayes’ theorem) is a powerful tool for making informed decisions and updating beliefs in the face of uncertainty and changing evidence.

It’s a pragmatic approach to decision-making that recognizes the limitations of certainty and embraces the tools of probability to navigate uncertainty.

Using it will help you avoid common [smoov.bra.in2023] reasoning errors and empower you to make more informed decisions.

As an added bonus, you’ll also get better at managing risk and developing strategies that can hedge against uncertainty and complexity.

If you master this style of analysis, you’ll have a new weapon in your war chest that will help you understand not only the world, but also yourself (decisions, thoughts, emotions, etc.).

That is true power.

Next up, fat-tailed curves.

So, what exactly is a “fat-tail”?

The statistical term ‘fat tails’ refers to probability distributions that have a relatively high probability of extreme outcomes.

Just think: your last relationship.

Or, that memecoin you put your whole savings in.

A fat-tailed distribution exhibits large excess kurtosis, which shows up as a thick end or “tail” toward the edges of the distribution curve.

Fat tails also imply a strong influence of extreme observations on expected future risk.

Fat tails are similar (but slightly different) to their more traditional “do-gooder” older brother, the normal distribution curve.

The normal distribution curve is shaped like a bell (it’s also known as the bell curve), and it typically involves two basic terms: (1) the mean (the average) and; (2) the standard deviation (the amount of dispersion or variation).

Usually the values cluster around the mean, and the rest taper off symmetrically toward either extreme.

Kind of like ordering “hot wings” with the mildest sauce there is.

The fat-tail, on the other hand, is sort of like wild and crazy hot wings with Carolina reaper peppers and homemade hot sauce from the back of someone’s barn.

You know that sh*t is gonna be SPICY.

But hey, let’s compare wings, shall we?

At first glance both types of wings (mild/extreme hot) seem similar enough (common outcomes cluster together bla bla) but when you look closely (can you identify a Carolina reaper without Googling?) they always have very distinctive traits to tell them apart.

If you want to use comic books as a reference instead of hot wings, take Loki and Thor, for example. They are “brothers,” but the difference between them is mostly physical (Thor is the jacked one).

With these distribution curves, it is also appearance based — the difference is in the tails.

Here are some important distinctions: 

In a bell curve the extremes are predictable. There can only be so much deviation from the mean.

Fat-tailed curves have positive excess kurtosis (leptokurtosis: fatter tails and a higher peak at the mean), which means there is no real cap on extreme events (positive or negative).

The more extreme events that are possible, the longer the tails of the curve get.
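If you want to see the difference in the tails rather than take my word for it, here’s a quick simulation of my own (not from the post) comparing a thin-tailed normal against a fat-tailed Student-t distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

normal = rng.standard_normal(n)
fat = rng.standard_t(df=2, size=n)  # low degrees of freedom => fat tails

for name, x in [("normal", normal), ("fat-tailed t", fat)]:
    print(f"{name:>12}: sample std = {x.std():.2f}, most extreme draw = {np.abs(x).max():.1f}")

# Typically the normal's most extreme draw sits around 5 standard deviations,
# while the t-distribution throws off draws dozens of 'standard deviations' out.
```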

Does that make sense?

Great.

You’re doing so great.

Sure, you could make the argument that any one extreme event is still unlikely, but due to the massive number of options, you probably won’t be able to confidently rely on the most common outcomes as a representation of the average.

And the more extreme events that are possible (think millions or even billions) the higher the probability that one of them will occur.

When it comes to fat tails, you know crazy things are going to happen (with near certainty), like your fantasy football team having multiple starters sent to IR on the same day; you just have no way of knowing when.

So, how can a common pleb like myself use this knowledge to my advantage?

That’s a great question, I’m glad you asked that question.

Suppose you hear on the news that you have a greater risk of falling out of bed and cracking your head open than of being killed in a war.

The stats, the priors, seem to back it up: over 30,000 people died from falling injuries last year in your country, and only 200 or so died from war.

Should you be more worried about falling out of bed or World War 3?

A lot of hucksters and actors who play economists on TV use examples like these to “prove” that the risk of war (World War 3 in this case) is low.

They say things like “there are very few deaths from this in the past and the numbers back this up so why even worry?”

On the surface, it may appear (to the untrained eye) that the risk of war is low, since the death data shows recent deaths to be low, and that you have a greater risk of dying from a fall. It all makes sense, right?

The problem is in the fat tails.

The shape of the curves often tells a very different story.

Think of it like this: the risk of war is more like your bank account, while falling deaths are more like weight and height (your bank balance can swing by orders of magnitude; height and weight physically can’t). In the next ten or twenty years, how many outcomes/events are possible?

How fat is the tail?

Think about it.

No really.

Think.

If the risk of World War 3 is high, it would follow a more fat-tailed curve where a greater chance of extreme negative events exists.

On the other hand, dying from falling (like the time I fell out of the top bunk of my bunkbed as a kid) should follow more of a bell curve, where any outliers should have a well-defined scope.

It may take a bit of practice, thinking, and application, but trust me, it’s simple once you get it.

If you don’t understand it now, keep studying and you will.

Remember, when you are trying to solve a hard problem, the important thing is not to sit around and try to imagine every possible scenario/outcome in the tail. That is an impossible task for any human running [smoov.bra.in2023].

Instead, use the power of the fat-tail to your advantage.

Doing this will not only put you in a great position to survive and thrive in the wildly unpredictable future — but it will also help you think clearly and make good decisions — so you can always stay one step ahead in a world we don’t fully understand.

Finally, that leaves Asymmetries.

Now that you have basic probability under your belt, it’s time to embrace a more advanced concept that many experts call “metaprobability” — aka the probability that your probability estimates are any good.

Yikes.

Let’s be honest, your probability estimates probably suck (at least right now because you’re just starting out) but for the purposes of this exercise let’s pretend that they are brilliant.

For argument’s sake, let us consider the possibility.

You are a common plebeian genius who has access to brain tools and weapons only possessed by super advanced civilizations.

The internet, advanced machine learning, AI, GPT-4, Amazon Books, and many other tools from the wonderful world of advanced data science wizardry are all at your fingertips.

Look at us.

Who would have thought?

Not me.

But with all these advanced tools is it now possible to accurately predict the future?

Let us see.

We will run the analysis using default settings in [smoov.bra.in2023].

First, we evaluate the known params:

-I exist
-The world exists
-The world is chaos
-Chaos leads to war
-War is unpredictable
-Unpredictability leads to uncertainty
-Uncertainty leads to stress
-Stress kills

bla bla bla

Now, let us consider a function of a complex variable f(z) = wut + tf, where we assign a specific weight to falling out of bed, World War 3, nuclear attack, terrorism, disease, famine, global warming, and AI, to form a probabilistic argument estimating the chance that a violent kinetic event will occur, and/or the event’s place in spacetime.

Let us import the vars into our shitty bathroom formula:

P = [tf² ((x²) + (y²) + y(z²)) + ded] / Σᵢ (7x – war + wut²)

Solving for wut + tf we can conclude with a 69.699% probability that shit happens and that shit will continue to happen until shit happening ceases to occur.

I digress.

We have discussed economic hucksters and Decepticon intellectuals in previous posts.

Bold predictions.

Fancy charts.

Long exposition.

Pseudo quant.

Devoid of logic.

Immovably committed to bold positions because it makes you sound smarter than being humbly realistic.

They make bold predictions but are wrong more often than they are right.

Bla bla bla.

The reason these guys are wrong so often has to do with asymmetries.

For example, observe a common huckster in action: the expensive suit, the nicely polished pitch deck, the fancy charts, the slick haircut, etcetera…

These guys will look you right in the eye in a pitch meeting and tell you that you can expect to “achieve a rate of return of 30% or more per year.”

And most of them never hit their projections.

Ever.

Why?

Not because they don’t get ANY of their projections correct; it’s because they get so many incorrect.

They are overconfident in their probabilistic estimates — and they do this consistently, meeting after meeting, year after year.

A common [smoov.bra.in2023] runtime error.

Note: we all know the stock market usually only returns about 7% a year in the United States, but for some reason (unexplained by modern science) we continue to listen to and bet on the smooth talking 30% guy. 

With that in mind, here is what you need to do.

Write this down: my probability estimates and predictions are more likely to be wrong when I am “over-optimistic” instead of “under-optimistic”.

And a lot more probability estimates are wrong on the “over-optimistic” side than the “under-optimistic” side.

Remember this fact.

Put it on the fridge so you never forget it.

Why is this so important, you ask?

The reason is simple: a lot of uncertain outcomes are inherently asymmetric.

They have longer downside tails than upside.

And a common [smoov.bra.in2023] runtime error is to lock in and focus on the “obvious” or “most likely” outcome — and then forget to crunch the numbers to figure out the real expected impact of multiple asymmetries together.

Image: Sketchplanations.com

Things that never happen (or rarely happen): an investor who wanted to hit a 30% annual return but instead hit 45% over multiple years.

When you do the analysis (you can go crunch the numbers yourself) you’ll find that most guys end up closer to 9 or 10 percent.

Maybe 12 percent if they’re lucky.

Remember, your estimation errors are asymmetric, skewing in a single direction — this is often the case with probabilistic decision-making.
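A quick back-of-the-envelope check on that asymmetry (illustrative numbers only): compare the huckster’s pitched 30% against the roughly 7% the market historically returns, compounded over a decade:

```python
# Compounding a $100k stake for 10 years at the pitched vs. the realistic rate.
principal = 100_000
years = 10

pitched = principal * 1.30 ** years    # the 30%-per-year pitch
realistic = principal * 1.07 ** years  # the ~7% historical norm

print(f"pitched at 30%/yr : ${pitched:,.0f}")    # ~$1,378,585
print(f"realistic at 7%/yr: ${realistic:,.0f}")  # ~$196,715
# Plans built on the pitched number don't miss by a rounding error; they miss ~7x.
```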

Whew, that was a lot.

You got all that?

To sum things up, just remember: when you get overconfident and fall into the trap of blind “over-optimism,” errors and bad outcomes often follow.

But, if you can build your data analysis skills up to a ’99 Madden awareness level’ you’ll be able to recognize overestimations in your probabilistic estimates, plan more carefully during high-stakes situations, and make rational (winning) decisions when you are faced with high levels of uncertainty, ambiguity, and incomplete knowledge.

You will be a god among men.

2. Second-Order Thinking 

These days, it can be tempting to make emotional decisions with small upside that provide instant gratification.

This is often referred to as first-level thinking, which is simplistic and superficial by nature.

First-level thinking focuses on solving an immediate problem, with little or no consideration of the negative future consequences of the decision made today.

But most decisions require a much deeper level of thought and mental exploration. They require you to look beyond the immediate and the obvious; to dig deeper.

For example:

An investor using first-level thinking may assume that a crypto company’s rapidly growing online following will lead to an inevitable (somehow correlated) rise in its share price.

On a similar note, a person on a diet may conclude that the best choice for a hungry stomach is a delicious bacon cheeseburger (I’ve done this).

In both cases, the potentially negative future consequences of each choice have not been fully thought out and evaluated.

It happens all the time: some decisions seem like dubs at first, but then turn out to be huge L’s as time goes on.

What seems like a good memecoin to buy today turns out to be a huge dud months later.

What looked like a good decision before is now a bad one.

Most people stop at first level thinking, but you don’t have to.

Instead, give second-order thinking a shot and see if your decision making can improve.

So, what exactly is Second Order Thinking?

In his exceptional book, The Most Important Thing, Howard Marks explains the concept of second-order thinking, which he calls second-level thinking:

“First-level thinking is simplistic and superficial, and just about everyone can do it (a bad sign for anything involving an attempt at superiority). All the first-level thinker needs is an opinion about the future, as in ‘The outlook for the company is favorable, meaning the stock will go up.’”

First-level thinking is very linear.

It’s fast and easy.

Simplistic and superficial.

Just about anyone can do it.

It is based on default settings.

It is the result of looking for something that solves the problem at hand without considering the tradeoffs or consequences.

For example: you’re thirsty, so let’s drink an ice cold beer.

Second-order thinking is a lot deeper, more complex and convoluted.

More deliberate.

It requires a much deeper engagement of the mind.

Second-order thinkers analyze the statistics, facts, and information, but also question the reasoning behind them.

Second-order thinkers have a strong understanding of bias.

Second-order thinkers have strong metacognition.

Second-order thinkers think about their own thinking — and then think about thinking about the thinking they are thinking — often analyzing multiple variables at the same time.

Second-order thinking is a mental model that can help you properly assess the implications of your decisions by considering future possibilities/consequences. It encourages you to think outside the box and prepare for every eventuality.

So, how do I start thinking in the second order?

A good place to start would be asking yourself the following:

  • What is the probability that I’m right?
  • What is the range of likely future outcomes?
  • Which outcome do I think will occur?
  • What does the consensus think?
  • How does my expectation differ from the consensus?
  • And then what?
  • What do the consequences look like in 10 minutes? 10 months? 10 Years?

Key Point: While first-level thinking evaluates the direct outcomes of a single decision, second-order thinking delves into the broader and deeper implications that can arise as a result of those outcomes.

Unlike first-level thinking, where everyone reaches the same conclusions, second-order thinking gives you the ability to navigate complex situations with greater perspective and insight.

It’s a model that considers all future possibilities.

It can prepare you for any eventuality.

It goes beyond simple cause-and-effect reasoning.

It considers the entire chain of reactions and unintended consequences and shields you from the human default tendency to make the most obvious choice.

When you look beyond the immediate and obvious, you’ll be able to make better decisions that will lead to more positive outcomes, both now and in the future.

Looking back at some of your past decisions in a new light (through the lens of having just read this), first-level thinking may sound really stupid, but our minds are programmed (aka [smoov.bra.in2023] default settings) to search for the easiest solution and/or the solution that appears obvious.

For this reason, most people really struggle to look beyond their initial assumptions to make the optimal decision.

This is especially true when we are stressed, in a time crunch, inexperienced in the field at hand, over-optimistic (see mental model #1 again), experiencing strong emotions, swayed by psychological biases, or isolated from other points of view.

At the end of the day, second-order thinking can help you step outside your comfort zone and objectively analyze and assess the implications of your decisions by considering the future consequences.

It will help you make sure you’re comfortable with the long-term consequences of your decisions today.

This mental model is particularly useful in times of uncertainty and/or in rapidly changing environments, where the ability to foresee multiple potential outcomes can help you avoid disaster and lead to more successful outcomes.

Bonus: second-order thinkers typically outperform first-level thinkers because they can see solutions to problems that others can’t.

3. Opportunity Cost

There is no such thing as absolute victory; there is no perfect solution; there are only trade-offs. All you can do is try to get the best trade-off you can get.

We live in a world of trade-offs.

There is no such thing as a perfect outcome.

And the concept of opportunity cost (e.g. there is no such thing as a free lunch) is the king of all trade-offs.

This is a fundamental concept that’s taught in Basic Economics.

So, what exactly is an opportunity cost?

In economics, opportunity cost refers to the loss of potential gain a person could have received but passed up in pursuit of another option.

Opportunity cost is a fundamental concept in economics and decision-making.

When making a choice between two or more alternatives, the cost of that decision is not just the immediate financial expense but also the potential value or benefits you lose by not choosing the next best alternative.

In other words, opportunity cost is the value of the next best option that you give up when you make a decision.

It helps you evaluate the trade-offs involved in your choices.

For the nerds, you can express opportunity cost conceptually as follows:

Opportunity Cost = FO – CO
where:
FO = Return on best forgone option
CO = Return on chosen option

For example, let us consider opportunity cost in the context of selling a stock now versus waiting to sell it in three months.

Suppose you have 1,000 shares of Deimos-One stock, and the current market price per share is $100.

You have two options:

Option 1: Sell the stock today at $100 per share.
Option 2: Wait and sell it in three months hoping that the stock price will go up.

To calculate the opportunity cost, we will need to compare the returns of both options.

Option 1: 1,000 shares x $100 per share = $100,000 (current value)

Option 2: 1,000 shares x $120 per share = $120,000 (future value) *let’s assume the stock price has increased*

Now, calculate the opportunity cost:

Opportunity Cost = Future Value (Option 2) – Current Value (Option 1)
Opportunity Cost = $120,000 – $100,000 = $20,000

So, if you choose Option 1 and sell the stock today, your opportunity cost is $20,000.

This means that by selling now, you’re giving up the potential $20,000 in profit you could have earned if you had waited for three months and sold the stock at the higher price.

This opportunity cost scenario shows the trade-off between realizing immediate gains and waiting for potentially higher returns.
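For the nerds again, the worked example above fits in a few lines of code (the $120 future price is the same assumption made above, not a prediction):

```python
# Opportunity Cost = FO - CO (return on best forgone option minus chosen option).
def opportunity_cost(forgone_return, chosen_return):
    return forgone_return - chosen_return

shares = 1_000
sell_now = shares * 100    # Option 1: sell today at $100/share
sell_later = shares * 120  # Option 2: assumed price of $120/share in three months

print(f"Opportunity cost of selling today: ${opportunity_cost(sell_later, sell_now):,}")
# $20,000: the profit given up by not waiting (if the assumption holds).
```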

Opportunity cost is a powerful mental model to help you make more informed decisions as it encourages you to consider not only the benefits of your chosen option but also what you’re sacrificing by not choosing an alternative.

It’s always a wise move to think about the potential costs that arise because you chose in favor of one option and thus against every other option.

4. Randomness 

There aren’t always cause-effect relationships.

A lot of stuff is just random.

For some reason (unexplained by modern science), the human brain has a lot of trouble comprehending this fact (another common runtime error?), but the truth is: a lot of the world is made up of random, non-sequential, non-ordered events.

Humans are often “fooled” by randomness.

A function of [smoov.bra.in2023] perchance?

A common plebeian reasoning error?

This needs to be studied further.

I digress.

We attribute causality to things we have zero control over.

And if you get fooled by randomness and get tricked into a false sense of pattern seeking — you will get finessed into seeing things as being more predictable than they actually are, and eventually make a critical error.

It happens all the time.

So, what exactly is “randomness”?

In previous blogs, we have discussed how hucksters and posers do not understand probabilistic processes, or how to model simplicity in stochastic environments.

Randomness is a mental model/cognitive framework that recognizes the inherent uncertainty and unpredictability present in various aspects of life and decision-making.

Here’s the thing: we live in an unpredictable world.

It’s a world with a dense decision forest.

There are many steps/variables/tricks/outcomes.

It can be thick and hard to see.

There is complexity.

Chaos.

Stochasticity.

Uncertainty.

Risk.

Danger.

But, the winners are often able to see 2-3 steps ahead.

They can see through the trees to infinity.

The losers can barely see what’s in front of them.

You see, many outcomes are not entirely deterministic — a lot of it is subject to chance and probability.

The winners just know how to calculate it.

Here are a few key principles to understand when it comes to randomness:

Probabilistic Thinking: Embracing randomness and uncertainty (see model #1 above) means thinking in terms of probabilities. Instead of expecting deterministic outcomes, we must consider that a situation can have multiple possible outcomes, each with its own chance of occurring. Estimating these probabilities is critical for understanding the distribution of possible outcomes and making informed decisions based on that information.

Statistical Reasoning: Randomness often follows statistical patterns, and finding these patterns can help you navigate a dense decision forest. Applying statistical methods to analyze data, considering patterns and distributions rather than relying on intuition or a “gut feeling,” can help you make informed judgments even when faced with uncertainty.

Image: Sketchplanations.com

Stochastic Processes: One must recognize that many processes are driven by stochastic (random) elements; studying and understanding them helps you model complex systems.

Irrationality Awareness: Randomness highlights human biases and irrational tendencies when dealing with uncertainty. In order to navigate through the decision forest, we must be aware of these biases and encourage ourselves to make rational decisions.

Uncertainty Awareness: Nobody knows it all. Many events are inherently uncertain and cannot be predicted with certainty. Recognizing this can help avoid overconfidence, not only in your predictions but also in your decisions.

Decision Under Uncertainty: As mentioned above, not only must we have uncertainty awareness, but we must be able to make decisions when outcomes are uncertain. It helps to make decisions based on expected value or risk-reward analysis.

Risk Assessment / Prediction Limits: Every decision you make can have a range of outcomes, and randomness often adds risk and uncertainty to decision-making processes. So, one must calculate risk probability. But not all events can be accurately predicted; some degree of uncertainty will always be present. Recognizing the limits of prediction is key.

We use Monte Carlo (MC) Simulation (a basic mathematical technique that predicts possible outcomes of uncertain events) in the office quite often to simplify randomness and predict future outcomes.

MC can be a very useful tool to identify all the possible outcomes of an event (e.g. spacecraft launch, payload drops, landing, etc.) making it a lot easier to measure risk so we can make good decisions under uncertain initial conditions.

We can run MC simulations millions of times (it generates a bunch of “what-if” test scenarios) by treating every input variable and its intrinsic uncertainty as a probability distribution function.

This allows us to get precise, quantitative risk analyses of incredibly complex projects (such as a payload drop over a distant planet) and mission critical risk assurance for the spacecraft.

What we usually end up with is a comprehensive and quantifiable statistical picture of what could happen, its likelihood of happening, and any errors associated with such an occurrence.

It is a great tool to hedge your bets against the cruel pimp slap of an unforeseen random event.

That said, MC also kicks ass because, unlike normal forecasting models (or traditional single-point risk estimates), it builds a model of possible results by leveraging a probability distribution (like a uniform or normal distribution) for any variable that has inherent uncertainty.

Then, it recalculates the results over and over (thousands and thousands of times) each time using a different set of random numbers between the minimum and maximum values, generating a large number of likely outcomes.

And thanks to the internet and tools like YouTube and GPT-4, you don’t have to be a master data scientist to set up and run basic MC simulations.

This is a very simple method/experiment that you can learn quickly and use to solve any problem that includes an element of uncertainty or randomness in its prediction.

Or, problems that may have a probabilistic interpretation or be deterministic in principle.
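To show how simple a basic MC setup can be, here’s a toy sketch of my own (not Deimos-One’s actual mission code): estimate the chance a payload lands inside a target zone when two inputs are uncertain, by treating each input as a probability distribution instead of a single-point estimate:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims = 1_000_000  # number of "what-if" scenarios to generate

# Each uncertain input is a distribution, not a single number (hypothetical values).
wind_drift_m = rng.normal(loc=0.0, scale=120.0, size=n_sims)  # crosswind drift
release_error_m = rng.uniform(-50.0, 50.0, size=n_sims)       # release timing error

landing_offset = wind_drift_m + release_error_m
p_in_zone = np.mean(np.abs(landing_offset) <= 200.0)          # 200 m target radius

print(f"P(landing inside target zone) ~= {p_in_zone:.3f}")
```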

At the end of the day, we live in a confusing world with tons of uncertainty, and understanding randomness is essential if you want to manage risk and navigate uncertainty.

The basic framework is built on rational decision-making, statistical thinking, probability and risk analysis that you can use to make better decisions in all aspects of life, from everyday choices to complex financial and life decisions.

Put it in your war chest and use it to your advantage.

5. Occam’s Razor 

“Entities should not be multiplied unnecessarily.”

Occam’s razor (also known as the “law of parsimony”) is a foundational mental model in reasoning and problem-solving. It underscores the principle of simplicity as a guiding criterion for choosing among competing hypotheses or explanations.

It states that one should not increase (beyond reason) the number of entities that are necessary to explain anything. All things being equal, the simplest solution is often the best one. The principle is attributed to 14th-century philosopher/theologian William of Ockham.

It can be summarized as follows:

Among competing hypotheses, the one with the fewest assumptions should be selected.

Or

Simple explanations are preferable to complex ones.

Simple theories are easier to verify.

Simple solutions are easier to execute.

Always choose the less complex explanation/option.

Occam’s razor mirrors the colloquial phrase “KISS” (Keep It Simple Stupid).

The general idea behind “KISS” is that if you have two or more explanations that work equally well (ceteris paribus) in answering a question, the simplest hypothesis is generally the most optimal.

As discussed in previous articles, we live in an era where we have more knowledge and technical capabilities to ask and answer more questions than ever before.

But…

The more questions we ask, the more answers we get.

And, the more answers we get, the more questions arise.

Every day we are asking and answering more questions, in more fields, and arriving at even more questions.

And we’re doing it at unprecedented speeds.

We are, at an alarming rate, asking and answering more and more questions, which, in turn, allows us to make more decisions and have more agency.

The problem is, you and I (and just about every human on this planet) are still running [smoov.bra.in2023], which severely limits our cognitive abilities and decision-making skills.

Our core programming does not have the capacity to process information at this volume and scale.

Note: if this does not make sense to you, it’s likely a runtime error… [smoov.bra.in2023] needs more CPU and RAM to process fully but the sim params have prevented this.

Sure, we have a lot more options now (and that’s a good thing).

But, with more options come more choices.

And with more choices come more decisions to make.

And with more decisions to make, the more complex the solutions become.

So, paradoxically, in an attempt to gain more knowledge and understanding by building the technologies we thought would help make our lives easier, we seem to be doing the opposite of Occam’s Razor.

The question of utility seems to be getting farther and farther away.

We live in a world where we have unprecedented choice freedom and agency.

But with all of these options (multiplied unnecessarily) are we causing more work for our brains?

Are we wasting hours of our lives pondering over trivial decisions?

Just think about the amount of time it takes to find a TV show to watch on Netflix, or how long it takes to figure out what food you want to order on Uber Eats.

Image: Sketchplanations.com

As a self-described deep thinker, I’m not going to sit here and chastise you for thinking, but sometimes thinking can be suboptimal.

Alan Watts may have said it best when he said: “Don’t think too much.”

“A person who thinks all the time has nothing to think about except thoughts. So, he loses touch with reality, and lives in a world of illusions.

By thoughts, I mean specifically, chatter in the skull. Perpetual and compulsive repetition of words, of reckoning and calculating.

I’m not saying that thinking is bad. Like everything else, it’s useful in moderation.

A good servant but a bad master.

And all so-called civilized peoples have increasingly become crazy and self-destructive because, through excessive thinking, they have lost touch with reality.”

We are wasting our brain power on trivial matters and useless activities.

This is causing decreased cognitive ability and mental exhaustion.

Occam’s Razor, on the other hand, encourages clarity, parsimony, and simplicity in reasoning.

It encourages you to choose straightforward, elegant explanations that minimize unnecessary complexities and assumptions.

It promotes rationality and economy in the formation of hypotheses.

It aids in prediction and empirical testing and contributes to the efficiency and progress of scientific inquiry.

This mental model’s power lies in its capacity to foster clarity of thought, drive scientific progress, and help you make informed and rational choices when faced with uncertainty and complexity.

Use it to your advantage.

Final Thoughts

Well, that’s all I have for now.

Mental models (especially in the way presented here) can be a lot to take in at first, but if you spend some time with them, studying them and using them in application (not just theory), you’ll eventually get the hang of them.

It’s all about the application and the context; you have to use them in the right way at the right time. It’s a science; you’ll have to understand them well and practice using them for them to have full effect.

Personally, I find mental models to be particularly useful when I’m trying to come up with cool ideas or trying to solve hard problems.

That said, I’ve rambled on enough for today.

I hope you gained some valuable insight from the words written here.

It’s my hope that you can leave the internet somehow smarter than you were when you first logged on.

So, what Mental Model has helped you the most?

Let me know on X/Twitter.

Follow me for more shitty analysis: twitter.com/jaminthompson.

Click Here to read Mental Models Part 2.

Best Mental Model Books for further study: 

Poor Charlie’s Almanack by Charles Munger

The Most Important Thing by Howard Marks

The Personal MBA by Josh Kaufman

The Fifth Discipline by Peter Senge

Thinking in Bets by Annie Duke

Against the Gods by Peter Bernstein

Basic Economics by Thomas Sowell

Math for Machine Learning by Richard Han