**Use these mental models to improve your problem-solving and decision-making skills and overcome common reasoning errors.**

Greetings, my fellow plebeians.

I hope you are enjoying this series so far.

In case you missed the first three installments, you can check them out here:

**Mental Models:** Part 1 | Part 2 | Part 3

And if you’re keeping score at home, you’ll know we have discussed 9 Mental Models so far.

Today is the grand finale, number 10.

But you don’t have to stop at just ten.

There are thousands of mental models out there that you can study (it would take years for me to cover them all) that can help you avoid reasoning errors and suboptimal outcomes.

This series is just an intro to get you started.

Think of it as a starter-kit overview of the ‘Big Models’ that you need to master in order to gain a basic understanding of how the world works, how to manage uncertainty and risk, and how to make better decisions.

It was designed to highlight some of the most important models that apply broadly to life and are useful in a wide range of situations.

So, today we’re going to discuss and analyze Game Theory, which is arguably the most important model of the series so far.

This will be the final chapter in our Mental Models series, so, let’s pick up where we left off in Part 3 and continue our journey into the mind-bending (err I mean super awesome) realm of mental models, where the horizon of understanding stretches endlessly before us.

And now, here is Part 4:

**10. Game Theory**

*“Don’t ever play games with me,”* she said.

*“Everything is a game, baby,”* he said.

In today’s multifarious world of free enterprise and big data, the relentless pursuit of strategic advantage has compelled modern enterprises (from tech giants to financial institutions) to lean on heavy analytical frameworks to decode complex interactions, probabilities, and decisions.

One such framework that has transcended from the dark realm of theoretical economics to widespread mainstream applicability in various sectors is the Theory of Games.

Or, better known in modern times as: Game Theory.

**So, what is Game Theory?**

Game theory, at its core, is the study of mathematical models of strategic interaction among rational decision-makers.

Or, in other words:

It’s the science of strategic decision-making under uncertainty. It determines the logical and mathematical actions that players should take to obtain the best possible outcomes for themselves in a game.

The focus of game theory is the game, which serves as a model of an interactive situation among rational players.

The key to game theory is that one player’s payoff is contingent on the strategy implemented by the other player.

Game Theory operates around three principal components: Players, Strategies, and Payoffs.

- **Players**: The decision-making entities in the game.
- **Strategies**: A comprehensive action plan under different scenarios.
- **Payoffs**: The outcomes for different combinations of strategies.

Game Theory essentially involves the strategic interaction (strategy) between two or more participants (players) that results in a set of circumstances (the game) — arriving at either a good or bad outcome for the players (the payoff).

**It consists of the following assumptions:**

- All players know the rules of the game.
- All players are rational decision makers.
- All players will strive to maximize their payoffs in the game (acting according to their personal self-interest).
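To make the three components concrete, here is a minimal sketch in Python (the `Game` class and the matching-pennies example are my own illustrations, not a standard library):

```python
from dataclasses import dataclass

# A minimal two-player game: players, strategies, and a payoff table.
# All names here are illustrative, not from any standard library.

@dataclass(frozen=True)
class Game:
    players: tuple     # the decision-making entities
    strategies: dict   # player -> available strategies
    payoffs: dict      # (move_1, move_2) -> (payoff_1, payoff_2)

matching_pennies = Game(
    players=("Alice", "Bob"),
    strategies={"Alice": ["Heads", "Tails"], "Bob": ["Heads", "Tails"]},
    payoffs={
        ("Heads", "Heads"): (1, -1),
        ("Heads", "Tails"): (-1, 1),
        ("Tails", "Heads"): (-1, 1),
        ("Tails", "Tails"): (1, -1),
    },
)

# Each player's payoff is contingent on the other player's strategy:
print(matching_pennies.payoffs[("Heads", "Tails")])  # (-1, 1): Alice loses, Bob wins
```

Notice that no cell can be evaluated for one player alone; the payoff table only makes sense as a function of *both* players' choices, which is exactly the point of the model.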

Initially developed as a tool for economic modeling by von Neumann & Morgenstern in 1944, game theory has found uses in various sectors including evolutionary biology, war, politics, psychology, engineering, and business.

It also has potential applications in your personal life and in the workplace.

In his book “Game Theory“, Morton Davis describes it as follows:

*“The theory of games is a theory of decision making. It considers how one should make decisions and to a lesser extent, how one does make them. You make a number of decisions every day. Some involve deep thought, while others are almost automatic. Your decisions are linked to your goals—if you know the consequences of each of your options, the solution is easy. Decide where you want to be and choose the path that takes you there. When you enter an elevator with a particular floor in mind (your goal), you push the button (one of your choices) that corresponds to your floor. Building a bridge involves more complex decisions but, to a competent engineer, is no different in principle. The engineer calculates the greatest load the bridge is expected to bear and designs a bridge to withstand it. When chance plays a role, however, decisions are harder to make. … Game theory was designed as a decision-making tool to be used in more complex situations, situations in which chance and your choice are not the only factors operating. … (Game theory problems) differ from the problems described earlier—building a bridge and installing telephones—in one essential respect: While decision makers are trying to manipulate their environment, their environment is trying to manipulate them. A store owner who lowers her price to gain a larger share of the market must know that her competitors will react in kind. … Because everyone’s strategy affects the outcome, a player must worry about what everyone else does and knows that everyone else is worrying about him or her.”*

When employed as a mental model, Game Theory provides simplified structures to dissect the chaotic web of interactive decisions. It can give you (and your company) the ability to streamline choices and predict how other parties will react in given situations.

So, now that you understand the basic premise of the theory, you’re probably wondering what is meant by “games” and how these games are played.

Taking a page from Game Theory and Strategy:

Game theory is the logical analysis of situations of conflict and cooperation.

The “game” is a waltz between conflict and cooperation.

**More specifically, a game is defined to be any situation in which:**

- There are at least two players. A player may be an individual, but it may also be a more general entity like a company, a nation, or even a biological species.
- Each player has a number of possible strategies, courses of action which he or she may choose to follow.
- The strategies chosen by each player determine the outcome of the game.
- Associated to each possible outcome of the game is a collection of numerical payoffs, one to each player. These payoffs represent the value of the outcome to the different players.

So, in simple terms, you can define a game as a model that represents a situation in which multiple agents (called players) make decisions that result in outcomes with payoffs for each player.

Game theory is the study of how those players should rationally play games.

At the end of the game, each player would like the result to be an outcome which gives him as large a payoff as possible.

AKA: You play to win the game.

**So, what kind of games do players play?**

That’s a great question; I’m glad you asked.

Let us first cover some basic lingo.

Games in matrix form can only represent situations where people move simultaneously.

Within games of this nature, we typically see the following characteristics:

- The sequential nature of decision making is suppressed.
- The concept of ‘time’ plays no role.

These are called simultaneous games. These are games where the decisions of players are simultaneous: both you and the other ‘player’ choose at the same time. The simplest example of this type of game is probably ‘rock, paper, scissors’.
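As a quick sketch (illustrative code, not from any library), you can verify mechanically that rock, paper, scissors has no pure-strategy equilibrium: for every pair of moves, at least one player would want to switch unilaterally.

```python
# Rock-paper-scissors as a simultaneous, zero-sum game.
# Payoffs are from player 1's perspective; player 2's payoff is the negative.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(p1, p2):
    """Player 1's payoff: +1 win, 0 tie, -1 loss."""
    if p1 == p2:
        return 0
    return 1 if BEATS[p1] == p2 else -1

def is_pure_equilibrium(p1, p2):
    """True if neither player can improve by switching unilaterally."""
    best1 = max(payoff(m, p2) for m in MOVES)
    best2 = max(-payoff(p1, m) for m in MOVES)
    return payoff(p1, p2) == best1 and -payoff(p1, p2) == best2

# No cell of the 3x3 matrix is stable:
assert not any(is_pure_equilibrium(a, b) for a in MOVES for b in MOVES)
```

This is why, in practice, good rock-paper-scissors play means randomizing: the only equilibrium is a mixed strategy.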

But many situations involve players choosing actions sequentially (over time), rather than simultaneously.

These are called sequential games.

Sequential games are strategic interactions where players make decisions in turns, knowing the moves of those who acted before them.

Chess is a good example of a sequential game.

Sequential games are commonly represented using extensive-form game trees (we will discuss trees a bit later) which capture the sequence of actions and associated payoffs.

Within a sequential game, solution concepts like backward induction and subgame-perfect Nash equilibrium are used to find optimal strategies for each player.

**This means that:**

- Players who move first can significantly influence the game.
- Players who move later on in the game have additional information about the actions and reactions of other players.
- Players make their next action conditionally based on additional information received during the game.

**Note for the nerds:** a Subgame Perfect Equilibrium (SPE) is a Nash equilibrium with the property that all players play best responses after each history of the game. You can solve for SPE by using backward induction.
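For the nerds who want to see backward induction run, here is a minimal sketch over a toy two-stage game tree (the tree structure and payoff numbers are invented for illustration):

```python
# Backward induction on a tiny two-stage sequential game (illustrative numbers).
# Each internal node belongs to a player who picks the child maximizing
# their own component of the payoff tuple; leaves hold (P1, P2) payoffs.

def backward_induct(node):
    """Return (payoffs, path) reached by subgame-perfect play from `node`."""
    if "payoffs" in node:               # terminal node
        return node["payoffs"], []
    player = node["player"]             # 0 = first mover, 1 = second mover
    best = None
    for move, child in node["children"].items():
        payoffs, path = backward_induct(child)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [move] + path)
    return best

# P1 moves first (Left/Right), then P2 responds (Up/Down).
tree = {
    "player": 0,
    "children": {
        "Left":  {"player": 1, "children": {
            "Up":   {"payoffs": (3, 1)},
            "Down": {"payoffs": (0, 0)}}},
        "Right": {"player": 1, "children": {
            "Up":   {"payoffs": (2, 2)},
            "Down": {"payoffs": (1, 3)}}},
    },
}

payoffs, path = backward_induct(tree)
print(payoffs, path)  # (3, 1) ['Left', 'Up']
```

The solver works from the leaves upward: P2's best replies are resolved first, and P1 then chooses knowing exactly how P2 will respond, which is the essence of subgame perfection.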

Sequential games are a form of non-cooperative game theory, which, if you’re just starting out, should be the first game-theoretic concept you try to master.

**Potential Landmine:** while sequential games allow for better prediction and planning that can lead to better decision making, they tend to ignore the extremely dynamic conditions of strategic situations.

That said, non-cooperative games have been associated with a multitude of challenges and difficulties (just ask any business school or MBA student) and a prime example of this (that is taught in lectures all over the world) is a concept many scholars like to call Prisoner’s Dilemma.

**The Nuances of Prisoner’s Dilemma:**

Let’s imagine a fictional story using two characters, Jack and Oliver, childhood friends turned professional criminals, who meticulously planned a jewelry heist that involved robbing a high-security vault storing diamonds worth millions of dollars.

Their plan was perfect, and the heist went off without a hitch, or so they thought.

Not even an hour after the robbery, they found themselves surrounded by police and were subsequently arrested.

The police didn’t have enough evidence to convict them for the heist itself (the diamonds were hidden securely). They were caught with some burglary tools (not definitively incriminating but definitely suspicious) and became suspects only because a traffic camera near the heist location captured a partial license-plate match. A weak case at best. But if either of them talked and provided evidence against the other, that would be a different story altogether.

They were both separated and placed in individual interrogation rooms, a thick wall of silence between them.

**The Interrogation and Offer**

The detectives presented each suspect with the same proposition: The evidence for the heist was circumstantial; but, if either of them confessed and testified against the other, that person would get full immunity.

But full immunity comes at a price: you walk free, but your buddy takes the fall and faces 15 years in prison.

If both suspects stay silent, however, the police would only be able to book them on minor charges, resulting in 1-year sentences for each.

And if they both confessed, each would serve 5 years.

**The Dilemma**

The situation can be presented using a traditional payoff matrix:

| | Oliver Silent (Cooperate) | Oliver Confesses (Defect) |
|---|---|---|
| Jack Silent | -1, -1 | -15, 0 |
| Jack Confesses | 0, -15 | -5, -5 |

In this example, the first number in each cell represents Jack’s sentence in years, and the second number represents Oliver’s sentence.

- Both Silent (Cooperate/Cooperate): 1 year each.
- One Confesses, One Silent (Defect/Cooperate): 15 years for the one who remains silent, 0 years for the one who confesses.
- Both Confess (Defect/Defect): 5 years each.

**Jack’s Calculations:** Jack (let’s assume he is a risk-averse individual) begins to calculate his best strategy. If his buddy Oliver remains silent and they both get 1 year, that would be the best collective outcome. But, on the other hand, the lure of walking away free is strong. Plus, if Oliver confesses and Jack remains silent, he’s looking at 15 years. This terrifying possibility makes Jack strongly consider confessing, as it minimizes his worst-case scenario to 5 years. What to do?

**Oliver’s Calculations:** Oliver (let’s assume he is more of a gambler), thinks along similar lines but is more optimistic that Jack will remain silent. After all, Jack is his ride-or-die since childhood, right? Oliver sees the mutual benefit of both serving only 1 year but can’t completely ignore the risk of Jack betraying him. That is a real possibility. Who can you trust when it comes to time in jail? Usually, you can’t trust anyone. So, like Jack, he leans toward confessing to hedge against the worst-case scenario of receiving a 15-year sentence.

**So, how do you break the dilemma?**

The dilemma intensifies as both realize their optimal individual strategies (confessing) conflict with the optimal group strategy (staying silent). If either had a way to credibly commit to staying silent, both would benefit. Unfortunately, self-interest and the inability to communicate make this situation very difficult (and stressful).

**The tipping point occurs** when both players recall their childhood pact to never betray each other. Though it was made in youthful innocence, it now bears the weight of very adult consequences. So, both players, both longtime buds, now consider this pact and wonder if it’s strong enough to withstand the pressure and high stakes of the situation.

As time runs out, they both are forced to make their decisions. Drawing upon their friendship and the realization that betrayal would irrevocably damage something invaluable, they both choose to stay silent. When the detectives return to collect their decisions, they are disappointed, realizing the strongest weapon they had was the uncertainty each suspect had about each other’s decisions.

**In the end:** Jack and Oliver each receive 1-year sentences on minor charges, a far cry from the 15 years they risked or even the 5 years had they both betrayed each other. Their friendship remains intact, and they serve their time knowing they faced the Prisoner’s Dilemma and emerged victorious as a unit. Perhaps, they will also return to the spot where they stashed the treasure after they served their sentences and find it is still there, untouched.

By now you are probably beginning to see the irony of this dilemma.

If Jack and/or Oliver act selfishly and do not cooperate (one of them rats out the other), they will be worse off than if they were to act unselfishly and cooperate together (both of them stay silent).

**Table 1: The Payoff Matrix**

| | Oliver Silent (Cooperate) | Oliver Confesses (Defect) |
|---|---|---|
| Jack Silent | -1, -1 | -15, 0 |
| Jack Confesses | 0, -15 | -5, -5 |

**Table 2: Individual vs Collective Payoffs**

| Strategy | Jack's Payoff | Oliver's Payoff | Collective Payoff |
|---|---|---|---|
| Both Silent | -1 | -1 | -2 |
| One Confesses | 0 or -15 | 0 or -15 | -15 |
| Both Confess | -5 | -5 | -10 |

By understanding these numbers, it becomes evident that although individual rationality pushes each towards betrayal, the collective rationality suggests cooperation as the mutually beneficial strategy.
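You can verify this logic mechanically. The sketch below (illustrative code, using the sentences from the payoff matrix above) shows that confessing is a dominant strategy for both players, even though mutual silence leaves both strictly better off:

```python
# Prisoner's Dilemma payoffs from the tables above (years in prison, negated).
# Key: (Jack's move, Oliver's move) -> (Jack's payoff, Oliver's payoff)
PAYOFFS = {
    ("silent",  "silent"):  (-1,  -1),
    ("silent",  "confess"): (-15,  0),
    ("confess", "silent"):  (0,  -15),
    ("confess", "confess"): (-5,  -5),
}

def best_response(player, opponent_move):
    """Return the move maximizing `player`'s payoff, given the opponent's move."""
    moves = ["silent", "confess"]
    if player == "jack":
        return max(moves, key=lambda m: PAYOFFS[(m, opponent_move)][0])
    return max(moves, key=lambda m: PAYOFFS[(opponent_move, m)][1])

# Confessing is a dominant strategy: it is the best response no matter
# what the other player does...
assert all(best_response("jack", m) == "confess" for m in ["silent", "confess"])
assert all(best_response("oliver", m) == "confess" for m in ["silent", "confess"])

# ...yet mutual silence gives BOTH players a strictly better outcome.
assert PAYOFFS[("silent", "silent")] > PAYOFFS[("confess", "confess")]
```

The three assertions are the whole dilemma in miniature: individually rational play (confess, confess) is collectively dominated by (silent, silent).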

This dichotomy lies at the core of the Prisoner’s Dilemma, serving as a model for various scenarios in economics, politics, and even in interpersonal relationships where trust and the temptation to betray compete.

Some real-life examples include: two countries competing in an arms race, businesses engaging in a price war, farmers increasing their crop productions, a text from a lover that says “if you really loved me…”

You get the idea.

So now we will briefly go back to where we started and tie a few things together:

*“Don’t ever play games with me,”* she said.

*“Everything is a game, baby,”* he said.

In a marriage, for example, the division of household chores can be framed as a classic Prisoner’s Dilemma game. Both partners benefit most when they both contribute, but each has an individual incentive to be lazy and get a free ride on the other person’s effort.

Within this game, mutual cooperation maximizes joint happiness, but individual incentives may lead to suboptimal outcomes. And it is a game where both parties benefit from mutual investment but risk emotional loss if the other doesn’t reciprocate.

The challenge here lies in aligning individual incentives with collective well-being to reach an optimal, cooperative equilibrium, and in many of these situations, equilibrium is never reached.

Many such cases.

So (in theory) for the best outcome(s) to happen, players should cooperate.

This leads us into another theory that’s called the Theory of Moves.

According to the Theory of Moves, shifts in outcomes can largely be predicted from how the play starts.

We will come back to this in a moment.

**But first, let’s define what the Theory of Moves is:**

The Theory of Moves, developed by political scientist Steven J. Brams, is a dynamic extension of classical game theory that considers the sequential decision-making of players. It assumes players think ahead and anticipate the consequences of their moves and the countermoves this will induce from other players.

In this framework, players continuously adapt their strategies over time, instead of making a one-shot decision. The goal is to identify stable states where no player has an incentive to unilaterally deviate or change their strategy, given the subsequent adjustments and reactions of the other players.
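As a rough sketch of this move-countermove dynamic (a crude myopic best-response loop, not Brams’ full formalism; the code and names are my own), watch how play drifts away from mutual silence when each player reacts only to the current state:

```python
# A crude sketch of move-countermove dynamics (myopic best responses),
# NOT Brams' full Theory of Moves formalism. Players alternate turns,
# each switching only if it strictly improves their own payoff.
PAYOFFS = {
    ("silent",  "silent"):  (-1,  -1),
    ("silent",  "confess"): (-15,  0),
    ("confess", "silent"):  (0,  -15),
    ("confess", "confess"): (-5,  -5),
}

def step(state, mover):
    """Let one player flip their move if flipping strictly improves their payoff."""
    jack, oliver = state
    if mover == 0:
        alt = ("confess" if jack == "silent" else "silent", oliver)
    else:
        alt = (jack, "confess" if oliver == "silent" else "silent")
    return alt if PAYOFFS[alt][mover] > PAYOFFS[state][mover] else state

state = ("silent", "silent")
for turn in range(6):          # alternate movers until play settles
    state = step(state, turn % 2)
print(state)                   # ('confess', 'confess')
```

Without some commitment device (like Jack and Oliver’s childhood pact), this kind of myopic adjustment settles at mutual confession, which is exactly why the Theory of Moves emphasizes thinking several moves ahead rather than reacting to the current state alone.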

**Now, let’s circle back and apply it to the Jewelry Heist Scenario from above:**

Here, Jack and Oliver would each consider the consequences of their initial choices and anticipate subsequent actions based on the Theory of Moves. This may involve reconsidering their decisions, especially if they suspect that the other might change their strategy.

**The Theory of Moves & Iterative Decision-Making:**

- **Initial Decision:** Both players initially decide to remain silent.
- **Thinking Ahead:** Jack considers that Oliver might change his decision to confess in the next round to minimize his sentence.
- **Countermoves:** Knowing this, Jack may preemptively decide to confess, setting off a chain reaction where Oliver may then also confess.
- **Stable Outcome:** Eventually, both players may settle on a stable set of moves where neither has an incentive to change his decision further.

**Table 3. The Theory of Moves Iterative Decision Chart**

| Iteration | Jack's Move | Oliver's Move | Total Sentence for Jack & Oliver |
|---|---|---|---|
| 1 | Silent | Silent | 2 years |
| 2 | Confess | Silent | 0 years for Jack, 15 for Oliver |
| 3 | Confess | Confess | 10 years (5 each) |
| 4 | Silent | Confess | 15 years for Jack, 0 for Oliver |
| 5 | Silent | Silent | 2 years |

**Table 4. Stability Matrix**

| | Oliver Silent | Oliver Confesses |
|---|---|---|
| Jack Silent | Stable | Unstable |
| Jack Confesses | Unstable | Stable |

Here, a “Stable” cell indicates that given the other’s choice, neither player would change their decision in the next move.

Using this framework in our heist example, Jack and Oliver would contemplate not just the immediate consequences of cooperating or selling out the other, but also how such choices would impact future rounds of interaction, aiming to identify a stable outcome that neither would subsequently have an incentive to deviate from.

At the end of the day, the Theory of Moves can bring several dynamic elements into static games like the Prisoner’s Dilemma. By considering the chain of moves and countermoves, it extends the analysis beyond immediate payoffs to more strategic, long-term considerations.

**The Black Door**

In a less traditional game theoretical framework, my coach from my days as an athlete at Clemson University used to always remind us of a story he liked to call “The Black Door.”

If you have never heard the story, or may be confused as to how it applies to Game Theory (or life, sports, etc.) here is the story as it was told to me:

*Several generations ago, during one of the most turbulent of the desert wars in the Middle East, a spy was captured and sentenced to death by a General of the Persian Army. The General, a man of high intelligence and compassion, had adopted a strange and unusual custom in his dealings with prisoners of war. He permitted the condemned person to make a choice: the prisoner could either face the firing squad or pass through a Black Door. As the moment of execution drew near, the General ordered the spy to be brought out of his cell. The spy was placed against the wall, and the firing squad took aim, ready to shoot on the given order. The General slowly walked up to the spy and said, “I’m going to give you a choice about your fate. You can take the firing squad that is ready to carry out your sentence, or you can take what waits for you behind that Black Door.” The spy asked, “What is behind the Black Door?” The General replied, “I can’t tell you. It is your choice.” The spy began to imagine the possibilities of a long and painful death. Perhaps there were tigers on the other side of the door that would tear him to shreds. Perhaps it would be snakes, or some other frightening and horrible death. After some contemplation, he confirmed to the General that he was ready to take the quick and simple method of execution via the firing squad. Not long thereafter, a volley of shots rang out in the courtyard, and the execution was carried out swiftly. Afterward, a young corporal who had witnessed the whole thing walked up to the General and asked, “What is behind the Black Door?” The General replied, “Freedom. But I’ve known only a few men brave enough to take it. You see how it is with men: they will always prefer the known way to the unknown. It is a characteristic of people to be afraid of the unknown. Yet, I gave him his choice.”*

This story is an illustrative example in game theory that (depending how it’s told) can showcase the game-theoretic concepts of *sequential rationality, information asymmetry, decision under uncertainty, backward induction, and subgame perfect equilibrium.*

**Sequential Rationality:** The prisoner must work through a series of questions (internal and external) to arrive at a consistent and logical choice. This encapsulates the idea of making rational choices in a sequence — each step of his reasoning is based on the General’s possible motives and reactions.

**Information Asymmetry:** The general has complete information about the door, while the prisoner is in the dark. This creates a strategic imbalance that the prisoner must overcome by cleverly formulating his question(s) and reasoning path in the decision tree.

**Decision Under Uncertainty:** The prisoner’s decision has to be made under uncertainty. He doesn’t know what kind of General he is up against, and he doesn’t know what’s behind the Black Door; he can only assume it holds mysterious, unknown consequences. This resembles many real-world strategic decisions where not all parameters are fully known.

**Backward Induction:** The optimal strategy for the prisoner may be to use inversion (or work backwards) from the end outcome to the beginning. Instead of following this reasoning path, the prisoner assumed that the General, being rational, would only offer the Black Door if it led to an outcome at least as bad as the firing squad. Anticipating this, he opted for the firing squad, landing himself in a known but suboptimal situation.

**Subgame Perfect Equilibrium:** In this scenario, the prisoner would consider the General’s strategy at every possible future decision point, including the possibility that the General has strategically made the Black Door option equally or more unappealing. Assuming both follow their respective strategies (the General either reveals or conceals the truth; the prisoner responds optimally given his beliefs), a subgame perfect equilibrium is reached. Neither has an incentive to deviate from their strategy in any subgame, making it a stable outcome. Given this reasoning path, the prisoner chooses the firing squad, as this becomes the optimal strategy when considering the game’s subgames, where the prisoner cannot improve his situation by unilaterally deviating.

The Black Door story is a conceptual model that helps to understand the intricacies of strategic decision-making, particularly in situations involving sequential moves and information asymmetry.

In the bestselling book “Thinking Strategically“, Dixit and Nalebuff may have put it best: *“Everyone’s best choice depends on what others are going to do, whether it’s going to war or maneuvering in a traffic jam. These situations, in which people’s choices depend on the behavior or the choices of other people, are the ones that usually don’t permit any simple summation. Rather we have to look at the system of interaction.”*

**Something to ponder:** are most people willing to choose a death they are familiar and more comfortable with than risk the unknown?

**Some Additional Game Theory Applications:**

**1. Game Theory in Space Exploration and The Race for Extraterrestrial Resources**

Space exploration is a complex, multi-faceted endeavor that involves various stakeholders, including governments, commercial companies, and international organizations — it’s a business that has evolved from a quest for knowledge and exploration into a complex web of cooperation and competition over resources, technological innovation, territorial dominance, and political influence.

And with the numbers of private companies and countries entering the space arena growing rapidly, the game has grown more complicated than ever before.

The decisions made by these entities are not isolated; they are interconnected in intricate ways that can be difficult for a casual observer to see — but they can be modeled using game theory.

Using game theory, we can create a framework to understand how these players interact, and how they make decisions and influence each other’s strategies — especially when resources are limited, and the stakes are high.

**This game can be broken down into the following:**

**The Players**

- **Governments:** Primarily interested in national security, scientific discovery, and international prestige (nations like the U.S., China, Russia, EU countries, etc.)
- **Commercial Companies:** Focus on profitability through mining, tourism, or providing launch services (private entities like SpaceX, Blue Origin, Deimos-One, etc.)
- **International Organizations:** Aim for collaborative missions, scientific research, and maintaining international law (NASA, Roscosmos, ESA, JAXA, CSA, etc.)

**Primary Objectives of Players:** To acquire valuable space resources (Moon Ice, Helium-3, etc.), establish bases on celestial bodies, develop new technologies, and maintain geopolitical influence.

**The Game (Objectives/Strategies/Payoffs)**

**Objectives**

- **Resource Allocation:** How to allocate limited resources like launch vehicles, manpower, and technology.
- **Mission Selection:** Choosing between various mission profiles like Moon landing, Mars exploration, or asteroid mining.
- **Collaboration vs Competition:** Deciding when to collaborate with other players and when to compete.

**Strategies**

- **Cooperative Strategies:** Joint missions, sharing technology, and pooling resources.
- **Non-Cooperative Strategies:** Going solo on missions, exclusive contracts, or engaging in space races.
- **Mixed Strategies:** A combination of cooperative and non-cooperative strategies depending on the situation.

**Payoffs**

- **Scientific Discovery:** Gaining new knowledge that could benefit humanity.
- **Economic Gains:** Through mining, tourism, or other commercial activities.
- **Strategic Advantage:** In terms of national security or international standing.

**Game Theoretical Frameworks Used**

- **Zero-Sum Game:** In a zero-sum game, one player’s gain is another player’s loss. This embodies the competitive side, where gains by one player are offset by losses to another, often seen in races for territorial claims.
- **Non-Zero-Sum Game:** In a non-zero-sum game, all players can benefit. This represents the cooperative aspect, where multiple parties can benefit from shared technologies and resources. This is often the case in collaborative missions.
- **Dynamic Game:** In real-world scenarios, space exploration is a dynamic game with sequential moves. Countries and entities may initially cooperate but later compete as objectives evolve.
- **Nash Equilibrium:** A set of strategies where no player has anything to gain by changing their strategy while the other players keep theirs unchanged. This could be a mixed strategy where countries collaborate on technological advancements while competing for resources, as this yields the highest expected payoffs for all players.

**Decision Trees**

Decision trees can be used to model the sequential decisions made by players, especially in complex missions involving multiple stages and decision points.

In our Lab, we use decision and game trees (or decision forests in more complex situations) every day to map out the types of scenarios we encounter during high stakes aerospace missions.

**But first, what the hell is a decision tree?**

A Decision Tree is a map (made up of nodes and branches) of the possible outcomes of a series of related choices. It allows an individual or organization to weigh possible actions against one another based on their costs, probabilities, and benefits.

Decision Trees are used to break down a complex decision-making process into a series of simpler decisions, leading to a predicted outcome or action. They can also be used to drive informal discussion or map out an algorithm that mathematically predicts the best choice.

Data nerds love to use decision trees in machine learning and statistics to make decisions based on multiple conditions.

A decision tree typically starts with a single node, which branches into possible outcomes.

Each of those outcomes leads to additional nodes, which branch off into other possibilities.

This gives it a treelike shape.

Nodes live on these trees.

**There are three different types of nodes:** chance nodes, decision nodes, and terminal nodes.

- **A chance node**, which shows the probabilities of certain results.
- **A decision node**, which shows a decision to be made.
- **A terminal node**, which shows the final outcome of a decision path.

Usually, we assign each decision node to one player.

When the decision node of a player is reached, the player chooses a move.

When a terminal node is reached, the players obtain payoffs: an assignment of payoffs for each player.
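Here is a minimal sketch of how such a tree is evaluated (the node layout and the launch/wait numbers are invented for illustration): terminal nodes return their value, chance nodes average their children by probability, and decision nodes pick the best child.

```python
# A tiny decision tree using the three node types described above.
# Terminal nodes hold values; chance nodes weight children by probability;
# decision nodes pick the child with the highest expected value.

def evaluate(node):
    kind = node["kind"]
    if kind == "terminal":
        return node["value"]
    if kind == "chance":
        return sum(p * evaluate(child) for p, child in node["children"])
    if kind == "decision":
        return max(evaluate(child) for child in node["children"])
    raise ValueError(f"unknown node kind: {kind}")

# Decide whether to launch: launching risks failure, waiting is safe.
tree = {"kind": "decision", "children": [
    {"kind": "chance", "children": [                  # option 1: launch now
        (0.7, {"kind": "terminal", "value": 100}),    # success
        (0.3, {"kind": "terminal", "value": -50}),    # failure
    ]},
    {"kind": "terminal", "value": 40},                # option 2: wait
]}

print(evaluate(tree))  # launch wins: 0.7*100 + 0.3*(-50) = 55 > 40
```

The same recursion extends to a game tree by tagging each decision node with the player who owns it and maximizing that player’s component of a payoff tuple, as in the backward-induction example earlier.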

**Quick summary:**

- Decision trees are typically used to map out scenarios involving only one player.
- Game trees are designed to handle scenarios with multiple players.

Got all that?

Great.

It’s important to keep in mind that these aren’t miracles that can solve every problem.

These are just tools.

And these tools are not infallible; they do not take the decision out of the player’s hands. Instead, they provide additional clarity to help the player find the best strategy given the known alternatives. This way, the decision-maker isn’t left with a simple guess, but with a probability-weighted estimate.

This allows us to impose useful simplicity on stochastic environments.

That said, every decision tree is based on certain assumptions.

The goal is to limit these to the most useful and relevant assumptions for the scenario at hand.

This provides simplicity to the model.

Otherwise, you’d have total chaos.

For example, if you tried to include every situation imaginable — calculating every probability and seeing every possible future like Dr. Strange (e.g. the remote chance of famine, nuclear war, immediate population collapse, etc.) — you would bury the relevant choices under a mountain of irrelevant ones.

Game trees, on the other hand, offer an additional advantage: they allow players to make better decisions by forcing them to consider the actions and reactions of every player involved.

When participating in a sequential game (as discussed above) the decision-maker can get rid of a lot of uncertainty just by creating a list of all players, their actions and reactions, and the decision-maker’s response to each one.

Using a game tree makes it much easier to keep track of these variables, and helps a player create a gameplan (and not overlook alternatives) which greatly reduces the chance of surprise.

The last thing you want to be in any high-stakes competition is surprised by your opponent.

**Table 5: Objectives and Payoffs**

Players | Acquire Resources | Develop Tech | Establish Bases | Geopolitical Influence |
---|---|---|---|---|
USA | 25 | 20 | 15 | 10 |
China | 30 | 15 | 10 | 5 |
SpaceX | 40 | 30 | 20 | 0 |
Deimos-One | 35 | 25 | 15 | 0 |

*Units represent utility or payoff for achieving each objective. Higher is better.*

**Strategies and Outcomes**

Players can either go it alone to maximize individual gain, or they can cooperate with others to achieve common goals.

- **Cooperate on Technology**: Pool resources to accelerate technology development.
- **Compete for Resources**: Engage in a race to claim valuable space resources.
- **Mixed Strategy**: Cooperate on some fronts while competing on others.

**Table 6: Strategies and Expected Payoffs**

Players | Cooperate on Tech | Compete for Resources | Mixed Strategy |
---|---|---|---|
USA | 15 | 20 | 20 |
China | 10 | 10 | 15 |
SpaceX | 25 | 5 | 30 |
Deimos-One | 20 | 5 | 25 |

*Expected payoffs for each strategy. Higher is better.*
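If each player simply picked the strategy with the highest individual payoff from Table 6 (ignoring, for the moment, how the strategies interact), that lookup could be sketched as:

```python
# Expected payoffs from Table 6 (illustrative units, higher is better)
payoffs = {
    "USA":        {"Cooperate on Tech": 15, "Compete for Resources": 20, "Mixed Strategy": 20},
    "China":      {"Cooperate on Tech": 10, "Compete for Resources": 10, "Mixed Strategy": 15},
    "SpaceX":     {"Cooperate on Tech": 25, "Compete for Resources": 5,  "Mixed Strategy": 30},
    "Deimos-One": {"Cooperate on Tech": 20, "Compete for Resources": 5,  "Mixed Strategy": 25},
}

# Each player's naive best response, read straight off the table
best = {player: max(options, key=options.get) for player, options in payoffs.items()}
```

In this toy table, every player except the USA (which is tied between competing and a mixed strategy) does best with the mixed strategy of cooperating on some fronts while competing on others.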

**Real-World Applications & Scenarios**

- **Moon Exploration**: The Artemis Accords are an example of countries trying to create a Nash equilibrium by agreeing on principles for moon exploration.
- **Moon vs Mars**: Governments and private companies have to decide whether to invest in Moon missions as a stepping stone to Mars or go directly to Mars.
- **Asteroid Mining**: This is often viewed as a first-mover advantage scenario: whoever stakes a claim first reaps the highest payoff. Companies have to decide whether to invest in the technology needed for asteroid mining, which has high risks but also high potential rewards.
- **International Collaboration**: Countries have to decide whether to collaborate on international missions or compete against each other.

**Implications and Conclusions**

- **Incentive to Cooperate**: In many instances, there’s a higher cumulative payoff for cooperation, especially in capital-intensive technological developments.
- **First-Mover Advantage**: In terms of resources, there’s often a strong incentive to move quickly and stake claims, leading to a zero-sum competitive game.
- **Complexity Over Time**: As new players enter and technological advances occur, the game’s complexity increases, requiring more nuanced strategies and alliances.

In the dynamic and highly strategic landscape of space exploration, game theory offers valuable insights into how countries and private entities can optimize their objectives, both in cooperation and competition.

As the stakes in space continue to rise, understanding these theoretical underpinnings will be crucial for making strategic decisions that yield the best outcomes for all players involved.

**Strategic Choices in Space Alliances**

As more private companies enter the domains of space exploration and galactic warfare, they face crucial decisions concerning partnerships and resource allocation. For example, Deimos-One could form a strategic alliance with another tech giant to co-develop a new stratospheric platform for detecting and transporting Moon ice.

Game Theory can be utilized to analyze whether forming such an alliance would be mutually beneficial or if going solo would yield higher payoffs.

**Table 7: Strategic Choices in Space Alliance**

Decision | Deimos-One Payoff | Tech Giant Payoff | Combined Payoff |
---|---|---|---|
Form Strategic Alliance | 80 | 75 | 155 |
Go Solo | 70 | 65 | 135 |

Here, the combined payoff of forming an alliance is 155, which is higher than going solo. Thus, Game Theory would suggest that the alliance is the optimal decision for both parties.

> How a firm interacts with other firms plays an important role in shaping sustainable value creation. Here we not only consider how companies interact with their competitors, but how companies can co-evolve. Game Theory is one of the best tools to understand interaction. Game Theory forces managers to put themselves in the shoes of other players rather than viewing games solely from their own perspective. The classic two-player example of game theory is the prisoners’ dilemma. — Michael J. Mauboussin

**Final Thoughts**

If you have made it this far, you have made it to the end of yet another long, boring analysis on Game Theory and subgame perfection.

For the sake of being boring, however, let us consider: is there a Nash Equilibrium in total global annihilation?

World War 3 is, in fact, just around the corner.

So, just for the hell of it, let us consider the scenario of total global annihilation in high stakes war games.

#### Nuclear Deterrence and Mutually Assured Destruction

At the time of this writing, tweeted under my hand, this 27th day of October, anno Domini 2023; where we sit Neanderthal on one hand and Singularity on the other, where any such event of significance and power (nuclear or otherwise) may catapult us forward into the future or a thousand years back into the dark ages, no man yet has ever possessed the knowledge to know things unseen, but often, will speak prophecy about the end according to his taste.

I work in probabilities, but I cannot 100% accurately predict the future.

No one can.

The reason for this is simple: it’s due to the paradox of prophetic brilliance, which means **the fool must be intelligent enough to recognize that he is, in fact, a fool.**

It is in a sense, a meta-layer of ignorance — the ignorance of your own ignorance.

A massive contradiction.

And yet, here we sit.

All fools.

Strategically posturing in a bipolar world.

Many such cases.

One of the most cited examples of this strategic posturing is the Cold War standoff between the United States and the Soviet Union. In this two-player game, the strategic posture essentially boiled down to “mutually assured destruction” (MAD). The game examined the benefits and risks of pre-emptive strikes versus a strategy of restraint and deterrence.

#### Table 8: Nuclear Deterrence Payoffs

Decision | USA Payoff | USSR Payoff | Combined Payoff |
---|---|---|---|
Both Deter | -10 | -10 | -20 |
One Pre-empts | -100 | -50 | -150 |
Both Pre-empt | -100 | -100 | -200 |

In an obscurantist world with imperfect information, where not all actors are rational, where rational actors can have lapses in judgment, and where tension and emotion can reach apogee, it can be incredibly difficult to evaluate risks and predict outcomes properly and/or effectively.

In this case, the payoffs represent the catastrophic consequences of nuclear war. Both nations face an incentive to deter rather than pre-empt, resulting in a Nash equilibrium of mutual deterrence.
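That best-response logic is easy to check mechanically. The sketch below uses stylized payoffs of my own (chosen so that a first strike still triggers devastating retaliation); it enumerates all four strategy profiles and keeps those where neither player can strictly improve by deviating alone:

```python
from itertools import product

# Stylized MAD payoffs (hypothetical): profile -> (USA payoff, USSR payoff)
payoffs = {
    ("Deter", "Deter"):       (-10, -10),
    ("Deter", "Pre-empt"):    (-100, -50),   # USSR strikes first, still suffers retaliation
    ("Pre-empt", "Deter"):    (-50, -100),
    ("Pre-empt", "Pre-empt"): (-100, -100),
}
strategies = ("Deter", "Pre-empt")

def pure_nash(payoffs, strategies):
    """A profile is a Nash equilibrium if no player can strictly
    improve by changing only their own strategy."""
    eqs = []
    for usa, ussr in product(strategies, repeat=2):
        usa_ok = all(payoffs[(d, ussr)][0] <= payoffs[(usa, ussr)][0] for d in strategies)
        ussr_ok = all(payoffs[(usa, d)][1] <= payoffs[(usa, ussr)][1] for d in strategies)
        if usa_ok and ussr_ok:
            eqs.append((usa, ussr))
    return eqs

equilibria = pure_nash(payoffs, strategies)
```

Mutual deterrence passes the check. (Mutual pre-emption technically passes too, but only because once both sides have launched, no unilateral change can make the outcome any worse.)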

That said, given the infinitely stochastic nature of our planet (a realm devoid of consistency or asymptotic normality), it can be quite difficult to accurately predict the future and make good decisions given the: **(1)** intricate interplay of the variables (known and unknown); **(2)** constraints of our inherent cognitive biases which limit our predictive abilities; **(3)** natural psychological drive to overestimate the likelihood of catastrophic events due to availability heuristic/fear/sensationalism; **(4)** fact that many phenomena of the cosmos (on Earth and elsewhere in the universe) remain mysterious and beyond our current comprehension.

To add to the complexity and uncertainty, humans are a species unlike any other in the known universe.

Their behavior is not random.

Human behavior is systematic and predictable — making the species predictably irrational.

This may destabilize the system away from its Nash equilibrium.

To add to the complexity and uncertainty, humans are a war-like species (constantly locked in never-ending battles over scarce resources), so their home planet is constantly under threat of annihilation (whether self-inflicted or otherwise), adding additional layers and dark tunnels to your reasoning path.

You see, on Earth (Solar System 1) the future is inherently unpredictable because not all variables can be known — and even the tiniest error in your analysis can quickly throw off your predictions.

On Earth, the home of irrationality and stochasticity, chaos and uncertainty rule the day.

And because the future on Earth is not 100% deterministic, decision makers need to find ways to shine light on the reasoning path, to help navigate through the dark tunnels of uncertainty and complexity and gain a better understanding of the events that could have a significant impact on a multitude of potential futures, whether positive or negative.

This is where your newfound knowledge of game theory will come in handy.

In times of uncertainty, game theory should come to the forefront not only as a strategic tool, but also as a mental model for navigating complexity in high-stakes decision-making situations.

Sure, there are some who consider game theory to be more theoretical than practical.

But many of them are simply academically unable to convert theory to application.

Their models are sloppy.

Their implementations are very linear.

And their strategy is designed to offer a single, overly precise answer to a very complex problem.

The results are usually subpar.

A common [smoov.bra.in2023] runtime error.

The key, I have found, to building a working model that is applicable in real-life situations is to (1) develop a range of outcomes (assuming your players are rational); and (2) map out the advantages and disadvantages of each option.

This way, you can avoid “shitty analysis syndrome” where you get a binary “yes or no” to a situation that is not so black and white.

This often requires you to develop a comprehensive and strategic framework to support your decision-making process, which will take you much deeper into the decision tree.

**User Advisory:** developing a working model that extends beyond a useless theoretical analysis is intended for seasoned vets and those willing to go deep into the pain cave. User discretion is advised.

One of the worst things I have observed (especially in the corporate setting) is that uncertainty can paralyze decision making, and, perhaps worse, compel managers and stakeholders to base their actions on gut feelings and not much else.

These “gut feelings,” better known as intuition bias, will get you killed in high-stakes games.

A working game-theoretic model, however, can provide clear information to the decision-making process, but only if it is modeled in a way where the inputs are detailed enough to make the methodology practical and a wide range of probable scenarios can be properly analyzed.

This can generate good results, even in complex environments.

In our Lab, we have applied these types of models to many types of environments, with positive results.

In our Earth observation platform, for example, we have examined the dynamics of airspace control with adversarial elements in near space environments. In particular, scenarios where our HALO vehicle is at risk of being jammed or interfered with by adversarial airships or drones.

We used the minimax equilibrium to find optimal paths that curtail the worst-case scenario for the airship, which is crucial for mission-critical applications where downside risk must be minimized.
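As a rough illustration (the paths, adversary strategies, and payoff numbers below are invented for the example, not drawn from our actual mission data), the maximin choice is the path whose worst-case payoff is largest:

```python
# Hypothetical payoff matrix: rows are candidate flight paths,
# columns are adversary jamming strategies; entries are mission value retained.
paths = {
    "high-altitude": [60, 40, 55],
    "coastal":       [80, 10, 30],
    "direct":        [90,  5, 20],
}

def minimax_path(paths):
    """Pick the path that maximizes the worst-case payoff,
    bounding downside risk against any adversary response."""
    return max(paths, key=lambda p: min(paths[p]))
```

Here the “direct” path has the highest best case but the worst downside; the maximin rule prefers “high-altitude,” whose worst case (40) beats the others.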

We also looked at strategic options for the AI-powered prioritization of specific observational tasks using a real-time auction mechanism, especially when the airship is battling high-speed winds. Each task had different priorities and requirements, and HALO needed to be able to allocate its limited propulsion energy efficiently while contending with 275+ mph (239 kts / 442 km/h) jet stream winds.

This was modeled as a Vickrey-Clarke-Groves (VCG) mechanism where each payload on board would bid for priority in observational tasks. HALO’s AI acted as an ‘auctioneer’ and allocated propulsion energy based on the bids and stochastic elements in near space (e.g., wind speed).
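The simplest instance of a VCG mechanism is the single-item Vickrey (second-price) auction, which is enough to sketch the idea; the payload names and bids below are hypothetical, not our actual flight configuration:

```python
def vickrey_allocate(bids):
    """Second-price auction: the highest bidder wins the contested
    propulsion slot but pays the second-highest bid, which makes
    truthful bidding a dominant strategy for each payload."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

bids = {"thermal-imager": 40, "spectrometer": 25, "radar": 10}  # hypothetical payloads
winner, price = vickrey_allocate(bids)  # thermal-imager wins, pays 25
```

Because the winner’s payment depends only on the other bids, no payload gains by overstating its priority, which keeps the auction honest even under stochastic wind conditions.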

Pareto optimality can also be applied here. A Pareto-optimal solution would allocate energy in such a way that no single task can improve its observational quality without adversely affecting another task. The AI was used to find such Pareto-optimal solutions in real-time.
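Finding Pareto-optimal allocations can be sketched as filtering out dominated candidates; the quality scores below are invented for illustration:

```python
def pareto_front(allocations):
    """Keep allocations that no other allocation dominates. One allocation
    dominates another if it is at least as good on every task and strictly
    better on at least one."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [a for a in allocations if not any(dominates(b, a) for b in allocations)]

# Hypothetical (task-1 quality, task-2 quality) pairs for candidate energy splits
candidates = [(5, 5), (7, 3), (3, 7), (4, 4), (6, 5)]
front = pareto_front(candidates)  # (5, 5) and (4, 4) are dominated by (6, 5)
```

Everything left on the front is a legitimate trade-off; picking among those points is where mission priorities, rather than pure optimization, come in.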

The energy allocation was then adjusted dynamically based on real-time wind data and the state of each observational task. This ensured that the most critical tasks were prioritized when wind conditions were most challenging, thereby maximizing the overall value of the observations.

Both of these scenarios were modeled using a game-theoretic approach, allowing us to achieve multiple goals, even in stochastic environments.

But like I mentioned earlier, game theory is just a tool, and a tool is only as good as the person using it. These models are usually only helpful if company decision makers can extract information that can help them make informed decisions based on a wide range of actions by the players (or competitors) involved.

That said, when you are starting out building your own model, or trying to figure out the proper solution steps for a challenging problem, it can help to break down the complexity into basic elements such as:

- Are the players in this game rational?
- Are the players acting according to their self-interest?
- Do the players all understand the rules of the game?
- Can we reach Nash equilibrium?

If you cannot meet these parameters, you should determine options to optimize for each: rationality, self-interest, shared understanding of the rules, and Nash equilibrium.

Within those key variables, you should be able to determine the tradeoffs and optimizations required to improve your model and shift it toward supporting data-driven (rational) decisions.

Well, I think I’ve rambled on enough for today.

I hope you gained some valuable insight from this series and will leave just a bit smarter than you were when you started.

So, what Mental Model has helped you the most?

Let me know on X/Twitter.

Follow me for more shitty analysis: twitter.com/jaminthompson.

Here is Part 1, Part 2, and Part 3 in case you missed them.

**Best Mental Model Books for further study: CLICK HERE**