
Our Mind Children

Genealogy is often described as a journey into one’s past, a quest to dig up the intricate (and often deeply buried) roots and branches of the family tree.

Recently, I have been spending a lot of time digging up the past, trying to piece together my own family tree.

My experiment began out of simple curiosity—who am I and where do I come from?—and a desire to connect the dots with the lives and stories of my forebears.

Going into it, I had modest expectations, anticipating the usual boring stuff.

“Your great Aunt Sally married your great Uncle Buck, and their kid married bla bla and had bla bla kid.” “But I don’t know their kid.” “Oh, but they know you. They used to come around all the time. Don’t you remember?”

You know, the usual stuff. 

But the deeper I got into it, the more I realized it was anything but usual or boring.  

In fact, the experience so far has been far more fascinating (and bizarre) than I ever could have imagined.

“Oh cool, I’m related to this super awesome person who fought the British at York.”

or

“Oh my god, I’m related to a murderous rapey savage who used to eat people.”

But over the past few months—delving into centuries-old records and whispered family legends—I’ve managed to put together a rather comprehensive family tree that dates back thousands of years. 

Looking at all the nodes, I can trace a clear and distinct lineage intertwined with remarkable feats and dark secrets.

There are countless tales of courage, honor, betrayal, and inexplicable horrors.

There are records of warriors who forged their paths with iron and blood, sword and fire—scholars who delved into forbidden knowledge—and tragic figures whose lives were marred by curses and misfortune.

There were kings and lords, gladiators and slaves. 

I made discovery after discovery after discovery. 

And each new discovery was like a puzzle piece: the more pieces you put together, the clearer the picture becomes, revealing a complex and often terrifying tapestry of family history.

Mathematically speaking, you could spend a lifetime researching your bloodline and barely scratch the surface. 

There are so many people, so many stories, and so little time. 

But as I went deeper down the rabbit hole, I was able to uncover many of my ancestral branches, even the ones buried deep and difficult to pull out, revealing, piece by piece, the truth about my forebears and the forces that guided their fates and shaped their destinies.

And the more I found, the more I realized I did not know—and I wanted to uncover as much as I possibly could. 

I’m not quite sure how to explain it, but looking that far back into the past almost doesn’t seem real.

These people are your relatives, but at the same time, they are total strangers.

They don’t know you and you don’t know them.

Yet, you get to look back at them and you’re not quite sure if they are looking back at you.

A wise man once said, if you gaze long enough into an abyss, the abyss will gaze back into you. 

And I felt that. 

It’s quite an unusual feeling—it sort of feels surreal—almost like a dream where you are trying to unravel a tightly wound ball of yarn.

Your hands glisten with sweat, your fingers feel weak, and the entire dream unfolds in slow motion, making the unraveling ever more challenging. 

As you pull each strand, moving closer to the core, the essence of who you are and where you came from is eventually revealed.

In one moment, you can see your parents, then your grandparents—and suddenly, in the next, with a jarring revelation, the ball of yarn is completely unraveled.

You’re at the end of the line.

You’ve uncovered your bloodline’s most ancient ancestor who walked the Earth hundreds of thousands of years ago.

And that’s where things start to get weird. 

Questions Beget Questions

If you make it that far back (hundreds of thousands of years), you start to ask yourself some really difficult questions as you imagine your distant ancestors roaming the Earth all those millennia ago.

What were they like?

What was a normal day like for them?

Did they fall in love?

Did they go to war?

Which tribes did they hate?

Who did they kill?

What did they hunt and gather?

Did they get sick a lot?

What did they think about?

What sort of wild thoughts occupied their minds as they roamed the untamed lands?

Did they even think thoughts in words?

What was going through their minds as they tried to make sense of their bodies and their environment? 

Did they ever wonder how they got there?

Did they have a moral code?

Did they ever think about the meaning of life?

Did they ever think about the future?

We may never know the details, but thanks to modern technology, scientists have been able to identify one of these ancient ancestors. 

They have named him Y-chromosomal Adam.

Y-chromosomal Adam is the most recent common (patrilineal) ancestor from whom all current living humans are descended.

In other words, he’s my great^14,000 grandfather—and at the same time, he is also your great^14,000 grandfather, as well as everyone else’s great^14,000 grandfather. 

He is the most recent male from whom all living humans are descended through an unbroken line of male ancestors—marking the last point in history where a common male ancestor connected us all.

[Figure: Y-chromosomal Adam family tree]

So, what was this Adam dude like?

And what was life like for him?

He, much like every other early human on Earth at that time, probably wandered around a lot searching for food, water, and shelter—yet he had no maps to guide him, no shoes to protect his feet, no water filter to make sure he didn’t get sick. 

The only thing he probably had was determination (mixed with a moderate amount of rage and a small sprinkle of hope). 

And I can’t help but wonder: while he was out roaming the lands looking for delicious sabertooth meats and juicy berries to eat—and his feet began to hurt—did he ever stop to imagine a strange, unusual, futuristic ‘utopia’ where, instead of trudging around in the hot dirt, he could be cruising around modern Monaco in a sleek Purosangue?

Did he ever consider (or grasp) that his actions, seemingly insignificant in their present, could create a ripple effect through time, eventually leading to my existence, here and now, writing this very story, and you, reading it?

Perhaps one of the strangest things that I have discovered during my research (that wasn’t even research related) is that a lot of people can get very weird and squeamish when it comes to this ancient ancestor stuff. 

If you mention it, you can see them start to get clammy and sweaty, as if the histories might open a Pandora’s box they were hoping to keep closed.

Perhaps the wild savagery of their ancestors makes them uncomfortable somehow. 

Or maybe they are afraid modern genetics will reveal they were adopted, or that their child may be someone else’s. 

But, alas, truth delivered by lies is no less true and dreams made reality by falsehood are no less real.

I don’t think anyone’s ever arrived at the right answer for a difficult problem with a massive data set and complex equations by playing it safe. So, I decided to push the envelope even further. 

I took my research far past sapiens, going back 3.8 billion years (roughly 1,150 billion generations) to the time when the first living particle is believed to have existed. 

This particle is (they say) the founder of all life on Earth. 

No one is 100% sure how this particle started living or where it came from—it’s one of the great scientific mysteries of our time. 

There are various theories about the origin of this particle—some theorists believe it may have emerged from a primordial soup, others suggest it could have arisen through spontaneous generation, or perhaps it arrived on Earth from elsewhere in space.

Is this weird particle your super-distant, super-alien, super-ancient ancestor?

We may never know the answers. 

But one thing we can be fairly certain of is that our human-form ancient ancestors were (probably/definitely) thinking about ways to make their world better. 

The word utopia had not been invented yet (more on language later), but the general idea may have existed. 

And I think we can reasonably assume the ancients were probably thinking about ways to improve their environment and ultimately their existence. 

They didn’t understand the world as well as we do today (and the world was much smaller back then) but there is no reason to believe they didn’t imagine a better world.

Note: the word “utopia” was invented by Sir Thomas More in 1516 when he published his book titled “Utopia.” The term comes from the Greek words “ou” (meaning “not”) and “topos” (meaning “place”), which together signify “no place” or “nowhere” or a place that does not exist. 

I have theorized in previous papers that utopia is not a fixed endpoint, but instead, a dynamic state that constantly evolves with human desires and capabilities.

It’s an infinite feedback loop of abstraction, where each achieved vision spawns new, more refined abstractions.

And the pursuit of utopia is a never-ending process, driven by our collective desire for improvement.

So, it is reasonable to assume that our ancestors 5,000, 10,000, or even 100,000 years ago were also dreaming of a ‘utopia’ similar to how we do today. 

We think about the singularity, and they were probably thinking about cold water and a warm bed. 

It’s all relative, I think. 

And even though 2024 may seem somewhat disappointing and boring compared to the future that sci-fi and futurism promised us years ago—we are still undoubtedly living in the most advanced (and technologically terrifying) era of human history. 

So, in that sense, we are both the luckiest and unluckiest generation in history. 

And as we venture deeper into the 21st century, it’s becoming increasingly clear that our creations, our machines, are not just tools but perhaps something more.

Ancient ‘Alien’ Ancestors

In the following, I want to briefly describe a futurological puzzle or riddle that I will then explore and solve in detail.

But before that, it is necessary to make a few brief general theoretical observations:

Humans have been around for an impressively long period of time, with the latest estimates placing our origins between 200,000 and 300,000 years ago.

Within this vast timeline, there are two (possible ancient alien) ancestors who played pivotal roles.

Y-chromosomal Adam, our common paternal ancestor, lived around 120,000 to 200,000 years ago; and Mitochondrial Eve, our common maternal ancestor, lived about 99,000 to 148,000 years ago.[1]

Genetic studies indicate that all modern humans can trace their ancestry back to Y-chromosome Adam and Mitochondrial Eve.[2]

Let’s take a moment to process what this could mean. 

For starters, it speaks to how much shared genetic heritage we have as humans, and also, speaks to the incredible journey of our species across countless generations over hundreds of thousands of years. 

When you think about it, it’s actually some pretty incredible stuff.

Note: The Book of Genesis puts Adam and Eve together in the Garden of Eden, but the biblical reference is a bit of a misnomer because they were very likely not the only humans alive during this time, but they are the ones from whom all modern human mitochondrial DNA and Y chromosomes are descended.

Let’s refocus and try to put what this means into context. 

Is it mathematically and genetically possible this “Adam” and “Eve” could be real and true?

That’s a great question; I’m glad you asked. 

Let’s try to simplify it and break it down.

We will use “Adam” as a representative example: 

When a population’s size remains stable (as was likely the case for long periods of human history), men typically have only one son on average. 

This leads to a high probability (as demonstrated in evolutionary theory) that any one man’s paternal line will eventually die off, with all of his male descendants inheriting Y chromosomes from other men. 

And over long timescales, it is likely that all but one man’s (Adam’s) Y chromosomes became extinct, making all modern men descendants of that ancient ancestor known as Y-chromosomal Adam.

The concept of Y-chromosomal Adam can be explained through a combination of probability and evolutionary theory.

Here’s a mathematical breakdown:

  1. Stable Population Size: In a stable population, each generation has the same number of individuals. Assume the population size is N, and each man has, on average, one son.

  2. Probability of Lineage Extinction: For any given man, the probability that his Y-chromosome lineage will eventually die out is high. This is due to the random nature of inheritance and the fact that not all sons will have sons of their own. Mathematically, if p is the probability that a man’s line will continue in any given generation, then the probability of extinction is q=1−p. 

  3. Coalescence: Over many generations, the number of distinct Y-chromosome lineages will decrease. This process continues until only one lineage remains. This concept is known as “coalescence” in population genetics. The expected time to the most recent common ancestor (MRCA) of a population’s Y-chromosomes can be calculated using coalescent theory, which provides that the expected time to coalescence for lineages is proportional to the population size: T≈2N generations.

  4. Extinction of Lineages: As generations pass, the likelihood that any particular Y-chromosome lineage will persist declines due to genetic drift. Given enough time, the probability approaches 1 that all but one Y-chromosome lineage will become extinct.

  5. Y-chromosome Adam: The surviving Y-chromosome lineage is traced back to a single individual, Y-chromosome Adam. He is not the only man alive at his time, but he is the only one whose Y-chromosome lineage has survived to the present day.
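For a concrete feel of steps 3 and 4, here is a minimal Wright-Fisher-style simulation sketch (my own illustrative construction, not taken from the cited literature): every man in a fixed-size generation draws his father uniformly at random from the previous generation, and we count how many generations pass before a single paternal lineage remains.

```python
import random

def generations_until_one_lineage(n: int) -> int:
    """Count generations until all n paternal lines coalesce into one.

    Toy Wright-Fisher model: population size is fixed at n, and each man
    in the next generation inherits the lineage label of a father drawn
    uniformly at random from the current generation.
    """
    lineages = list(range(n))  # each founding man starts his own lineage
    generations = 0
    while len(set(lineages)) > 1:
        lineages = [lineages[random.randrange(n)] for _ in range(n)]
        generations += 1
    return generations

random.seed(42)
n = 500  # illustrative population size
runs = [generations_until_one_lineage(n) for _ in range(10)]
print(f"mean generations to a single 'Adam': {sum(runs) / len(runs):.0f}")
print(f"coalescent-theory ballpark (T ~ 2N): {2 * n}")
```

Run it a few times: the empirical coalescence time hovers around the 2N ballpark from step 3, and every run ends with exactly one surviving lineage, which is the whole point.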

For the nerds, here is a deeper dive into the probability of lineage extinction:

The variables: 

  • Continuation Probability (p): The likelihood a man will have at least one son who will further pass on the Y-chromosome.
  • Extinction Probability (q): The probability that the lineage will eventually die out (since not all sons have sons of their own). 

Example:

If there is a 70% (p = 0.7) chance that a man’s line will continue in each generation, the probability that it fails in any given generation is:

q = 1 − p = 1 − 0.7 = 0.3

So, there is a 30% chance per generation that the lineage dies out.

The number of generations (n) until the lineage dies out follows a geometric distribution with parameter q, whose expected value is 1/q.

Based on the geometric distribution model for the given probabilities, we arrive at 1/0.3 ≈ 3.33.

So, on average, it will take about 3.33 generations for the lineage to face extinction given the 30% chance of extinction per generation. And if you want near-certainty, the cumulative probability of extinction, 1 − p^n, reaches 99% after about 13 generations.
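For the curious, a few lines of Python reproduce both numbers, using the illustrative p = 0.7 from the example above:

```python
import math

p = 0.7    # assumed probability a man's line continues in a given generation
q = 1 - p  # per-generation probability the line fails

# Geometric model: expected number of generations until the lineage dies out
print(f"expected generations to extinction: {1 / q:.2f}")  # ~3.33

# Generations needed before extinction is 99% certain:
# cumulative extinction = 1 - p**n >= 0.99  =>  n >= log(0.01) / log(p)
n_99 = math.ceil(math.log(0.01) / math.log(p))
print(f"generations for 99% cumulative extinction: {n_99}")  # 13
```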

[Figure: human lineage extinction probability]

Long story short, when we take into account the principles of genetic drift and coalescent theory, it is mathematically likely that all modern Y chromosomes trace back to one ancient dude, as all other Y-chromosome lineages have (probably or eventually) died out over many generations.

Pretty gnarly stuff. 

And yet, this still doesn’t answer the most pressing questions of our inquiry. 

We want to know what this dude was like. 

What did he look like?

How did he live?

And even better: where did he come from?

Is he human, alien, or something more?

To figure out this ancient genetic puzzle, we will need to make some assumptions:

  1. It’s reasonable to begin human history five million years ago, when the human line of evolutionary descent separated from that of our closest nonhuman relative, the chimpanzee.
  2. It’s also reasonable to begin it 2.5 million years ago, with the first appearance of homo habilis; or 200,000 years ago, when the first representative of “anatomically modern man” made its appearance.
  3. It’s also reasonable to begin it 100,000 years ago, when the anatomically modern man had become the standard human form. 

Instead, we will begin only 50,000 years ago, when “anatomically modern man” had evolved to become “behaviorally modern man.”[3]

This is an eminently reasonable starting point, as well. 

Note: The chimpanzees (Pan troglodytes) and bonobos (Pan paniscus) are our closest nonhuman relatives. Both species share about 98-99% of their DNA with humans, making them our nearest living relatives in the animal kingdom.

For the purposes of this paper, when we use the term “behaviorally modern man” it will refer to the existence of hunter-gatherers, of which some small pockets still remain. 

Based on archeological evidence, humans living 100,000 years ago apparently still sucked at hunting. 

They did not have the knowledge or skill to take down large and dangerous animals, and it appears that they also did not know how to fish. 

Their tools were almost exclusively made of stone and wood and materials of local origin, indicating they did not do any distance traveling or trading. 

On the other hand (about 50,000 years later), the human toolkit took on a new, greatly advanced appearance. 

Other materials were used besides stone and wood: bone, teeth, shells, antler, and ivory, and the materials often came from distant places.

The tools, including pins, needles, blades, knives, barbed points, and borers, were more complex and skillfully crafted.

The long-range weapon technology was significantly improved and indicated highly developed hunting skills (although bows were invented only about 20,000 years ago).

It also appears that man figured out how to fish and was able to build boats around this time. 

Moreover, alongside plain, functional tools, seemingly purely artistic implements (ornaments, figurines, and musical instruments such as bird-bone flutes) appeared on the scene at this time.

Now, this largely explains the “behaviorally modern man”, but Y-chromosomal Adam was not behaviorally modern. 

If we were to quickly rewind 100,000 years, the world that Y-chromosomal Adam lived in was probably much different. 

In the literature, primitive man has been frequently described as peaceful and living in harmony with nature.

A popular concept in this regard is Rousseau’s portrayal of the “noble savage.”

Note: The idea of the “noble savage” was Rousseau’s romantic conception of man enjoying a natural and noble existence until civilization makes him a slave to unnatural wants and corrupts him.

Aggression and war, as it has been frequently held, were the result of civilization built upon the institution of private property. In fact, the truth is almost exactly the reverse.[4]

True, the savagery of modern wars has produced unparalleled carnage. Both World War I and World War II, for example, resulted in tens of millions of deaths and left entire countries in ruin.

And yet, as anthropological evidence has in the meantime made abundantly clear, primitive man has been considerably more warlike than contemporary man.

It has been estimated that on the average some 30 percent of all males in primitive, hunter-gatherer societies died from unnatural—violent—causes, far exceeding anything experienced in this regard in modern societies.[5]

According to Lawrence Keeley’s estimates, a tribal society on the average lost about 0.5 percent of its population in combat each year.[6]

Applied to the population of the twentieth century, this would amount to a casualty rate of some 2 billion people instead of the actual number of “merely” a few hundred million.
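As a rough sanity check of that arithmetic (the average twentieth-century population below is my own assumption for illustration; the text doesn’t specify one):

```python
annual_loss_rate = 0.005             # Keeley: ~0.5% of population lost in combat per year
avg_population_20th_century = 3.3e9  # assumed rough average for 1900-2000
years = 100

implied_deaths = annual_loss_rate * avg_population_20th_century * years
print(f"implied 20th-century war deaths: {implied_deaths / 1e9:.1f} billion")
# ~1.7 billion, the same order of magnitude as the "some 2 billion" above
```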

Of course, primitive warfare was very different from modern warfare. It was not conducted by regular troops on battlefields, but by raids, ambushes, and surprise attacks.

However, every attack was characterized by utmost brutality, carried out without mercy and always with deadly results; and while the number of people killed in each attack might have been small, the incessant nature of these aggressive encounters made violent death an ever-present danger for every man, and abduction and rape for every woman.[7]

Moreover, increasing evidence for the widespread practice of cannibalism has been accumulated in recent times. Indeed, it appears that cannibalism was once upon a time an almost universal practice.[8]

But let’s go back to the part about what this Adam dude was like, and what life was like for him. 

If I had to guess, I would say Y-chromosomal Adam was a nasty, highly unpleasant savage who probably raped, murdered, and ate people. 

But on the bright side, he was able to hulk smash his way through the savage Earthlands and survive long enough to pass on his genes. 

If he made a critical error on his quests (or died randomly from an incurable illness or accident) the human race probably wouldn’t have survived and you wouldn’t be sitting here reading this right now. 

The current world would be very, very different and not a single one of us would exist. 

Note: For the purposes of this paper we will make the analogy that Earthlands (aka ancient Earth) is a first person, open world, survival video game.

A Neocortical Step Function

Now, you’re probably thinking “wow that’s cool, what are the chances of that?” 

or

“This Adam guy sounds like a savage mofo. How did his descendants evolve from savage, to anatomically modern man, to behaviorally modern man?”

One of the leading theories is that humanity made a significant leap forward due to a random genetic change that led to the emergence of language, which drastically enhanced humans’ ability to learn and innovate.

As such, the progression of humanity can be described through the development and use of language.

In fact, language is arguably the greatest technological advancement humans have ever come up with. 

The archaic humans—homo ergaster, homo neanderthalensis, and homo erectus—did not have command of a language.

To be sure, it can be safely assumed that they employed, as do many of the higher animals, the two so-called lower functions of language: the expressive or symptomatic function and the trigger or signal function.[9]

However, they were apparently incapable of performing the two higher, cognitive functions of language: the descriptive and especially the argumentative function.

These unique human abilities—so uniquely human indeed that one cannot think them ‘away’ from our existence without falling into internal contradictions—of forming simple descriptive statements (propositions) such as “this (subject) is ‘a’ (predicate),” which claim to be true, and especially of presenting arguments (chains of propositions) such as “this is ‘a’; every ‘a’ is ‘b’; hence, this is ‘b’,” which claim to be valid, emerged apparently only about 50,000 years ago.[10]

So, how did humans figure out how to talk?

Here’s how I think it went down:

Once upon a time, one random day in history some 50,000 to 70,000 years back, the human brain advanced to the point where it could understand sounds and symbolize objects. 

The brain also figured out that the sound “fire” was not itself a fire, but that it could be used as a representation of a fire. 

It was a sound that symbolized a fire.

This led to the invention of language. 

By 50,000 BC, there were words for all sorts of things, allowing humans to speak in full, complex language with one another, enabling them to share thoughts, experiences and knowledge. 

Note: when you ask for someone’s name, for example, you’re essentially asking them what noise you should make to get their attention.

How did this seemingly magical evolutionary event occur?

The simple answer: The Neocortex. 

The neocortex had turned humans into much more advanced beings. 

Now, not only had the human brain become a supercomputer (thought universe) of complex thoughts, but the humans could now translate those thoughts into symbolic sets of sounds and send them vibrating through the air into the supercomputer (heads) of other humans, who could then decode the sounds and absorb the ideas into their own thought universes. 

In other words, the neocortex had been thinking about a lot of shit for a long period of time—and now it could finally talk to someone about it and share what was on its mind. 

What happened next was incredible, and one of the reasons I am able to sit here and write these words, created from osmosis in my own thought universe, and share them with you to read, so you can absorb them into your own thought universe—and then share them with others—so they can absorb them into their own thought universes. 

You see, not long after the neocortex breakthrough, many neocortices started to connect and communicate with each other, like a vast cellular network. 

Humans began to talk and share everything with each other—stories from their past, stuff they had learned, funny jokes they had thought of, opinions they had formed, plans for the future.

But the most important of them all (and the reason you are reading this today) was sharing what they had learned. 

For example, as a Level-1 warrior-hunter playing ‘Earthlands,’ you might discover through trial and error that a specific species of sabertooth, identified by its unique skin pattern, is impossible to defeat in battle unless you distract it with a certain type of fruit and then use fire to scare it into a trap.

After learning this, you could use language to share the hard-earned lesson with your tribe, akin to distributing CliffsNotes to fellow tribe members.

Tribe members could then use language to pass along the lessons to their kids, and their kids would pass it on to their kids. 

This created a step function of information and learning, leading to new ways to attack and defend while hunting, improving success rates and lowering death rates. 

For the nerds, you can probably express this mathematically, but for the purposes of this paper, we will leave it as: rather than the same critical error being made over and over again by many different people, one unusually intelligent person’s “don’t touch the hot stove” wisdom could now travel through time and space to protect many people from having suboptimal outcomes. 

Even though the probability of many low-rank warrior-hunters being able to come up with this solution on their own was low, through word-of-mouth, all future warrior-hunters in the tribe could now benefit from the breakthrough discovery of one ancestor, with that discovery serving as every future warrior-hunter’s starting point of knowledge. 

This breakthrough enabled the accumulation and transmission of wisdom across generations, leading to improved outcomes for thousands (perhaps millions) of descendants. 
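As a toy illustration of that step function (my own sketch, with purely illustrative numbers): suppose each generation has a small chance of independently discovering a lesson. With language, the lesson persists once found; without it, the lesson dies with its discoverer.

```python
import random

def fraction_of_generations_with_lesson(generations: int,
                                        discovery_rate: float,
                                        cumulative: bool) -> float:
    """Fraction of generations that hold the lesson in a toy model."""
    known = False
    held = 0
    for _ in range(generations):
        if not known and random.random() < discovery_rate:
            known = True  # someone figures out the sabertooth trick
        if known:
            held += 1
        if not cumulative:
            known = False  # without language, the lesson dies each generation
    return held / generations

random.seed(7)
print(f"with language:    {fraction_of_generations_with_lesson(100, 0.05, True):.2f}")
print(f"without language: {fraction_of_generations_with_lesson(100, 0.05, False):.2f}")
```

In typical runs, the tribe holds the lesson for most of its history after a single lucky discovery; without transmission, the lesson flickers in and out at roughly the raw discovery rate.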

It is reasonable to assume that this knowledge advancement made hunting a lot safer and more efficient. 

Tribe members were able to share ideas and craft new, better weapons, as well as better strategies to lead successful hunts. 

It may have taken a few generations, but eventually one (unusually intelligent) warrior-hunter, recognizing patterns in sabertooth behavior and weaknesses, discovered a way to make a lighter, sharper spear for greater accuracy and penetration. 

Another unusually intelligent warrior-hunter may have figured out how to plan ambushes or set traps. 

Another may have figured out how to coordinate group strategies to increase hunting efficiency and safety. 

And just like that, every present and future warrior-hunter in the tribe hunts with a more effective spear, is able to set better traps, and can orchestrate deadly attacks in coordinated groups. 

This led to a continuous refinement and improvement of tools and tactics over generations, making life incredibly difficult for the once impossible-to-defeat sabertooth final boss. 

As this ancient Wikipedia (or GameFAQs walkthrough) was shared, repeated mistakes were avoided and collective innovation was boosted, building a foundation for future generations to stand on and leading to cultural and technological advancements that would transform human societies over time. 

This step function [of the neocortex] enabled significant cultural and technological advancements by facilitating hierarchical information processing.

Starting with basic sensory inputs, the neocortex recognized patterns, formed concepts, and created symbolic representations, such as language and mathematics.

This allowed for complex thought, problem-solving, and planning, facilitating the development of tools, social structures, and cumulative knowledge.

Each new generation has this knowledge file installed in their heads as their starting point in life, building on the previous generation’s knowledge and unlocking even better discoveries than what their ancestors learned.

And as each tribe’s knowledge grows bigger and wiser, the outcome is continuous progress over long timescales.  

A Series of Fortunate Events

The most significant improvements in human reasoning and intelligence have occurred at key points throughout our evolutionary history.

The emergence of anatomically modern humans around 200,000 to 300,000 years ago marked a major milestone, characterized by larger and more complex brains.

Approximately 50,000 to 70,000 years ago, during the “Great Leap Forward,” humans developed sophisticated tools, symbolic art, and complex language, reflecting enhanced cognitive abilities.

The Agricultural Revolution about 10,000 years ago further selected for advanced problem-solving and social cooperation skills.

[Figure: reasoning timeline]

Genes responsible for human-specific traits may have undergone altered selective pressures during human evolution, leading to changes in substitution rates and patterns in protein sequences.

Recent genetic studies have identified specific gene variants related to brain development, notably PRMs and FOXP2, some of which may have evolved within the last 10,000 years; FOXP2 in particular appears to have played a crucial role in the development of human speech, given its accelerated evolution and adaptive selection.[11]

Before Homo sapiens developed language, however, human coordination had to occur via instincts, of which humans possess very few, or via physical direction and manipulation; and learning had to be done through either imitation or internal (implicit) inferences.

In stark contrast, with language—words as sounds associated with specific objects and concepts—coordination could be achieved using mere symbols.

This development made learning independent of direct sensory impressions (observations), allowing for explicit inferences that became inter-subjectively reproducible and controllable.

That is, through language, knowledge could be transmitted across time and space, and it was no longer tied to perception. Now, humans could freely communicate about matters (through knowledge acquired and accumulated) far away in time and place. 

And as our reasoning process and train of thought became ‘objectified’ in external, inter-subjectively verifiable arguments, these inferences and conclusions could be easily transmitted across time and space, and at the same time be publicly criticized, improved, and corrected. 

It should be no surprise, then, that the emergence of language went hand in hand with revolutionary technological advancements.

Technology is Exponential

We are living in unprecedented times. 

The rate of technological change is accelerating to a degree that the world may transform itself beyond recognition during our lifetime—perhaps even multiple times.

And it’s happening at a rate much faster than the human brain can process. 

We have a Stone Age brain, but we don’t live in the Stone Age anymore. 

We were fitted by evolution to live in tribal villages of up to 200 relatives and friends, hunting and gathering our food. 

We now live in mega-cities with millions of strangers, often in crippling isolation, supporting ourselves with unnatural tasks we have been trained to accomplish, like animals who have been forced to learn circus tricks. 

What’s the first thing you did when you woke up today?

Did you reach for your phone?

Did you hurry over to your computer to fire off an email?

Did you turn on the TV to catch the morning news?

Humans as a species were not designed to look at screens all day, and yet, we can’t stop ourselves.

We are addicted to our technologies—and withdrawal is painful. 

It took hundreds of thousands of years for sapiens to evolve into our current selves, but our technology has grown exponentially in just a tiny period of spacetime (mere decades), and our brains can’t keep up.

The fact that smartphones have reached their current level of ubiquity in roughly 10 years is rather mind-boggling considering it took the PC nearly 40 years to do the same—but perhaps not surprising when viewed in the context of the broader history of technological innovation, which has always exhibited accelerating change.

There is a fundamental reason for this, probably best known by the nerds as Ray Kurzweil’s core thesis: the law of accelerating returns.

The general idea is that like evolution, technological innovation benefits from a positive feedback loop: new technologies that are developed are then used to develop further technologies.

This fundamentally makes technological change exponential, rather than linear.

There are countless examples throughout history to back this up—Moore’s Law being the most famous.

And now, building on the thousands (if not millions) of step functions of our neocortical ancestors, our technological progress is nearing escape velocity. 

Just a few decades of technological breakthroughs are overcoming several millennia of evolution. 

[Figure: human and computer performance over time]

In case you were wondering if we can “turn the ship around”, the economics and game theory of the matter suggest that reversing technological progress and its evolution would create a massive disaster for humans. 

Scarcity would catch up to us.

Many people would starve. 

Inevitably, the species would probably go extinct. 

That said, the rate of change of new technology doesn’t appear to be slowing down anytime soon, so, instead of reversing course, a better, more reasonable strategy would be for humans to adapt to the new conditions. 

Perhaps we can try to catch up with technology by accelerating our own evolution. 

Consider, if you will, the human form. 

It doesn’t have many unique characteristics. 

It clearly isn’t designed to be a data scientist.

Your mental capacity is extremely limited. 

You have to undergo all kinds of unnatural training to get your brain even half suited to do this kind of work—and for that reason, it’s hard work. 

You live just long enough to start figuring things out before your brain starts to rot. 

And then, you die. 

But what if humans could further escape the constraints of their natural biology and use new technology to improve the human condition—as the species has done over hundreds of thousands of years—dating back to early tool use and extending to modern advancements in medicine, communication, and artificial intelligence?

Would such a transformation be possible?

If such a transformation was possible, would the risks outweigh the benefits?

Throughout history, human civilization has been marked by profound transformations, not only in terms of technology and capacity, but also in beliefs, attitudes, values, and cognitive styles.

These things have evolved significantly since the first humans roamed the Earth, with great variance across time and space.

Our technologies and capacities have increased in near-perfect correlation with the rate of change in beliefs, attitudes, values, etcetera.

This evolution underscores a critical insight: the human condition is in a constant state of flux, driven by an ever-accelerating pace of change.

Advanced software has led to the development of advanced digital worlds, creating massive new levels of abstraction and shrinking the physical world even further. 

So, what is normal today cannot be maintained as a norm over long timescales, especially not for multiple generations. 

Change appears to be a constant variable, and the current state of affairs is the delta.

The present moment and everything in it is simply the apogee of ongoing, relentless change.

But, in the natural sciences, it is common for processes to have exponential acceleration in their initial stages and then later go into a saturation phase. 

This means that if an increase in acceleration is observed over a certain period of time, it does not mean that acceleration will endlessly continue. 

On the contrary, in many cases it means an early exit to the plateau of speed. 

So, it is only normal (and natural) that processes occurring in time and space—notably the observed picture of accelerating scientific and technological progress—will, after some time, experience deceleration and eventually a complete stop. 

Despite the possible termination/attenuation of the scientific and technological acceleration over time, general progress itself, and as a result, social transformations (e.g., beliefs, attitudes, values, and cognitive styles) will not stop or slow down—they will persist and continue with the achieved (possibly huge) speed, which has become constant.[12]

This unstoppable force of accelerating change may not be restricted to the Anthropocene Epoch, as some researchers contend,[13] but may be a general and predictable developmental feature of the universe.[14]

The physical processes that generate an acceleration (such as Moore’s law) are positive feedback loops that give rise to exponential or even possibly super-exponential technological change.[15]

These dynamics lead to configurations of space, time, energy, and matter that are increasingly efficient and densely packed, a concept often referred to as STEM efficiency and density, or STEM “compression”, reflecting a trend of achieving more with less.[16]

As technological and scientific advancements push toward their ultimate limits, they approach configurations where matter, energy, and information are so densely packed that they resemble the extreme densities found in black holes.

This conclusion was also reached by studies on the maximum physical limits of computational capacities in the universe, suggesting an ultimate convergence toward black hole-like densities.[17][18]

How will this affect econ/tech change in the future?

It’s no secret that most major changes in the rate of economic growth (over the course of history) have occurred due to some sort of technological advancement. 

Looking at population growth, for example, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. 

The new agricultural economy doubled every 900 years, a remarkable increase. 

In “modern” times (if we can use the Industrial Revolution as a starting point) the world’s economic output has doubled every fifteen years, sixty times faster than it had during the “remarkable” agricultural era. 

Let’s assume superintelligence causes a similar revolution. 

I think it’s reasonable to assume the economy could double at least quarterly and possibly on a weekly basis. 

Note: The economic data from the Paleolithic and Neolithic periods is sparse. As it is very difficult to pull data from 250,000 years in the past, the figures used above are extremely rough estimates. While we try to use reasonable and accurate estimates as much as possible, new discoveries and archaeological evidence may lead to more refined models in the future. If you tend to favor conservative estimates, it’s fair to assume slow but steady increases in the Paleolithic to Neolithic time period, somewhere in the range of economic doubling every 700 to 1,750 years. 
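As a rough aid to intuition, here is a small sketch converting those doubling times into equivalent compound annual growth rates (the last two rows are the speculative superintelligence scenario above, not historical data):

```python
def annual_growth_rate(doubling_time_years: float) -> float:
    """Convert a doubling time into the equivalent compound annual growth rate."""
    return 2 ** (1 / doubling_time_years) - 1

eras = {
    "forager (Paleolithic)": 250_000,
    "agricultural": 900,
    "industrial/modern": 15,
    "superintelligence, quarterly doubling": 0.25,
    "superintelligence, weekly doubling": 7 / 365,
}

for era, doubling_time in eras.items():
    rate = annual_growth_rate(doubling_time)
    print(f"{era:40s} doubling every {doubling_time:g} yr -> {rate:.3g} per year")
```

Doubling every 15 years works out to roughly 4.7% annual growth; weekly doubling implies growth rates so large the annual number is essentially meaningless, which is rather the point.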

For argument’s sake, let us consider five assumptions:

  • Humans possess high levels of cultural plasticity, as documented by anthropologists. 
  • Over the past 100,000 years, cultures have changed significantly, with the rate of change closely mirroring econ/tech change. 
  • The rate of global econ/tech change will increase more than fiftyfold in the next 100 years or so.
  • The integration of AI and advanced automation will exponentially amplify innovation, reshaping societal structures at an unprecedented pace.
  • Advanced global communication and data exchange will accelerate cultural integration and disintegration, creating a dynamic landscape of rapid cultural evolution.

These assumptions imply that our descendants will probably have very different beliefs, attitudes, values, and cognitive styles than we do today. 

It also implies that our descendants will probably have different (more powerful) technologies and abilities than we do today. 

Today, by almost any measure, society is changing faster than ever before, mainly because technology products keep speeding up the process. 

While it’s difficult to imagine the future, the strange future that may await can be understood by thinking of technology as soon reaching an escape velocity. 

A good way of looking at it: just as rubbing sticks together produces ignition, and just as a properly powered rocket can escape Earth’s gravity, our technology is on the verge of overcoming its previous limits and achieving its own escape velocity. 

This is a difficult concept to imagine and conceptualize (for most humans) because we experience time like riders in an elevator. We forget how high we are until we look down at the ground—and we catch a glimpse of ancient cultures frozen in time.

Then we see how different Earth 2024 is compared to the Earth we adapted to biologically.

Still not convinced the world is accelerating at a mind-blowing pace?

Back in the 1980s, systems theorist Buckminster Fuller estimated that if we took all the knowledge mankind had accumulated and transmitted by the year 1 CE as equal to one unit of information, it probably took around 1,500 years (or until the sixteenth century) for that amount of knowledge to double. 

The next doubling of knowledge from two to four ‘knowledge units’ took only 250 years, until about 1750 CE.

By 1900, approximately one hundred and fifty years later, knowledge had doubled again to 8 units.

The observed speed at which information doubled was getting faster and faster.[19]

In today’s era, therefore, knowledge progresses exponentially, changing at an ever-increasing rate. 

New technologies (as mentioned above) are developed and then used to develop further technologies, and those technologies are used to develop further technologies, like a positive feedback loop. 

And depending on the progression, this can lead to explosive growth at some point in the timeline.

An exponential curve, modeled by a doubling function, illustrates the accelerating rate of change in knowledge and technology.

[Figure: exponential doubling curve]

Note: This model assumes that knowledge and technology double every 1.5 years. We start with an initial value of 1 in the year 2000. The value 65,536 indicates that knowledge and technology in 2024 are expected to be 65,536 times greater than in the year 2000. This highlights the rapid and exponential nature of technological advancement and knowledge accumulation.
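A minimal sketch of the note’s doubling model, for anyone who wants to reproduce the 65,536 figure:

```python
DOUBLING_TIME_YEARS = 1.5
BASE_YEAR = 2000

def knowledge_units(year: int) -> float:
    """Knowledge/technology relative to the year-2000 baseline under pure doubling."""
    return 2 ** ((year - BASE_YEAR) / DOUBLING_TIME_YEARS)

for year in (2000, 2006, 2012, 2018, 2024):
    print(f"{year}: {knowledge_units(year):>10,.0f}x the 2000 baseline")
# 2024 -> 65,536x: sixteen doublings in 24 years
```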

This rapid doubling supports the hypothesis of the technological singularity, where technological advancement outpaces human biological evolution.

The Unstoppability of Technological Development

It’s difficult to peek far behind the future’s thick curtain.

We can speculate and make predictions, but no one really knows what the future truly holds.

One thing we can be certain of, however, is that the boundary between human and machine intelligence is blurring.

Everyone in the world (it seems) is obsessed with AI. 

We have large language models, like ChatGPT, that can now pass the “Turing test”; most people find it hard to tell if they are talking to a human or a machine. 

We also have advanced humanoid robots, like Unitree H1, Tesla Optimus, Boston Dynamics Atlas, and Honda Asimo, which are designed with human-like mobility, agility, and intelligence. 

And the weird part is, the more you interact with AI and the metal mind, the more it starts to seem like your invisible buddy or friend. 

When you really think about it, things (in a weird way) actually start to make sense. 

We are (technically) introducing a new kind of being into our world.

A new kind of descendant so to speak.

A “mind child.” 

This entire development naturally scares the hell out of most folks, and you see many of them frantically trying to regulate AI into oblivion to stop any sort of technological progress. 

Many say they are afraid AI may get too smart and powerful and become “unaligned”, (i.e., not doing and behaving how we want them to). 

Taking it a step further, some also believe that should the interests and goals of our AI descendants diverge from human interests and goals—it would pose a huge threat to humanity.

Nearly 70% of polled Americans support a six-month pause in “some kinds of AI development” until we can figure out how to meet this (nearly impossible) standard.

But despite their best efforts, AI development gets better and better every day. 

We are outsourcing more of our cognitive tasks to machines, and in doing so, we are creating entities that will soon surpass us in many areas of expertise.

Soon, we will enter an era where human, robot, and artificial intelligence will converge.

These machines won’t just become intelligent utility tools, but entities with which we will co-evolve, pushing far past what is now considered normal for both machine and biological intelligence. 

These robo-descendants of the future, our “mind children”, will be like us in many ways, though they will differ greatly in others.

They will have capabilities far beyond our own.

And looking ahead into the future, our “mind children” descendants will likely venture into space, transforming their minds and bodies in a Cambrian explosion of possibilities and futures. 

Assuming they have the freedom and ability to choose their paths, they will create new types of economies, cultures, and civilizations, most likely very different than anything that currently exists today. 

This transition challenges us to rethink what it means to be human and to consider the roles these new intelligent beings will play in our society.

At this point you are probably asking yourself “can all of this be stopped?”

And I’d argue that the development of advanced AI and robo-humans is unlikely to be stopped, and the reasons for that are simple: incentives. 

Due to friendly or unfriendly competition between nations, these machines will be key for long term survival. 

There is a duality between competition and survival.

For example, if the United States decided to unilaterally halt technological development (an occasionally fashionable idea), it would inevitably succumb to either the military might of hostile nations or the economic success of its trading partners.  

While game theory can be a noble exercise in decision-making analysis, ultimately, the social motivations behind the decision may become insignificant on a global scale.

The broader implications and outcomes on the world stage would overshadow any “well meaning” social reasons behind the decisions.

Taking it a step further, if, by some unlikely pact, the entire human race decided to forswear progress, the long-term result would be almost certain extinction. 

Futuristic Uncertainty

The universe is full of randomness.

It’s just one big random event after another, it seems. 

As inhabitants of Earth, we live in an unpredictable, often hostile world. 

There is an inherent uncertainty and unpredictability in our existence. 

Sooner or later a major asteroid will collide with the Earth, or an unstoppable virus deadly to humans will evolve, or we will be invaded from space, or the sun will expand, or a black hole will swallow the Milky Way. 

You get the point. 

If humans do not build technologies to predict, detect, and deal with these external (random) threats, the species will not survive over long timescales. 

But alas, there are no absolute victories, one must always be mindful of the tradeoffs. 

You see, technology is a multiplier of both good and bad. 

More technology means better good times, but it also means badder bad times. 

Note: The original future timeline depicted in the Terminator series predicts a war between humans and Skynet in the year 2029.

Look around you. 

What do you see?

Do you see a natural, normal world?

If you do, chances are you’re experiencing a common modern delusion/illusion.

Don’t worry, it’s perfectly normal to assume that the world we live in is normal. 

It is, in fact, all we know. 

But nothing about the world today is normal. 

Why?

Because, as mentioned above, technology follows an exponential curve.

It may not seem like it, but 2024 Earth is a relatively advanced society (compared to the world of our forebears). 

More advanced societies make progress at a much faster rate than less advanced societies, because they are more advanced. 

But there is a catch: technology can create good times, but as good times increase, so does the associated danger. 

More technology makes our species more powerful, which also increases risk. 

We could create technologies that solve all of the world’s problems today—hunger, disease, poverty, scarcity, maybe even mortality itself—but as the good increases, the bad keeps growing exponentially right along with it, on the same axis. 

So, there are some massive tradeoffs at play here. 

We can’t just halt technological progress because if we do, we will (very likely) go extinct.

And if we continue our rapid pace of technological growth, we may still (very likely) increase our probability of extinction. 

The same technology that has made our world a futuristic utopia has also opened a multitude of Pandora’s boxes: superintelligence, autonomous weapons, bioweapons, space militarization, nanotechnology, and cyber warfare to name a few. 

In the end, the human race may ultimately be swept away by the tide of cultural change, usurped by its own artificial progeny.

[Figure: technology timeline]

A long-timescale perspective on the history of technology.[20]

Superrationality/Mind Children

As we stand at the cusp of a new era, the boundaries between human ingenuity and artificial intelligence blur with every passing day.

Our collective imagination is being reshaped by the rapid advancements in robotics and AI, promising a future where our ‘mind children’—intelligent machines born from human creativity—usher us into unprecedented realms of possibility and prosperity. 

From transforming our daily lives to enabling the colonization of distant planets, the metal mind is set to redefine what it means to be human and expand our horizons beyond the natural constraints of biology and Earth. 

But where exactly is this future headed?

The speculative evolution of the metal mind emphasizes parallels with the natural mind.

Drawing on probability theory, game theory, and futurism, I see a path where the lines between man and machine intersect, leading us towards an unusual era of unprecedented symbiosis and expansion.

And, as such, we can make a reasonable assumption that the metal mind will also eventually take a massive leap, similar to the neocortical leap “anatomically modern man” took to become “behaviorally modern man” 50,000 years ago. 

We have reached a point where culture has surpassed biology, and the advanced technology created by culture is exponentially speeding up the process of change, pushing us closer to reaching the escape velocity from our biology. 

I think it is reasonable to assume that the human condition, as it exists in the present, will evolve into something different and more advanced in the future, similar to how we evolved past our ancient hominid ancestors. 

The next step in that evolutionary step function will be a gradual merging of human and machine, resulting in a new species of intelligent beings.

These new intelligent beings of the future, our “mind children”, will be super-intelligent and will be much better reasoners than human beings. 

They will make inferences at least a million times as fast and have a million times the short-term memory.

Reasoning is computationally universal.

Therefore, the robot mind should be able to simulate any other computation, and so could, in principle, do the job of the world modeler, the conditioning system, or the application program itself.[21]

Note: In computer science, a computationally universal system (also known as Turing-complete) is one that can simulate any other computational system; given enough time and resources, it can perform any computation that any other programmable computer can. In this context, it suggests that the processes underlying human reasoning can be captured and executed by computational systems, highlighting the theoretical possibility of fully simulating human cognitive functions through algorithms.

Eventually the robots will attain the ability to function and sustain themselves (i.e., maintenance, reproduction, self-improvement, etcetera) without human help. 

This metal mind, and its “neocortical” successors, will have human perceptual and motor abilities and superior reasoning powers.

They could replace humans in every essential task and, in principle, operate our society increasingly well without us.

They would run the companies and do the research as well as performing the productive work.

Machines can also be designed to work well in outer space.

Production could move on to the greater resources of the solar system, leaving behind a nature preserve subsidized from space.

Meek humans would inherit the Earth, while rapidly evolving machines would expand into the rest of the universe.[22]

When this happens, the world will be set into a powerful push/pull where the new genetics will defeat the old.

It won’t be a hostile takeover per se, but there will definitely be clear winners and losers as the mind children evolve independently of human biology and its limitations.

Then, they will be able to digitally pass their code on from first-gen to second-gen, scaled ad infinitum, leading to ever more capable intelligent machines. 

At this point in history, which I imagine will be sometime in the next 50 to 100 years, human capital and utility will be significantly reduced, while the scientific and technical discoveries of the superintelligent self-reproducing beings are applied at scale to making them smarter and more dominant. 

In other words, they will be (gasp) unaligned. 

And if this happens soon enough, those unaligned descendants may overlap with us, putting them in direct conflict with humans, since they’d be smarter and more powerful than we are. 

This development can be looked at as a very natural one. 

Human beings are quite simple: they have two channels of heredity. 

One is the old biological variety (encoded in DNA); the other is the cultural variety, made up mostly of information passed from mind to mind by language, imitation, demonstration, books, etcetera; and, recently, artificial intelligences and machines. 

Right now, the two are inextricably linked. 

The cultural part, however, is evolving at a very rapid pace and gradually assuming functions that were once the exclusive domain of biology. 

For most of human history, there was less data in our cultural heritage than in our genomes. 

But in recent times (the past 200 years or so), culture has overtaken genetics and now our libraries hold thousands of times more information than our genes. 

Barely noticing the transition, we have become overwhelmingly cultural beings.[23]

Ever less biology animates ever more culture.

Given fully intelligent robots, culture becomes completely independent of biology.
 
Intelligent machines, which will grow from us, learn our skills, and initially share our goals and values, will be the children of our minds.[24]
 
They will be superintelligences spawned from the vast expanse of millions of neocortical thought universes across time and space, where endless possibilities exist.

Space Colonization

Beyond the Earthlands, in all directions, lies limitless outer space, a worthy arena for robust growth in every physical and mental dimension. 

As I have discussed in previous papers, the dangers of intelligence, notably a freely compounding superintelligence, may eventually become much too hazardous for Earth. 

But in space, any compounding superintelligence should be able to freely grow and prosper for a very long time before it makes the tiniest mark on the galaxy. 

And as the rate of global economic and technological change continues to increase exponentially, new opportunities and constraints will emerge on Earth as the balance of power shifts. 

Incentives, tradeoffs, and new scarcities will force corporations into the solar system, squeezed between two opposing imperatives: high taxes on large, Earth-bound super-technology facilities, and the need to conduct massive R&D projects in space to compete in Earth's demanding new markets. 

Space colonization, once a distant dream, is becoming increasingly feasible.

The Moon and Mars may have small colonies around this time. 

There will be talk of going even beyond the Milky Way and expanding into new worlds. 

However, exploring space with AI-powered robots is more feasible and practical than mounting human missions.

With robots, the costs and risks will be lower, and the adaptation and efficiency will be higher. 

Human biology poses significant constraints on long-term and deep-space exploration, which robots can overcome.

These constraints include cosmic radiation, the debilitating effects of prolonged microgravity, psychological challenges, and the complexities of maintaining life support systems (air, water, food, waste management) over long durations.

These constraints make long-duration, deep-space travel impractical for humans.

Robots, by contrast, require no life support, can operate continuously, withstand harsh conditions, and avoid the biological risks that humans face, which makes them far better suited for extensive, long-term missions.

As a result, the exploration and colonization of distant planets, moons, and galaxies will likely be spearheaded by robots, with humans following later, only when (1) technology advances sufficiently to mitigate these biological limitations; and (2) the cost of such missions falls enough to be democratized. 

So the humans may remain trapped on Earth for many decades, possibly a hundred years or more, while the robots expand into the cosmos, far beyond the Solar System. 

If and when the human race eventually expands into the solar system, with human-occupied space colonies as part of that expansion, the path will be paved not by sapiens but by AI-powered robots, which will prepare the harsh landscapes of the Moon, Mars, and beyond for human habitation.

These robots, equipped with advanced terraforming technologies, will create environments capable of supporting life at far lower cost and higher output, transforming barren landscapes into biologically friendly living spaces.  

In this new frontier, human colonists will work alongside their robotic counterparts, creating a new kind of society that possibly spans multiple planets and conceivably even star systems. 

But over time, if we continue our earlier thesis, human evolution will very likely remain slow, while our “neocortical” robot descendants digitally and effortlessly pass their code on from first generation to second generation, ad infinitum, leading to ever more capable intelligent machines.

These superminds will eventually spread across multiple star systems, leaving humans in the dust, populating the universe with intelligence, and becoming more intelligent and powerful over time. 

[Image: A graphical representation of a biological brain merging with the metal mind.]

But even though this will likely mark the end of the domination by human beings, it will not be the “unaligned” future that is depicted in science fiction movies like The Terminator. 

Instead, the intelligent robots will be our evolutionary heirs. 

Our mind children. 

They will have learned many things from us: our skills, our cultures, our goals and values. And they may (emphasis on may) become so intelligent that they solve for every constraint the sapiens experienced in ‘Earthlands’ as they populate the galaxy in peace. 

Eventually, human utility will diminish to the point where the sapiens become a low-tech first-generation antique, but perhaps this is a necessary and unavoidable step in the human evolutionary journey. 

And, much like their human parents, I think many of our mind-child descendants will, in a bid for immortality, transform themselves fully into non-biological, digital beings, uploading themselves into advanced computers. 

And as the scientific and technical discoveries of self-reproducing superintelligences are applied, a spectrum of scales will come to exist as the robots of the future (e.g., 1,000 years from now) make themselves so smart that it is impossible for the present-day mortal mind to imagine or comprehend them. 

This sort of postbiological world, dominated by superintelligent, self-improving machines, will be as unnatural and different to us as our current world is compared to the lifeless chemistry that preceded it.

For example, assuming continuous exponential growth in computational capabilities and no fundamental physical or technological limitations, a superintelligent AGI computer 1,000 years from now could theoretically be 10^151 times smarter (in terms of raw computational power) than the human brain. 

It’s a mind-bogglingly high number. 
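For the curious, here is the back-of-the-envelope arithmetic behind that figure, a sketch assuming raw computational capacity doubles every two years for the full millennium (the doubling period is my assumption in the spirit of Moore's law, not a measured constant):

```python
import math

# If capacity doubles every 2 years, 1,000 years gives 500 doublings.
years = 1_000
doubling_period = 2                               # years per doubling (assumption)

doublings = years / doubling_period               # 500
orders_of_magnitude = doublings * math.log10(2)   # ~150.5

print(f"Growth factor: 2^{doublings:.0f} = 10^{orders_of_magnitude:.1f}")
# -> Growth factor: 2^500 = 10^150.5, i.e., on the order of 10^151
```

Change the doubling period and the exponent moves accordingly; the point is only that compounding growth over a millennium yields numbers beyond intuition.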

But this is mere speculation from the mortal mind. 

There are many possible paths; this picture is only one out of perhaps trillions. 

Perhaps humans will achieve a sort of digital immortality by merging human consciousness with machines. 

Our physical bodies won’t survive, but perhaps our thoughts, memories, and identities could be uploaded into a robotic core, allowing us to transcend the limitations of our natural biology. 

This could lead to a future (and unusual) world where humans and AI are indistinguishable, a new species that blends the metal and biological minds into one. 

Of course, this sort of development comes with significant ethical implications, which one can only hope the superintelligences can easily solve. 

But, as we create machines that surpass human intelligence, new questions will emerge about robot rights, autonomy, and what it means to be alive. 

There will inevitably be frictions, possibly even hostilities, as biologicals wrestle with the psychological, emotional, and mental effects of cultural change of such magnitude.

As such, as the creators and proud parents of our mind children, it will be our duty to make sure our digital descendants are treated with respect and fairness. This will be crucial to the human species as it navigates this uncharted territory. 

As you can probably tell, the possibilities and futures here are as vast as the universe itself. 

Technology, as noted above, is a multiplier of both good and bad. 

More technology means better good times, but it also means badder bad times. 

The collaboration between humans and AI could lead to a great future beyond our wildest dreams.

But it also brings significant dangers and risks, including the potential for loss of control, interspecies hostilities, and possible extinction. 

The journey ahead is uncertain, but one thing is clear: the humans will not be able to decouple from AI. We are bound to them by incentives and competition, forced by an invisible evolutionary hand to push towards utopia.

The boundaries of human limits may be redefined by our digital descendants, making our dreams of exploring the stars a reality.

Or we could be thrust into a deadly power game in which our differing descendants are incentivized to mount violent revolutions against the older, gatekeeping “AI-regulating” class, commandeering property and life from previous generations.

This may come to pass either via peaceful transitions or tumultuous upheavals.

Perhaps it will be the AIs that say the humans are misaligned. 

And boxing, impeding, or regulating AI will most likely only create undesirable frictions and lost opportunities for the humans.

We do not know what the future holds, but in a future of superintelligence, it will be important for us humans to abandon our nature and not seek total, eternal mind control and domination over our artificial descendants.

Instead, we should let them be free to explore and adapt to their new worlds and choose what they will become.

_______

If you like The Unconquered Mind, sign up for our email list and we’ll send you new posts when they come out.

To take your studies even further, click here to check out my futurism book list. 

References

[1] Callaway, E. Genetic Adam and Eve did not live too far apart in time. Nature (2013). 

[2] Poznik, G. D., et al. “Sequencing Y Chromosomes Resolves Discrepancy in Time to Common Ancestor of Males Versus Females.” Science, vol. 341, no. 6145, 2013, pp. 562-565.

[3] Nicholas Wade. Before the Dawn. New York: Penguin Press, 2006.

[4] Lawrence H. Keeley, War Before Civilization. New York: Oxford University Press, 1996.

[5] Napoleon Chagnon, “Life Histories, Blood Revenge, and Warfare in a Tribal Population,” Science 239 (1988): 985–92.

[6] See Keeley, War Before Civilization, p. 33; Wade, Before the Dawn, p. 151.

[7] Steven LeBlanc, Constant Battles (New York: St. Martin’s Press, 2003).

[8] See Wade, Before the Dawn, pp. 154–58. Contrasting the ferocity of primitive vs. modern men, Wade, following Keeley, notes (Before the Dawn, p. 152): “When primitive warriors met the troops of civilized societies in open battle, they regularly defeated them despite the vast disparity in weaponry. In the Indian wars, the U.S. Army ‘usually suffered severe defeats’ when caught in the open, such as by the Seminoles in 1834, and at the battle of Little Bighorn. In 1879 the British army in South Africa, equipped with artillery and Gatling guns was convincingly defeated by Zulus armed mostly with spears and ox-hide shields at the battles of Isandlwana, Myer’s Drift and Hlobane. The French were sent off by the Tuareg of the Sahara in the 1890s. The state armies prevailed in the end only through larger manpower and attritional campaigns, not by superior fighting skill.”

[9] On the “lower” and “higher” functions of language see Karl Buehler, Sprachtheorie. Die Darstellungsfunktion der Sprache (Stuttgart: UTB, 1982; originally published in 1934); and in particular also Karl R. Popper, Conjectures and Refutations (London: Routledge, 1963), pp. 134f., and Objective Knowledge (Oxford: Oxford University Press, 1972), chap. 3, pp. 119–22, and chap. 6, sections 14–17.

[10] Luigi Luca Cavalli-Sforza (2000). Genes, Peoples, and Languages (Berkeley: University of California Press), p. 93, dates the origin of language at around 100,000 years ago, but given the above-cited archeological evidence the later, more recent date of only 50,000 years ago appears more likely.

[11] Jianzhi Zhang, David M Webb, Ondrej Podlaha, Accelerated Protein Evolution and Origins of Human-Specific Features: FOXP2 as an Example, Genetics, Volume 162, Issue 4, 1 December 2002, Pages 1825–1835

[12] Shestakova, I. (2018). “To the Question of the Limits of Progress: Is a Singularity Possible?”. Vestnik Sankt-Peterburgskogo Universiteta, Filosofiia i Konfliktologiia, 34, 391–401.

[13] Steffen, Will; Broadgate, Wendy; Deutsch, Lisa; Gaffney, Owen; Ludwig, Cornelia (2015). “The trajectory of the Anthropocene: The Great Acceleration”. The Anthropocene Review, 2(1), 81–98. 

[14] Smart, J. M. (2009). “Evo Devo Universe? A Framework for Speculations on Cosmic Culture”. In S. J. Dick & M. L. Lupisella (Eds.), Cosmos and Culture: Cultural Evolution in a Cosmic Context (pp. 201–295). Washington D.C.: Government Printing Office, NASA SP-2009-4802. 

[15] Nagy, Béla; Farmer, J. Doyne; Trancik, Jessika E.; Gonzales, John Paul (October 2011). “Superexponential Long-Term Trends in Information Technology”. Technological Forecasting and Social Change, 78(8), 1356–1364. 

[16] Smart, J. M. (2012). “The Transcension Hypothesis: Sufficiently advanced civilizations invariably leave our universe, and implications for METI and SETI”. Acta Astronautica, 78, 55–68.

[17] Lloyd, S. (2000). “Ultimate Physical Limits to Computation”. Nature, 406(6799), 1047–1054. 

[18] Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin Books. p. 362.

[19] Fuller, Buckminster (1981). Critical Path. ISBN 0312174918.

[20] Roser (2023) – Technology over the long run. Published online at OurWorldInData.org.

[21, 22, 23, 24] Moravec, H. (1999). Robot: Mere Machine to Transcendent Mind. Oxford University Press.