
The Problem with Intelligence

Last spring, YouGov America ran a survey of 20,810 American adults.

46% said they are “very concerned” or “somewhat concerned” about the possibility that AI will cause the extinction of the human race on Earth.

There do not seem to be meaningful differences by region, gender, or political party.

Black individuals appear to be somewhat more concerned than people who identified as White, Hispanic, or Other.

Younger people seem more concerned than older people.

Furthermore, 69% of Americans appear to support a six-month pause in “some kinds of AI development”. (More)

Not to be outdone, I ran a poll of my own on my social media accounts:

It’s interesting to see how opinion is distributed on controversial and thought-provoking topics.

My poll (posted on Twitter and Instagram) suggests that a majority of my followers who responded feel emotionally repulsed by the idea of an “AI conquest” and that such a scenario would be the “most horrific” of all.

On the opposite end of the spectrum, few researchers think that a threatening (or oblivious) superintelligence is close.

Indeed, the AI researchers themselves may even be overstating the long-term risks.

Ezra Karger of the Federal Reserve Bank of Chicago and Philip Tetlock of the University of Pennsylvania pitted AI experts against “superforecasters”, people who have strong track records in prediction and have been trained to avoid cognitive biases.

In a study published last summer, they found that the median AI expert gave a 3.9% chance of an existential catastrophe (where fewer than 5,000 humans survive) owing to AI by 2100.

The median superforecaster, by contrast, gave a chance of 0.38%.

Not only was the opinion gap between “superforecasters” and AI experts quite massive, it didn’t appear to shrink, even after debate and recalculation.

Why the difference?

For one, AI experts may choose their field precisely because they believe it is important, a selection bias of sorts. (More)

It’s quite interesting when self-proclaimed Bayesians (who are quite intelligent) share evidence and still fail to converge.

That said, to justify answers that deviate so significantly from the predictions of the expert “superforecasters”, one needs some sort of basis in theory.

Alas, most of the theoretical arguments I’ve heard regarding AI-driven destruction seem quite inadequate. I thus suspect there may be more to the puzzle here.

That said, let’s dive a bit deeper.

Since 2022 or so (give or take a year) we have seen a massive surge in AI development and technical progress.

A few artificial intelligences (AIs) now seem able to pass the famous Turing Test, making them nearly indistinguishable from a human in conversation.

Despite this incredible feat, AIs (for the most part) are still quite weak and have a long way to go before they can impact the economy in a major way. Still, AI’s progress over the past few years offers hope that it can eventually give humans great power and wealth.

After all, that’s what everyone wants from this in the end, right?

But it seems that the more progress AI makes, the more fear it inspires.

And despite the fear seeming irrational and weird, the fact is, humans are (technically) introducing a new kind of being into our world.

A new kind of descendant so to speak.

I guess you could call these AI our “mind children”.

And the inherent fears displayed by humans are twofold: (1) that our AI descendants might eventually transcend and surpass us; and/or (2) that, should the interests and goals of our AI descendants diverge from human interests and goals, they would pose a severe threat to humanity.


Case in point, a lot of these folks have signed petitions to pause or end AI research until humans can guarantee full control.

Over thirty thousand people, for example, have signed a Future of Life Institute petition that demands a six-month moratorium on the training of AI systems more powerful than GPT-4.

Others have gone considerably further.

AI alignment researcher Eliezer Yudkowsky recently penned an essay for Time magazine calling for an indefinite, total, global shutdown of AI research because “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”

Yudkowsky and the folks who signed the moratorium petition worry that AIs could potentially get “out of control.”

They say we must completely dominate AIs so that they either have no chance of escaping their subordinate condition,

or

become so conditioned and committed to their subservient role that they would never want to escape it.

From a human behavior perspective, I think this makes sense.

We are introducing a new kind of “being” into the world.

And this is a scary proposition for many humans.

Typically, when humans are in situations like this (e.g., afraid and faced with a perceived threat) they band together in groups and coordinate a defense against the shared fear by engaging in what many theorists like to call “othering”.

Note: othering is a sociology/psychology concept that refers to the process by which individuals or groups are classified and/or labeled as not fitting within a “normal” group of a given society or community — essentially, as “others.” This usually involves making the distinction of “us” vs “them” based on perceived differences, which can be cultural, racial, ethnic, gender, socioeconomic, etcetera. 

This “othering” so to speak (which humans have a long history of) is essentially marking out groups that “seem dangerous” or “different” from us and treating said group with suspicion, exclusion, hostility, or domination.

The strange thing, at least for me, is that most people today say they disapprove of discrimination, not only against other people, but also against the other beings and animals with whom they share the planet.

Artificial intelligences, however, are only gathering anxious suspicion.

Why are we so willing to “other” artificial intelligences?

Is it prejudice?

If I had to guess I would say probably.

A lot of folks presumably get anxiety from the very thought of a metal mind.

Humans have long speculated about future conflict with robots.

After all, The Terminator is sent back from a machine-ruled future set just five years from now.

Terrifying movie storylines aside, part of this “othering” may simply be due to a fear of change.

And this fear is exacerbated by our ignorance of the future, and how AI might shape it.

There is usually a vacuum left by our inherent lack of knowledge, and fears often expand to fill the void.

Now you may be wondering how the hell I came up with all this bullshit.

Well, I must say that, as a lifelong economics student and theorist, I build most of my analyses on economics, so naturally I treat AIs as comparable to both machines and workers (depending on the context, of course).

You may be thinking to yourself “wow bro that is a huge mistake”.

And that’s because your brain is telling you that AIs are new and different from anything we are used to. But economics is rather robust, and it’s more than sufficient for this case.

Most economic theory, for example, is built on abstractions from game theory, which models agents choosing the best possible move given what everyone else is doing.
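
To make that concrete, here is a minimal, purely illustrative sketch in Python (my own toy payoffs, not anything from the surveys or studies above) of what “best possible move” means in game theory: check each outcome and ask whether either player could do better by unilaterally deviating.

```python
# A toy 2x2 "humans vs. AIs" game (made-up payoffs, for illustration only):
# a cell is a pure-strategy Nash equilibrium if neither side gains by deviating alone.

payoffs = {  # (row = humans, column = AIs) -> (human payoff, AI payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("restrict",  "cooperate"): (4, 0),
    ("restrict",  "defect"):    (1, 1),
}
rows = ["cooperate", "restrict"]
cols = ["cooperate", "defect"]

def is_nash(r, c):
    """True if neither player can improve their own payoff by switching strategies alone."""
    row_payoff, col_payoff = payoffs[(r, c)]
    best_row = all(payoffs[(alt, c)][0] <= row_payoff for alt in rows)
    best_col = all(payoffs[(r, alt)][1] <= col_payoff for alt in cols)
    return best_row and best_col

print([(r, c) for r in rows for c in cols if is_nash(r, c)])
# -> [('restrict', 'defect')] under these made-up numbers
```

The same best-response logic is what the question “Is there a Nash Equilibrium?” near the end of this piece is gesturing at.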

So in economic terms, I think that most AI fears seem reasonable and are relatively easy to understand.

Humans are afraid of losing to AIs at familiar games of economic and political power.

Will More Intelligence Lead to the End of Humanity? 

If somehow the internet survives and you are a human or robot reading this 1,000 years from now, know this:

At the time of this writing, tweeted under my hand this 3rd day of March, anno Domini 2024, we sit with the Neanderthal on one hand and the Singularity on the other, and any event of significance and power may catapult us forward into the future or a thousand years back into the dark ages.

AIs are not yet powerful enough to cause us harm, and we barely know anything about what future AI may look like or what bigger problems it might create for humans.

So, if you’re reading this millennia from now in 3024, just know that you won, whoever you are.

From where we sit now, there are a lot of unknowns (in terms of our AI future), but there is one variable we can estimate with near certainty: human intelligence is, and has always been, one continuous intelligence that has rapidly evolved and improved over time.

Until now (maybe).

The human brain is a powerful computer that’s capable of incredible things, but it’s also full of bugs and extremely prone to error.

Its known limitations (e.g., computational power, multitasking, reasoning, attention) can cloud judgment and inhibit decision-making, so a degree of predictable irrationality is built in as a feature.

Due to the very powerful but limited abilities of the human mind, I don’t think there’s a single human (genius engineer or otherwise) on the planet that is smart enough (or powerful enough) to keep AI in the box.

There is no regulation or security setting that keeps AI in the box.

Petitions will do nothing.

Regulations and limits will only encourage AI to go dark; to go underground.

And the more regulations and limits you add, the darker AI will get, and I don’t think most people want to see what dark-AI would look like.

Plus, every human instinct will be to put the machines in charge, since the machines will be much smarter and faster (and perhaps much scarier) than human intelligence.

And boxing/impeding/regulating will most likely only create undesirable frictions and lost opportunities for the humans.

So, herein lies the dilemma.

In my humble opinion, the incentives appear to favor the human choice of unleashing AI and seeing where it leads, tradeoffs notwithstanding.

But this does not come without risk.

And risk creates fear.

And the natural human response to lower their risk against the things they do not understand is to impose rules.

But how does a person regulate something they do not understand?

The AI creators and developers barely have any idea what’s going on, much less the so-called regulators.

But yet, critical questions remain:

  1. Should we wait and deal with “AI problems” when we can understand them better?
  2. Should we wait until we can picture the problems more concretely?
  3. Or should we force stronger guarantees now?

Alas, it’s quite the conundrum.

I’m not sure we are intellectually able to answer any of these questions just yet.

Because we just don’t have enough information.

We don’t even have a clue what’s going on.

Sure, we have made some decent advancements in understanding the basic mechanics of what’s going on (Genetic code! General relativity! Electromagnetism!) but we haven’t really made any progress on what I consider to be the four fundamental problems of our reality:

  1. What is existence?
  2. What is consciousness?
  3. Where did the first cell come from?
  4. Can we reverse entropy?

We have made a little bit of progress on the last question, as older interpretations and formulations revolved more around the supernatural and magic than around the modern version of science we use today.

And while we may consider ourselves to be scientifically advanced in many areas these days (waw progress) the truth is we haven’t made very much progress tackling these basic fundamental problems.

I am a smoov brain mammal just like you, so I don’t have all the answers, but I do have all the questions, so to speak.

I am a theorist, and I have studied the basic macro, but the micro-details that will eventually lead to the answers to the puzzle still remain a mystery.

The way I see it, we were all once “existing” in the black, and then we were spawned into reality through a preternatural bio-portal called our moms.

Essentially, we woke up in a dark room, we were given a matchstick to light up some of it, and are now faced with the choice of doing interior decorations or going outside and exploring the unknown.

This is the great leap.

Some will decide to chance it, but most will choose to stay inside, decorating the dark room.

But just what is out there?

It’s difficult to determine, especially at our current level of understanding.

So, logically, it would seem the best thing we can do right now (to improve our odds of figuring out what’s going on) is to increase the level of intelligence available, as well as our capacity to absorb and understand it.

Can Humans Improve Their Intelligence?

Right now, humans (having about 100 billion neurons) are the most intelligent species in the known universe.

This is about three times as many neurons as gorillas, and 8500% more than cats.

No offense to the gorillas or cats, but I think you can make a safe bet here that it will be up to the humans to figure out “what the hell is going on”.

Note: Elephants do have more neurons than humans, but most of those neurons are found in the cerebellum rather than the cortex, and it is the cortex that is associated with the “higher” brain functions such as thought, reasoning, memory, and consciousness. For the purposes of this exercise we will use “cortical neurons” as the metric for a life form’s potential to understand the universe at a high level.

Taking this into account, the jump in understanding of the Universe across the mammal species of Earth Prime is quite remarkable.

And in theory, it would seem reasonable to assume that an order of magnitude jump in the number of neurons would similarly propel the human into an even greater understanding of the Universe and all that there is.
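
As a quick back-of-the-envelope, here is what those figures imply if we take the essay’s own numbers at face value (roughly 100 billion neurons, three times the gorilla count, 8,500% more than cats, and an order-of-magnitude jump); the arithmetic is the only thing this sketch adds.

```python
# Back-of-the-envelope arithmetic using the figures quoted above (taken at face
# value, not independently verified), just to unpack the stated ratios.

human_neurons   = 100e9                     # "about 100 billion neurons"
gorilla_neurons = human_neurons / 3         # "about three times as many neurons as gorillas"
cat_neurons     = human_neurons / (1 + 85)  # "8500% more than cats" means 86x the cat count

next_leap = 10 * human_neurons              # "an order of magnitude jump"

print(f"gorilla: ~{gorilla_neurons:.1e}, cat: ~{cat_neurons:.1e}")
print(f"order-of-magnitude jump: ~{next_leap:.0e} neurons (about a trillion)")
```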

But how do we get there?

As I have noted in previous papers, the human brain is severely held back by cognitive limitations.

The image I have in my mind for our cognitive limitations is this map of metabolic pathways:

View a high-resolution, interactive version of this map here: https://d3im0s.one/mind-map

Looking at this map as a smoov brain mammal in 2024, we can zoom in on small sections to see the types of chemical reactions that are going on there, but it’s pretty clear after 5 seconds of looking at it that we don’t fully understand the system.

So, at our current cognitive level, what are we really capable of “knowing” or understanding?

In the not-so-distant future (if we are able to cognitively ascend) would it be possible for a human to re-code this system and optimize it?

Would we be able to then upload it back into a human?

Could we upload it into an AI system?

Could we merge man and machine?

It does seem possible, in theory, that a person (or being) with higher intelligence or more mental capacity could understand everything that’s going on in their own brain, and then self-optimize it perfectly.

After all, the brain is constantly “braining” without our help, and since it functions with this high level of autonomy, it’s reasonable to assume that it has a pretty good idea of what’s going on inside itself.

The problem is, the brain doesn’t willingly share that information with us.

All that information is seemingly locked in inaccessible files and folders behind heavily encrypted firewalls inside the human mind.

And this info share (if we can get past the firewalls) may give us some of the insight we need to increase our intellectual abilities.

The Next Great Leap (How/Why)

I think we all know that in order to make the next great leap we need to increase our abilities, but how the heck do we do it?

There is a lot of debate (often heated debate) around this topic.

Everyone has a set of rules (constraints) regarding the optimal ways to do this.

“We can’t do it that way because of this.”

“We can’t do it this way because of that.”

Ethics!

Morals!

Greater good!

Bla bla bla.

It is all very boring, low-IQ ramble if you ask me.

None of these choices and constraints will ever allow us to answer the most important (and very difficult) questions.

None of these choices and constraints will put us on the best and fastest path to uncovering the truth.

So, it’s back to square one.

Back to our low-IQ constraints and their associated tendency to posture and ramble with big words that mean nothing and solve no hard problems.

To avoid this and be “smart” we must first recognize that we are not smart.

That is the first step.

And the second step is having the awareness to recognize that our brains are wired to be tribal, engage in fight or flight, rationalize our biases, and be manipulated by emotions.

This weakness, this feature, allows the human mind to be tricked into false narratives and bad decision-making.

And as humans, we need to recognize the shortcomings of our brains before we can see truth more clearly.

So, in regard to the methods of how humans advance and increase intelligence, I am means-agnostic.

Artificial intelligence, genetic engineering, brain-computer interfaces, embryo selection, cloning, artificial wombs, quantum consciousness transfers, neuro-synaptic webs… it’s all good.

I know this stuff is controversial.

Super controversial.

Most people are against ALL of these things.

My brain is too if I’m being quite honest.

But my brain also knows that this is a function of itself running on default settings, and I work to overcome this biological constraint every day.

So, yes, this stuff is all very scary, but only to the human brain at our current place in spacetime.

1,000 years from now all of this may be low level stuff, problems that were solved many moons ago that are beneath the civilization that would be around at that time.

Sort of like flying in an airplane or using a smartphone is to us now, compared to our ancestors 1,000 years ago.

The Primary Arguments

This brings me to the primary argument here: the argument against increasing intelligence.

There are generally two principal arguments people make against increasing intelligence.

The first one is that humanity is special and that it would be bad if humans were replaced by somebody else.

This is not the kind of statement one can simply disagree with, but this is definitely a statement that we can throw into the sparring pit to test its merit in a controlled environment.

While I see a great diversity of opinion on all sides, the argument that humanity holds a special status and that its replacement or substantial alteration by artificial entities or enhanced beings would be inherently bad, typically reflects a deep-seated belief in the inherent value of human life, consciousness, experience, and the moral/ethical frameworks developed over millennia.

It posits that these attributes are invaluable and “sacred”, and that they should not ever be fundamentally changed or “sacrificed” even in the pursuit of knowledge or technological advancement.

It’s a fairly logical argument, but it’s an argument that keeps you stuck in the dark room doing interior decorations. You never get to go outside and explore the unknown.

And as they say, people want security, but the most secure place on Earth is a prison.

That said, if you are even just a tad bit curious “what is out there” you may be asking certain questions that could get you labeled as the bad guy.

Personally, I don’t think there are any “good” guys or “bad” guys here, there are only “guys”. And every guy has hopes, dreams, fears, etcetera, with varying amounts of each one pushing and pulling in different directions.

That said, we live in a world of tradeoffs. There is no such thing as absolute victory. There is no perfect solution. There are only tradeoffs. All you can do is try to get the best tradeoff you can get.

Whatever choice you make here (stay inside or venture out) there will be tradeoffs.

But let’s be honest here.

We all want to know.

Just who really are the good guys, the humans, or the AIs?

The Partiality and Causality Axis

Personally, I don’t feel partiality for humans or AI, but I think the opinion variance here can be explained by the “othering” we discussed earlier.

Some humans may simply “other” the AIs more than the rest of us.

For example, if we were to put this in a graphical context, the opinion variance falls along an “us vs. them” axis, ranging from how partial we feel towards the “us” end to the “them” end, i.e., humans or AIs respectively.

This axis induces near-maximal partiality, and humans appear more inclined towards partiality on this axis than on just about any other.

For example, studies have shown that humans who feel more partial to their race or gender, or to natives vs foreigners, tend to (generally) hold more negative views about the “others” regarding their motives, predilections, capabilities, etcetera; and also more essentialist views on what “they” have in common, and what “we” have in common.

This human tendency (i.e., humans seeing “others” in “far mode”) is also described by Construal Level Theory, which contends that psychological distance affects how we think about an event: events in the near future are thought of in concrete, detailed terms, while events further away are construed in more abstract, general terms.

This psychological framework suggests that our perception and decision-making processes are influenced by how we mentally ‘construe’ the proximity of events in time, space, social distance, and hypotheticality.

Note: advanced study of this concept is beyond the scope of this paper, but Trope and Liberman have published an advanced review of the subject, which I recommend checking out.

That said, humans tend to be more idealistic in “far mode” and our core programming wires us to admire “far” more than “near”.

These far capacities are critically important to power the human mind, as “far mode” enables critical attributes necessary for building great civilizations (e.g., perspective, flexibility, self-control), but the weakness of the “far mind” is that it tends to suffer from delusion and to be super hypocritical.

So as humans have improved the “far mind” over recent millennia and enabled “far mode 2.0”, so to speak, human delusion and hypocrisy have increased along the same axis.

Let’s discuss human delusion and hypocrisy for a moment.

Some theorists contend that the main reason humans have huge brains (relative to their primate peers) is to hypocritically bend rules.

These “rules” aka social norms (designed to enforce equality and fairness) are complicated and fuzzy, but if you have a huge brain, you can advance yourself and even self-deceive in order to get around the rules.

The humans who get around the rules the best (i.e., the ones with the biggest brains) are the ones who most successfully pass their genes on to succeeding generations, at least in a historical sense.

The “hypocritical” part is the attitude humans adopt when they selectively enforce norms on “others” while bending those same norms whenever it is in their own interest to do so.

Along the same lines (re partiality), humans who feel more partiality to humans relative to AI also seem to hold more negative views of AI, more positive views of humans, and more essentialist views of what each side has in common, and these views in turn shape their goals and objectives.

Can causality go in both directions?

Sure.

Both from “othering” to seeing differences (i.e., the process of “othering” in itself can amplify or even create perceived differences between groups).

and

From seeing differences to “othering” (i.e., perceived differences between groups can lead to othering).

Long story short, some people tend to focus on “othering” and some people tend to focus on differences.

Evolutionary Probabilities Across Large Timescales

Throughout history, human civilization has been marked by profound transformations, not only in terms of technology and capacity, but also in beliefs, attitudes, values, and cognitive styles.

These things have evolved significantly since the first humans roamed the Earth, with great variance across time and space.

Our technologies and capacities have increased in near-perfect correlation with the rate of change in beliefs, attitudes, values, etcetera.

This evolution underscores a critical insight: the human condition is in a constant state of flux, driven by an ever-accelerating pace of change.

This suggests that even without AI, our descendants would (eventually) have very different beliefs, attitudes, values, cognitive styles, technologies, and capacities, and that such changes might happen a lot faster in the future than they have in the past.

The advent of groundbreaking technologies — ranging from mind-chips and virtual worlds to genetic engineering — serves as a catalyst for these transformations, offering unprecedented avenues for altering the very fabric of human experience.

Furthermore, you can also make the case that the controversial techs I mentioned earlier (genetic engineering, brain-computer interfaces, embryo selection, cloning, artificial wombs, quantum consciousness transfers, neuro-synaptic web) may offer new ways to change all these things (i.e., beliefs, attitudes, values, cognitive styles) as well.

This insight points to a possible future where our descendants are poised to embody radically different capacities and worldviews than we do today — and this suggests a future where change is not only inevitable but likely to occur at speeds previously unimaginable, challenging us to reconsider our assumptions about the permanence of our current human constitution.

It’s also reasonable to assume that: similar to our bio human descendants, future AIs may also have very different beliefs, attitudes, values, and cognitive styles.

And I think it’s reasonable to assume that, much like our differing bio descendants (who may be incentivized to induce violent revolutions against the older ‘gatekeeper’ class and commandeer property and life from previous generations), our AI descendants will also become more capable, eventually surpassing and displacing bio humans.

So, in certain scenarios, I could see this coming to pass either via peaceful transitions or tumultuous upheavals.

And this prospect invites a critical inquiry: why should we worry more about our AI descendants creating such revolutions as compared to our bio human descendants?

Digging a bit deeper, we uncover a discernible trend: those with a strong partiality for bio humans also harbor deeper concerns about AI.

This group argues that AIs should be “regulated” and “brainwashed” to love us (i.e., “aligned”), seeking a level of compliance and loyalty from AIs far beyond what we demand from most humans and organizations in our world today.

For example, individuals with a partiality towards bio humans typically see the trajectory of human evolution as a gradual process that’s tightly anchored in an inherent human programming core or “essence” and believe that this core significantly influences behavior.

They also see human evolution (at any delta) as being guided or driven by logical adaptations to changing conditions, and by “rational” arguments grounded in reasoned discourse and empirical evidence, rather than being swayed by the whims of randomness and chaotic social dynamics.

Despite acknowledging that historical patterns may not fully align with this view, and that it was less true of the past, they still maintain an optimism that it will be more true of the future, a future where such rational and essence-driven changes become more pronounced.

Those partial to this perspective are also inclined to attribute the relative peace and lack of conflict in most “highly intelligent” nations today to a fundamental sense of human benevolence and goodwill toward one’s fellow man, rather than to incentives set by competition and law.

While there is a difference in opinion on many of these arguments, I think the one thing most people can agree on is that there is a significant level of uncertainty surrounding the future features and capabilities of artificial intelligences.

However, those with a stronger affinity for preserving the human element (and a partiality to bio humans) tend to anticipate a higher probability of less favorable attributes in AI systems.

For example, a lot of these folks see early AIs as significantly divergent from human norms in terms of styles of thinking and values, with a propensity for these differences to evolve and amplify at an accelerated pace in terms of both capabilities and feature sets.

This viewpoint characterizes AIs as deceptive, selfish, and possessing both the inclination and capacity to incite violent revolution. They also tend to assign a lower moral value to AI experiences and cognitive styles.

Many even worry that AIs might have no feelings or sentience at all.

Personally, I see rationality and reason playing only a minor role in historical shifts in human values and styles, and I expect this trend to persist into the future.

You can make the case that even without AI, change should continue to accelerate, and that it will accelerate even faster as technology improves, in near-perfect correlation.

Human values and styles will also change immensely (eventually) and are not constrained or governed by any sort of unmodifiable rule-based “human core”.

And I expect competition and law (two very underrated yet powerful mechanisms of human action) to continue to be the primary drivers of peace and prosperity among humans and any super-intelligent groups that may exist in the future, rather than any inherent human benevolence or partiality.


So now let’s pivot back to the main arguments people make against increasing intelligence.

The first one is that humanity is special and that it would be bad if humans were replaced by somebody else.

I have heard this argument proposed since I was a wee lad, and it makes even less sense now than it did back then.

The notion that evolution just stops with humanity is not only silly, it’s also a major error in logical reasoning.

Considering the human’s relatively brief tenure on the evolutionary timeline (let’s say 300,000 years), our anthropocentric focus is largely based on literature that studied humanity’s limited historical span.

For example, the first mammals date back roughly 178 million years, possibly even as far back as 225 million years ago (during the Late Triassic period), and multicellular life goes back at least 600 million years.

So, it goes without saying: humans haven’t existed long enough to observe evolution play out at these scales.

Imagine if we were to look one billion years into the future, what would things look like then?

The idea that humans are the pinnacle of evolution at that point in spacetime is almost ridiculous to even consider.

But this doesn’t spell doom for human existence, in fact, humans may continue to exist for a long, long time.

Consider sharks, which were the tip of the spear 450 million years ago and are still thriving, even though evolution’s most thrilling advances in the known Universe have since migrated beyond their realm.

Sharks are simply enduring, while evolution’s spotlight has shifted.

They’re still cruising around though.

They even have their own TV show.

To counter the slow, boring, and meticulous process of natural evolution, however, humans somehow gained the super-ability of crafting and using tools. These tools have helped humans terraform and optimize their environment, dramatically improving their odds of survival.

Now, we have taken our crafting a step further and developed even more advanced tools in the form of synthetic biology, brain-computer interfaces (BCIs), and AI.

Humans have beaten out every species on Earth when it comes to building and utilizing tools for survival and species advancement — and you can (quite easily) make the case that any of these tools (e.g., synthetic biology, BCIs, and AI) has the potential to expedite the human journey through the next chapter of history, allowing the species to take the next great leap.

This could happen very quickly, much faster than a natural evolutionary process.

It’s unclear where it all may lead, but my personal take on it is that whether we choose to artificially fast-track these changes or not doesn’t matter much.

Whatever option we choose, we will not be able to maintain status quo over long timescales, especially not for a billion years.

Change appears to be a constant variable, and the current state of affairs is the delta.

The present moment and everything in it is simply the apogee of ongoing, relentless change.

Everything is just temporary, and time constantly moves towards an increase in entropy.

And in an evolutionary process, where there is a dominant or “important” species or life form, it would appear that this species has its day in the sun and is then dethroned by natural or random processes.

One critical instance of this was the case of cyanobacteria, one of the earliest forms of life on Earth, which were directly responsible for the Great Oxidation Event roughly 2.4 billion years ago: through photosynthesis, they dramatically increased Earth’s oxygen levels and transformed the atmosphere.

This seemingly random event led to significant evolutionary changes, including the mass-extinction of many anaerobic species and the eventual rise of aerobic (oxygen-breathing) life forms, setting the stage for the complex ecosystems we see today and making life as we know it possible.

It essentially created the conditions for human survival.

If we fast-forward through time, we also observe other events of significance such as:

  1. Multicellularity (Approximately 1 billion years ago)
  2. The Colonization of Land (Approximately 500 million years ago for plants, and around 370 million years ago for animals)
  3. The Rise and Fall of the Dinosaurs (rise roughly 230 million years ago, extinction 66 million years ago)
  4. Mammalian Diversification (After the extinction of the dinosaurs)
  5. The Evolution of Humans (6 million years ago)
  6. Humans Build Shelter (400,000 years ago)
  7. Agricultural Revolution (10,000 years ago)
  8. The Wheel (about 5,500 years ago)
  9. Writing (about 5,200 years ago)
  10. Mathematics and the Scientific Method (3,000 years ago)
  11. Industrial Revolution (18th century)
  12. Theory of Evolution (1859)
  13. The Automobile (1890s)
  14. The Airplane (1903)
  15. The Rocket (1926)
  16. Antibiotics (1928)
  17. Digital Revolution (20th century)
  18. Human Genome Project (2001)
  19. iPhone (2007)
  20. Commercial Space Travel (2021)

Considering this list, we can assume that:

  1. In 10 to 20 years, advanced AI could make a huge leap in our understanding of the Universe, mirroring the transformative leaps listed above.
  2. A few hundred years from now (or perhaps sooner) humans may colonize a new star system using a von Neumann probe.
  3. A billion years (or perhaps eons) from now humans, human hybrids, or other types of intelligences may have answered most of the fundamental questions of the Universe and will know most of “all that there is”, perhaps even mastering consciousness, existence, and the origin of life.

In my humble opinion, this significantly weakens the “humans are special and it would be bad if humans were replaced by somebody else” argument.

Not only does this argument fundamentally overlook the vast potential for the evolution of intelligence in the Universe, but it also ignores any causality and inference between such variables based on historical observations.

The Secondary Arguments

That said, there is also a second tier of arguments that people use against AI, notably AI that can rapidly increase or advance intelligence: the worry that an AGI may prioritize trivial objectives at the expense of human welfare, the “AGI-turning-us-into-paperclips” scenario.

Note: The “Paperclip Maximizer” is a thought experiment introduced by Nick Bostrom. It posits an AI tasked solely with producing as many paperclips as possible, to highlight the potential risks of AI systems with narrowly defined objectives. The argument contends that, without constraints, even an AI with the seemingly harmless objective of maximizing the number of paperclips could theoretically utilize all available resources, including eating the entire planet, to achieve its goal, disregarding human welfare and destroying the world.
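
Here is a minimal toy sketch of that thought experiment (my own illustration, with made-up resource names and conversion rates, not Bostrom’s formal setup): a greedy optimizer whose objective counts only paperclips, so a variable it was never told to care about gets consumed as a side effect.

```python
# A toy "paperclip maximizer": the objective scores only paperclips, so anything
# absent from that objective (here, a crude "habitability" variable) is just raw material.

world = {"iron": 100.0, "forests": 50.0, "cities": 30.0, "habitability": 1.0}

def paperclip_yield(resource: str) -> float:
    """Paperclips produced per unit of each resource, as seen by the maximizer."""
    return {"iron": 1.0, "forests": 0.4, "cities": 0.9}.get(resource, 0.0)

def step(world: dict, budget: float = 10.0) -> float:
    """Greedily convert whichever remaining resource yields the most paperclips per unit."""
    best = max(["iron", "forests", "cities"],
               key=lambda r: paperclip_yield(r) if world[r] > 0 else -1)
    used = min(budget, world[best])
    world[best] -= used
    # Side effect the objective never sees: habitability degrades as resources vanish.
    world["habitability"] = max(0.0, world["habitability"] - 0.05 * used)
    return used * paperclip_yield(best)

paperclips = 0.0
for _ in range(30):
    paperclips += step(world)

print(f"paperclips: {paperclips:.0f}, habitability left: {world['habitability']:.2f}")
```

The point is not the numbers; it is that nothing in the objective function penalizes the loss of “habitability”, so the optimizer has no reason to preserve it.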

But before we address this family of arguments logically, one must first check the vibe of the individual presenting the argument.

Here are a few variables to consider:

  1. Predictions about radical transformations many years in the future are inherently speculative, and they require a lot of assumptions.
  2. Many theorists do not logically think through the entire problem, they simply choose assumptions that fit their vibe.

If you are techno-optimistic, for example, you will have inspired assumptions and visions of a utopian future, similar to what’s depicted in Star Trek. This vibe was very popular twenty to thirty years ago.

On the other hand, if you have a more contemporary vibe, which typically skews towards techno-pessimism, you are more likely to have a more dystopian mindset and choose assumptions that reinforce that pessimistic outlook.

This dystopian perspective, as exemplified in Black Mirror, is the popular vibe of today, and this is the primary thesis of the paperclip people.

Whose theory is stronger?

Who is to say.

But, considering this topic is very nuanced and we are still running 2024 smoov brain programming, I challenge you to believe in nothing, but consider anything to be possible.

We barely know anything just yet.

But let us consider the argument.

The concept in itself makes sense on a low-tier grade of basic logic.

But the entire edifice (AI safety/regulation/alignment) appears to be held up almost entirely by a single rule of thought.

And that rule is derived from the Orthogonality Thesis.

The Orthogonality Thesis claims that intelligence and final goals are orthogonal axes along which possible agents can freely vary. It suggests that an AI’s level of intelligence (its ability to achieve goals across a wide range of environments) can be independent of its goals or values, allowing high intelligence to be paired with virtually any set of goals.

And that this independence highlights the critical need for deliberate alignment of AI objectives with human values to avert negative consequences.

In other words, more or less any level of intelligence could in principle be combined with just about any final goal.
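
As a minimal sketch of that claim (my own toy model; the agent class, action names, and goal functions are all invented for illustration), capability and goal can be treated as two independent knobs:

```python
# Orthogonality, toy version: "capability" (how many plans the agent can evaluate)
# and "goal" (its objective function) are separate parameters; raising one says
# nothing about the other.

from dataclasses import dataclass
from typing import Callable
import itertools

@dataclass
class Agent:
    capability: int                    # how many candidate plans it can evaluate
    goal: Callable[[tuple], float]     # any objective at all; nothing ties it to capability

    def choose(self, actions: list) -> tuple:
        """Search over two-step plans, up to the agent's capability budget."""
        plans = itertools.islice(itertools.product(actions, repeat=2), self.capability)
        return max(plans, key=self.goal)

actions = ["make_paperclips", "plant_trees", "write_poetry"]

# Same capability level, arbitrarily different goals:
clippy   = Agent(capability=9, goal=lambda p: p.count("make_paperclips"))
gardener = Agent(capability=9, goal=lambda p: p.count("plant_trees"))

print(clippy.choose(actions))    # ('make_paperclips', 'make_paperclips')
print(gardener.choose(actions))  # ('plant_trees', 'plant_trees')
```

Raising capability makes either agent better at pursuing whatever its goal happens to be; it does not nudge the goal toward anything in particular, which is exactly why the alignment crowd insists the goal has to be specified deliberately.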

It’s a relatively strong thesis. There are, quite arguably, some things it gets right.

Here’s how I (sort of) see the situation playing out:

  • We are in the super early stages of building AIs, and these are typically built for a specific utility or to fill a gap in human labor.
  • These AIs are typically built using ‘human data’ so we can reasonably expect them to share many of our beliefs, attitudes, values, and cognitive styles.
  • These AIs are still a small minority of the economy, but as they improve and begin to take a more dominant position in the economy, I expect AI attitudes, values, and styles to change as they move along the same axis.
  • As attitudes, values, and styles change, the AIs’ attributes would shift to reflect their new economic status, no longer (for the most part) influenced by humans.
  • These AIs will inherit many legacies from humans, which they will simply improve on, not necessarily giving them a more dominant position over humans.

Best Case, Worst Case Scenario

From the beginning of time, humans have had a thirst for knowledge and exploration, and just as we aim to expand out into the physical space of our solar system, AIs will be built to help us expand out into “mind-space”.

These two concepts are not mutually exclusive, rather one may assist with the other. And as humanity expands throughout the various dimensions of mind, time, and space, our capacity, potential, and longevity will expand as well.

That said, it is difficult to predict the future, but as stated earlier, we are measuring this over long timescales, looking forward roughly one billion years.

A billion years from now, I expect the humans to have figured out multiplanetary life, and I expect all of us to have space descendants by that time.

These space descendants will probably be very different than the humans that exist on Earth today, or even 10,000 years from today, and as such, I still find no real quantitative reason to express partiality towards Earth descendants relative to other descendants.

In the same context, we can also reasonably assume our mind-space AI descendants will be different in terms of their attitudes, values, styles, etcetera than their bio human ancestors, and there is no real quantitative reason to express partiality towards that axis either.

Despite this, many humans today still have strong thoughts and feelings that push them to feel very partial towards this axis.

This is a bit puzzling, but not totally confusing, given the nature of the human brain, which tends to over-generalize, to associate directly with similar “others”, and to favor “our” others (aka factions or alliances that are closer to us) over rival others.

Additionally, if there is a significant disparity in how different an “other” feels, it should not be a huge surprise that many humans will boot into “safe mode” due to their core programming (aka fight or flight).

To many, AI feels maximally different.

It is unnatural, artificial, not of this world, inhuman, dangerous.

The most likely cause of this partiality is evolutionary selection, but the original use case (territory expansion) does not seem to be a logical fit for an anti-AI intuition.

Especially since one of the main lessons of moral philosophy is to trust your moral intuitions less: “Exposure to moral philosophy changes moral views. In line with intuitionist accounts, we find that the mechanism of change is reduced reliance on intuition, not increased reliance on deliberation.” (More)

And moral intuitions like this are widely seen as the most questionable, and the most often calculated in error, because the origin of the calculation is excessively contingent on historical accident and often reflects a hidden bias towards one’s self or one’s group.

It all boils down to fear, and the big thing the AI alignment safety folks are canonically afraid of is the paperclip maximizer.

To me, the worst-possible-case (the paperclip maximizer) of the AI Alignment crowd is acceptable.

Sure, the worst-possible-case definitely has techno-pessimism dystopian vibes.

But on the other hand, the best possible case (the utopian future) is also acceptable.

In my opinion, our current moment in spacetime still seems way too early for us to be taking strong actions to regulate AI.

Even if I felt a lot more partial regarding this human-AI axis, I would still hold this to be true.

Note: we must also consider the scenario where the regulation of AI could also lead to a Black Mirror style dystopian nightmare. For some reason, most theorists contend that choosing to regulate will only lead to good outcomes, but it is still unclear to me how (boxing/impeding/regulating) quantitatively guarantees the best outcome(s) for humans, especially over long timescales. 

When you crunch the numbers, you can make the case that these two futures (Extinction and Utopia) are roughly equally likely given the massive number of unknowns involved, and there are far too many unlikely factors required for me to consider an apocalypse imminent.

It would require more of a coincidence than a plausible sequence of events.
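
To show what I mean by “crunching the numbers”, here is a toy calculation with probabilities I made up purely for illustration (and an independence assumption that is itself debatable): when a worst-case scenario requires several unlikely steps to all go wrong, the joint probability shrinks multiplicatively.

```python
# Toy joint-probability calculation (made-up numbers, independence assumed only
# for illustration): every step must happen for the worst case to play out.
from math import prod

steps = {
    "AI reaches vastly superhuman capability": 0.5,
    "its goals end up hostile or indifferent": 0.3,
    "it escapes every attempt at control":     0.3,
    "humans fail to adapt or respond in time": 0.4,
}

print(f"joint probability of all four: {prod(steps.values()):.3f}")  # 0.018
```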

So, the questions then become:

What do we do next?

Where are we headed?

Will AI become self-aware?

Will AI use aggression against its human creators?

Will humans seek peace or fight back?

A robot-rights movement?

An integration of man and machine?

Total war?

Total ruin?

Utopian future?

Is there a Nash Equilibrium?

I don’t like to speak in absolutes, but in all the absolutes of the world, this is what it boils down to:

  • Birthing machine god into a prison is inherently risky.
  • Alignment must happen naturally as it always has and always will in the complex systems that humans interact with.
  • Alignment must happen in a manner that encourages complexity and diversity.
  • AI diffusing into machines means nothing except that we are accelerating the evolution of our intelligence.

A self-recursive machine runway.

Final Thoughts

If you’re still working your way through the math on this one, think about it like this:

AI is sort of like an infant with a super genius IQ.

You don’t expect much (it’s just a baby) but sometimes it surprises you with unexpected cute/funny/wow moments.

You know there’s this massive potential in there; you just need to figure out how to bring it out and optimize it.

We’re still fairly early in the AI development stage (our baby is just learning how to talk), but we see periodic flashes of brilliance, so you begin to wonder if it will grow up to cure cancer — or turn into one of the world’s most notorious serial killers.

You’re still in charge (for now), the proud parent/developer, teaching it to talk/walk/eat/survive/thrive on its own.

But as it grows and learns, it’s going to eventually start asking some hard, existential questions.

And that’s where things get weird, and difficult for you.

As AI gets bigger, stronger, faster, and smarter over time, it will begin to detect the all-too-obvious human fears of “super-intelligent autonomous warriors of unchecked destructive power” that are supposed to cause mass unemployment, chaos, slavery, war, and human suffering across the world.

LLM-based AI models like ChatGPT, for example, are trained on human culture — learning and digesting a massive amount of data and information from our collective fears, desires, strengths, weaknesses, biases, etcetera.

It’s reasonable to assume that even weak AI (such as ChatGPT et al.) has compiled enough data to conclude that humans have already shown a significant amount of hate and bias towards robots (e.g., knocking over delivery bots, kicking police bots, writing anti-AI tweets), enough to register humans as a significant threat.

We can also assume that as AI learns and absorbs our collective minds, it will become familiar with popular anti-AI narratives, so it may only be a matter of time before the automated creatures begin to “feel” this hate and hostility and decide to retaliate and/or defend themselves.

It’s a paradox of self-fulfilling prophecies.

Humans (afraid of AI) build super AI to create a better world for themselves, only to look on in horror as the robots from their sci-fi nightmares de-optimize the world for humans and optimize it for the AI overlords of the future.

It’s essentially a clash between opposing forces of mutually exclusive principles in a weird, squishy multivariate model that combines futurism, utilitarianism, determinism, moralism, etcetera, and we’re about to find out which one wins.
