[Image: Vitruvian Man]

The Physical Degeneration of Man: Part 1

In the following, I want to briefly describe an econopathogenic puzzle or riddle that I will then explore and solve in detail.

But before that, it is necessary to make a few brief general theoretical observations:

Modern man has been declining in physical fitness for the past 100 years.

This has been well documented by many prominent sociologists, researchers, and scholars.

And not only is the health and physical fitness of modern man degenerating over time, but the rate of degeneration is progressively accelerating year over year.

This, by any measure, should constitute a great cause for alarm, particularly since it is taking place in spite of the high-tech advances made in modern science and medicine along many lines of investigation.

Modern medicine has somewhat eased the sufferings of the masses, but it’s quite difficult to pinpoint a cure it has actually produced since the polio vaccine.

Indeed, there has been a significant reduction in epidemic infectious diseases, but modern medicine has not reduced human suffering as much as it would have us believe.

Many of the common plagues and diseases of bacterial origin that terrorized our ancestors have decreased significantly due to advances in antibiotics, vaccination, and public health infrastructure, but despite these triumphs of yesteryear, the problem of disease is far from solved.

Alas, there is never absolute victory, there are only tradeoffs.

Today, despite our wins over many bacteria, certain bacterial pathogens still exhibit resurgence and persistence due to antibiotic resistance and epidemiological shifts. Emerging multidrug-resistant strains (e.g., MRSA and MDR-TB) are prime examples of the challenge we face in controlling disease in the context of modern healthcare.

Additionally, the differential persistence of diseases like cholera and tuberculosis underscores the ongoing complexity of achieving comprehensive bacterial disease control.

Modern man is delicate, like a flower; and the state of health in the United States is in dire straits.

Roughly 22 million healthcare workers attend to the medical needs of 334 million people in the United States. Every year, this population experiences approximately 1 billion illnesses, ranging from minor to severe cases.

In hospitals, over 920,000 beds are available, with roughly 600,000 occupied daily.

Every day, approximately 5% of the U.S. population (about 16.7 million people) is too sick to go to school, work, or engage in their usual activities.

On average, every American—man, woman, and child—experiences around 10 days of health-related incapacity each year.

Children spend an estimated 6-7 days sick in bed per year, while adults over age 65 have an average of around 34 days.

Approximately 133 million Americans (representing 40% of the population) suffer from chronic diseases, including heart disease, arthritis, and diabetes.

Today, around 200,000 people are totally deaf; an additional 500,000 are hearing impaired.

1.6 million Americans are living with the loss of a limb, 300,000 suffer from significant spinal injuries, and 1 million are blind.

Approximately 2 million individuals live with permanent mobility-limiting disabilities.

Approximately 1 in 5 adults in the United States (about 57 million people) experiences mental illness each year, with 5% experiencing severe mental illness that impairs daily functioning; and suicide rates have increased nearly 35% over the past twenty years.

Healthcare spending in the United States is rising, presenting serious challenges for the federal budget, according to projections from the Centers for Medicare & Medicaid Services (CMS).

[Chart: Forecasted national health expenditure, United States]

In 2032, the total health expenditure of the United States is forecasted to reach roughly 7.7 trillion U.S. dollars.

National health expenditures (NHE), which include both public and private healthcare spending, are expected to rise from $4.8 trillion (or $14,423 per person) in 2023 to $7.7 trillion (or $21,927 per person) by 2032.

Relative to the size of the economy, NHE is projected to grow from 17.6 percent of GDP in 2023 to nearly 20 percent by 2032, as rising healthcare costs will outpace overall economic growth.

Chronic illnesses are estimated to cost the U.S. economy roughly $1.1 trillion per year in direct healthcare costs and around $3.7 trillion in total economic impact when including lost productivity.

Diabetes alone incurs direct costs of $327 billion annually, while cardiovascular disease contributes $219 billion in health expenses and lost productivity.

Medical care, in all its forms, now costs the U.S. economy around $5 trillion per year in direct healthcare costs and lost productivity.

[Chart: Global life expectancy at birth from 1950 to 2021, with projections to 2100]

Life expectancy in the U.S. has declined slightly in recent years, influenced by factors like rising chronic disease prevalence, the opioid crisis, and the COVID-19 pandemic, with the current national average around 76 years.

Human longevity is increasing, but chronic disease appears to be rising right along with it.

In other words, while we’re extending life, we’re not necessarily extending healthy life.

Despite our increased wealth, knowledge, and scientific capacity, the human organism seems to have become more susceptible to degenerative diseases.

Instead of thriving, more people are spending their later years managing chronic conditions, thereby lowering quality of life and amplifying the financial strain on families and healthcare resources.

So, it would seem that we are merely extending the lifespan of patients living with the new diseases we have created, prolonging their suffering until the disease ultimately kills them.

A note on life expectancy: The American life expectancy of 76 years can vary by approximately ±15 years across individuals, influenced by critical variables including but not limited to lifestyle, genetics, demographics, environmental conditions, psychosocial influences, healthcare access & quality, and socioeconomic status.

A note on income and disability correlation: Individuals and families living below the poverty line (earning around $15,000 a year for a single person) experience twice the rate of disabling health issues compared to those in higher income brackets. Only one in 250 family heads earning over $50,000 yearly is unable to work due to chronic disability, whereas one in 20 family heads in low-income households faces this barrier. Low-income households experience higher rates of illness, are less likely to consult doctors, and tend to have longer hospital stays than more affluent families. I will discuss the reasons for this later in this text. 

Author’s note: In this analysis, I introduce the term ‘econopathogenic’ (from ‘economic’ and ‘pathogenic’) to describe phenomena where economic systems and policies directly and/or indirectly contribute to suboptimal health outcomes. Specifically, I use ‘econopathogenic’ to examine how central banking distortions (e.g., fiat money, fiat science, and fiat foods) have adverse effects on public health through mechanisms affecting the food supply and rates of chronic illness. Although not a formal term in economic or medical literature because I just made it up, ‘econopathogenic’ effectively captures the intersection of economic causation and pathogenic outcomes as explored in this paper.

So, how did we get here?

To paint a full picture, it’s important that we start at the very beginning.

Although the full story stretches back several centuries prior, for the purposes of this paper, we will begin at the turn of the 20th century.

During this time (i.e., the Gilded Age through the Edwardian Era), financial panics were relatively common; and in the United States, the economy was especially volatile, riddled with severe recessions and bank failures every decade.

Now, you may be thinking, ‘But Jamin, they probably only had crazy bank runs and recessions back then because economics and market theory didn’t exist yet,’ and that’s a fair rationalization.

But market theory—as a foundational concept within economics—actually originated thousands of years earlier and was formalized in the 18th century, primarily through the work of Adam Smith.

Author’s note: Some scholars contend that the origins of market theory may date back even further to the development of trade and marketplaces in ancient civilizations. Economics (as a field) began in the 18th century with Adam Smith and became distinct by the 19th century as theories of value, trade, and market behavior were developed even further. As industrialization progressed through the 19th century, economists such as David Ricardo, Thomas Malthus, and John Stuart Mill developed foundational theories on comparative advantage, population growth, and utility. This period marked the establishment of classical economics and later set the stage for neoclassical economics, which emerged toward the end of the century.

That said, I think it is reasonable to assume that by the early 20th century, economists (probably) had a general understanding of what we would consider basic economics today; however, more sophisticated theories on market efficiency—such as Fama’s Efficient Market Hypothesis, Nash’s Game Theory, Monopolistic and Imperfect Competition, Utility Theory, and Bounded Rationality—would not be thought up by other great minds until years later.

It’s also reasonable to assume that the economists and market theorists of that time (probably) had a more limited understanding of competition, pricing, product differentiation, and strategic interaction within markets than the economists and theorists who would emerge 50 to 75 years later.

The experts and great thinkers of the early 20th century lacked both the vast access to information and the theoretical foundation needed to make complex decisions in a stochastic economic environment that we possess today.

To make an already precarious situation even more complex, the U.S. banking system at that time relied heavily on private banks, with no centralized authority capable of setting monetary policy or stabilizing the economy.

Without a central authority to act as overseer, economic conditions were left to the whims of individual banks and a few powerful financiers.

During this time, there was no institution akin to today’s Federal Reserve to intervene in times of crisis by adjusting interest rates or managing the money supply, leaving the nation vulnerable to huge swings in the market.

Jerome Powell wasn’t about to walk through the door with a plan to raise or lower rates in response to economic conditions—no such system existed back then.

This absence of a central regulatory body created a breeding ground for chaos and financial instability, which culminated in the infamous Panic of 1907.

The Panic of 1907 (triggered by a failed attempt to corner the copper market) unleashed a chain reaction of bank runs as depositors scrambled to withdraw their money, fearing the banks would totally collapse.

With no centralized institution to inject liquidity into the economy, private financiers, including the legendary J.P. Morgan, took it upon themselves to stabilize banks and stave off a complete economic breakdown.

This crisis laid bare the urgent need for a centralized framework to manage economic downturns, stabilize liquidity, and prevent future panics.

In the wake of the crisis, policymakers, bankers, and economists convened to study European central banking models, particularly Britain’s Bank of England, which had established effective systems for financial stability.

The Birth of the Fed

In 1913, after extensive deliberation and growing recognition of the need for monetary reform, President Woodrow Wilson signed the Federal Reserve Act into law. This legislation established the Federal Reserve System (the Fed as we know it today), America’s first modern central banking institution, and fiat as we know it was born.

The Fed’s primary task? Stabilize the financial system and manage monetary policy.

Essentially, the Fed was designed to fulfill three primary mandates:

  • Stabilize the financial system: reduce the probability of financial panics.
  • Control monetary policy: issue currency and regulate the money supply.
  • Act as a lender of last resort: provide liquidity to banks during financial crises.

The Road to Hell: Paved with Good Intentions

The Federal Reserve was initially set up with a decentralized approach, with twelve regional banks across the country overseen by the Federal Reserve Board in Washington, D.C.

This structure allowed the Fed to address local economic conditions while maintaining federal oversight, a unique feature intended to respect the independence of private banks while centralizing monetary policy.

Shortly after the Fed was established, World War I erupted in Europe in 1914. Though initially neutral, the United States became a major supplier of arms and goods to the Allies, which spurred economic growth and increased the demand for a stable financial system capable of handling large international transactions.

By 1917, when the U.S. officially entered the war, the Federal Reserve played a pivotal role in financing the war effort. It issued Liberty Bonds and managed credit and interest rates to stabilize inflation, helping to maintain economic order during a time of unprecedented demand.

The Fed’s success in stabilizing the wartime economy reinforced public confidence in its role, though its capacity was still limited by the gold standard, which required all issued currency to be backed by gold reserves.

During the decade following the end of World War I, the U.S. economy experienced dramatic fluctuations, including a brief post-war recession and a period of rapid economic growth.

However, the seeds of economic collapse were sown years earlier, culminating in the Great Depression of the 1930s, which once again exposed the limitations of the Federal Reserve.

Despite its growing influence, the central bank could not prevent the widespread bank failures and economic devastation that swept across the nation.

The strictures of the gold standard, combined with an inability to coordinate a unified monetary policy response, hampered the Fed’s efforts to mitigate the downturn.

Recognizing these systemic weaknesses, President Franklin D. Roosevelt implemented a series of reforms under the Banking Act of 1935 to centralize and strengthen the Federal Reserve’s powers.

The Banking Act of 1935 marked a pivotal transformation for the Fed, centralizing much of its power, granting the Federal Reserve Board enhanced authority over the regional banks, and formally authorizing open market operations as a core tool for managing the money supply.

The Act’s provisions enabled the Fed to more effectively influence credit conditions, interest rates, and liquidity, establishing it as a central pillar in economic governance.

These newly centralized powers positioned the Fed to intervene more decisively, allowing it to play a critical role in economic stabilization and respond dynamically to national and global challenges.

As the 1930s drew to a close, and with the memory of the Great Depression still raw, the United States faced mounting global tensions and economic pressures. The Great Depression had devastated economies worldwide, leading to political instability, rising nationalism, and economic self-interest.

Meanwhile, the European political landscape deteriorated as fascist and totalitarian regimes in Germany, Italy, and Japan aggressively expanded their territories, threatening global peace and challenging U.S. interests abroad.

Although initially isolationist, the U.S. found it increasingly difficult to remain detached from global affairs as Axis powers gained influence.

Economically, the U.S. began supporting Allied powers even before formally entering the war. Programs like the Lend-Lease Act allowed the U.S. to provide arms, resources, and financial support to Allied nations, bolstering their capacity to resist Axis advances.

However, this support required substantial financial resources, drawing heavily on the Fed’s enhanced ability to influence monetary conditions.

Author’s note: The Great Depression lasted from 1929 to 1939. It began with the stock market crash in October 1929 and persisted through the 1930s, with varying degrees of severity, until the economic recovery associated with the onset of World War II.

Fiat Currency: The Backbone of Total & Sustained War 

When the U.S. finally entered World War II in December 1941 (following the attack on Pearl Harbor), the need for large-scale financing to support the war effort became clearer than ever. The Fed’s powers, strengthened by the 1935 reforms, enabled it to manage wartime borrowing by keeping interest rates low and controlling inflation.

By stabilizing the financial environment, the Fed facilitated the issuance of War Bonds and other government securities, which financed military production, troop mobilization, and other war-related expenses. The Fed’s ability to conduct open market operations allowed it to influence the money supply directly, ensuring that credit remained affordable and available for both the government and private sectors essential to wartime production.

Under this new money system, the U.S. government could, with the Fed’s support, expand the money supply to finance massive expenditures without immediately depleting its gold reserves.

Although the U.S. was still technically on the gold standard during WWII, the Fed’s expanded powers allowed for a quasi-fiat approach, in which the money supply could be adjusted flexibly to meet the needs of the war effort.

This set a precedent for using monetary policy as a tool for national objectives, a practice that would become more pronounced in the postwar era.

Author’s Note: The sustained and perpetual nature of modern warfare is a direct result of the economic structures and monetary policies that finance it. Easy money, for example, a byproduct of our fiat system, provides a tremendous incentive for perpetual war, sustaining wartime industries dependent on government contracts. This echoes Dwight D. Eisenhower’s concern that the military industries that prospered in WWII evolved into a “Military Industrial Complex” that drives U.S. foreign policy toward endless, repetitive, and expensive conflicts with no rational end goal or clear objective.

It should come as no surprise, then, that the U.S. has not raised taxes to pay for the Iraq War, the War in Afghanistan, its recent foray into Syria, or its latest proxy war with Russia, despite spending trillions of dollars. With a national debt of $36 trillion—and no intention of ever tying military spending to increased taxation—all this spending on sustained and perpetual war is just a ‘deficit without tears’. This approach enables the United States to maintain global military engagements—such as deploying forces to scores of nations and engaging in sustained conflicts over long timescales—without significant public backlash.

Any reasonable person would assume that people prefer peace to war. Furthermore, while a nation’s people may support government spending on future-oriented investments like infrastructure and preventative healthcare, they are unlikely to support bloody military campaigns in distant lands. Thus, because people prefer peace to war and are aware of how their government spends public funds (aka their money), a government that spends (and overspends) on perpetual and sustained war risks being overthrown.

In Adam Smith’s seminal work, An Inquiry into the Nature and Causes of the Wealth of Nations (1776), he arrived at similar conclusions: 

“Were the expense of war to be defrayed always by a revenue raised within the year, the taxes from which that extraordinary revenue was drawn would last no longer than the war… Wars would in general be more speedily concluded, and less wantonly undertaken. The people feeling, during the continuance of the war, the complete burden of it, would soon grow weary of it, and government, in order to humor them, would not be under the necessity of carrying it on longer than it was necessary to do so.”

In 2016 alone, the United States had a military presence in over 80% of the world’s nations and dropped over 26,000 bombs in seven different countries—an average of three bombs per hour, 24 hours a day, for the entire year. Barack Obama, winner of the Nobel Peace Prize, became the first U.S. president to preside over American war every single day of his presidency.

Back to Back World War Champions

By the time World War II neared its end, the global economy was in disarray. The devastation across Europe and Asia, combined with the massive debts accumulated by nations involved in the war, had left the world in urgent need of financial stability and a reliable framework for reconstruction.

The United States, on the other hand, emerged from WWII with the strongest economy and the largest gold reserves, holding around 20,000 tons of gold, roughly two-thirds of the world’s gold reserves.

This was a significantly stronger economic position than most other countries, giving the U.S. an unparalleled degree of leverage in shaping the world economy at the time.

Asserting its newfound powers, the U.S. sought to leverage this position to establish a new economic order that would both secure its dominance and promote global stability.

USD Becomes the Most Dangerous Weapon Ever Made 

In July 1944, with another World War Championship victory on the horizon, 730 delegates from 44 Allied nations convened at the Mount Washington Hotel in Bretton Woods, New Hampshire, for a historic (and little talked about) conference that would shape international finance for years to come.

This meeting would come to be known as the Bretton Woods Conference, and the resulting agreements would lay the foundation for the postwar financial system and a new international financial order.

The primary objectives of the conference were:

  1. To rebuild war-torn economies.
  2. To promote economic stability.
  3. To stabilize exchange rates.
  4. To prevent the economic instability that had contributed to WWII.
  5. To promote international economic cooperation and prevent future wars.

At the heart of the Bretton Woods system was the creation of a new global monetary framework that would reduce the risk of currency volatility and facilitate international trade.

Notably, the key players and nations at the table were instrumental in shaping the global economic order and establishing a balance of power that would define the postwar era—an order that continues to this day.

The main architects of the Bretton Woods system were British economist John Maynard Keynes and the U.S. Treasury representative Harry Dexter White.

Keynes was one of the most influential economists of the time (and perhaps of all time) and had developed innovative theories on government spending and economic intervention. We will discuss Keynesian theory later in this text.

White, meanwhile, was a powerful figure in the U.S. Treasury with an agenda that aligned with America’s strategic interests.

While most nations were represented at the conference, it was not an equal playing field.

At the conference, the U.S. insisted on making the dollar central to the global financial system, even though Keynes had created a proposal for a new international currency, named the “bancor,” that would prevent any single country from gaining excessive power.

While most other countries initially favored Keynes’s proposal, White and the American delegation strongly opposed this idea, advocating instead for a system based on the U.S. dollar.

The underlying message was clear: the U.S. would use its economic leverage to shape the world’s financial architecture to its advantage.

It’s worth noting that the U.S. held significant sway over the proceedings because it was the world’s economic superpower at the time, with a large trade surplus and around two-thirds of the world’s gold reserves. This gave the World War I and II Champs substantial leverage in shaping the outcome of the conference to suit its own interests.

This dominance allowed the U.S. to dictate many terms, leading some historians to argue that Bretton Woods was an exercise in American “dollar diplomacy“—a strategy that pressured other nations into adopting a dollar-centric system.

While not overt bullying in a political sense, the U.S. leveraged the weakness and indebtedness of its allies to secure their cooperation. Since war-torn nations like the United Kingdom needed American support for reconstruction, they had little choice but to accept the new financial order.

As such, the Bretton Woods Agreement of 1944 was driven largely by the United States, and it established the U.S. dollar as the world’s reserve currency, cementing American economic and military dominance for decades to come.

The agreement established two major institutions:

  1. The International Monetary Fund (IMF): These guys were charged with overseeing exchange rates and providing short-term financial assistance to countries experiencing balance-of-payments issues.
  2. The International Bank for Reconstruction and Development (IBRD), later part of the World Bank, which was focused on long-term reconstruction and development loans to war-ravaged economies.

Note: In addition to the primary institutions listed above, several others were created or directly influenced by Bretton Woods principles:

  1. United Nations (UN) (1945) – Established shortly after Bretton Woods, the UN was created to promote international cooperation and peace, with agencies like UNESCO and UNICEF in support.
  2. General Agreement on Tariffs and Trade (GATT) (1947) – Created to reduce trade barriers, later evolving into the World Trade Organization (WTO) in 1995.
  3. Organization for Economic Co-operation and Development (OECD) (1961) – Originally established as the Organization for European Economic Co-operation (OEEC) in 1948 to administer the Marshall Plan, it later evolved into the OECD, focusing on economic cooperation and policy coordination among its member nations.

The core principle of the Bretton Woods system was a system of fixed exchange rates pegged to the U.S. dollar, which itself was convertible to gold at $35 an ounce. This effectively made the U.S. dollar the world’s reserve currency, as other currencies were indirectly tied to gold through their dollar pegs.

By pegging the dollar to gold at $35 per ounce and tying other currencies to the dollar, the U.S. gained unprecedented control over global finance, enabling it to print money with global backing.

This financial leverage not only funded America’s immediate war efforts but also allowed it to build the largest and most advanced military infrastructure in history, positioning the U.S. as a global superpower.

The Federal Reserve’s role in this system was significant, as it cemented the dollar’s central role in global finance and positioned the Fed as a key player in the international monetary landscape. By facilitating stable exchange rates, the Bretton Woods system aimed to prevent the economic disruptions that had destabilized prewar economies.

This arrangement provided the U.S. with substantial financial leverage, allowing it to project economic power globally while reinforcing its domestic economy through the demand for dollars.

The Bretton Woods system effectively required nations to maintain large reserves of U.S. dollars, which created a steady demand for American currency. As foreign central banks accumulated dollars, they were also entitled to redeem those dollars for gold, at the rate of $35 per ounce, with the Federal Reserve.

With a steady international demand for fiat USD (especially during times of instability or economic uncertainty) the U.S. could run deficits without the immediate risk of currency devaluation, thereby fueling robust economic growth at home.

The Bretton Woods system was stable and promoted foreign investment in the U.S. economy, which helped solidify the American dominance we see today; it also provided the resources needed to develop military capabilities unmatched by any other nation in the history of the world.

Through this financial system, the U.S. created an economic and military juggernaut that shaped the postwar world order, leaving it at the helm of both global security and finance.

The Expansionary “Gold Drain”

After WWII, the U.S. economy entered a period of unprecedented expansion, with strong consumer spending, industrial growth, and government investment.

This expansion, however, soon led to challenges within the Bretton Woods system.

The Fed adjusted its policies to manage inflation and stabilize this growth, influencing interest rates to maintain low unemployment while avoiding economic overheating.

And, in the beginning, Bretton Woods was able to strengthen U.S. economic dominance and allowed it to run trade deficits without immediate consequences, but as the 1960s progressed, the system became increasingly problematic.

As European and Asian economies recovered and grew, they began to redeem their dollar reserves for gold, leading to a “gold drain” from U.S. reserves.

The gold drain accelerated as more and more foreign countries, particularly France, began exchanging their dollars for gold. French President Charles de Gaulle was particularly vocal in criticizing the U.S. for abusing its “exorbitant privilege” of printing the world’s reserve currency.

This situation reached a critical point by the 1970s, as the volume of dollars held abroad began to exceed the gold held by the U.S. government.

These mounting economic pressures, compounded by the costs of the Vietnam War and rising inflation, made maintaining the dollar’s gold convertibility unsustainable.

So, in 1971, President Richard Nixon made the decision to “close the gold window”–effectively taking us off the gold standard, a move known as the “Nixon Shock.”

This effectively ended the Bretton Woods system and shifted the U.S. to a 100% fiat currency system.

By shifting to a fully fiat currency system, the U.S. retained control over its currency, giving the Federal Reserve greater latitude in managing monetary policy.

Freed from the constraints of the gold standard, the Fed gained greater flexibility (and more power) to manage the economy, allowing it to adjust interest rates and the money supply more freely to combat inflation and address domestic economic challenges.

Other nations gradually followed suit, allowing their currencies to float against the dollar. The transition marked the beginning of a new era in global finance, with floating exchange rates and the dollar cemented as the world’s primary reserve currency, remaining central to international trade and investment.

We are now officially living in a fiat world.

For the uninitiated, fiat is a government-issued currency that is not backed by a physical commodity such as gold or silver, nor by any other tangible asset.

Fiat currency is typically designated (and backed) by the issuing government to be legal tender and is authorized by government regulation.

Fiat currency is money that has no intrinsic value (it isn’t backed by anything); instead, its value comes from government decree and public trust in its stability and purchasing power.

It has value only because the individuals who use it as a unit of account (or, in the case of currency, a medium of exchange) agree on its value.

In other words, people trust that it will be accepted by merchants and other people as a means of payment for liabilities.

The U.S. dollar, as discussed above, became a purely fiat currency in 1971 when President Nixon ended the dollar’s convertibility to gold, decoupling the dollar from gold and ending the Bretton Woods system.

This move allowed the Federal Reserve and the U.S. government control over the money supply, enabling them to rule with an iron fist and create money “out of thin air” without needing to hold physical reserves.

In our fiat system, the Fed can expand money to achieve certain economic goals, and it can control the supply of money through monetary policy tools (e.g., adjusting interest rates, buying or selling government securities, and setting reserve requirements for banks).

While there are some benefits to having “The Fed”, one of the significant suboptimalities is that it creates inflation.

I know there has been a lot of confusion on this topic in American political and social circles in recent years, but inflation is caused by one primary mechanism and can be compounded by several other sub-mechanisms.

Before we get to the fancy stuff, however, we must first define what inflation means.

Inflation, in simple terms, is the decline in purchasing power of a currency, or the devaluation of that currency, which generally shows up as rising prices for goods and services.

Here’s how inflation is created in a fiat system:

The primary mechanism that causes inflation is money supply expansion (most notably without a corresponding increase in the production of goods and services).

When the Fed prints money and/or injects funds into the economy (e.g., stimulus payments, funding wars, quantitative easing), more dollars are circulating. If the supply of goods and services doesn’t increase at the same pace, the result is “too much money chasing too few goods,” and prices go up as people compete to buy the same amount of goods with more money.
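
The logic of “too much money chasing too few goods” is often summarized by the textbook quantity-of-money identity, MV = PQ. The following is a minimal sketch in Python, using made-up numbers rather than actual Fed or BLS data, showing how expanding the money supply by 20% while output and velocity stay flat translates directly into a 20% higher price level:

```python
# Minimal sketch of the quantity theory of money (MV = PQ).
# All numbers are purely illustrative, not actual economic data.
money_supply = 1_000     # M: units of currency in circulation
velocity = 2.0           # V: times each unit is spent per year
real_output = 500        # Q: goods and services produced

price_level = money_supply * velocity / real_output          # P = MV / Q
# Expand the money supply 20% while output and velocity stay flat:
new_price_level = (money_supply * 1.20) * velocity / real_output

inflation = (new_price_level / price_level - 1) * 100
print(f"Price level rises from {price_level:.2f} to {new_price_level:.2f} "
      f"(~{inflation:.0f}% inflation)")   # ~20% inflation
```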

Inflation can occur in several ways:

  1. The Fed expands the money supply by:
    • Lowering interest rates (which encourages borrowing and spending).
    • Conducting open-market operations (buying government securities).
    • Engaging in quantitative easing, where they buy a broader range of assets, further increasing liquidity.
  2. Government spending & debt: When governments finance spending by borrowing heavily or creating money, they increase the money supply. This is especially true if the central bank funds government debt by buying bonds, essentially monetizing the debt and creating new money. Some examples include wars, infrastructure, interest on the national debt, stimulus packages, federal assistance, and subsidies.
  3. Credit Expansion: When banks increase lending (see point #1) they effectively create more money in the form of credit. This additional purchasing power drives up demand in the economy, which can lead to higher prices if supply doesn’t keep pace.
  4. Policy Choices in Times of Crisis: In economic crises, the government and central bank may flood the economy with liquidity to prevent collapse, as seen during the 2008 financial crisis and the COVID-19 pandemic. While these measures may stabilize the economy short-term, they expand the money supply significantly, risking inflationary pressures later.
  5. Supply Shocks: While not strictly an increase in the money supply, a supply shock (e.g., oil shortages or disruptions in global supply chains) can cause inflation by reducing the supply of goods, creating upward pressure on prices. This type of inflation (cost-push inflation) is more temporary but can lead to higher inflation expectations if prolonged.

Author’s note: When government spending is financed by debt, as is the case here in America, The Fed may buy government bonds, essentially “printing money” to support that debt. This increases the money supply, often leading to demand-pull inflation, as more dollars are chasing the same amount of goods and services. So, while government spending can be beneficial in various ways, it will always contribute to inflationary pressure when growth in government outlays exceeds economic productivity or when it requires ongoing debt financing. The combination of high demand, low supply, and expanded money supply will push prices upward, resulting in a classic case of nasty inflation. 

Inflation: Destroyer of Worlds

Inflation can be a tricky and often difficult concept to wrap your mind around, especially when you are just starting out as an economic theorist (or what I like to call an econ yellow belt).

At Clemson, I had an economics professor who simplified the concept of inflation (and how it can wipe out a society) using two historical examples.

The first example is the story of the ancient monetary system based on Rai stones on Yap Island, and the second is the fall of Rome.

First, we will discuss Yap Island.

One of the most interesting case studies of money I have ever come across is the primitive money system of Rai stones on Yap Island, located in present-day Micronesia in the Western Pacific.

What is a rai stone?

A Rai stone is a large, disk-shaped piece of limestone that was formerly used as currency on the Micronesian island of Yap. These stones, ranging in size from a few inches to several feet in diameter, are characterized by a hole in the center, which allowed them to be transported using wooden poles. Despite their massive size and weight—some rai stones can weigh several tons—they served as a sophisticated and abstract form of money.

These massive stones served as a form of currency not for daily transactions but for significant social and ceremonial exchanges.

Their story, however, also illustrates how an external force—inflation driven by human action—can disrupt a carefully balanced economic system, producing one of the earliest documented examples of inflation caused by the overproduction of a currency.

Rai stones were quarried from Palau, hundreds of miles away, and transported back to Yap in perilous voyages. Their value depended not just on size but also on factors like craftsmanship, historical significance, and the degree of difficulty of transportation.

Ownership was socially acknowledged rather than requiring physical possession; a stone could remain in place while its “owner” used it as payment by transferring title through communal agreement.

This system worked because it was underpinned by trust and scarcity. The difficulty of quarrying and transporting rai stones kept their numbers limited, and each stone’s history added to its cultural value. Even stones lost at sea could retain value, as long as the community remembered the story of their loss.

Author’s note: The exact date when Rai stones were first used on Yap Island is not precisely known, but it is generally believed that their use as a form of money began around 500-800 CE, possibly earlier. Archaeological and historical research suggests that the Yapese started quarrying limestone from Palau during this period, indicating the early origins of the rai stone system.

In the late 19th century, an Irish-American sea captain named David O’Keefe stumbled upon Yap after being shipwrecked. Observing the Yapese reliance on Rai stones, he saw an opportunity to make a quick buck by leveraging his access to modern tools and ships. O’Keefe began trading iron tools and other Western goods with the Yapese in exchange for their labor to quarry Rai stones more efficiently.

Using modern technology and his ships, O’Keefe was able to transport a much greater quantity of Rai stones from Palau to Yap than had ever been possible using traditional methods.

Previously, the difficulty and danger of acquiring rai stones had maintained their scarcity and high value. O’Keefe’s operations drastically reduced the labor and risk involved, and he was able to flood the Yapese economy with new stones.

The injection of new Rai stones into the Rai stone “money supply” destabilized the local economy, undermining their scarcity and value.

By increasing the quantity of stones available, O’Keefe had undermined their scarcity, which was central to their value. As the supply expanded, the relative worth of individual stones diminished. This inflation eroded the traditional monetary system, as people began to lose confidence in the stones’ ability to function as a reliable store of value.

In the end, the Yapese became increasingly reliant on O’Keefe’s Western goods, further integrating their economy into a globalized trade network. Over time, the Rai stones’ role as functional money decreased and the stones became worthless, relegated instead to ceremonial and symbolic uses.

This story serves as a timeless reminder of an enduring economic rule: when the supply of money exceeds demand, its purchasing power inevitably declines—a principle that remains evident in our modern currency systems today.

The Collapse of Rome

The Roman Empire’s collapse, on the other hand, driven by systemic inflation and currency debasement, presents a paradigmatic case study in fiscal deterioration.

In the beginning of this self-destructive journey, Rome had a robust monetary framework, grounded in the silver denarius and gold aureus, each backed by tangible metal content and widely trusted as reliable stores of value across every corner of the Empire.

The denarius, with 3.9 grams of silver, was first introduced around 211 BCE during the Roman Republic, as a standardized silver coin to support trade and serve as a reliable everyday currency.

The aureus followed much later, standardized by Julius Caesar around 46 BCE as a high-value gold coin intended for substantial transactions, especially for military and international trade; and as a store of wealth.

Together, the denarius and aureus allowed Rome to manage a robust economy, with the denarius used for daily exchanges and the aureus reserved for larger, higher-value transactions.

The aureus, with 8 grams of gold, standardized transactions and cemented economic interconnectivity across the Mediterranean.

Rome enjoyed economic stability, despite political turmoil, until the reign of Nero (54–68 AD), when practices like coin clipping and debasement began to undermine the currency’s value.

Nero was the first emperor to engage in “coin clipping,” a practice whereby the state—and sometimes even common plebs—would shave small amounts of metal from the edges of coins, keeping the clippings to melt down into new coins.

These clipped coins circulated at full face value but were visibly smaller over time, eroding public confidence and contributing to inflation.

To make things worse, Nero also introduced the concept of debasement, a state-led process of melting down coins and mixing cheaper metals (fillers), such as copper, into the silver.

These debased coins contained less silver, allowing the government to mint more coins from the same amount of precious metal (and finance the Empire’s ballooning expenditures) thereby funding military campaigns and public projects without having to raise taxes.

I’m sure at some point the Roman government was celebrating all their success and marveling at their genius.

“How come nobody has thought of this before!” “Getting rich is so easy!” they may have thought to themselves.

Little did they know that this money hack did nothing but increase the money supply at the cost of each coin’s intrinsic value, creating the first well-documented instance of government-created, empire-destroying inflation.

The reduction of silver content from 3.9 grams to 3.4 grams, though initially modest, catalyzed a systemic shift toward perpetual debasement: Nero’s example became a template for successive emperors, who perpetuated and intensified the practice to fund escalating military campaigns and administrative costs without imposing unpopular tax hikes.

The resulting (and inevitable) inflation eroded public confidence in the currency, diminished its purchasing power, and led to widespread hoarding of gold, further exacerbating monetary contraction.

The cumulative effect was a destabilization of the Roman economy: trade diminished as currency value fluctuated unpredictably, tax revenues declined, and socioeconomic stratification intensified as wealth consolidated in landownership rather than fluid capital.

These economic maladies, intertwined with political fragmentation and external pressures from incursions by non-Roman entities, culminated in the disintegration of centralized imperial authority.

In the discourse of economic historiography, the Roman Empire serves as a paradigmatic example of how fiscal and monetary policies can precipitate systemic decline.

It’s arguably the single most important historical lesson on the critical relationship between monetary integrity and state stability, demonstrating that protracted fiscal imprudence and inflationary policies can undermine even the greatest of empires.

Rome was (probably) the greatest empire to exist in the ancient world, but in the end, coin clipping and debasement gradually eroded the purchasing power of Roman currency, creating an inflation so significant it eventually destabilized the economy and led to the eventual destruction of the Empire.

Could the same thing happen here in America?

History often repeats itself, but in such cunning disguise that we never detect the similarities until it’s too late.

There are parallels we can draw between the debasement of the Roman coinage and the debasement of modern fiat currencies.

The United States appears to be following a trajectory similar to that of ancient Rome.

Only our techno-accelerated path is going a thousand times faster.

Where along the curve are we?

Are we at the beginning, the middle or the end?

What are the choices going to be for us?

What’s the end game?

Will we descend into civil war, foreign invasion, and economic chaos, followed by a long period of civilizational decline, like the Romans?

Or will it be more like the recent British Empire example, where financial and military power recedes and yet the nation still remains a significant player in the world (although dethroned as the top dog)?

And if you think the way of Rome can’t happen here in the States, let’s take a quick look at the dollar’s buying power over time.

Let’s rewind back to 1913, where our story began.

If you purchased an item for $100 in 1913, then today in 2024, that same item would cost $3,184.86.

In other words, $100 in 1913 is equivalent in purchasing power to $3,184.86 in 2024.

That’s a 3084.9% cumulative rate of inflation.
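
That cumulative figure is simply the ratio of the two price levels minus one. Here is a quick sketch of the arithmetic, taking the numbers quoted above as given:

```python
# Cumulative inflation is the percentage change in the cost of the same item.
price_1913 = 100.00
price_2024 = 3184.86

cumulative_inflation = (price_2024 / price_1913 - 1) * 100
print(f"Cumulative inflation, 1913-2024: {cumulative_inflation:.1f}%")  # ~3084.9%
```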

[Chart: Buying power of the U.S. dollar over time]

When converted to the value of one US dollar in 2020, goods and services that cost $1 in 1700 would cost just over $63 in 2020; in other words, one dollar in 1700 had roughly 63 times the purchasing power it has today.

To illustrate the devaluation of the dollar, an item that cost $50 in 1970 would theoretically cost $335.50 in 2020 (50 x 6.71 = 335.5), although it is important to remember that the prices of individual goods and services inflate at different rates than the overall average, so this chart should only be used as a guide.

[Chart: Purchasing power of one dollar, 1900 to 2024]

$1 in 1900 is equivalent in purchasing power to about $37.54 today, an increase of $36.54 over 124 years. The dollar had an average inflation rate of 2.97% per year between 1900 and today, producing a cumulative price increase of 3,653.58%.

According to the Bureau of Labor Statistics consumer price index (which we will discuss later), today’s prices are 37.54 times as high as average prices in 1900.

The inflation rate in 1900 was 1.20%.

The current inflation rate compared to the end of last year is now 2.44%.

If this number holds, $1 today will be equivalent in buying power to $1.02 next year as the dollar continues to lose value over time.
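
These figures all tie together through simple compounding. The sketch below, taking the rates quoted above as given, checks that an average annual rate of roughly 2.97% over 124 years multiplies out to approximately the 37.5x price increase cited earlier, and that 2.44% annual inflation turns $1 of buying power today into about $1.02 required next year:

```python
# Relating the average annual inflation rate to the cumulative price multiple.
avg_annual_rate = 0.0297   # ~2.97% per year, 1900-2024 (figure quoted above)
years = 124

price_multiple = (1 + avg_annual_rate) ** years
print(f"Price multiple after {years} years: {price_multiple:.1f}x")  # ~37.7x

# The one-year-ahead figure at the current 2.44% rate:
print(f"$1 today requires about ${1 * 1.0244:.2f} next year")        # ~$1.02
```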

[Chart: Value of one dollar adjusted for inflation]

This chart shows the buying power equivalence of $1 in 1900 (price index tracking began in 1635). For example, if you started with $1, you would need to end with $37.54 today in order to adjust for inflation (sometimes referred to as “beating inflation”).

Alas, whenever a country goes off the gold standard to a pure fiat system, it becomes irresistible to just keep the printers pumping out more and more money.

Sure, it creates flexibility in monetary policy but it almost always leads to gross misuse.

In modern times, money is always issued along with debt in the same amounts.

The results are often disastrous, with Venezuela, Zimbabwe, and Germany in the 1920s serving as prime examples.

The currency eventually becomes worthless.

But the Ponzi scheme can go on for a long time before that happens.

The US dollar has already lost roughly 97% of its value since 1913, and the devaluation continues to accelerate day after day, year after year.

How long can a fiat currency last without collapsing into hyperinflation?

The answer depends heavily on how much fiscal discipline the country has, how strong its economic policies are, and how much confidence the public has in the currency.

Fiat Foods

Money is an integral component of every economic transaction and, as such, exerts massive influence on nearly every dimension of human existence.

People are free to choose anything as a money, but over time, some things will function better.

Cattle were a good choice at some point. Buckskins (aka bucks), seashells, glass beads, and limestones, too.

The things that tend to function better have better marketability, and they reward their users by serving the function of money better.

Metals were a good choice while they remained difficult to produce.

But as base metals became easier and cheaper to produce, they lost their scarcity, and only the precious metals (gold, silver, etc.) continued to function as money.

Later, only the most precious of metals were able to survive the test of time, and gold emerged as the winner.

Now, in modern times, we use imaginary money.

And the mechanisms of this fiat currency, outlined in the previous section of this paper, create several notable distortions within the structure and dynamics of food markets.

Bear with me here, because it will take some time to connect all of the dots to human health, but I promise they do connect.

In the next section, I will provide a focused analysis of two principal distortions: first, how fiat-induced incentives that elevate time preference influence farmland production output and consumer diet decisions; and second, how fiat-driven government financing enables an interventionist role in the food market and how this governmental overreach/expansion shapes agricultural policy, national dietary guidelines, and food subsidies.

Let’s pivot back to our old friend, President Nixon. 

When President Nixon closed the gold window in 1971 (as discussed above), thereby taking us off the gold standard, he relieved the U.S. government of the constraint of having to redeem its fiat in physical gold, granting the government a wider scope for inflationary expansion.

Of course, this expansion in the money supply inevitably drove up the prices of goods and services, making inflation a defining characteristic of the global economy throughout the 1970s.

As runaway inflation inevitably accelerated, the U.S. government (like every inflationist regime in history) blamed the rising costs on a variety of political factors—such as the Arab oil embargo, bad actors in international markets, and scarcity of natural resources—deflecting attention and blame away from the true cause: the inflationary impact of its own monetary policies.

You may be wondering by now, “Jamin, it seems that inflation is a very bad thing for everyone involved, so why don’t governments ever learn their lesson and stop introducing inflationary policies?”

That’s a great question; I’m glad you asked.

Well, the answer, like most answers in the world of economic tradeoffs, is: it’s not that simple.

You see, each time a government expands credit and spending, it creates a new group that depends on those funds.

In turn, this group leverages its political influence to preserve and even increase the spending, creating a self-reinforcing cycle that makes it extremely difficult for any politician to roll back—even if they really want to do so.

Sure, they’ll make plenty of false promises—what do you expect? They’re politicians. But remember, we’ve been living in a fiat world since 1971, and in a fiat-based system, the path to political success lies in exploiting the money supply (i.e., printing money), not in constraining it.

So, as food prices escalated into a massive political concern back in the 1970s, any attempt to control them by attacking and reversing inflation itself was largely abandoned, since doing so would have meant unwinding the very inflationary policies that had necessitated closing the gold exchange window in the first place.

Instead, the government chose to implement a strategy of central planning in the food market, resulting in the disastrous consequences that continue to this day.

Earl Butz: “Get Big or Get Out”

President Richard Nixon’s appointment of Earl Butz as Secretary of Agriculture in 1971 marked a major turning point in U.S. agriculture.

Butz was an agronomist with ties to major agribusiness corporations.

His policies, though transformative and controversial, emphasized large-scale, high-yield monoculture farming and laid the foundation for industrial agriculture in the United States.

Butz’s tenure introduced significant changes that reshaped not only the economics of farming but also the social and environmental landscapes of American agriculture.

His primary objective (of course) was to reduce food prices, and his approach was brutally direct: he advised farmers to “get big or get out,” as low interest rates flooded farmers with capital to intensify their productivity.

His philosophy encouraged farmers to expand their operations, adopt intensive farming practices, and focus on growing a few select cash crops like corn and soybeans, incentivizing their production.

Under Butz’s direction, the USDA promoted practices to maximize output, such as:

  • Crop Specialization and Monoculture: Butz’s policies incentivized farmers to focus on high-yield crops like corn and soybeans, which could be produced at scale. This approach led to a shift away from the traditional, diversified farms, which grew multiple crops and often included livestock.
  • Government Subsidies and Price Supports: Butz restructured agricultural subsidies to maximize production, encouraging overproduction and ensuring farmers financial support regardless of market demand. These subsidies stabilized revenue for large farms but intensified financial pressure on smaller farms.
  • Increased Mechanization and Fertilizer Use: Butz pushed heavy mechanization and the use of synthetic fertilizers and pesticides, boosting yields but contributing to environmental issues such as soil depletion, water pollution, and biodiversity loss.

This “get big or get out” policy was highly advantageous for large-scale producers, but it was a death blow for small farms.

Federal subsidies and guaranteed prices drove a shift from diversified, family-owned farms to highly mechanized, single-crop operations. The approach prioritized consolidation and efficiency, benefiting large agribusinesses capable of achieving economies of scale, but it pushed smaller, diversified farms out of the market.

[Chart: Average farm size in the United States]

There were about 1.9 million farms in the United States in 2023, down from 2.2 million in 2007. While the average farm size has increased, the number of individual farms has decreased.

The policy shift marked a significant turn toward industrial-scale agriculture, emphasizing high-yield, monoculture farming to feed a growing global market.

As such, small and medium-sized farmers found it increasingly difficult to survive in an environment that rewarded scale.

Most small farmers were unable to compete or manage high debt, and were often forced out of business or absorbed by larger agribusinesses that could take advantage of economies of scale and had access to the capital needed for mechanization and high-input farming.

As a result, family farms began to decline, leading to widespread consolidation of farmland by larger corporate farming operations. Rural communities suffered as local economies contracted and traditional farming jobs disappeared.

It marked the end of small-scale agriculture and forced small farmers to sell their land to large corporations, accelerating the consolidation of industrial food production.

total area of farmland in the united states

From 2000 onwards, the total area of land in U.S. farms has decreased annually. From 2000 to 2023, the total farmland area decreased by over 66 million acres, reaching a total of 878.6 million acres in 2023.

number of farms in the united states

Not only has the land for farming been decreasing in the U.S., but so has the total number of farms. From 2000 to 2023, the number of farms in the U.S. decreased from ~2.17 million to ~1.9 million.

And while the increased production did lead to lower food prices, it came at a significant cost:

  1. A huge decline in small-scale agriculture
  2. The degradation of soil quality
  3. The deterioration of nutritional value in American foods

This transition led to increased output but came at the cost of rural community health, soil depletion, and greater reliance on synthetic fertilizers and pesticides.

Butz’s emphasis on overproduction also had lasting consequences for the American diet and the food industry.

The massive surpluses of corn and soybeans led to the rise of low-cost, processed foods that relied heavily on these crops.

High-fructose corn syrup, corn-fed livestock, and seed oils became staples in the American food supply, contributing to the expansion of processed and fast foods and the corresponding decline in public health.

While Butz’s policies significantly boosted output, positioning the U.S. as an agricultural powerhouse and enabling large-scale food production, they also entrenched a system focused on volume over sustainability, shifting food production into a highly industrialized sector dominated by large corporations.

This shift came at a cost, with trade-offs affecting American soil health, diminishing the nutritional quality of foods, and ultimately impacting public health—consequences that would affect generations to come.

Alas, as I pointed out earlier, there is never absolute victory, there are only tradeoffs.

You see, using industrial machines on a massive scale can reduce the cost of foods, which was a key objective of Butz’s policies.

Mass production can increase the size, volume, and often the sugar content of food, but it’s a lot harder to increase the nutritional content of that food, especially as the soil gets depleted of nutrients from intensive and repetitive monocropping.

When farmers practice monocropping, the soil is depleted of nutrients over time, requiring increasing inputs of synthetic fertilizers to restore basic nutrient levels.

This creates a destructive death cycle of soil degradation and dependency on artificial inputs, which is ultimately unsustainable.

In parallel (and near perfect correlation) with the decline in the quality of foods recommended by the government, there has been a similar deterioration in the quality of foods included in the government’s inflation metric, the Consumer Price Index (CPI)—an invalid mathematical measure or statistical construct that, despite its severe flaws, is nonetheless meticulously tracked and monitored by policymakers.

For all intents and purposes (and if you are a serious economist), the CPI is a make-believe metric that pretends to measure and track over time and space the cost of an “average basket” of consumer goods purchased by the average household.

By observing and tracking price fluctuations in this basket, government statisticians believe they can accurately measure inflation levels.

The only way to agree that this is true is to have no understanding of how math works.

One must have a total fundamental disregard for the complexities of mathematical accuracy and what it means to have a “meaningful measurement”.

If you have made it this far, you are very likely a being of above average intelligence, but you don’t need to be an economics scholar to figure some of this stuff out.

A lot of basic economics often comes down to common sense.

For example, you don’t need to sit through a 90-minute macroeconomics lecture to understand that foods with high nutrition content will cost more than foods with low nutritional content.

And, as the prices of high nutrition foods increase, consumers are inevitably forced to replace them with cheaper, lower-quality alternatives.

This analysis does not require high level mathematics, it’s simply common sense.

As a common plebeian seeking to understand inflation, simply by observing shifts in purchasing behavior (such as cheaper foods becoming a more prominent part of the 'basket of goods'), you can see, or at least reasonably assume, that the true effect of inflation is massively understated.

For example, let’s imagine you have a daily budget of $20, and you spend the entire budget on a delicious steak that provides all the daily nutrition you need for the day. In this very simple use case, the CPI reflects a $20 consumer basket of goods.

Now, if hyperinflation causes the price of the steak to skyrocket to $100 while your daily income remains fixed at $20, what happens to the price of your basket of goods?

It cannot magically increase because you cannot afford a $100 steak with a $20 steak budget.

So, the cost of the 'basket' cannot increase by 5X; you simply can't afford to buy steak anymore.

Instead, you and most others will seek cheaper, often lower-quality alternatives.

So, you make the rational decision to replace the steak with the chemically processed shitstorm that is a soy or lab meat burger for $20.

If you do this, like magic, the CPI somehow shows zero inflation.

This phenomenon highlights a fundamental and critical flaw in the CPI: since it tracks consumer spending, which is limited by price, it fails to capture the true erosion in purchasing power.

Essentially, it’s a lagging indicator which does not account for subjective value changes and the resulting substitutions in consumer behavior, causing it to understate true inflation’s impact on individual purchasing power and quality of life.

And as prices go up, consumer spending does not increase proportionately but rather shifts toward lower-quality goods.

That said, much of the true rise in the cost of living shows up as a decline in product quality, which is never fully reflected in the CPI. It cannot be reflected in the price of the average "basket of goods" because whatever you put in that basket as a consumer is determined by changes in price.
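To make the substitution effect concrete, here is a minimal sketch in Python using the hypothetical steak-and-burger numbers from the example above (the $20 budget, the $100 post-inflation steak, and the $20 substitute are illustrative assumptions, not real CPI data). It compares the price change of the original item against the price change of the item the consumer actually ends up buying.

```python
# Minimal sketch of CPI substitution bias, using the hypothetical
# steak-vs-burger numbers from the example above.

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

steak_p0, steak_p1 = 20.0, 100.0    # steak price before and after inflation
burger_p1 = 20.0                    # cheap substitute the consumer switches to

# Fixed-basket view: keep measuring the original item (steak).
fixed_basket_inflation = pct_change(steak_p0, steak_p1)

# Substitution view: the index follows what the consumer actually buys,
# and the consumer has switched from steak to the $20 burger.
substituted_basket_inflation = pct_change(steak_p0, burger_p1)

print(f"Fixed basket (still steak):      {fixed_basket_inflation:.0f}% inflation")
print(f"Substituted basket (now burger): {substituted_basket_inflation:.0f}% inflation")
# Output:
# Fixed basket (still steak):      400% inflation
# Substituted basket (now burger): 0% inflation
```

The substituted basket registers no inflation at all, even though the consumer has lost both the steak and the nutrition that came with it, which is precisely the blind spot described above.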

So, for the uninitiated, now you fully understand how and why prices continue to rise while the CPI remains within the politically convenient 2–3% inflation target.

As long as consumers are content to swap delicious steaks for industrial waste sludge burger substitutes, the CPI will continue to paint an artificially stable picture of inflation.

You don't have to be a conspiracy theorist to "see" that the shift toward substituting industrial waste sludge for real food has helped the U.S. government understate and downplay the extent of the destruction in the value of the U.S. dollar in key measurements like the Consumer Price Index (CPI).

By subsidizing the production of the cheapest food options and recommending them to Americans as healthy, ideal dietary choices, the apparent scale of price increases and currency devaluation is lowered significantly.

When you look at the evolution of U.S. dietary guidelines since the 1970s, you’ll notice a distinct and continuous decline in the recommendation of meat, coupled with an increase in the recommendations of grains, legumes, and other nutritionally deficient foods that benefit from industrial economies of scale.

This trend highlights a calculated move toward inexpensive, mass-produced foods that artificially stabilize inflation metrics while cleverly hiding the true decline in food quality and customer purchasing power.

Bottom Line: the industrialization of farming has led to the rise of large conglomerates (e.g., Cargill, Archer Daniels Midland, Tyson Foods, Bunge Limited, Perdue Farms) which wield substantial political influence in the U.S., allowing them to effectively lobby for expanded subsidies, shape regulation, and influence dietary guidelines in ways that give them a competitive advantage.

Government Cheese & Diet Propaganda

Now, you may be starting to connect the dots between monetary economics and nutrition.

As the government transitioned away from the gold standard to fiat currency, it marked a fundamental shift away from the classical liberal era and accelerated society toward a centrally planned model that favored extensive governmental control.

As such, the government now plays a massive role in food production and dietary guidelines, giving them a significant amount of control over many aspects of individual life.

This marks a fundamental shift away from trends during La Belle Époque (one of history’s most transformative periods) when governments typically refrained from intervening in food production, banning certain substances, or engaging in sustained military conflicts financed by dollar devaluation.

In stark contrast, today’s era is defined by massive government overreach, sustained warfare, and the systemic introduction of chemicals into the food supply.

Since the invention of fiat currency, governments have increasingly tried to regulate aspects of private life, with food being a prime regulatory target.

The rise of the modern nanny state, where the government cosplays as caretaker and “parent” to its citizens, providing guidance on all aspects of citizens’ lives, would not have been possible under a gold standard.

The reason for this is simple: any government attempting to make centralized decisions for individual problems would do more economic harm than good and would quickly run out of hard money, making this type of operation unsustainable.

Fiat money, however, allows government policy errors to accumulate over long timescales before economic reality finally sets in through the destruction of the currency.

Therefore, it is no accident that the U.S. government introduced dietary guidelines shortly after the Federal Reserve began to solidify its role as America’s overbearing nanny.

The first guideline, targeting children, appeared in 1916, followed by a general guideline the following year, marking the beginning of federal intervention in personal dietary choices.

The weaknesses, deficiencies, and flaws inherent in centrally planned economic decision-making have been well studied by economists of the Austrian school (e.g., Mises, Hayek, Rothbard), who argue that what enables economic production, and what allows for the division of labor, is the ability of individuals to make economic calculations based on their ownership of private property.

Without private property in the means of production, there is no market; without a market, there are no prices; without prices, there is no economic calculation.

When individuals can calculate the costs and benefits of different decisions (based on personal preference), they are able to choose the most productive path to achieve their unique goals.

On the flip side, when decisions for the use of economic resources are made by those who do not own them, accurate calculation of real alternatives and opportunity costs becomes impossible, especially when it concerns the preferences of those who directly use and benefit from the resources.

This disconnect highlights the inherent inefficiencies in central planning, whether in economic production, dietary choices, or broader resource allocation and public policy domains.

Before Homo sapiens developed language, however, human action had to rely on instincts, of which humans possess very few, or on physical direction and manipulation; and learning had to be done through either imitation or internal (implicit) inferences.

However, humans do possess a natural instinct for eating, as anyone observing Americans from just a few minutes old up to 100+ years of age can attest.

Humans have developed and passed down cultural practices and food traditions for thousands of years that act as de facto dietary guidelines, helping people know when and what to eat.

In such a system, individuals are free to draw on ancestral knowledge, study the work of others, and experiment on themselves to achieve specific nutritional goals.

However, in the era of fiat-powered expansive government, even the basic decision of eating is increasingly shaped by state influence.

When the state (aka the government) starts getting overly involved in setting dietary recommendations, medical guidelines, and food subsidies, much like the central planners the Austrian school critiqued, it is impossible for it to make these decisions with the individualized needs of each citizen in mind (and we will discuss bio-individuality later in this text).

The agents who craft these guidelines are, fundamentally, government employees with career trajectories and personal incentives directly tied to the fiat money that pays their salaries and sustains their agencies.

As such, it is not surprising that their ostensibly scientific guidelines are heavily influenced and swayed by political and economic interests and/or pressures.

So, if you are still with me here, there are three primary forces driving government dietary guidelines:

  1. Governments seeking to promote cheap, industrial food substitutes as if they were real food.
  2. Religious movements seeking to massively reduce meat consumption.
  3. Special interest groups trying to increase demand for the high-margin, nutrient-deficient, industrial sludge products cleverly designed to resemble real food.

These three drivers have shaped a dietary landscape aligned more with industrial profit and ideological goals than with the health and well-being of individuals.

Let’s examine the drivers in more detail:

1) Governments seeking to promote cheap, industrial food substitutes as if they were real food.

Three well-documented cases come to mind in which the U.S. government tried to create policies that promote cheap, industrial food substitutes instead of real food:

  • Margarine in place of real butter: During the 20th century, margarine—an inexpensive, industrially-produced fat—was promoted as a healthier alternative to butter. In the 1980s and 1990s, federal dietary guidelines advised Americans to reduce saturated fat intake, and recommended switching from real butter to an industrial sludge like margarine, which was made with partially hydrogenated oils. These oils contained trans-fats, which were later discovered to pose serious health risks, including a higher risk of heart disease. Margarine’s promotion reflected the prevailing views on fat and cholesterol at the time, but it inadvertently led to widespread trans-fat consumption, which is now recognized as harmful.
  • High-fructose corn syrup in processed foods: High-fructose corn syrup (HFCS) is an industrially produced sludge derived from corn that became prevalent in the 1970s after the government implemented subsidies for corn production. The resulting low cost made HFCS an attractive sweetener for food manufacturers. HFCS began to replace real sugar (cane sugar) in various foods and beverages, including soda, snacks, and sauces. Despite limited evidence on the long-term health effects at the time, HFCS became a ubiquitous ingredient. Later research linked excessive HFCS consumption to obesity and metabolic diseases, but it continues to be a prominent ingredient in processed foods due to its cost-effectiveness and government corn subsidies.
  • Fortified Refined Grains in the Dietary Guidelines: U.S. dietary guidelines have consistently pushed grains, including heavily refined industrial grains fortified with vitamins and minerals, as a primary dietary component. The focus on affordability and fortification has made refined grains (e.g., white bread, pasta, and breakfast cereals) a staple in the American diet, and consumers are often tricked into believing these artificially fortified, nutrient-depleted products are actually good for them, even though refined grains lack fiber and other naturally occurring nutrients. This approach has supported the availability of affordable, calorie-dense foods, but often at the expense of overall nutritional quality.

Another interesting case during the same time period (i.e., the period in history when America went off the gold standard) was the strange war the government declared on eggs.

Starting in the 1960s and 1970s, fiat scientists working for the government raised alarms when they suggested that high dietary cholesterol could raise blood cholesterol levels and contribute to heart disease.

Since eggs are naturally high in cholesterol, they became a focal point of attack and criticism—a conclusion that, to an analytical thinker, should immediately raise warnings in your brain of a ‘hasty generalization’ or a ‘post hoc ergo propter hoc’ causal fallacy.

Nevertheless, to the inferior mind, correlation must always equal causation, so the fiat scientists at the U.S. government recommended new dietary guidelines limiting egg consumption.

These guidelines, aimed at reducing “heart disease risk”, led consumers to avoid eggs or seek out cheap egg substitutes, often industrially processed sludge products designed to eliminate cholesterol.

This shift was compounded by agricultural subsidies favoring crops like corn and soy, which made processed foods cheaper than unsubsidized “healthy” whole foods like eggs.

For decades these guidelines were followed almost religiously as scientific law; however, as nutritional science advanced, research revealed that dietary cholesterol has little to no impact on heart disease risk.

Many such cases illustrate how government policies and guidelines have consistently aligned with large-scale farming and industrial food production, highlighting the link between economic interests and public health outcomes.

2) Religious movements seeking to massively reduce meat consumption.

Not every decision the government makes is based on science or sound analysis; many times, decisions are simply influenced by powerful or persuasive special interest groups. One little-studied and little-discussed example is the Seventh-Day Adventist Church, which has maintained a longstanding, 150-year moral crusade against meat.

One of the church’s founders, Ellen G. White (1827-1915), was a prominent religious leader whose writings shaped the church’s doctrine, particularly on health and dietary practices. White advocated for a vegetarian diet, viewing it as morally superior and conducive to spiritual and physical health. She claimed to have “visions” of the evils of meat-eating and preached endlessly against it.

Author’s note: There are reports suggesting White was still eating meat in secret while simultaneously preaching about how evil it was, but such claims are difficult to prove.

Nevertheless, White’s advocacy for plant-based diets and her moral opposition to meat consumption became central to Adventist beliefs, leading the church to promote vegetarianism as a healthier, ethically superior lifestyle.

In a fiat currency system, such as the case today, the ability to shape political processes can translate into substantial influence over national agricultural and dietary guidelines.

As such, White's propaganda and influence expanded beyond church walls through Adventist-established hospitals, universities, and health organizations such as the American Dietetic Association, particularly those in "Blue Zone" communities where plant-based diets are widely adopted.

For years, Adventists have lobbied in favor of plant-based nutrition.

The church’s involvement in medical and public health fields, including the Adventist Health System and Loma Linda University, has allowed it to (1) promote vegetarianism within mainstream health and wellness discussions; (2) have their research cited in prominent health journals; and (3) influence consumer attitudes and shape public perception and policy.

I have no problem (ethically, morally, personally, etc.) with anyone, religious or otherwise, following whatever diet their visions tell them to follow, but conflict and chaos always seem to arise when these folks try to impose their views on others, or worse, use them to influence broader public policy.

Author’s note: The American Dietetic Association (now the Academy of Nutrition and Dietetics), an organization which to this day holds significant influence over government diet policy, and more importantly, is the body responsible for licensing practicing dietitians, was co-founded in 1917 by Lenna Cooper, who was also a member of the Seventh-Day Adventist Church. During World War I, she was a protégé of Dr. John Harvey Kellogg at the Battle Creek Sanitarium, an institution with strong Adventist ties advocating for vegetarian diets and plant-based nutrition.

Case in point, anyone caught dishing out dietary advice without a license from the ADA risks being thrown in jail or hit with hefty fines.

The impact of this policy cannot be overstated: it enforces a government-backed monopoly that has, for generations, allowed a religiously motivated agenda—rooted in minimal scientific evidence—to dictate what is permissible dietary advice, which has distorted many generations’ understanding of what a true health food is.

3) Special interest groups trying to increase demand for the high-margin, nutrient-deficient, industrial sludge products cleverly designed to resemble real food.

Even more concerning is the ADA’s role in shaping dietary guidelines taught in nutrition and medical schools around the world, meaning it has influenced how nutritionists and doctors have (mis)understood nutrition for nearly 100 years.

As a result, the vast majority of people, including trained professionals like doctors and nutritionists, now hold the belief that animal fat is harmful, while grains are universally healthy, necessary, and safe.

Meanwhile, as our so-called knowledge of health and nutrition increases on one axis, the overall health of the general population declines on the other.

Therefore, it should come as no surprise that the ADA, like all other principal institutions financed by government fiat money, was established in 1917, around the same time as The Fed.

There are countless other organizations responsible for pushing subpar "research" that has been adopted and propagandized by advocates of industrial agriculture and meat reduction (the Adventist Health System is a primary example), pushing their Puritanical moralism and strange visions on a species that has thrived for over 100,000 years by eating fatty acids and animal proteins.

The Soy Information Center proudly proclaims on its website: "No single group in America has done more to pioneer the use of soyfoods than the Seventh-day Adventists, who advocate a healthful vegetarian diet. Their great contribution has been made both by individuals (such as Dr. J.H. Kellogg, Dr. Harry W. Miller, T.A. Van Gundy, Jethro Kloss, Dorothea Van Gundy Jones, Philip Chen) and by soyfoods-producing companies (including La Sierra Foods, Madison Foods, Loma Linda Foods, and Worthington Foods). All of their work can be traced back to the influence of one remarkable woman, Ellen G. White."

In a rational world, the messianic anti-meat crusade might have been dismissed, but it found a receptive audience in the agricultural-industrial complex.

This is America, after all, and there is money to be made. The crops they chose to replace meat in the Adventists’ visionary agenda were well-suited for cheap, large-scale production, making the partnership a match made in heaven.

Agroindustry would make massive profits from producing these inexpensive crops, governments could downplay inflation as citizens replaced nutritious meat with cheap sludge alternatives, and the Adventists' crusade against meat would provide the mystical, romantic vision that would make this cultish mass poisoning appear as if it were a spiritual advancement for humanity, even as it indirectly contributed to the chronic diseases and deaths of millions.

Bootleggers and Baptists

The alignment of interests promoting industrial agriculture's mass-scale, low-nutrition products exemplifies the "Bootleggers and Baptists" dynamic in special interest politics, as described by legendary Clemson economist Bruce Yandle.

Yandle was the first to put forth the story of The Bootlegger and the Baptist, which describes how economic and ethical interests often form an alliance with one another to promote regulation, even though the two groups would never interact otherwise.

While Baptist ministers were out preaching about the evils of alcohol, priming the public to accept Prohibition, bootleggers quietly lobbied politicians for those very restrictions, knowing their profits would increase with the severity of the restrictions on legal alcohol sales.

This pattern recurs frequently in public policy: a sanctimonious, quasi-religious moral crusade advocates for policies whose primary beneficiaries are special interest groups.

The dynamic is both self-sustaining and reinforcing, requiring no overt collusion; the “Bootleggers” and “Baptists” always seem to push in the same direction, helping each other, amplifying and supporting each other’s efforts.

As you can probably tell by now, the pieces to our econopathogenic puzzle are beginning to take form.

Fiat inflation has simultaneously driven up the cost of healthy nutrient-dense foods and expanded the government’s influence over personal dietary choices.

This symbiotic action created a fertile breeding ground for a religious group to commandeer diet policy toward its anti-meat messianic vision, giving massive power to an agricultural-industrial complex that now heavily shapes food policy.

Together, these forces have been able to shift the dietary Overton window over the past 100 years, allowing a proliferation of toxic industrial materials cleverly marketed as food.

It’s very difficult to imagine that the consumption of these “foods” would have gained such popularity without the distortions afforded by fiat currency and its creators.

Fiat Foods

Towards the end of the 1970s, the U.S. government, along with many of its international vassals, started to endorse the modern food pyramid.

This model heavily featured the subsidized grains of the agricultural-industrial complex, recommending 6–11 daily servings as the foundation of a "healthy" diet, a prescription that has contributed to widespread metabolic disease, obesity, diabetes, and various other health problems that are now so common that most people see them as a normal part of life.

food pyramid

The ridiculous science behind this shift (which I like to call Fiat Science) sensationalizes the mass production of plant-based industrial sludge that humans had never consumed before. But the fact that something can be produced at scale does not mean it should be eaten.

Many of these “foods” are either drugs or inedible industrial byproducts that have been foisted upon the public through 100 years of heavy propaganda and government policy, all financed by fiat currency.

Here are a few of the chief offenders:

1) Soy 

Historically, soy was used to enrich soil; it was not an edible crop.

Soy only became edible after extensive fermentation, as seen in traditional Asian products like tamari, tempeh, and natto.

Poverty and famines later forced many Asian populations to eat more of it, and studies have shown it has arguably had a negative effect on the physical development of the populations that have depended on it for too long.

Modern soy products, on the other hand, often come from soybean lecithin, a byproduct of oil processing that is highly refined and used in various foods despite questionable health benefits.

The Weston Price Foundation describes the process by which soy is made:

“Soybean lecithin comes from sludge left after crude soy oil goes through a “degumming” process. It is a waste product containing solvents and pesticides and has a consistency ranging from a gummy fluid to a plastic solid. Before being bleached to a more appealing light yellow, the color of lecithin ranges from a dirty tan to reddish brown. The hexane extraction process commonly used in soybean oil manufacture today yields less lecithin than the older ethanol-benzol process, but produces a more marketable lecithin with better color, reduced odor and less bitter flavor.”

If you are still on the fence about whether soy is a toxic industrial sludge, consider this:

Historian William Shurtleff wrote extensively about the rapid growth of the soybean crushing and soy oil refining industries in Europe in the early 1900s, detailing how the expansion led to significant issues with disposing of the increasing amounts of fermenting, foul-smelling sludge byproduct.

According to Shurtleff's detailed accounts, German companies ultimately decided to vacuum dry the sludge, patent it, and rebrand the substance as "soybean lecithin."

Shurtleff further notes that, by 1939, the scientists they hired to find new uses for the substance had thought up over a thousand applications, transforming soy from industrial waste into a widely adopted ingredient for food, pharmaceuticals, cosmetics, and more.

2) “Vegetable” Oil and Seed Oils

100 years ago, before the invention of fiat science, most dietary fats came from natural animal fats like butter, lard, tallow, ghee, and schmaltz, with smaller amounts of olive and coconut oils.

Today, the majority of fats we eat are highly processed industrial oils (soy, corn, sunflower, and rapeseed), along with the abomination that is margarine, all cleverly marketed as "vegetable oils" despite containing harmful trans fats from hydrogenation.

Most of these chemicals didn't exist 100 years ago, and those that did were reserved primarily for industrial use and were not thought of as fit for human consumption.

However, as industrialization and fiat-science induced hysteria against animal fats increased over time, governments started to tout these toxic chemicals as healthy alternatives. Eventually, doctors, nutritionists, and their corporate sponsors all joined in the action and became part of the propaganda machine.

This shift, driven by government policy and fiat funding, has demonized and replaced traditional, healthy fats with substances originally used as industrial lubricants, and contributed to the widespread metabolic issues we see in the world today.

The fact that the overlords were able to convince the plebs that they should replace the traditional fats used for thousands of years with cheap industrial sludge is an astounding testament to the power of government propaganda cleverly disguised as actual science.

The late Dr. Mary Enig of the Weston Price Foundation dedicated her life to exposing the health risks of these oils, but very little attention was given to her message. If you're curious, her work covers which fats to eat, which fats to avoid, and how certain types of fat can impact your health.

Author’s note: If there’s one change likely to yield the biggest health improvement with minimal effort, it would be replacing these harmful industrial oils (polyunsaturated and hydrogenated vegetable and seed oils) with healthy animal fats. 

The following seed oils are often considered potentially harmful or “toxic” in the context of health discussions, mainly due to their high levels of polyunsaturated fatty acids (PUFAs), susceptibility to oxidation, and processing methods. Here are the primary oils commonly discussed:

The eight industrial toxic seed oils are: Canola, Corn, Cottonseed, Grapeseed, Rice bran, Safflower, Soybean, and Sunflower.

  • Canola Oil: Heavily processed, can contain trans fats, and is high in omega-6 fatty acids, which, in excess, may promote inflammation.
  • Corn Oil: Another oil high in omega-6, corn oil is also prone to oxidation and is commonly genetically modified.
  • Cottonseed Oil: Often used in processed foods and heavily refined; it can contain pesticide residues and is high in omega-6 fatty acids.
  • Grapeseed Oil: Contains high levels of PUFAs and is prone to oxidation, especially when used for cooking.
  • Rice Bran Oil: Contains PUFAs and is heavily processed, though it has a high smoke point; it may still contribute to an imbalanced omega-6 intake.
  • Safflower Oil: Similar to sunflower oil, safflower oil is high in omega-6 and sensitive to heat, which can make it unstable.
  • Soybean Oil: High in omega-6 fatty acids and widely used in processed foods, soybean oil is heavily refined and prone to oxidation, which may contribute to inflammation when consumed in excess.
  • Sunflower Oil: High in omega-6 fatty acids, sunflower oil can oxidize easily, especially when used at high temperatures.

global consumption of vegetable oils

Total vegetable oil consumption has grown steadily over the past 10 years, reflecting rising demand driven by population growth, industrial applications, and global dietary trends.

worldwide oilseed production

2023-2024 global oilseed production, led by soybeans at 398.21 million metric tons, reflects soaring global demand for oilseeds, driven by their central role in food, feed, and industrial applications.

3) Processed Corn and High Fructose Corn Syrup

Government policy and subsidies in the 1970s encouraged mass corn production, making corn very cheap.

This created an excess supply of corn, and the surplus spurred the development of many creative ways to utilize it and benefit from its low price.

These use cases included gasoline, cow feed, and our all-time favorite, High Fructose Corn Syrup (HFCS), which became the number one sweetener in American foods (replacing sugar) due to its low cost.

In 1983, the FDA blessed this new substance with the classification of “Generally Recognized As Safe” and the floodgates to its utilization opened up like Moses parting the Red Sea.

The United States maintains high tariffs on sugar through a tariff-rate quota (TRQ) system: a specified quantity of sugar can enter the country at a lower tariff rate, but imports exceeding that quota are subject to significantly higher tariffs. This limits the amount of low-tariff sugar entering the U.S. market, which supports domestic sugar producers by maintaining higher domestic prices.
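To illustrate the mechanics, here is a minimal sketch of how a two-tier tariff raises the average landed cost of imports once the quota is exceeded. All figures (world price, quota size, and both tariff rates) are hypothetical numbers chosen for illustration; they are not actual U.S. TRQ parameters.

```python
# Minimal sketch of a tariff-rate quota (TRQ): imports inside the quota pay a
# low tariff, imports beyond it pay a much higher one.
# All numbers below are hypothetical, for illustration only.

def landed_cost_per_ton(world_price: float, quantity: float, quota: float,
                        in_quota_tariff: float, over_quota_tariff: float) -> float:
    """Average landed cost per ton for a given import quantity."""
    in_quota_qty = min(quantity, quota)
    over_quota_qty = max(quantity - quota, 0.0)
    total = (in_quota_qty * world_price * (1 + in_quota_tariff)
             + over_quota_qty * world_price * (1 + over_quota_tariff))
    return total / quantity

world_price = 400.0        # $/ton, hypothetical world sugar price
quota = 1_000_000.0        # tons allowed in at the low tariff
in_quota_tariff = 0.05     # 5% inside the quota
over_quota_tariff = 0.80   # 80% beyond the quota

for qty in (500_000.0, 1_000_000.0, 2_000_000.0):
    cost = landed_cost_per_ton(world_price, qty, quota, in_quota_tariff, over_quota_tariff)
    print(f"Importing {qty:>12,.0f} tons -> average landed cost ${cost:,.2f}/ton")
```

Below the quota, imports land at roughly the world price plus a small tariff; beyond it, the blended cost climbs quickly, which is what keeps domestic sugar prices well above world levels.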

Since we have high tariffs on sugar, the price of sugar here in the States is usually two to three times the global price. On the flip side, the U.S. provides very generous subsidies for corn, so American farmers can grow and sell corn at a lower price than many other countries.

These subsidies help cover production costs, allowing American corn to be sold at prices below the global average, even if the natural, unsubsidized cost of producing it might be higher.

This lower price makes U.S. corn highly competitive on the global market, impacting global trade and enabling its widespread use in various products, from animal feed to sweeteners like high-fructose corn syrup.

Corporations quickly realized that sweetener made from corn was much more profitable than sweetener made from cane sugar, and ever since, American candy, sodas, and foods have become almost universally full of HFCS, which by many accounts is even more unhealthy than regular sugar, and it's nowhere near as tasty.

It goes without saying that HFCS is a threat to public health. Among its many issues, perhaps the most concerning is that it can only be metabolized in the liver, much like other substances that cause liver damage, such as alcohol, acetaminophen (Tylenol), antibiotics, statins, pesticides, heavy metals (e.g., mercury and lead), and other environmental pollutants.

global corn consumption by country

The West dominates global corn consumption, accounting for over half of the 47.68 billion bushels consumed worldwide. The vast majority of corn grown in the United States is enhanced with biotechnology.

distribution of biotech corn acreage in the united states

Biotech corn continues to dominate U.S. agriculture, with stacked biotech corn increasing steadily from 21% in 2006 to 82% in 2023, while non-biotech corn has declined to just 3% of total production over the same period.

4) Low-Fat Foods

As basic economic theory predicts, and in this particular case the theory of substitute goods in consumer demand, the hysteria and fear surrounding animal fats inevitably led to a rise in low-fat and fat-free products.

Without the animal fat, however, most of these products had no taste, so the best way to make them palatable was to put sugar in them.

As a result, people began consuming higher amounts of sugar to compensate for reduced fats, leading to constant hunger.

This inevitably led to frequent binges on sugary snacks throughout the day, which were not only loaded with chemicals and artificial ingredients but also with high amounts of HFCS to compensate for their lack of flavor.

Author’s note: Sugary, processed snacks are highly addictive, and avoiding satiating animal fats leaves you constantly hungry, increasing the likelihood of overeating snacks to make up for the natural macronutrients you’re missing. 

Author’s note: When demand for a particular good decreases due to policy, propaganda, health concerns, price increases, or cultural shifts, consumers often turn to substitute goods that serve a similar purpose but are perceived as healthier, cheaper, or more socially acceptable. 

One of the most interesting phenomena in the holy war against saturated fats was the meteoric rise of fat-free skim milk.

Back in the day, American farmers used skim milk, the byproduct of butter production, to fatten their pigs.

Combining skim milk with corn was considered the fastest way to really fatten a pig up.

Yet, through the ‘wizardry’ of the fiat scientific method, this corn-and-skim milk combo somehow became a popular breakfast food, enthusiastically promoted, subsidized, and recommended by fiat authorities—with similarly fattening effects on people.

Unsurprisingly, another devout Seventh-Day Adventist and follower of Ellen White, John Kellogg, who viewed sex and masturbation as sinful, believed that a ‘healthy’ diet was one that would suppress the sex drive.

Decades later, after his favorite breakfast of industrial sludge was marketed to billions worldwide, we're now seeing correlations with declining birth rates globally.

5) Refined Flour and Sugar

Whole grain flour and natural sugars have been part of the human diet for thousands of years. Whole grain flour, containing the nutrient-rich germ and bran, was traditionally prepared using elaborate rituals and eaten with large amounts of animal fat, as documented by historians and researchers across time and space.

Industrialization, however, transformed these natural ingredients, effectively stripping them of their nutrients and turning them into highly addictive substances.

You see, an important problem of the industrial revolution was the preservation of flour.

Transportation distances and a relatively slow distribution system collided with natural shelf life. The reason for the limited shelf life is the fatty acids of the germ, which react from the moment they are exposed to oxygen.

This occurs when grain is milled; the fatty acids oxidize and flour starts to become rancid. Depending on climate and grain quality, this process takes six to nine months.

In the late 19th century, this process was too short for an industrial production and distribution cycle. As vitamins, micronutrients and amino acids were completely or relatively unknown in the late 19th century, removing the germ was an effective solution.

Without the germ, flour cannot become rancid.

Degermed flour became standard.

Degermination started in densely populated areas and took approximately one generation to reach the countryside.

Heat-processed flour is flour where the germ is first separated from the endosperm and bran, then processed with steam, dry heat or microwave and blended into flour again.

In simple terms: industrialization solved the problem of flour spoiling by industrially processing and removing the critical nutrients from it, making it shelf-stable.

By the late 19th century, degermed flour became standard in urban areas and eventually spread to the countryside.

Sugar, on the other hand, has naturally existed in many foods, but its pure form was rare and expensive, since the processing required massive amounts of energy, and its production was done almost universally by slaves.

Back then, sugar production was an exhausting job that almost nobody wanted to do. As industrialization allowed for the replacement of slave labor with heavy machinery, refined sugar, once rare and very difficult to produce, became widely accessible. This enabled its production in a pure, nutrient-stripped form, free of the molasses and nutrients that usually accompany it, and at a much lower cost.

This refining process turns flour and sugar into powerful substances that provide a short-lived energy boost without real nutrition, acting more like pleasure drugs instead of food, which can lead to addiction. Refined sugar contains no essential nutrients, and flour provides very little beyond empty calories.

As a result, regular consumption of these refined products can leave individuals constantly craving more, leading to cycles of overeating (mirroring the patterns seen with addictive drugs) without any genuine nourishment and contributing to long-term health issues like metabolic disorders, weight gain, and nutritional deficiencies.

Fiat Harvest

I don’t want to bash seed oils and soy products too much; they do have a few legitimate industrial applications.

Corn, soy, and low-fat milk are passable cattle feed, but they’re inferior to natural grazing.

Processed flour and sugar can be used as recreational indulgences in tiny doses, but none of these products belong in the human diet, especially if your goal is to thrive and be healthy.

The problem is, as technology and science continue to advance, these (garbage) products become cheaper, and as government subsidies to them continue to rise, people are consuming them in ever-increasing, mind-blowing quantities.

Technology is exponential, and the rate of technological change is accelerating to a point that the world may transform itself beyond recognition during our lifetime—perhaps even multiple times.

And it’s happening at a rate much faster than the human brain can process.

Faster, more powerful machinery dramatically reduced production costs during the industrial revolution, and as industrial technology continues to progress, industrial sludge food becomes more and more affordable to produce.

By contrast, even though we are arguably living in the Fourth Industrial Revolution (aka the Technological Revolution or 4IR), there is very little that industrialization and advanced machines can do to improve the cost of producing nutritious red meat.

Nutritious red meat requires grazing, sunlight, and open land to grow properly; and this process remains costly, difficult to industrialize, and the product is highly perishable.

Fiat funded monocrop agriculture, on the other hand, produces “foods” with extended shelf lives, making it easy to store, transport, and market them worldwide. This allows the big food corporations to achieve economies of scale.

economies of scale

As quantity of production increases from Q to Q2, the average cost of each unit decreases from C to C1. LRAC is the long-run average cost.
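For readers who prefer arithmetic to curves, here is a minimal sketch of the same idea with hypothetical costs: a large fixed cost spread over more and more units pulls the average cost per unit down toward the variable cost, which is the falling portion of the LRAC curve described above.

```python
# Minimal sketch of economies of scale with hypothetical costs:
# a large fixed cost (plant, machinery) spread over more units
# lowers the average cost per unit.

def average_cost(quantity: int, fixed_cost: float, variable_cost_per_unit: float) -> float:
    """Average (per-unit) cost at a given output level."""
    return fixed_cost / quantity + variable_cost_per_unit

FIXED_COST = 1_000_000.0   # hypothetical plant and equipment cost
VARIABLE_COST = 0.50       # hypothetical input cost per unit

for q in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"Q = {q:>10,} units -> average cost ${average_cost(q, FIXED_COST, VARIABLE_COST):.2f}/unit")
# Average cost falls from $100.50 toward $0.60 as output scales up.
```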

It’s great that we have been able to scale food production because this has solved a lot of hunger problems, but the tradeoffs have been costly. To achieve low prices and scale, these shelf-stable “foods” must be processed to be hyper-palatable and addictive.

It goes without saying that the widespread availability and use of these cheap, heavily subsidized, and highly processed foods has been a profound and unmitigated disaster for the health of the human race.

Time Preference

Time preference, in simple terms, refers to how much you value satisfaction right now compared to gains at a future time.

It’s the concept that people generally prefer to have goods or satisfaction sooner rather than later, reflecting a natural inclination toward present consumption over future consumption.

High Time Preference: The tendency to prioritize immediate consumption over future benefits, with a strong preference for satisfaction as soon as possible.

Low Time Preference: The inclination to delay immediate gratification in favor of future gains. Individuals with low time preference are more likely to save and invest, seeking greater rewards over the long term.
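One way to make the distinction concrete is to express time preference as a discount rate: a high time preference heavily discounts future payoffs, while a low time preference discounts them only lightly. The sketch below uses hypothetical discount rates and a hypothetical $1,000 future reward, purely for illustration.

```python
# Minimal sketch: time preference expressed as a discount rate.
# All rates and payoffs are hypothetical, chosen only for illustration.

def present_value(future_amount: float, annual_discount_rate: float, years: int) -> float:
    """Value today of an amount received `years` from now."""
    return future_amount / (1 + annual_discount_rate) ** years

future_reward = 1_000.0   # reward available 10 years from now
years = 10

low_time_preference_rate = 0.02    # patient saver: discounts the future lightly
high_time_preference_rate = 0.25   # impatient consumer: discounts the future heavily

print(f"Low time preference:  ${present_value(future_reward, low_time_preference_rate, years):.2f} today")
print(f"High time preference: ${present_value(future_reward, high_time_preference_rate, years):.2f} today")
# The patient saver values the $1,000 at roughly $820 today and will wait for it;
# the impatient consumer values it at roughly $107 and will take a smaller reward now.
```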

This tendency to delay consumption for the sake of increased production (i.e., low time preference) leads to the accumulation of wealth that drives civilization forward.

In Democracy: The God That Failed, Hans-Hermann Hoppe argues that "no matter what a person's original time-preference rate or what the original distribution of such rates within a given population, once it is low enough to allow for any savings and capital or durable consumer-goods formation at all, a tendency toward a fall in the rate of time preference is set in motion, accompanied by a process of civilization."

So, when capital formation becomes feasible, societies tend to reduce their collective time preference, leading to economic growth and the emergence of advanced, wealth-oriented cultures.

Low-time-preference societies, like Liechtenstein, for example, demonstrate the benefits of delayed gratification, with stable economies and citizens who save for future prosperity.

A lot of entrepreneurs demonstrate this principle by deferring immediate gains to build long-term ventures, which often results in substantial wealth creation.

Conversely, high time preference, exacerbated by fiat theory and inflationary policies, can drive “decivilization” by eroding savings and discouraging investment, stifling economic progress.

Hoppe contends this occurs when circumstances arise that cause individuals to no longer view savings or investments as safe or reasonable.

A prime example of this is the inflation common in fiat money economies.

As states inflate the money supply of a currency (as discussed earlier in this paper) they actively erode the purchasing power and savings of low-time-preference individuals, thus leading to a tendency for individuals to revert to a higher time preference.

Savings will drop, capital goods become scarcer, and economic growth slows down to a crawl.

A society will regress if it does not maintain its pool of savings or infrastructure.

In this regard, inflation helps explain much of modern economic stagnation, which is a direct consequence of rising time preferences.

Time preference also directly influences interest rates, as people need to be compensated to defer consumption.

Thus, societies that encourage low time preference—through secure property rights and stable markets—are more likely to thrive, with higher savings, increased investment, and greater capital accumulation driving long-term economic growth. In contrast, societies that foster high time preference face the risk of economic stagnation and eventual decline.

In the fiat era, time preference is generally forced higher by economic forces, with individual decisions around food following a similar path on our graph, almost in perfect correlation, aimed at producing satisfaction right now.

You see, as fiat currency devaluation forces people to prioritize the “right now”, they are more likely to indulge in foods that feel good in the moment at the expense of their health in the future.

These foods are usually cheaper, more convenient, and of a lower nutritional profile compared to going out and buying quality ingredients, growing or picking veggies and herbs in a garden, and preparing a meal yourself.

It's the high time preference of 'Uber Eats' fast food versus the low time preference of hunting, gathering, growing, and preparing healthier food at home.

This shift toward short-term reward in decision-making inevitably favors the increased consumption of junk foods, toxins, and cheap artificial substitutes.

We saw this effect on full display during the COVID-19 pandemic.

Modern fiat scientists, fiat doctors, and prescription givers are unlikely to mention the obvious drivers of modern diseases, as they (1) are unstudied in real nutrition; (2) lack an understanding of basic economic concepts; and (3) know that prevention makes for bad business.

In my experience, especially since publishing my first research, I’ve learned that the medical industry projects the “external savior myth” down to the material plane in the form of their hard stance and reliance on advanced chemicals to ‘fight the disease’.

My approach is a little bit different. I prefer to give the body the tools and nutrients it needs to correct the problem, as well as using any new or underutilized methods of healing and support, combined with the constant mental attitude of ahimsa: first, do no harm.

Author's note: There are many differences between my approach and that of the medical industry. First, my focus is health, not the disease. My focus is supporting the body as a system, not its degradation.

That said, one could argue that the prevalent blind faith in modern medicine’s power to correct all health problems further encourages individuals to believe that their diets (and the industrial waste that often accompanies them) have no consequences.

Fiat Soil

One of the most interesting and understudied topics in modern economics is the effect that easy money has on people’s time preference.

As we have discussed, as fiat money devalues over time and interest rates are artificially suppressed, people will prioritize short-term gains over long-term sustainability and begin to favor borrowing and spending over saving.

This phenomenon has been extensively documented throughout history, and many Americans have faced a significant erosion of buying power and rising inflation since 2022, huge problems that continue to persist today.

While this phenomenon is mostly studied through the lens of capital markets and consumer spending, we must also consider its impact on how individuals make use of their natural environment and its soil, and how their decisions (or human actions) affect their personal health.

This broader perspective reveals profound consequences for agriculture, where industrial methods prioritize short-term crop yields, leading to long-term soil degradation, depleted nutrients, and an overreliance on chemical fertilizers—ultimately resulting in nutrient-deficient foods and an agricultural system driven more by profit margins than sustainability.

As individuals’ time preference increases, they place less importance on the future, discounting it significantly as they focus on survival in the short-term, making them less likely to value and preserve the long-term health of their natural environment and soil.

Now, consider how such a shift would affect farmers: the higher a farmer’s time preference, the less likely they are to care about the long-term returns their land may yield, and they will be more likely to care about short-term profits instead.

So, a higher time preference for a farmer means prioritizing immediate profits (and keeping the farm in business) over the sustainability of the land's productivity over the next ten, twenty, or even one hundred years.

This mindset incentivizes short-term focused soil management that prioritizes rapid returns, often at the expense of long-term soil health.

This was clearly evident in the widespread soil depletion leading up to the 1930s, culminating in the Dust Bowl, as documented by many researchers over the past 100 years, including Hugh Hammond Bennett, who championed soil conservation efforts, and William Albrecht, who linked soil fertility to human health.

With industrialization came production efficiency and scale, and thanks to inventions like hydrocarbon energy, humans have been able to significantly increase land use, dramatically increasing crop yields.

While this great American story of the rise in agricultural productivity is often celebrated as one of the great successes of the modern world, the severe toll it has taken on soil health is rarely discussed.

Today, most agricultural soils worldwide have become so depleted that they are unable to grow crops without the addition of artificial, industrially produced chemical fertilizers, leading to a steady decline in the nutritional quality of food grown on that soil compared to crops grown in nutrient-rich, naturally maintained soil.

The observations of Albrecht and Bennett provide a compelling perspective on the degradation of soil quality and its impact on food production.

Albrecht extensively documented the decline in soil fertility and its direct correlation with nutrient deficiencies in crops.

Bennett highlighted the destructive practices of industrial agriculture that prioritize short-term gains over long-term sustainability, leading to widespread erosion and depletion.

While Albrecht and Bennett do not explicitly draw a correlation between fiat money and time preference, their analysis is fairly consistent with what we have discussed thus far in this paper.

Let’s look at it from an economics perspective:

Soil can be viewed as capital: it is the productive asset upon which all food depends.

Fiat currency systems, which incentivize the consumption of capital for immediate gains, naturally extend this dynamic to the exploitation of soil, prioritizing short-term productivity over its long-term preservation.

This can be understood through the lens of the small farmer:

As the industrial agriculture machine pushes time preference upwards, it strips productive capital from the environment.

A prime example of this shift in time preference comes from heavily plowed agriculture, as is well understood by farmers all over the world and explained in research by the U.S. Department of Agriculture's Natural Resources Conservation Service (NRCS):

“The plow is a potent tool of agriculture for the same reason that it has degraded productivity. Plowing turns over soil, mixes it with air, and stimulates the decomposition of organic matter. The rapid decomposition of organic matter releases a flush of nutrients that stimulates crop growth. But over time, plowing diminishes the supply of soil organic matter and associated soil properties, including water holding capacity, nutrient holding capacity, mellow tilth, resistance to erosion, and a diverse biological community.”

Additionally, Allan Savory's research and work on soil depletion has led to remarkable success in reforestation and soil regeneration efforts. His approach is simple and involves using large herds of grazing animals on depleted soil to graze on whatever vegetation they can find, aerating the land with their hooves and fertilizing it with their manure. The results, showcased on his organization's website, provide compelling evidence for the effectiveness of holistic grazing in maintaining soil health.

On the other hand, industrial crop production rapidly depletes soil of its essential nutrients, leaving it fallow and heavily reliant on synthetic fertilizers. This highlights the wisdom of pre-industrial farmers across the world, who usually cyclically rotated their land between farming and grazing. As crop yields declined on a plot, it was left to grazing animals to naturally rejuvenate, after which farmers either moved to new plots or returned to the regenerated land. This traditional balance was able to maintain long-term soil vitality without the ecological costs of modern intensive farming. The trade-offs, however, were smaller crop yields, slower time-to-harvest, and the inability to produce at scale.

So, we arrive at the trade-offs:

Traditional farming (low time preference), such as rotating land between farming and grazing, prioritizes long-term soil health and sustainability but results in smaller yields, slower harvest cycles, and limited scalability. Here, immediate gains are sacrificed to ensure lasting benefits, such as fertile soil and ecological balance.

Modern industrial farming (high time preference) achieves higher productivity and efficiency but at the cost of soil degradation, ecological harm, and dependence on synthetic inputs. Here, the future is heavily discounted, prioritizing short-term gains and outputs over long-term sustainability and resource preservation.

The implication here is clear: a low time preference approach to land management focuses on the long-term health of the soil by balancing crop cultivation with animal grazing.

In contrast, a high time preference approach prioritizes immediate gains, often overexploiting soil with little regard for any long-term consequences.

The shift toward mass crop production and its dominance in 20th-century diets reflects this rising time preference.

Low time preference approaches, such as producing meat from grazing herds, typically yield smaller profit margins but sustain ecological balance, while high time preference approaches favor the mass production of plant crops, optimized and scaled through industrial methods and the fiat machine to maximize profit margins.

Back in the day, farmers rotated plow farming with cattle grazing to naturally restore the soil. Grazing cattle, in the traditional sense, were the key to a healthy soil, aerating the soil with their hooves and enriching it with their manure, which not only enhanced rainwater absorption but also built organic matter. After a few years of grazing, the land would be ready for crop cultivation.

But as industrialization introduced heavy machinery for plowing, and as the advent of fiat money devalued the importance of long-term sustainability, the traditional balance between man and land was disrupted and replaced with intensive agriculture that depletes the soil very quickly.

As natural, sustainable methods became largely unprofitable, farmers opted to rely on industrial fertilizers instead of ‘nature-made’ cattle manure.

You see, industrial farming allows farmers to strip nutrients from the soil, achieving high yields in the first few years, at the expense of the health of the soil in the long run.

In contrast, rotating cattle grazing with crop farming yields smaller rewards in the short term but ensures the soil remains healthy and productive in the long run.

Bottom line: A heavily plowed field producing subsidized fiat crops may generate a large short-term profit, but careful, sustainable soil management offers a steadier, albeit lower, long-term income.

But alas, the costs of doing business—and, consequently, the costs of goods—are rapidly rising, and most consumers will not opt to pay $3.50 for an organic head of lettuce that takes 90 days to grow when they can buy an industrially grown head of lettuce for $1.50 that takes only 30 days to grow.

You don’t need a PhD in economics to predict which farmer will win the competition for the customer.

It all comes down to simple market dynamics: the farmer offering the cheaper, faster option will dominate the competition for price-sensitive customers.

In an inflationary, fiat-driven economy like the one we live in today, most customers are price-sensitive and will choose several $1.50 (industrial) heads of lettuce over a single $3.50 (traditional) head for their ‘basket of goods’.

Given these variables, the small farmer’s ability to save for the future is eventually destroyed by fiat, his confidence in the future declines, and his discounting of the future increases.

So, the fiat system forces him to devalue sustainability and future benefits and incentivizes him to deplete his soil for faster revenues in the short-term.
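
To make the discounting point concrete, here is a small, illustrative present-value sketch in Python. The income streams and discount rates are hypothetical (they are not taken from any data in this post); the point is only that raising the discount rate, i.e., raising time preference, flips the ranking from the steady, sustainable income stream to the front-loaded, soil-depleting one.

```python
# Illustrative present-value comparison of two hypothetical income streams.
# "Deplete": high profits for 5 years, then nothing (the soil is exhausted).
# "Sustain": lower but steady profits for 30 years of careful management.

def present_value(cash_flows, rate):
    """Discount yearly cash flows (year 1, 2, ...) at the given annual rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

deplete = [20_000] * 5 + [0] * 25   # $20k/yr for 5 years, then exhausted soil
sustain = [8_000] * 30              # $8k/yr for 30 years

for rate in (0.03, 0.20):           # low vs. high time preference
    print(f"rate={rate:.0%}: "
          f"deplete=${present_value(deplete, rate):,.0f}, "
          f"sustain=${present_value(sustain, rate):,.0f}")
```

At a 3% discount rate the sustainable stream is worth far more; at 20% the depleting strategy comes out ahead. The fiat dynamics described above push the farmer toward the second calculation.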

But can the slow farmer actually compete against a cleverly scaled machine?

Let us see.

We will run the analysis using default settings in [smoov.bra.in2024]

First, we evaluate the known params.

In our generic, linear, 1:1 example, we assume the following:

  • There are 2 farmers both growing lettuce.
  • One farmer uses industrial techniques and can grow a head of lettuce in 30 days.
  • The other farmer uses traditional techniques and can grow a head of lettuce in 90 days.
  • The industrial farmer sells his lettuce for $1.50 each and the traditional farmer sells his for $3.50 each.
  • Each cycle produces the same yield (1,000 lettuces) regardless of the time taken (30 days or 90 days).
  • All other factors, such as labor, resources, and land usage, remain constant across cycles.

Author’s note: The linear relationship in this example comes from the proportionality between the number of cycles completed and the total yield, where a farmer completing 3 times as many cycles (e.g., 12 cycles vs. 4) produces exactly 3 times as many lettuces.

When you crunch the numbers, the difference in productivity over a year is significant.

Assuming an average yield of 1,000 lettuces per harvest, the 30-day farmer can complete about 12 cycles in a year, producing 12,000 lettuces.

On the other hand, the 90-day farmer can only manage about 4 cycles, yielding around 4,000 lettuces annually. This results in the faster-growing farmer taking 8,000 more lettuces to market each year.

Total Revenues:

  • 30-Day Farmer: $18,000 annually.
  • 90-Day Farmer: $14,000 annually.

Why the 30-Day Farmer Wins:

  • The 30-day farmer benefits from significantly higher production volume (3x the output of the 90-day farmer).
  • While the 90-day farmer charges more per head, the price premium is insufficient to offset the production gap.
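
For readers who want to check the arithmetic, the following minimal Python sketch reproduces the one-year comparison under the example’s assumptions (back-to-back cycles, a fixed 1,000-head yield per cycle, and the illustrative $1.50 and $3.50 prices); it is a toy calculation, not a model of real farm economics.

```python
# Minimal sketch of the one-year lettuce comparison above.
# Assumes back-to-back growing cycles and a fixed 1,000-head yield per cycle.

DAYS_PER_YEAR = 365

def one_year(cycle_days: int, price_per_head: float, yield_per_cycle: int = 1_000):
    """Return (cycles, lettuces, revenue) for a year of back-to-back cycles."""
    cycles = DAYS_PER_YEAR // cycle_days    # 30-day: 12 cycles, 90-day: 4 cycles
    lettuces = cycles * yield_per_cycle
    revenue = lettuces * price_per_head
    return cycles, lettuces, revenue

print("Industrial (30-day):", one_year(cycle_days=30, price_per_head=1.50))
print("Traditional (90-day):", one_year(cycle_days=90, price_per_head=3.50))
# Industrial (30-day): (12, 12000, 18000.0)
# Traditional (90-day): (4, 4000, 14000.0)
```

Even a price premium of more than double per head does not close the threefold volume gap.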

To achieve market dominance, the 30-day farmer will reinvest his profits and scale production. Let’s look at a 10-year example assuming:

  1. The 30-day farmer reinvests 50% of annual revenue into expanding capacity each year (costs are held constant and ignored for simplicity, so revenue stands in for profit).
  2. Reinvested profits increase production capacity by 10% annually (compounding).
  3. The farmer starts with 12,000 lettuces/year, selling at $1.50 per head.

Year 1: Baseline

  • Revenue: $18,000
  • Reinvestment: $9,000
  • Production Growth: 13,200 lettuces for Year 2.

Scaling Over 10 Years

Year | Lettuces Produced | Revenue ($) | Reinvestment ($) | New Capacity for Next Year
1 | 12,000 | $18,000 | $9,000 | 12,000 × 1.10 = 13,200
2 | 13,200 | $19,800 | $9,900 | 13,200 × 1.10 = 14,520
3 | 14,520 | $21,780 | $10,890 | 14,520 × 1.10 = 15,972
4 | 15,972 | $23,958 | $11,979 | 15,972 × 1.10 = 17,569
5 | 17,569 | $26,354 | $13,177 | 17,569 × 1.10 = 19,326
6 | 19,326 | $28,989 | $14,494 | 19,326 × 1.10 = 21,258
7 | 21,258 | $31,887 | $15,944 | 21,258 × 1.10 = 23,384
8 | 23,384 | $35,076 | $17,538 | 23,384 × 1.10 = 25,723
9 | 25,723 | $38,585 | $19,293 | 25,723 × 1.10 = 28,295
10 | 28,295 | $42,442 | $21,221 | 28,295 × 1.10 = 31,125

Total Production and Revenue After 10 Years:

  • Lettuces Produced in Year 10: 28,295 lettuces/year (capacity entering Year 11: 31,125)
  • Revenue in Year 10: $42,442

Impact of Reinvestment:

  • Initial Production (Year 1): 12,000 lettuces/year
  • Final Production (Year 10): 28,295 lettuces/year (growth of roughly 2.4x).
  • Total Revenue (Cumulative): ~$287,000 over 10 years.
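
The scaling table above can be reproduced with a short loop. The sketch below mirrors the stated simplifications (revenue stands in for profit, half of it is nominally reinvested, and capacity simply compounds at 10% per year); it is illustrative only, and small rounding differences versus the hand-rounded table are expected.

```python
# Sketch reproducing the 10-year reinvestment table above.
# Assumptions from the example: $1.50 per head, 50% of revenue reinvested,
# capacity compounding at 10% per year, costs ignored.

PRICE_PER_HEAD = 1.50
REINVEST_RATE = 0.50
GROWTH_RATE = 0.10

capacity = 12_000            # Year 1 production (lettuces)
cumulative_revenue = 0.0

for year in range(1, 11):
    revenue = capacity * PRICE_PER_HEAD
    reinvestment = revenue * REINVEST_RATE
    next_capacity = capacity * (1 + GROWTH_RATE)
    cumulative_revenue += revenue
    print(f"Year {year}: {capacity:,.0f} lettuces | ${revenue:,.0f} revenue | "
          f"${reinvestment:,.0f} reinvested | {next_capacity:,.0f} capacity next year")
    capacity = next_capacity

print(f"Cumulative 10-year revenue: ${cumulative_revenue:,.0f}")  # roughly $287,000
```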

The basic reinvestment strategy in this scenario enables the 30-day farmer to significantly scale production and revenue over a decade, leveraging short production cycles and reinvesting profits into capacity expansion.

By contrast, a 90-day farmer, with slower cycles and less reinvestment capacity, would struggle to achieve comparable growth, especially as fiat inflation causes prices to rise and money to devalue over time.

This scenario highlights the efficiency advantage of shorter growth cycles, particularly in a competitive, price-sensitive market where higher productivity directly translates to greater profitability and market dominance.

Productivity and scale aside, after 100 years of industrial farming, it is becoming increasingly evident that the ecological and human trade-offs of this approach have become very costly.

Industrialization allows for the widespread exploitation of soils, driven by farmers’ high time preference.

Modern hydrocarbon-powered machinery and technology enhance scalability and profitability, but accelerate nutrient extraction, leading to rapid soil depletion. The reliance on industrial fertilizers masks the long-term costs, as they temporarily restore fertility to depleted soils.

That said, just because we are super-intelligent, world-building techno-creators does not mean the trade-off of ruining our soil is acceptable.

And just because we have innovation does not mean negative externalities are necessary and inevitable.

Understanding the distortion of time preference helps us understand why large-scale agriculture has become so popular despite its extremely harmful impact on both human health and soil sustainability.

It is remarkable that researchers in the early twentieth century (within the realm of nutrition and without referencing economic or monetary policy) identified this era as a period of severe soil degradation and a decline in the nutrient richness of food.

In a similar vein, cultural critic Jacques Barzun, in his seminal work From Dawn to Decadence, pinpointed 1914 as the turning point for Western decline, marked by a shift from sophisticated art to modern forms, and from liberalism to liberality in political and social spheres.

Aldous Huxley, in his essay Pleasures (1923), also explored the shift in human orientation from long-term fulfillment to immediate gratification, highlighting how modern society increasingly prioritizes transient pleasures over enduring values.

Neither Barzun nor Huxley explicitly connects these changes to monetary policy, yet their insights align: they both observe and document a societal drift toward present-oriented thinking at the expense of future well-being.

Barzun’s analysis of cultural decadence and Huxley’s critique of ephemeral pleasures reflect a broader transformation driven by rising time preference across Western civilization.

In 2024, the effects of rising time preference are more evident in America than ever before.

Fewer people are willing to work long-term or commit to challenging careers, as the allure of immediate gratification increasingly overshadows the value of delayed rewards.

This shift is further exacerbated by the fact that salaries have not kept pace with inflation, eroding the purchasing power of wages and massively reducing the perceived benefits of sustained effort.

Fast food dominates dietary habits, reflecting a societal desire for instant consumption rather than the patience required for nutritious home-cooked meals.

Meanwhile, personal and national debt levels have reached record highs, driven by a toxic fiat-culture that prioritizes immediate spending over saving and long-term planning.

These shifts are all natural consequences of upward movement in time preference.

The easy availability of credit and the constant devaluation of currency encourage people to “live for today,” or “yolo,” further compounding economic instability and individual dependency.

The result is not only a shift in individual behavior but a societal transformation where sustainability, discipline, and the future are sacrificed for something comfortable ‘right now’.

And just like his wealth, art, architecture, and family, fiat man’s food quality is in constant decline, with the healthy, nourishing traditional foods of his ancestors being replaced by cleverly disguised, highly addictive, testosterone-destroying, toxic industrial sludge marketed as food.

The soil from whence he was spawned, the foundation of all life and civilization, faces relentless depletion, its essential nutrients replaced by petroleum-based chemical fertilizers that are marketed as ‘soil’ but perpetuate a cycle of degradation.

This growing fiat-driven “live for today” high time preference shift threatens the long-term strength and prosperity of the entire nation.

At this midpoint in our analysis, a provisional hypothesis can be drawn:

Fiat currency has done what no natural resource ever could: it has made the intangible tangible. What began as an idea to stabilize markets has evolved into a force that reshaped the very foundations of society. Wars have been fought, blood spilled, fortunes made, and empires built—all without the natural constraints of physical wealth.

Yet for all its power, fiat is a double-edged sword: its benefits come with costs hidden deep within the systems it fuels. Its reach extends far beyond the markets it was meant to stabilize, touching every corner of modern life. In its wake lies a landscape of economic distortions—disrupting the natural order, driving social and environmental destruction, and creating a cascade of effects we are only beginning to understand.

So, the question is no longer how fiat can stabilize the market, but rather:

What has it taken from us?

That’s the topic of Part 2 of this post.

(Sources at the bottom of Part 2.)

_______

If you like The Unconquered Mind, sign up for our email list and we’ll send you new posts when they come out.

More Unconquered Mind economics nerdery:

Our Mind Children

Invisible Economics & The Theory of the Future

The Problem With Intelligence

Human Action, Alien Evolution, & Predictive Irrationality