Could Simulation Theory Explain Why “Space is Hard”?


What if none of this is real? What if everything we see, hear, touch, taste, smell, and perceive is part of a gigantic simulation designed to keep us contained? And what if the beings who built this simulation are part of a highly advanced alien species that created it so they could study us and keep us under control?

This is the essence of the “Zoo Hypothesis,” which is a proposed resolution to the Fermi Paradox. It is also sometimes referred to as the “Planetarium Hypothesis” as a way of clarifying that the intention of the big simulation is not to protect but to control. Moreover, the zookeepers in this scenario have designed the simulation so that humanity won’t suspect they are living in a cage.

While it may sound like science fiction (it actually is), the idea has been explored as part of the larger debate over the Simulation Hypothesis. To date, multiple theoretical studies have been conducted to determine if the laws of physics could be used to prove we’re in a false reality. But if we are living in a simulation, then the laws of physics themselves are part of it, aren’t they?

If the laws of physics as we know them are the same inside the simulation as they are in the real Universe, we ought to be able to use them to our advantage. But if they were designed in such a way as to reinforce the simulation, then they’re not likely to tell us anything. More to the point, they would probably be specifically designed to keep us in our cage.

But first, let’s review the particulars.

The Fermi Paradox is named in honor of Italian-American physicist Enrico Fermi, a pioneer in the development of nuclear power who was part of the Manhattan Project. As the story goes, it was during a “lunchtime conversation” with colleagues at the Los Alamos National Laboratory in 1950 that Fermi asked a question that would launch a decades-long debate.

While discussing UFOs and the possible existence of extraterrestrial intelligence, Fermi spontaneously asked: “Where is everybody?” His colleagues were amused since they knew exactly what he meant with those three simple words. If life is ubiquitous (very common) in the Universe, why haven’t we seen any indication of it?

However, it was not until the 1980s that the term “Fermi Paradox” emerged, in part because of the works of Michael Hart and Frank Tipler. Together, they gave rise to the Hart-Tipler Conjecture, which states that if intelligent life were ubiquitous in the Universe, humanity would have seen some evidence of it by now. Ergo, they argued, humanity was alone.

Naturally, this inspired many counter-arguments, like Carl Sagan and William I. Newman’s rebuttal paper (nicknamed “Sagan’s Response”). For one, they took issue with Hart and Tipler’s anthropocentric bias, simple assumptions, and math. Sagan and Newman also emphasized that humanity hadn’t found evidence of intelligence yet, and that the search was just beginning.

And yet, the question has endured. Beyond the Hart-Tipler Conjecture, many exciting and creative resolutions have been proposed, which is where the Planetarium Hypothesis comes into play.

Are we living in a simulation?

The theory was first proposed in 2001 by British scientist and hard science fiction author Stephen Baxter. As he described his theory in the paper, “The Planetarium Hypothesis: A Resolution to the Fermi Paradox”:

“A possible resolution to the Fermi Paradox is that we are living in an artificial universe, perhaps a form of virtual-reality ‘planetarium,’ designed to give us the illusion that the Universe is empty. Quantum-physical and thermo-dynamic considerations inform estimates of the energy required to generate such simulations of varying sizes and quality.”

“The perfect simulation of a world containing our present civilization is within the scope of a Type K3 extraterrestrial culture. However, the containment of a coherent human culture spanning ~100 light-years within a perfect simulation would exceed the capacities of any conceivable virtual-reality generator.”

The Type K3 culture refers to the Kardashev Scale, specifically, to a civilization that has achieved Type 3 status. According to Kardashev’s classification scheme, such a civilization would have advanced to the point that it was able to harness the energy of its entire galaxy and engineer structures on an equal scale.

For this type of civilization, building a massive simulation like the one Baxter describes would be relatively easy. Granted, such a proposition is not exactly testable or falsifiable, which is why it’s not treated as a scientific theory. But let’s consider the possibility that the very laws of physics are an indication that we could be inside a simulation.

Once again, this is not a scientific hypothesis, more like food for thought (and fodder for science fiction!). In particular, there are four ways in which the laws of physics make it so hard to expand beyond Earth and become a space-faring species. They include:

  • Earth’s Gravity Well
  • The Extreme Space Environment
  • Logarithmic Scales of Distance
  • Relativity and the Speed of Light (c)

On its face, the Planetarium Hypothesis does answer the question, “why don’t we see any aliens out there?” After all, how could we notice the activity of intelligent species — especially ones that have had a head-start on us — if they built a massive planetarium around us and were effectively controlling everything we see?

Would they not want to present us with a “Great Silence” so we wouldn’t be encouraged to get out and explore? If nothing else, they’d be taking great pains to hide their existence from us. More to the point, wouldn’t they want to ensure that the simulation had controls in place to keep our growth rate slow and controlled?

Gravity is a wonderful thing. It keeps us from flying off into space and ensures that our bones, muscles, and organs remain strong and healthy. But in the context of space exploration, gravity can be downright oppressive! On Earth, the force of gravity is equivalent to ~32 ft/s² (9.8 m/s²), or what we define as 1 g.

For anything to break free of Earth’s gravity, it has to achieve an “escape velocity” of 6.95 mi/s (11.186 km/s), which works out to 25,020 mph (40,270 km/h). Achieving this velocity requires a tremendous amount of energy, which means an enormous amount of propellant, which means a large spacecraft with huge propellant tanks.
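That escape-velocity figure follows directly from Newton’s law of gravitation, via v_esc = √(2GM/r). A quick sanity check in Python, using standard values for Earth’s mass and radius (standard reference constants, not figures from this article):

```python
import math

# Escape velocity from first principles: v_esc = sqrt(2 * G * M / r).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # Earth's mass, kg
R_EARTH = 6.371e6      # Earth's mean radius, m

v_esc = math.sqrt(2 * G * M_EARTH / R_EARTH)        # in m/s
print(f"Escape velocity: {v_esc / 1000:.3f} km/s")  # ≈ 11.186 km/s
print(f"               = {v_esc * 3.6:,.0f} km/h")  # ≈ 40,270 km/h
```

The result lands on the same ~11.186 km/s (~40,270 km/h) quoted above.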

This creates a bit of a vicious circle, where large, fully-fueled spacecraft are mostly propellant mass, and all that weight requires more energy (and more propellant) to escape Earth’s gravity. In short, spaceflight doesn’t come cheap, especially when you’re trying to lift heavy payloads to orbit.
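The vicious circle is captured by the Tsiolkovsky rocket equation, Δv = v_e · ln(m₀/m_f). A minimal sketch of what it implies for reaching orbit, assuming an illustrative ~9.4 km/s delta-v budget (including gravity and drag losses) and a hydrogen/oxygen engine with a vacuum specific impulse of ~450 s; both figures are assumptions for illustration, not from the article:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / mf).
# Rearranged, the propellant mass fraction needed for a given delta-v
# is 1 - exp(-delta_v / v_e).
delta_v = 9400.0   # m/s, rough delta-v to LEO with losses (assumed)
isp = 450.0        # s, vacuum Isp of a hydrolox engine (assumed)
g0 = 9.80665       # m/s^2, standard gravity
v_e = isp * g0     # effective exhaust velocity, m/s

prop_fraction = 1 - math.exp(-delta_v / v_e)
print(f"Propellant fraction: {prop_fraction:.1%}")  # roughly 88% of liftoff mass
```

Under these assumptions, nearly nine-tenths of the rocket on the pad is propellant before engines, tanks, and payload are even counted.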

Between 1970 and 2000, the average cost of launching a single pound (0.45 kg) to space remained steady at around $8,400 per lb ($18,500 per kg). Even with the benefit of reusable rockets in the modern age, it still costs between $640 and $1,236 per lb ($1,410 and $2,720 per kg) to launch payloads and crews to space.

This imposes limits on both the number of space launches we can conduct, as well as the types of payloads we’re able to send to space. Granted, this could be solved by building a space elevator, which would reduce costs to as little as $113 per lb ($250 per kg). However, the cost of building this structure would be immense and presents all kinds of engineering challenges.

It also means that the payloads we send to space are a mere fraction of the rocket’s overall “wet mass.” To put that in perspective, the Apollo 11 Lunar Module had a total mass of 33,296 lbs (15,103 kg), including ascent and descent stages and propellants. The descent stage required 18,184 lbs (8,248 kg) of propellant to land but had a dry mass of just 4,484 lbs (2,034 kg).

All told, the Apollo Program (1960-1973) cost an estimated $280 billion when adjusted for inflation. Yet, the six missions that landed on the Moon only transported around 0.3% of their pre-launch mass. Doing the math, it cost over $62 million to transport one pound (about $138 million per kg) to the lunar surface to stay.
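As a unit check on the figure above, converting a cost per pound into a cost per kilogram (1 kg ≈ 2.2046 lb):

```python
# Converting the article's quoted Apollo cost per pound into a per-kilogram
# figure. 1 kilogram is about 2.2046 pounds.
cost_per_lb = 62e6                   # USD per pound to the lunar surface
cost_per_kg = cost_per_lb * 2.2046   # USD per kilogram
print(f"${cost_per_kg / 1e6:.0f} million per kg")  # ≈ $137 million per kg
```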

With several space agencies planning to build outposts on the Moon, Elon Musk’s plans to colonize Mars, and the many proposals for sending crewed missions to both, the cost is going to be astronomical (no pun!) using rockets. Under the circumstances, it’s clear why some people are so passionate about building a space elevator!

From a strictly hypothetical point of view, these kinds of restrictions would make perfect sense if we were in a simulation. If humanity were to expand into space too rapidly, we would surely find the outer edges of the planetarium before long. What better way to keep that from happening than to make it very expensive for us just to leave Earth?

Here on Earth, we have it easy! We are protected from cosmic rays and solar radiation by our thick, fluffy atmosphere. Earth also has a strong planetary magnetic field, something no other rocky planet in the Solar System can boast (Mercury’s is far weaker). This not only offers even greater shielding from solar and cosmic rays but also prevents our atmosphere from being stripped away by the solar wind, as happened to Mars.

On top of that, Earth orbits the Sun in that sweet spot known as the “Goldilocks Zone,” or “Circumsolar Habitable Zone” if you want to get fancy! This ensures that water can exist in a liquid state on our planet’s surface and that we don’t suffer a runaway Greenhouse Effect, which is how Venus became the hellish place it is today.

In short, Earth is a planet that seems ideally suited to the emergence and continued existence of life. This can be illustrated by taking one look at its immediate neighbors, Mars and Venus, which represent the extreme ends of the spectrum. One of them is too cold and the atmosphere’s too thin (Mars), while the other is too hot and its atmosphere is too dense (Venus)!

But here on Earth, conditions are “just right!” Step outside of our cozy planet, however, and the threats and hazards abound! Not only is every other planet and moon in our Solar System hostile to life as we know it, but the space between them also seems intent on killing us! Just look at all the lethal threats out there:

  1. Vacuum: In space, there is no air (or very close to it). If we hope to travel to space, we humans need to bring our breathable atmosphere with us, as well as lots of food, water, and medicine. If we’re looking to perform long-duration missions to deep-space or live out there, we need to bring our entire biosphere with us! This includes all the lifeforms here on Earth that provide us with self-replenishing sources of air, food, water, energy, and stable temperatures.
  2. Extreme Temperatures: In the airless environment of space, temperatures range from one extreme to the next. For example, the cosmic background temperature is extremely cold — 2.73 K (-455°F; -270°C), or just shy of “absolute zero.” But in high-radiation environments, temperatures can reach thousands or even millions of degrees. As a result, space habitats and spacecraft need to be heavily insulated and have state-of-the-art environmental controls.
  3. Radiation: Even with spacecraft and habitats that can hold a breathable atmosphere and protect us from extremes in temperature, there’s still the matter of radiation getting inside. On Earth, people are exposed to an average of 2.4 millisieverts (mSv) of ionizing radiation a year, whereas exposure in space from solar and cosmic sources can range from 50 to 2,000 mSv (20 to 830 times as much!) And when solar or cosmic rays strike radiation shielding, they create secondary particle “showers,” which can be just as deadly as the primary rays.
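Taking the dose figures quoted in the list above at face value, the ratios work out as follows:

```python
# Dose comparison using the figures quoted above: ~2.4 mSv/year average
# background exposure on Earth vs. 50-2,000 mSv of cumulative exposure
# in space from solar and cosmic sources.
background = 2.4                       # mSv per year on Earth
space_low, space_high = 50.0, 2000.0   # mSv, quoted range in space

print(f"{space_low / background:.0f}x to {space_high / background:.0f}x background")
# the article rounds these ratios to "20 to 830 times"
```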

If we were to compare our planet to a planetarium, then space would be the fence or glass walls surrounding it. There are no explicit warning signs, but we’ve learned from experience that venturing outside the walls is extremely dangerous. Anyone who still dared would have to be very determined and very creative to survive out there for extended periods of time.

As controls go, it’s simple but effective!

In space, the distance from one boundary to the next always gets bigger! Right now, there are multiple plans for sending crewed missions to Mars, which is often described as the “next great leap” after the Moon. What comes after that? The outer Solar System? The nearest stars? The nearest galaxy?

Between each of these “leaps,” there are huge distances that increase at an exponential rate. To illustrate, consider the great leaps that we’ve made so far and then compare that to the ones we hope to take in the future. First, there’s the official boundary of space (aka. the Kármán Line), which corresponds to an altitude of 62 mi (100 km) above sea level.

Humanity surpassed this boundary in the early 1960s with the Soviet Vostok program and the American Mercury Program. Next, you have Low Earth Orbit (LEO), which extends to an altitude of about 1,243 mi (2,000 km) and is where spacecraft and satellites need to be to maintain a stable orbit. Astronauts first reached the upper end of this region as part of NASA’s Gemini Program in the mid-1960s.


Then there’s the Moon, which we reached during the Apollo Program in the late 60s and early 70s. The Moon orbits Earth at a distance of 238,854 mi (384,399 km), and we have not sent astronauts back there in almost 50 years. And Mars’ distance from Earth ranges over time from 38.6 million mi (62.1 million km) to 249 million mi (401 million km).

In cosmological terms, these distances are the equivalent of walking from our house, through the front yard, and across the street to the neighbor’s house. How do the distances stack up?

  • Suborbital: 62 mi (100 km)
  • LEO: up to 1,243 mi (2,000 km) – 20 times as far
  • Moon: 238,854 mi (384,399 km) – over 192 times as far
  • Mars: 140 million mi (225 million km) on average – over 585 times as far

Now let’s pretend you want to go to the next block. That would mean reaching the very edge of the Solar System, which means establishing outposts as far as Triton (Neptune’s largest moon), Pluto and Charon, and other small objects in the Kuiper Belt. From there, the next leaps will be interstellar and intergalactic:

  • Edge of Solar System: around 2.67 to 2.8 billion miles (4.3 to 4.55 billion km) – ~20 times
  • Nearest Star (Proxima Centauri): 4.246 light-years – ~9,000 times
  • Nearest Galaxy (Andromeda): 2.5 million light-years — ~588,720 times!
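The ratios in the two lists above can be checked in a few lines, taking LEO’s upper edge at 2,000 km and the edge of the Solar System at roughly Neptune’s orbital distance (~4.5 billion km), with one light-year ≈ 9.46 trillion km (all assumed round figures):

```python
# Each "great leap" distance in km, followed by the ratio to the previous step.
LY_KM = 9.4607e12  # kilometers per light-year

leaps = [
    ("Kármán line", 100.0),
    ("LEO (upper edge)", 2.0e3),
    ("Moon", 3.844e5),
    ("Mars (average)", 2.25e8),
    ("Edge of Solar System", 4.5e9),
    ("Proxima Centauri", 4.246 * LY_KM),
    ("Andromeda Galaxy", 2.5e6 * LY_KM),
]

for (name, dist), (_, prev) in zip(leaps[1:], leaps):
    print(f"{name}: ~{dist / prev:,.0f}x farther than the previous step")
```

Each step is one or more orders of magnitude longer than the last, which is the whole point of the "ever-bigger leaps" argument.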

Get the picture? Taking the “next great leap” means working on your long jump, because each leap is many, many times farther than the last. And even if we managed to reach the Andromeda Galaxy tomorrow and could map out every star system it has, we would still have explored less than 0.000027% of our Universe.

This brings us at last to what is arguably the most imposing restriction of all.

In 1905, Albert Einstein proposed his Theory of Special Relativity (SR), which attempted to reconcile Newton’s Laws of Motion with Maxwell’s Equations of electromagnetism. In so doing, Einstein resolved a major stumbling block that physicists had been dealing with since the mid-19th century. In brief, SR comes down to two postulates:

  1. The laws of physics are the same in all (non-accelerated) inertial reference frames.
  2. The speed of light in a vacuum is the same in all reference frames regardless of the motion of the light source or observer.

Newton’s laws of motion accurately described objects at rest or moving at a constant velocity. This was important, since Newton’s and Galileo’s theories were based on the idea that there was such a thing as “absolute space.” In this framework, time and space were objective realities that were also independent of each other.

But Einstein showed that time is relative to the observer and that time and space are not distinct at all. For example, for an observer moving at close to the speed of light, the experience of time slows down relative to a stationary observer (an effect known as “time dilation”).

In addition, Einstein’s theory indicated that mass and energy are different expressions of the same thing (“mass-energy equivalence”), as represented by the famous equation, E=mc². What this means is that as an object approaches the speed of light, its inertial mass increases and more energy is needed to accelerate it further.

It also means that the speed of light (c) is unattainable, since it would require an infinite amount of energy and the object would achieve infinite mass. Even achieving relativistic travel (a fraction of the speed of light) is incredibly hard, given the energy required. While proposals have been made, they’re either prohibitively expensive or would require scientific breakthroughs beforehand.
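This energy wall falls out of the relativistic kinetic-energy formula, KE = (γ − 1)mc², where γ = 1/√(1 − v²/c²). A sketch for a hypothetical 1,000 kg probe (the mass is an assumption for illustration):

```python
import math

# Relativistic kinetic energy, KE = (gamma - 1) * m * c^2, at increasing
# fractions of c. The energy grows without bound as v approaches c, which
# is why c itself is unattainable for any massive object.
c = 299_792_458.0   # speed of light, m/s
m = 1000.0          # probe mass, kg (assumed for illustration)

for beta in (0.1, 0.5, 0.9, 0.99, 0.999):
    gamma = 1 / math.sqrt(1 - beta**2)
    ke = (gamma - 1) * m * c**2
    print(f"v = {beta:5.3f}c  ->  KE = {ke:.2e} J")
```

Going from 0.9 c to 0.999 c costs far more energy than getting to 0.9 c in the first place, and the curve only steepens from there.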

Also, the speed of light imposes time delays on communications. Even in a modest interstellar empire (say, 100 light-years in any direction), it would still take two hundred years for Earth to send a message to one of its outermost systems and receive a reply. Even if we could travel at 99% the speed of light, it would still take spacecraft over a century to respond to problems out on the rim.

For crews traveling from one edge of the empire to the other, the travel time would only feel like a few decades. But during that time, entire generations would be born and die, and entire planetary civilizations could collapse. Maintaining a “Galactic Empire” is therefore the stuff of fantasy, barring any breakthroughs that show how FTL could be possible.
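The delays described above can be sketched with the same Lorentz factor. Assuming a destination 100 light-years out (the edge of the hypothetical empire) and a ship moving at 0.99 c:

```python
import math

# Travel and signal times for a hypothetical 100-light-year empire.
# Earth-frame travel time is distance / speed; the crew's proper time
# is shorter by the Lorentz factor gamma (time dilation).
distance_ly = 100.0   # light-years to the outermost system (assumed)
beta = 0.99           # ship speed as a fraction of c (assumed)
gamma = 1 / math.sqrt(1 - beta**2)

earth_years = distance_ly / beta        # years elapsed on Earth
ship_years = earth_years / gamma        # years experienced aboard
signal_round_trip = 2 * distance_ly     # light signal there and back, years

print(f"Earth frame: {earth_years:.1f} yr, crew: {ship_years:.1f} yr, "
      f"message round trip: {signal_round_trip:.0f} yr")
```

A message round trip takes the two hundred years quoted above, and even the near-lightspeed ship needs over a century of Earth time to arrive, however short the trip feels to its crew.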

Once again, this is a great way of limiting a civilization’s growth, especially if the simulation looks like it measures 93 billion light-years from one end to the other but is actually only a few light-years in diameter. Even if the boundaries of our simulated Universe were just beyond our Solar System, it would take a very long time for us to send people out there to check!

*     *     *

Of course, there’s still the tiresome question of how we might go about proving this theory. In the paper where he proposed the Planetarium Hypothesis, Baxter stated flat out that it could never be proven either way. Some scholars have proposed various means of testing this and “simulation theory” in general, but there are some obvious flaws in their optimism.

First, there is the assumption that the laws of physics are the same inside the simulation as they are in the outside Universe. To put it into perspective, think of the hypothetical simulation as a gigantic video game. If the designers wanted to keep players confined to the game and from leveling up too fast, wouldn’t they want to set the difficulty on high?

Second, if the laws of physics as we know them are part of the simulation, how are we to use them to prove the existence of the simulation? Wouldn’t they be designed to show us whatever our overseers wanted us to see? How can you prove you’re in the box when everything about it is programmed to keep you unaware that you’re in a box?

During the 2016 Isaac Asimov Memorial Debate, physicist Lisa Randall summarized her views on the Simulation Hypothesis and whether it could ever be proven. As she said:

“We don’t know the answer, and we just keep doing science until it fails… To the extent that it gives us an incentive to ask interesting questions […] that’s certainly worth doing, to see what’s the extent of the laws of physics as we understand them. We are trying to figure it out to the extent we can.”

In the meantime, it makes for some fun speculation. And as Stephen Baxter certainly demonstrated, it makes for some great science fiction!




