Epistemic status and trigger warnings: Not rigorous in either math or physics. Not proof-read by any third party yet. May contain original research. Lengthy. Some sections are tongue-in-cheek. Anthropics. Cosmology. Theism.

TL;DR: There are good reasons why both the number of possible distinct human-like conscious experiences and the number of possible physics-like observable universes are countable.

Introduction

Catching up on The Bayesian Conspiracy, I recently listened to the episode from a month ago where they talked with Bentham's Bulldog about his argument for God, which was also recently linked by Scott as an example of a much more interesting anthropic mistake.

Unsurprisingly, like Eneasz and Steve, I was also not convinced by their guest's arguments.

However, my difficulties with these arguments start even earlier. Bentham's Bulldog asserts:

But the number of possible people is at least Beth 2—that’s an infinite way bigger than aleph null. In fact, Beth 2 is infinitely bigger than something that’s infinitely bigger than aleph null. Beth 2 is more than the numbers of numbers that exist—and there’s no plausible atheistic account of reality on which Beth 2 people come to exist.

His citation can be tracked back to D. Lewis: Counterfactuals (1973). The gist of it seems to be that you start with a Euclidean spacetime R^4, and consider mappings of that to {0,1} which indicate whether a particle is present at that location and time^{[1]}. Finally, you do some pruning of equivalent states and arrive at ℶ2 possible universes.

From this, Bentham's Bulldog then argues that there are ℶ2 possible people.

I will first argue against that second proposition, then present another lengthy armchair argument against ℶ2 possible universes, and show how to build universes like this one with finite resources. Finally, I will try to shed some light on the unfortunate implications of x∈R in general.

Counting people

People -- that is, the subjective experiences of sentient, sapient, possibly conscious beings -- are their brains. Compared to universes, brains have a few key advantages when trying to determine how many of them could possibly exist: they have a finite size and a finite amount of matter in them.

Let us consider the question of how many distinct brain states there are. As an upper limit, we could consider the wave function^{[2]} of all the n elementary particles in the brain: Ψ(→r1,→r2,…,→rn,t)^{[3]} near a fixed time t. Ψ is a continuous function R^(3n+1)→C. Of these, there exist only as many as there are real numbers: ℶ1^{[4]}. Looping over n∈N will not increase this.^{[5]}

But is not a human experience made up of a long stream of these brain-moments? Certainly not. All the memories of the things you think you have experienced in your lifetime are encoded in your brain. What your limited-bandwidth senses are telling you at this given moment could just be implemented as a boundary condition. Granted, most of the wave functions would not describe a brain, and certainly not a brain with coherent memories, but we are just interested in the upper limit.

We can debate whether this level of description is warranted. Do two brains which are identical except that in one of them the expected value of the position of a proton is shifted by a femtometer yield different human experiences?

You could argue that brains are complex systems whose behavior is chaotic, so that femtometer shift in (expected!) position could have huge cascading effects later on which are relevant to the human inhabiting the brain.

I do not think this is true. A lot of relevant microscopic processes are governed by quantum mechanics. For example, how long a neurotransmitter binds to a receptor -- which is certainly shaping the human experience -- should be influenced by quantum randomness. In the face of the noise of these processes, the impact of a very small perturbation should be drowned out rather than having a life-changing impact -- at most it changes the odds that you end up in some future brain state ever so slightly. A human having a 10% chance of picking their nose in the next minute and the same human with a 10.000001% chance are not having a different subjective experience.

Therefore, we do not actually have to track Ψ with arbitrary precision using real numbers, but can track it with sufficient accuracy on some fixed scale. You certainly don't require more than one single-particle state per Planck volume. We can describe every brain state contained in a given volume V which contains a certain number n of particles using some finite-precision coefficients describing how various finite combinations of single-particle states have to be combined to form our (silly quantum) brain state. Given quantum mechanics, the required information will likely be exponential in both n and V, but still very much finite.
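To get a feel for how 'very much finite' this is, here is a toy calculation. The brain volume, Planck volume, and particle count below are rough order-of-magnitude assumptions of mine, not numbers from any reference:

```python
import math

# Toy upper bound on distinguishable brain states. Assumptions (mine, rough):
# a brain volume of ~1.2e-3 m^3, a Planck volume of ~4.2e-105 m^3, and ~1e27
# particles, each tracked no finer than one state per Planck volume.
BRAIN_VOLUME_M3 = 1.2e-3      # assumed typical human brain volume
PLANCK_VOLUME_M3 = 4.22e-105  # approximate Planck volume
N_PARTICLES = 1e27            # assumed particle count, order of magnitude

cells = BRAIN_VOLUME_M3 / PLANCK_VOLUME_M3  # available Planck cells, ~3e101

# Placing each particle independently into one of `cells` cells gives at most
# cells**n configurations; work in log2 so the number stays representable.
log2_states = N_PARTICLES * math.log2(cells)

print(f"log2(#states) <= {log2_states:.3e}")  # astronomically huge, but finite
```

The point of working in log space is exactly the point of the paragraph: the bound is unimaginably large, yet it is a finite number of bits, not a cardinality.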

If we allow arbitrary brain volumes and particle counts, our estimate of meaningfully distinct human-like brain states should be ℶ0=ℵ0, or, as the kids say these days: countable.

But should we allow arbitrary brain sizes? Here I leave the thin ice of armchair physics and follow Bentham's Bulldog on the infinitesimally thin ice of anthropics. Suppose God was interested in running every one of the ℵ0 different brains (or every brain experiencing a good life). Is it not funny that you find yourself in one of the tiniest brains capable of appreciating its situation? Only a finite number of meaningfully distinct brain states fit into either a human skull or the (slightly larger) observable universe; almost all of the ℵ0 different brains are much larger than that. Even if God was limiting themself to brains with twice the volume of a typical human brain, the odds of randomly inhabiting a normal-sized human brain would be astronomically low. Imagine generating two petabytes of randomness, and then noticing that by chance, the first petabyte is all zeros. I would be surprised by this, but I am sure that there is some school of anthropics which would call that normal and expected.
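For scale, the petabyte coincidence above can be put into numbers with a back-of-the-envelope sketch:

```python
import math

# How likely is it that the first petabyte of two petabytes of uniform random
# bits happens to be all zeros? A quick sanity check of the analogy above.
PETABYTE_BITS = 8 * 10**15  # 1 PB = 1e15 bytes = 8e15 bits

# P = 2**(-PETABYTE_BITS); the probability itself underflows any float,
# so we keep its base-10 logarithm instead.
log10_p = -PETABYTE_BITS * math.log10(2)

print(f"log10 P ~= {log10_p:.3e}")  # around -2.4e15
```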

So, in my estimation, the number of human-sized brain states is finite, and rather small, as finite numbers go -- an upper bound would certainly be computable.

How many possible universes are there, really?

As a physicist, I have some comments about the Quine/Lewis world modelling. Classically, I would model a universe using a phase space, but this does not change any cardinalities. I would however argue that whenever a physicist writes x∈R in a model, they are more likely to use the real numbers because they are convenient -- you can take square roots to your heart's desire -- than because they want to strongly assert that there are ℶ1 different possible values for x.

I should also note that I will assume that the universe which we seem to inhabit is vaguely representative of the universes we want to consider. So if you want to argue that most universes actually run on magic, not physics, and that there are at least ℶBB(42) different magic particles and thus universes, I am not contradicting you, merely asking what the odds would be that I would end up in one of the few universes which seem to run on physics.

The effects of quantum mechanics

Quantum mechanics changes the number of possible universes in two different ways:

1. The Heisenberg uncertainty principle limits the resolution with which we can measure things like position and momentum.

2. Instead of having a particle trajectory →ri(t), we have a collective wave function Ψ(→ri,t)=Ψ(→r1,→r2,→r3,…,t), with i ranging over some index set J. Note that in most cases I have encountered, J⊂N, i.e. |J|≤ℵ0.

The first fact alone would cut down on our possibilities. Instead of having to use two real numbers to uniquely specify a particle in phase space, for almost all^{[6]} particles, I have a finite resolution for their position and momentum, so I can describe them with values from Q just fine. If my Δx=1, then I will not be able to experimentally distinguish x=3.1415 from x=π for any single particle.
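A minimal sketch of that binning argument -- the function name and resolutions are mine, purely for illustration:

```python
import math

def measured(x: float, dx: float) -> float:
    """Round a position to the nearest multiple of the resolution dx."""
    return round(x / dx) * dx

# With resolution dx = 1, x = 3.1415 and x = pi land in the same bin and are
# experimentally indistinguishable; a much finer resolution separates them.
assert measured(3.1415, 1.0) == measured(math.pi, 1.0)
assert measured(3.1415, 1e-5) != measured(math.pi, 1e-5)
```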

The effects of the latter one are a bit harder to quantify for me. As an upper limit, there exists a bijection of R^N to R, so the number of functions R^N→C is equal to the number of functions R→R, i.e. ℶ2. However, our wave functions have to satisfy certain criteria, for example their second derivative should exist and be continuous, which excludes almost all functions. I don't know enough math to be sure here, but my gut feeling is that we might just end up with a set whose cardinality is equal to the set of continuous real functions, that is ℶ1.

Measuring values within universes: can inhabitants know real numbers?

If the inhabitants of a universe can determine fundamental parameters such as the fine structure constant α to arbitrary precision, that might be a good reason to define these parameters using real values when instantiating a universe. Here is what you would require for that:

Infinite useful universe lifetime

Infinite accessible volume (for t→∞) to store the measured parameter

If you want a cheap universe where the fundamental parameters are only accessible with a limited precision, here are some design choices you can make:

Finite lifetime with the Big Crunch™ option

Finite useful lifetime

Heat death

Proton decay

Finite access to matter (e.g. accelerated expansion, periodic boundary conditions)

The case against ℶ1 particles

Suppose you have a universe filled with particles and want to enumerate them to prove that there are at most a countable number of them. One thing you might do is to consider the number of particles contained in concentric spheres of radius r around an arbitrary point of origin. If each of these spheres contains only a finite number of particles, your enumeration is easy: just assign zero to the particle closest to the origin, one to the next, and so on. Thus, if you want to have more than countably many particles, you would require one sphere of radius r which contains an infinite number of particles.
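The enumeration scheme above, sketched for a toy point cloud (the coordinates are made up):

```python
import math

# Four toy particles in 3D space; the origin is arbitrary, as in the text.
particles = [(2.0, 0.0, 0.0), (0.5, 0.5, 0.0), (-1.0, 2.0, 2.0), (0.1, 0.0, 0.0)]

def enumerate_by_radius(points):
    """Assign indices 0, 1, 2, ... in order of distance from the origin.

    This works precisely because every sphere of finite radius contains
    only finitely many of the points."""
    ordered = sorted(points, key=lambda p: math.dist(p, (0.0, 0.0, 0.0)))
    return {p: i for i, p in enumerate(ordered)}

index = enumerate_by_radius(particles)
assert index[(0.1, 0.0, 0.0)] == 0  # the nearest particle gets label zero
```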

Now, an infinite amount of particles with some non-zero mass m confined in a finite volume would have an infinite Schwarzschild radius. This is bad if you want your particles to fill your universe with fluff and uniqueness, because the whole mess will collapse into a black hole which can be described by just a few numbers.

There is a caveat: from what I can see, you might be okay having an infinite amount of photons all going in the same direction within a finite volume.^{[7]} This may be my bias as a fermion-based life form speaking, but I don't find that solution terribly interesting. Either their cross section for interacting with matter is literally zero, in which case they might as well not be there, or it is non-zero, in which case they will accelerate that matter to their own frame of reference, which would likely go poorly for whatever complex structures it might have contained before.

A guide for countable-budget gods for building observable universes

So you are a budget god, and want to shop for universe parts at the Hilbert Hotel Budget Outlet instead of the exclusive Beth Boutique? Here is how you can get away with it.

First, you want to limit the precision with which inhabitants can know any parameter, as discussed above. This also limits the precision with which you have to track anything (if you do it right).

For example, suppose a carbon-based life form whose observable universe you are simulating will be active for some ten billion years and gain access to some 10^80 protons distributed over 10^80 cubic meters. Say these primitives want to store any value with the maximum precision possible. As a (totally unrealistic) worst case, they can divide the volume of their realm into some 10^185 Planck volumes and distribute their protons among them. So they have less than (10^185)^(10^80) = 10^(10^82.267) different states available, or (if my math is right) 2^275 bits.^{[8]} So standard 2^256-bit shorts don't offer sufficient precision, and you should track the parameters of every particle in the universe with long 2^1024-bit precision^{[9]}. How much storage do you need to buy for that? It depends. If most of the particles are non-entangled, whatever god-pad you are reading this on likely has enough RAM. However, if you did not nerf entanglement, the ungrateful inhabitants might turn all of the protons you selflessly provided them for building clothes and fusion rockets into a 2^266-qubit quantum computer. The basis of that thing has a size of 2^(2^266), so you need that many coefficients (of some tiny size like 2^1024 bits each). Most universes designed by beginner gods actually end when inhabitants build large entangled systems and hit the memory limit of their god-cloud free account.
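If you want to double-check my arithmetic before committing to a storage tier, it can all be verified in log space:

```python
import math

# The article's own round numbers: 10^80 protons distributed over 10^185
# Planck volumes give (10^185)^(10^80) = 10^(185 * 10^80) states.
log10_states = 185 * 10**80               # exponent of 10^(...), ~1.85e82

log2_bits = log10_states * math.log2(10)  # bits needed to index the states
log2_of_bits = math.log2(log2_bits)       # ~275, matching "2^275 bits"

print(round(log2_of_bits))
```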

If you pay for ℵ0 cloud storage, you can actually have a countable number of such universes. You can also have non-limited observable universes, but take care to impose measurement limits in some way; inhabitants tend to form weird cults once they discover that the free parameters of their models are neat fractions. Using diagonalization enumeration schemes, you can even have countably many such universes (but that really messes up CPU caching).

Among chaotic evil deities, it is common to increase the precision as the light cone of a civilization they are running gains the ability to represent larger numbers. Many of them openly admit to doing this just to piss off the snobbish lawful good 'my universes all run on real numbers' types by pointing out that the inhabitants of either kind can't tell the difference. Getting that trick right consistently is not easy, however (not that the chaotics care). Neutral evil gods prefer to rerun their simulation with higher precision from the start whenever they would run into detectable rounding errors.

Final thoughts: On 'real' numbers

When students travel through the education system and encounter the different sets of items we label numbers, typically the natural numbers N, the integers Z, the rational numbers Q, the real numbers R, and the complex numbers C, the point where they anecdotally tend to freak out is when encountering the complex numbers.

I think that this is due to a failure of the educational system. If the inhabitants of R were adequately explained, this would freak people out more than i^2=−1 or e^(iπ)=−1 (xkcd).

Instead, like some irresponsible summoner wanting to trick people into believing that demons are cute and cuddly, the inhabitants of R∖Q which the median math teacher introduces to students are carefully selected to not be representative. √2, π, e all seem nice enough. Their decimal representations run on forever just like 1/99, only for them we have to follow slightly more complex algorithms to calculate more digits. Few teachers mention that an inhabitant of R which can be named is not a typical inhabitant. Even in analysis courses at university, students are unlikely to encounter a representative Cauchy sequence -- one whose definition does not fit into any finite amount of space -- which would point them to the eldritch horrors lurking in the depths of R between the sparse beacons of computable numbers.
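For contrast, here is what a nameable inhabitant of R looks like: a finite program which, given n, returns the n-th element of a Cauchy sequence converging to √2 (a sketch using Newton's iteration on exact fractions; the helper name is mine). Since every computable real is named by some finite program, there are only countably many of them.

```python
from fractions import Fraction

def sqrt2_approx(n: int) -> Fraction:
    """n-th Newton iterate for sqrt(2), starting from 1; converges quadratically."""
    x = Fraction(1)
    for _ in range(n):
        x = (x + 2 / x) / 2  # Newton step for f(x) = x^2 - 2, in exact arithmetic
    return x

# Six iterations already pin sqrt(2) down far beyond float precision.
approx = sqrt2_approx(6)
assert abs(approx * approx - 2) < Fraction(1, 10**20)
```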

The very name 'real' numbers is part of the sugar-coating, like calling the inhabitants of the Warp 'cuddle-ponies'. From there, you have programming languages such as Fortran or Pascal calling their floating point numbers -- which are obviously unfit to represent even the slightest fraction of Q -- real. (K&R avoided stepping into that particular pile of poo; their int is at least named somewhat differently from integer.)

I am not suggesting that we get rid of R -- it certainly has many useful properties, just like the power of the Warp enables the Sanctioned Psyker to do many useful physical things. (My gut feeling is that one could probably rewrite many theorems in analysis to deal with computable real numbers instead without too much trouble, but I am not exactly volunteering.) However, when modelling things and considering cardinalities, we should keep in mind that R is not necessarily the sane default choice.

[1]

This construction is what Quine calls a Democritean world in Propositional Objects. (Yes, that Quine. From a cursory read of that article, he indeed postulates real-valued coordinates, but makes no claims on the cardinality of possible worlds based on that.)

[2]

This is total overkill to functionally describe a brain, of course. On the plus side, it also works in the face of 'but the brain is a quantum computer, woo!'

[3]

Physicists will notice that my wave function should also track a few more quantum numbers, such as spin. However, this will not increase the state space significantly.

[4]

Sketch of a proof: There are continuous transformations of R to (0,1); apply these to all the parameters to end up with a function f:(0,1)^k→C which is continuous iff the original function is. While k>1, consider g(t,…)=f(x(t),y(t),…), where x(t), y(t) form a space-filling curve. g is a composition of continuous functions, and thus continuous. The transformation from f to g is injective, thus the number of continuous functions f is at most the number of continuous functions g (not every continuous g is the transform of a continuous f). Replace f with g and repeat this process until you arrive at a function g:(0,1)→C. g is continuous iff Re g and Im g are continuous. Per Wikipedia, there exist ℶ1 continuous functions from R to R, so we have ℶ1^2=ℶ1 functions g, and thus had at most that many original functions f. The proof that there are at least ℶ1 continuous functions f is left as an exercise.

[5]

If you allow brains made out of an infinite number of particles, you might get more, but they would also be completely unlike human brains.

[6]

Mathematically, I can have Δp=0 for Δx=∞, i.e. plane waves whose momentum could be known to infinite precision, but I would argue that while this is a good approximation for some models, it is also a pathological case which does not appear in our reality.

[7]

I tried to consider the case where you have n photons and a single massive particle at rest, but if my math (SR only, I don't know GR) is right, this configuration will not collapse into a BH either for n→∞: the energy in the center of mass system will scale with √n, but the wavelength of the photons in the CMS will also scale with √n. So adding more photons will increase the Schwarzschild radius and the spatial extent of your system by the same factor.

[8]

Given the rough speed of one digit per decade for fundamental constants, storage space will not be the bottleneck for that civilization.

[9]

Also take care of accumulating errors; talk to your numerical analysis consultant for details.

Epistemic status and trigger warnings:

Not rigorous in either math or physics. Not proof-read by any third party yet. May contain original research. Lengthy. Some sections are tongue-in-cheek. Anthrophics. Cosmology. Theism.TL;DR: There are good reasons why both the number of possible distinct human-like conscious experiences and the number of possible physics-like observable universes are countable.

## Introduction

Catching up The Bayesian Conspiracy, I have recently listened to the episode from a month ago where they talked with Bentham's Bulldog about his argument for God, which was also recently linked by Scott, as an example of an anthropic mistake which is much more interesting.

Unsurprisingly, like Eneasz and Steve, I was also not convinced by their guest's arguments.

However, my difficulties with these arguments start even earlier. Bentham's Bulldog asserts:

His citation can be tracked back to D. Lewis: Counterfactuals (1973). The gist of it seems to be that you start with an Euklidian spacetime R4, and consider mappings of that to {0,1} which indicate if a particle is present at that location and time

^{[1]}. Finally you do some pruning of equivalent states and arrive at ℶ2 possible universes.From this, Bentham's Bulldog then argues that there are ℶ2 possible people.

I will first argue against that second proposition, then present another lengthy armchair argument against ℶ2 possible universes, and how to build universes like this one with finite resources. Finally I will try to shed some light on the unfortunate implications of x∈R in general.

## Counting people

People, that is, the subjective experience of sentient, sapient, possibly conscious beings are their brains. Compared to universes, brains have a few key advantages when trying to determine how many of them could possibly exist: they have a finite size and a finite amount of matter in them.

Let us consider the question how many distinct brain states there are. As an upper limit, we could consider the wave function

^{[2]}of all the n elementary particles in the brain: Ψ(→r1,→r2,…,→rn,t)^{[3]}near a fixed time t. Ψ is a continuous function R3n+1→C. Of these, there exist only as many as real numbers: ℶ1^{[4]}. Looping over n∈N will not increase this.^{[5]}But is not a human experience made up from a long stream of these brain-moments? Certainly not. All the memories of the things you thought you have experienced in your life-time are encoded in your brain. What your limited bandwidth senses are telling you at this given moment could just be implemented as a boundary condition. Granted, most of the wave functions would not describe a brain, and certainly not a brain which has coherent memories, but we are just interested in the upper limit.

We can debate if this level of description is warranted. Are two brains which are identical except that in the one brain, the expected value of the position of a proton is shifted by a femtometer yielding different human experiences?

You could argue that brains are complex systems whose behavior is chaotic, so that femtometer shift in (expected!) position could have huge cascading effects later on which are relevant to the human inhabiting the brain.

I do not think this is true. A lot of relevant microscopic processes are governed by quantum mechanics. For example, how long a neurotransmitter binds to a receptor -- which is certainly shaping the human experience -- should be influenced by quantum randomness. In the face of the noise of these processes, the impact of a very small perturbation should be drowned out rather than having a life-changing impact -- at most it changes the odds that you end up in some future brain state ever so slightly. A human having a 10% chance of picking their nose in the next minute and the same human with a 10.000001% chance are not having a different subjective experience.

Therefore, we do not actually have to track Ψ with arbitrary precision using real numbers, but we can track it with sufficient accuracy on some fixed scale. You certainly don't require more than one single particle state per Planck volume. We can describe every brain state contained in a given volume V which contains a certain amount of particles n using some finite precision coefficients describing how a various finite combinations of single particle states have to be combined to form our (silly quantum) brain state. Given quantum mechanics, the required information will likely be exponential in both n and V, but still very much finite.

If we allow arbitrary brain volumes and particle counts, we our estimate of meaningfully distinct human-like brain states should be ℶ0=ℵ0, or as kids say these days: countable.

But should we allow arbitrary brain sizes? Here I leave the thin ice of armchair physics and follow Bentham's Bulldog on the infinitesimal thin ice of anthropics. Suppose God was interested in running every one of the ℵ0 different brains (or every brain experiencing a good life). Is it not funny that you find yourself in one of the tiniest brains capable of appreciating its situation? Only a finite number of meaningfully distinct brain states fit into either a human skull or the (slightly larger) observable universe, almost all of the ℵ0 different brains are much larger than that. Even if God was limiting themself to brains with twice the volume of a typical human brain, the odds of randomly inhabiting a normal-sized human brain would be astronomically low. Imagine generating two petabytes of randomness, and then noticing that by chance, the first petabyte is all zeros. I would be surprised by this, but I am sure that there is some school of anthropics who would call that normal and expected.

So, in my estimation, the number of human-sized brain states is finite, and rather small, as finite numbers go -- an upper bound would certainly be computable.

## How many possible universes are there, really?

As a physicist, I have some comments about the Quine/Lewis world modelling. Classically, I would model a universe by using a phase space, but this does not change any cardinalities. I would however argue that whenever a physicist writes x∈R in a model, they are more likely to use the real numbers because they are convenient -- you can take square roots to your heart's desire -- than because they want to strongly assert that that there are ℶ1 different possible values for x.

I should also note that I will assume that the universe which we seem to inhabit is vaguely representative of the universes we want to consider. So if you want to argue that most universes actually run on magic, not physics, and that there are at least ℶBB(42) different magic particles and thus universes, I am not contradicting you, merely asking what the odds would be that I would end up in one of the few universes which seem to run on physics.

## The effects of quantum mechanics

Quantum mechanics changes the number of possible universes in two different ways:

The first fact alone would cut down on our possibilities. Instead of having to use two real numbers to uniquely specify a particle in phase space, for almost all

^{[6]}particles, I have a finite resolution for their position and momentum, so I can describe them with values from Q just fine. If my Δx=1, then I will not be able to experimentally distinguish x=3.1415 from x=π for any single particle.The effects of the latter one are a bit harder to quantify for me. As an upper limit, there exists a bijection of RN to R, so the number of functions RN→C is equal to the number of functions R→R, i.e. ℶ2. However, our wave functions have to satisfy certain criteria, for example their second derivative should exist and be continuous, which excludes almost all functions. I don't know enough math to be sure here, but my gut feeling is that we might just end up with a set whose cardinality is equal to the set of continuous real functions, that is ℶ1.

## Measuring values within universes: can inhabitants know real numbers?

If the inhabitants of a universe can determine fundamental parameters such as the fine structure constant α to arbitrary precision, that might be a good reason to define these parameters using real values when instantiating a universe. Here is what you would require for that:

If you want a cheap universe where the fundamental parameters are only accessible with a limited precision, here are some design choices you can make:

## The case against ℶ1 particles

Suppose you have a universe filled with particles and want to enumerate them to prove that there are at most a countable number of them. One thing you might do is to consider the number of particles contained in concentric spheres of radius r around an arbitrary point of origin. If each of these spheres contains only a finite number of particles, your enumeration is easy: just assign zero to the particle closest to the origin, one to the next and so on. Thus, if you want to have more than countable many particles, you would require one sphere of radius r which contains an infinite number of particles.

Now, an infinite amount of particles with

some non-zero massm confined in a finite volume would have a an infinite Schwarzschild radius. This is bad if you want your particles to fill your universe with fluff and uniqueness, because the whole mess will collapse into a black hole which can be described by just a few numbers.There is a caveat: from what I can see, you

mightbe okay having an infinite amount of photonsall going in the same directionwithin a finite volume.^{[7]}This may be my bias as a fermion-based life form speaking, but I don't find that solution terribly interesting. Either their cross section of interacting with matter is literally zero, in which case they might as well not be there, or it is non-zero, in which case they will accelerate that mass to their own frame of reference, which would likely go poorly for whatever complex structures that matter might have contained before.## A guide for countable-budget gods for building observable universes

So you are a budget god, and want to shop for universe parts at the Hilbert Hotel Budget Outlet instead of the exclusive Beth Boutique? Here is how you can get away with it.

First, you want to limit the precision with which inhabitants can know any parameter, as discussed above. This also limits the precision with which you have to track anything (if you do it right).

For example, suppose a carbon based life form whose observable universe you are simulating will be active for some ten billion years and gain access to some 1080 protons distributed over 1080 cubic meters. Say these primitives want to store any value with the maximum precision possible. As a (totally unrealistic) worst case, they can divide the volume of their realm into some 10185 Planck volumes and distribute their protons in them. So they have less than(10185)(1080)=10(1082.267) different states available, or (if my math is right) 2275 bits.

^{[8]}So standard 2256 bit shorts don't offer sufficient precision and you should track the parameters of every particle in the universe are calculated with long 21024 bit precision^{[9]}. How much storage do you need to buy for that? It depends. If most of the particles are non-entangled, whatever god-pad you are reading this on has likely enough RAM. However, if you did not nerf entanglement, the ungrateful inhabitants might turn all of the protons you selflessly provided them for building clothes and fusion rockets into a 2266 bit quantum computer. The basis of that thing has a size of 2(2266), so you need that many coefficients (of some tiny size like 21024 bit each). Most universes designed by beginner gods actually end when inhabitants build large entangled systems and the hit the memory limit of their god-cloud free account.If you pay for ℵ0 cloud storage, you can actually have a countable number of such universes. You can also have non-limited observable universes, but take care to impose measurement limits in some ways, inhabitants tend to form weird cults once they discover that free parameters of their models are neat fractions. Using diagonalization enumeration schemes, you can even have countably many such universes (but that really messes up CPU caching).

Among chaotic evil deities, it is common to increase the precision as the ability of the light cone of a civilization they are running to represent larger numbers increases. Many of them openly admit to doing this just to piss of the snobbish lawful good '

my universes all run on real numbers' types by pointing out thateither's inhabitants can't tell the difference. Getting that trick right consistently is not easy however (not that the chaotics care). Neutral evil gods prefer to rerun their simulation with higher precision from the start whenever they would run into detectable rounding errors.## Final thoughts: On 'real' numbers

When students travel through the education system and encounter the different sets of items we label numbers, typically the natural numbers N, the integers Z, the rational numbers Q, the real numbers R, and the complex numbers C, the point where they anecdotally tend to freak out is when encountering the complex numbers.

I think that this is due to a failure of the educational system. If the inhabitants of R were adequately explained, this would freak people out more than i2=−1 or eiπ=−1 (xkcd).

Instead, like some irresponsible summoner wanting to trick people into believing that demons are cute and cuddly, the inhabitants of R∖Q the median math teacher introduces to students are carefully selected to not be representative. √2, π, e all seem nice enough. Their decimal representation runs on forever just like that of 1/99, only for them we have to follow slightly more complex algorithms to calculate more digits. Few teachers mention that an inhabitant of R which can be named is not a typical inhabitant. Even in analysis courses in university, students are unlikely to encounter a Cauchy sequence whose definition does not fit into any finite amount of space, which would point them to a representative eldritch horror lurking in the depths of R between the sparse beacons of the computable numbers.
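As an illustration of those 'slightly more complex algorithms': the decimal digits of √2 can be produced with nothing but exact integer arithmetic. A toy sketch (not the method any particular curriculum uses):

```python
from math import isqrt

def sqrt2_digits(n: int) -> str:
    """Digit string of floor(sqrt(2) * 10**n), i.e. '1' followed by the
    first n decimal places of sqrt(2), computed with exact integers."""
    return str(isqrt(2 * 10 ** (2 * n)))

print(sqrt2_digits(10))  # 14142135623 -> read as 1.4142135623...
```

Each extra digit just means one more power of ten inside the integer square root; the eldritch majority of R admits no such finite recipe.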

The very name 'real' numbers is part of the sugar-coating, like calling the inhabitants of the Warp 'cuddle-ponies'. From there, you have programming languages such as Fortran or Pascal calling their floating point numbers -- which are obviously unfit to represent even the slightest fraction of Q -- real. (K&R avoided stepping into that particular pile of poo, and their int is at least named somewhat differently from integer.)

I am not suggesting that we get rid of R -- it certainly has many useful properties, just like the power of the Warp enables the Sanctioned Psyker to do many useful physical things. (My gut feeling is that one could probably rewrite many theorems in analysis to deal with computable real numbers instead without too much trouble, but I am not exactly volunteering.) However, when modelling things and considering cardinalities, we should keep in mind that R is not necessarily the sane default choice.
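The jab at floating point 'reals' is easy to make concrete in any language offering both binary floats and exact rationals; a small Python illustration:

```python
from fractions import Fraction

# 0.1 is silently rounded to the nearest representable binary float,
# so three such roundings don't add up the way decimals suggest.
print(0.1 + 0.2 == 0.3)  # False

# Exact rational arithmetic (a pair of integers) has no such problem.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# The value actually stored when you write 0.1:
print(Fraction(0.1))  # 3602879701896397/36028797018963968
```

So a float type covers neither R nor Q; it is a finite set of dyadic rationals wearing the name 'real'.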

^{^}Quine calls this construction a Democritean world in Propositional Objects. (Yes, that Quine. From a cursory read of that article, he indeed postulates real-valued coordinates, but makes no claims on the cardinality of possible worlds based on that.)

^{^}This is total overkill to functionally describe a brain, of course. On the plus side, it also works in the face of 'but the brain is a quantum computer, woo!'

^{^}Physicists will notice that my wave function should also track a few more quantum numbers, such as spin. However, this will not increase the state space significantly.

^{^}Sketch of a proof: There are continuous bijections from R to (0,1); apply these to all the parameters to end up with a function f:(0,1)^k→C which is continuous iff the original function is. While k>1, consider g(t,…)=f(x(t),y(t),…), where x(t), y(t) form a space-filling curve. g is a composition of continuous functions, and thus continuous. The transformation from f to g is injective, thus the number of continuous functions f is at most the number of continuous functions g (not every continuous g is the image of a continuous f). Replace f with g and repeat this process until you arrive at a function g:(0,1)→C. g is continuous iff Re g and Im g are continuous. Per Wikipedia, there exist ℶ1 continuous functions from R to R, so we have at most ℶ1^2 = ℶ1 functions g, and thus had at most that many original functions f. The proof that there are at least ℶ1 continuous functions f is left as an exercise.

^{^}If you allow brains made out of an infinite number of particles, you might get more, but they would also be completely unlike human brains.

^{^}Mathematically, I can have Δp=0 for Δx=∞, i.e. plane waves whose momentum could be known to infinite precision, but I would argue that while this is a good approximation for some models, it is also a pathological case which does not appear in our reality.

^{^}I tried to consider the case where you have n photons and a single massive particle at rest, but if my math (SR only, I don't know GR) is right, this configuration will not collapse into a BH either for n→∞: The energy in the center of mass system will scale with √n, but the wave length of the photons in the CMS will also scale with √n. So adding more photons will increase the Schwarzschild radius and the spatial extent of your system by the same factor.
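The √n scaling claimed in this footnote can be sanity-checked numerically. A minimal sketch (SR only, units with c = 1; the mass and photon energy are arbitrary toy values):

```python
from math import sqrt

def invariant_mass(m: float, n: int, eps: float) -> float:
    """Invariant mass of a particle of mass m at rest plus n collinear
    photons of lab energy eps each: sqrt(E**2 - p**2) with E = m + n*eps,
    p = n*eps, expanded to sqrt(m**2 + 2*m*n*eps) to avoid cancellation."""
    return sqrt(m * m + 2.0 * m * n * eps)

def photon_energy_cms(m: float, n: int, eps: float) -> float:
    """Doppler-shifted photon energy in the center-of-momentum frame:
    eps_cms = eps * (E - p) / M = eps * m / M."""
    return eps * m / invariant_mass(m, n, eps)

# Quadrupling n should double both the invariant mass (~ Schwarzschild
# radius) and the CMS photon wavelength (~ 1 / eps_cms):
m, eps = 1.0, 1.0
r_mass = invariant_mass(m, 4 * 10**12, eps) / invariant_mass(m, 10**12, eps)
r_wave = photon_energy_cms(m, 10**12, eps) / photon_energy_cms(m, 4 * 10**12, eps)
print(round(r_mass, 3), round(r_wave, 3))  # both ~ 2.0 = sqrt(4)
```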

^{^}Given the rough speed of one digit per decade for fundamental constants, storage space will not be the bottleneck for that civilization.

^{^}Also watch out for accumulating errors; talk to your numerical analysis consultant for details.