INTRODUCTION
In Utilitarian ethics, one important factor in making moral decisions is the relative moral weight of all moral patients affected by the decision. For instance, when EAs try to determine whether shrimp or bee welfare (or even that of chickens or hogs) is a cause worth putting money and effort into advancing, the importance of an individual bee or shrimp’s hedonic state (relative to that of a human, or a fish, or a far-future mind affected by the long-term fate of civilization) is a crucial consideration. If shrimp suffer, say, 10% as much as humans would in analogous mental states, then shrimp welfare charities are likely the most effective animal welfare organizations to donate to (in terms of suffering averted per dollar) by orders of magnitude, but if the real ratio is closer to 10⁻⁵ (like the ratio between shrimp and human brain neuron counts), then the cause seems much less important.
One property of a moral patient that many consider an important contributor to its moral worth is its size or complexity. As it happens, there are a number of different ways that moral worth could plausibly scale with a moral patient’s mental complexity, ranging from constant moral worth all the way up to exponential scaling laws. Furthermore, these are affected by one’s philosophy of consciousness and of qualia in perhaps unintuitive ways. In the remainder of this essay, I will break down, one by one, some plausible scaling laws and the beliefs about phenomenology that could lead to them.
ASSUMPTIONS AND DISCLAIMERS
In this post, I am assuming:
Physicalism
Computationalism
Hedonic Utilitarianism, and
That qualia exist and are the source of moral utility.
This blog post will likely be of little value to you if you think that these premises are incorrect, especially the latter two, partially because I'm working from assumptions you think are wrong and partially because I frequently equivocate between things that are situationally equivalent under this worldview (e.g. components of a person’s mind and components of their brain or the computation it implements) for convenience.
I am not trying to argue that any of the scaling laws below are true per se, nor do I mean to suggest that any of the arguments below are bulletproof, or even all that strong (they support contradictory conclusions, after all). I aim instead to show that each of the scaling laws can be vaguely reasonably argued for based on some combination of phenomenological beliefs.
SCALING LAWS
1. Constant Scaling
This is the simplest possible scaling law. One can reasonably assume it by default if one doesn’t buy any of the suppositions used to derive the other scaling laws below. There’s not really much more to say about constant scaling.
2. Linear Scaling
This is perhaps the most intuitive way that moral worth could scale. One obtains linear scaling of moral importance if one assumes that minds generate qualia through the independent action of a bunch of very small components.
This seems plausible if we imagine more complicated minds as a group of individually simpler minds in communication with each other, each of which preserves the moral status that it would have as an individual. I think that this is an excellent model of some morally relevant systems, but probably a poor model of others. The moral importance of a set of ten random non-interacting people, for instance, is clearly just the sum of the importances of its individual members—it’s hard to argue that they become more or less important just because one mentally categorizes them together—but a moral patient composed solely of specialized components that are somehow entirely unlike each other in all possible ways, or a near-apophatic god with no constituent components, would be very difficult to shoehorn into this framework.

The minds/brains of large animals like humans, in my view, fall in between these two extremes. While large animal brains strictly depend on each of several heterogeneous functional components (e.g. the human cerebral cortex, thalamus, hypothalamus, etc.) to perform morally relevant activity, these components can largely each be broken up into smaller subunits with similar structures and functions (the minicolumns of the cerebral cortex, individual white matter fibers, the canonical microcircuit of the cerebellum, etc.). It seems reasonable enough that each of these units might contribute roughly equally to a moral patient’s importance, irrespective of global characteristics of the moral patient. One could imagine, for example, that positive or negative feelings in mammals come from the behavior of each cortical minicolumn individually being positively or negatively reinforced, and that the total hedonic value of the feelings can be obtained by adding up the contributions of each minicolumn. (This is, again, just an example—the actual causes of moral valence are probably much more complicated than this, but the point is that they could plausibly come from the largely-independent action of mental subunits, and that we should expect linear scaling in that case.)
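To make the additive picture explicit, here is a minimal formalization (my own notation, not something taken from anywhere in particular): if a mind consists of N qualia-generating subunits, and subunit i contributes hedonic value u_i independently of the others, then

```latex
\[
U_{\text{total}} \;=\; \sum_{i=1}^{N} u_i \;\approx\; N\,\bar{u},
\]
```

so as long as the average per-subunit contribution is roughly independent of how large the whole mind is, total moral weight grows linearly with the number of subunits.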
3. Superlinear Integer Power Law
What if one accepts the division of minds into similar subunits like in the linear scaling argument, but thinks that moral relevance comes from aggregating the independent moral relevance of interactions between functional subunits of different kinds? For instance, perhaps the example from earlier where hedonic value comes from the reinforcement of minicolumn behavior is true, but the reinforcement a minicolumn receives from each subcortical nucleus is separable and independently morally relevant. For another example, one might find the origin of consciousness in the interactions between several different cortical regions and basal ganglia, and think that the superimposed effects of all circuits containing one subcomponent from each of these structures each contribute to conscious experience. In cases like these, moral weight scales with the product of the numbers of subcomponents in each functional role. If the number of each type of subcomponent scales up with the complexity of the overall mind or brain, then this results in a power law with a positive integer exponent.
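In symbols (again, a toy formalization of my own rather than anything established): suppose the morally relevant units are interactions that involve one subunit from each of k functional types, with n_1, ..., n_k subunits of each type, and that each such interaction contributes roughly equally. Then

```latex
\[
W \;\propto\; \prod_{j=1}^{k} n_j, \qquad n_j \propto n \;\Rightarrow\; W \;\propto\; n^{k},
\]
```

a power law whose exponent is the number of functional roles that have to interact.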
4. Non-Integer (incl. Sublinear) Power Law
Of course, it’s possible that adding more subunits to the system reduces the moral importance of each interaction between subunits. After all, if the number of morally relevant interactions involving each subunit scales up with the size of the system raised to, say, the fifth power, and one brain is a hundred times larger than another, then surely some of the 10¹⁰ times more interactions any given subunit participates in in the larger brain fail to ever meaningfully influence its behavior (or those of any of the other interacting subunits). If actual, realized interaction effects (rather than the mere possibility thereof) are what cause moral importance, then you would get slower scaling than under the naive sixth-order law. If the chance of a possible interaction effect being realized drops off with brain size following a non-integer power law for some reason, then you get a non-integer power law for total moral scaling. More generally, this construction can give you any scaling law that is the quotient of a power law and some other, more slowly growing function.
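As a toy way of writing this down: keep the n^k count of possible interactions from the previous section, but suppose only a fraction p(n) of them are ever actually realized, with p(n) falling off as some power of brain size. Then

```latex
\[
W \;\propto\; n^{k}\, p(n), \qquad p(n) \propto n^{-\alpha} \;\Rightarrow\; W \;\propto\; n^{\,k-\alpha},
\]
```

which is a non-integer power law whenever α is non-integer, and sublinear whenever α > k − 1.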
You could also extend this argument to modify the earlier model where subunits just directly and independently generate moral valence. For instance, perhaps increasing the number of subunits increases sparsity or something (so each subunit is active less often), and the moral value a subunit generates increases with its activity. In that case, moral value would specifically scale sublinearly.
5. Exponential Scaling
The previous three groups of scaling laws have been justified by modeling the brain as composed of non-overlapping subunits. Set those thoughts aside for now—exponential scaling of moral worth, if it happens, happens via a completely different mechanism.
One difficult philosophical problem is that of deciding what beings are moral patients. It may seem intuitively obvious that morally relevant systems cannot overlap, in the sense that you can’t have two of them that share some of the same physical substrate and generate qualia through some of the same individual computational operations. However, one can raise a number of objections to this claim:
Continuity when merging or splitting minds: If we suppose that overlapping moral patients are impossible, we are forced to draw unreasonable conclusions as to when exactly one becomes two (or two become one) when they are split or merged.
It’s a well-known fact that young children can survive having one of their brain hemispheres removed or disconnected from the rest of the brain, often even without major long-term motor or cognitive issues. This surgery, called hemispherectomy, is sometimes used as a treatment for severe epilepsy.
If one were to perform a hemispherectomy on a healthy person, one could remove either hemisphere, and the remaining one would probably be able to pilot the subject in a cognitively normal manner, as this is typically the case for the healthier hemisphere left over when hemispherectomy is performed in the usual clinical context. On this basis, after the hemispherectomy is completed, one could consider each hemisphere (supposing the disconnected one is kept alive and functioning rather than discarded) to be a moral patient, and, since they can’t interact, an independent one. There was only one moral patient before the surgery, so if moral patients can’t be overlapping computational and physical systems, the personhood of the hemispherectomy patient as a whole must be replaced with those of the two hemispheres at some point during the procedure.
You can probably see where I’m going with this. If a hemispherectomy were slowly performed on a conscious (if presumably immobilized, etc.) healthy subject, when would the subject as a whole stop being a moral patient and each of their hemispheres start being one? This could happen either when the last communication between the hemispheres ceases, or sometime before then, when the degree to which the hemispheres are integrated falls below some threshold.
Let’s first consider the case in which it happens at the end. If we somehow undo the very last bit of the operation, restoring the last individual axon severed in each direction or whatever so that only a tiny amount of information can flow back and forth, does each hemisphere stop having qualia and the patient’s overall brain resume doing so? If we answer no, then we’re establishing that physically and computationally identical systems (the brain before and after the reversal of the last bit of the hemispherectomy; in practice, there’d probably be minute differences, but we can handwave this away on the grounds that the changes are too small to be meaningful, or by positing an extremely short interval between severing and restoring connections, or that the two hemispheres somehow evolve right back to their original states by the end of the interval) can generate different qualia or do so in different manners, which violates physicalism and computationalism. (It also implies that qualia are at least sometimes epiphenomenal, given that the evolution of the universe’s state is wholly determined by its physical conditions in the present, which the patient’s qualia would not be determined by.) If we answer yes, then we raise the possibility that moral patients can stop having qualia due to arbitrarily low-bandwidth communication with other moral patients. If restoring the last pair of axons causes the hemispheres to each stop generating qualia, would the same thing happen if we had some BCI replicate the effect of a single pair of white matter fibers between the cingulate cortices of two normal people? Or hell, even if they were in a conversation with each other?
Now, let’s consider the second case, in which the shift happens before the end of the procedure. This is still unappealing, because it posits a discontinuous change in qualia driven by a continuous (or nearly so) change in the computational system that generates them. It also raises the question of where exactly the cutoff is.
The idea that qualia are generated by the interaction of different types of brain component, as I described in the power law section, seems vaguely plausible, and it would entail different qualia-generating processes that share some computational components (i.e. interactions involving the same members of some, but not all, of the brain component types).
Various subsystems of anyone’s brain seem like they would definitely constitute moral patients if they stood alone (e.g. the brain minus this random square millimeter of cortex, the brain minus this other little square millimeter of cortex, and so on). Why would interacting with the rest of the brain (i.e. the excluded square millimeter of cortex) make them stop having independent consciousness?
If we hold that a system that would be a moral patient in isolation still is one when overlapping with or a component of another, then the total moral worth of complicated minds can grow very, very quickly. If we suppose that some sort of animal would usually remain a moral patient if it lost a random 3% of its cortical minicolumns, for example, then this would imply that the number of simultaneously qualia-generating subsystems in it scales exponentially (and extremely rapidly) with the area of its cerebral cortex. If the average moral weight of each of the subsystems is independent of scale, then this would make its total moral weight scale exponentially as well. Of course, this line of reasoning fails if the mean moral weight of each subsystem falls exponentially with overall scale (and with a base precisely the inverse of the one for the growth of the number of qualia-generating subsystems) somehow.
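To see where the exponential comes from, here is a back-of-the-envelope count under the (strong, purely illustrative) assumption that every subsystem formed by deleting some particular 3% of the N cortical minicolumns counts as a distinct qualia-generating subsystem. The number of ways to choose which 3% to delete is

```latex
\[
\binom{N}{0.03N} \;\approx\; 2^{N\,H(0.03)} \;\approx\; 2^{0.19\,N},
\]
```

where H is the binary entropy function, so the number of overlapping subsystems (and, if their average weight doesn't shrink to compensate, the total moral weight) grows exponentially in the number of minicolumns.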
A corollary of this would be that more robust minds, from which more components could be removed without ending phenomenal consciousness, are vastly more morally important than less robust ones of comparable size.
6. Sublinear Scaling, but Without Direct Subunit Interference
If one accepts the model of qualia formation that I used to motivate linear moral scaling above, but doesn’t think that identical moral goods produced independently by different systems have stacking effects (see the linked post above for a defense of that opinion), then one may arrive at the conclusion that moral worth scales sublinearly with mental complexity, because different qualia-generating subsystems in a mind generate qualia that are valuable in overlapping ways.
7. Constant Scaling, but the Constant Is 0
If all sentient systems that will be physically realized will be realized multiple times—as would follow if the universe is spatially homogeneous and infinite, or if the mathematical universe hypothesis is true—and the point about identical moral goods being redundant from the previous section is true, then one could say that all individual minds have zero moral worth (as the qualia they are generating at any given time are not unique to them).
PRACTICAL IMPLICATIONS
How would any of the nonlinear scaling laws presented in this post affect the optimal decisions for us to make here in physical reality if they were correct?
I briefly mentioned one in this post’s introduction: EA cause prioritization. If moral importance scales, ceteris paribus, with the square or cube of brain size (to say nothing of exponential scaling), then much of the money spent on animal welfare should be reallocated from helping smaller animals to helping larger ones, or likely even to causes affecting humans, in spite of potentially vast decreases in the number of individual animals affected. The semi-common EA-adjacent argument that beef consumption is preferable to chicken consumption, due to the larger number of animals that must be farmed to produce a given amount of chicken rather than beef (and the dramatically worse conditions factory-farmed chickens experience), might also need to be revisited. (Of course, if moral worth scales sublinearly with brain size, everything would shift in the opposite direction.)
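As a rough illustration of how much the choice of exponent matters here, consider the following Python sketch. The neuron counts are ballpark figures used purely for illustration (not careful estimates), and neuron count is itself only a crude stand-in for whatever "mental complexity" actually is:

```python
# Toy comparison of moral weight relative to a human under power-law scaling.
# Neuron counts below are rough, illustrative orders of magnitude only.
NEURONS = {
    "shrimp": 1e5,
    "chicken": 2e8,
    "cow": 3e9,
    "human": 9e10,
}

def relative_weight(animal: str, exponent: float) -> float:
    """Moral weight relative to a human, assuming weight ~ (neuron count)^exponent."""
    return (NEURONS[animal] / NEURONS["human"]) ** exponent

for exponent in (0.0, 0.5, 1.0, 2.0, 3.0):
    row = ", ".join(f"{name}: {relative_weight(name, exponent):.1e}" for name in NEURONS)
    print(f"exponent {exponent}: {row}")
```

With these made-up numbers, moving from linear to quadratic scaling pushes the shrimp-to-human ratio down by roughly six further orders of magnitude, which is more than enough to flip most cost-effectiveness comparisons between cause areas.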
Superlinear scaling would also have interesting implications for the far future—the morally optimal thing to do in the long run would probably involve making a huge utility monster out of nearly all accessible matter and having it sustained in a slightly pleasant state for a spell, even if more intense happiness could be achieved by merely (e.g.) galaxy-sized brains. If the scaling is exponential, then we reach pretty extreme conclusions. One is that the utility monster would probably live for only about as long as necessary for its most widely-distributed subnetworks to start generating qualia, because storing energy to power the monster only linearly increases the utility generated by running it after that point, while using the energy to further build out the monster increases it exponentially (indeed, seeing as the monster would literally be a computer with an appreciable fraction of the mass of the Hubble sphere, and hence would consume power extremely quickly, unfathomably rapidly). Another is that we should care less about AI alignment and steering, because spending time worrying about that instead of building ASI maximally quickly only increases the chance that the future singleton will do the optimal thing by, what, several orders of magnitude max, while delaying its rise by hours to months and as such causing countless solar masses of usable matter to leave the lightcone (decreasing the payoff if it does build the monster by vastly more orders of magnitude).
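The trade-off behind the short-lifespan conclusion can be made concrete with a toy model (my own, and obviously enormously simplified): suppose a fixed resource budget R is split between the monster's mass M and a stockpile of energy that lets it run for a time proportional to R − M, and suppose exponential scaling means its moment-to-moment utility goes as e^{cM}. Then

```latex
\[
U(M) \;\propto\; (R - M)\,e^{cM}, \qquad \frac{dU}{dM} = 0 \;\Rightarrow\; M^{*} = R - \frac{1}{c},
\]
```

so for any appreciable c the optimum puts essentially the entire budget into mass and leaves only a sliver of resources for runtime.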
CONCLUSION
I have nowhere near the level of confidence around these issues necessary to write a proper conclusion to this post. Thoughts?