What we talk about when we talk about maximising utility

I don't see what the problem is. Utilitarianism says that there is, or ought to be, some objective utility function, the maximization of which is what determines "good" and "evil". This function need not be a linear combination of people's personal utility functions; it can be "well-being" as you describe, but that doesn't make it fundamentally different from other utility functions: it's simply a set of preferences (even if nobody in real life actually holds these preferences in this precise order). Theoretically, if someone did possess this as their actual utility function, they would be a perfectly good person, and if we knew exactly how to formulate it, we could measure people's goodness or evilness by how well their personal utility function aligned with this one.

The Pyramid And The Garden

A somewhat moderate nitpick: a lot of your degrees of freedom don't multiply with each other. For example, #4 and #6: we have 2 types of latitude systems and 3 types of measurements (location, height, width), but this gives only 4 possible combinations, not 6 (latitude 1, latitude 2, width, height), because the choice of latitude system only applies to the location measurement, not to width or height. Likewise, other degrees of freedom involving longitude/latitude or location don't multiply with width/height. Similarly, #3 and #7 don't multiply: you only get two degrees of freedom corresponding to different monuments if every alternate location also has two valid monuments.
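The counting point can be sketched quickly. This is a hypothetical illustration (the system and measurement names are made up, not from the post): a convention choice that only applies to one measurement type does not multiply across all of them.

```python
# Assumed for illustration: 2 latitude conventions, 3 measurement types.
latitude_systems = ["system_1", "system_2"]
measurements = ["location", "width", "height"]

# Naive multiplication treats every pair as a valid combination.
naive = len(latitude_systems) * len(measurements)  # 2 * 3 = 6

# But the latitude-system choice is only meaningful for "location";
# width and height are the same number under either convention.
valid = []
for m in measurements:
    if m == "location":
        for s in latitude_systems:
            valid.append((m, s))
    else:
        valid.append((m, None))

print(naive, len(valid))  # 6 vs 4
```

So the effective degrees of freedom add where the naive count multiplies, which is why the total combination count comes out smaller.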

Some of this can be fixed by fusing some categories and treating them as additive, but in general this means your number of combinations should be smaller than you think (and more mathematically complicated to compute).