I kinda want to outline a blog post that starts with a giant list of philosophical dilemmas that AIs will have to think about and come up with answers to
This sounds important, please do it.
The singularity means that superintelligence has arrived and is in charge of everything. Are you supposing that even in this situation, humans could be used as oracles to answer philosophical questions, in a way that AI can't?
From the introduction ("everyone with... the right to vote in..."), I assumed that this was a checklist of questions for persons navigating the moral maze of American politics, especially, to help them identify what they really want and need, whether there's honesty or integrity in the organizations and movements with which they may have affiliated themselves, and so on. Such questions are pertinent for every society, but the maze takes different forms. In a society with a fixed power center (whether that's a person or a party), the central fact of life is how you relate to that center and its affiliates. America is fluid and has two power centers that take turns being in charge, and which war constantly over the interpretation of everything of consequence. That's what I mean by polarized and propagandized.
I thought it was interesting as a very first-principles exercise in evaluating one's situation, but far too abstract for most people. I thought it would be good if there were an analogous, but far simpler, ethical and epistemological checklist for regular people who aren't philosophers, scientists, or other intelligentsia; and it occurred to me that an LLM might be able to whittle it down in a good way.
However, it seems it was actually meant for AIs, and AI safety engineers, navigating the smaller (but very consequential) moral maze of the world of AI R&D?
OK, I see it referenced in the fourth comment. Usually Löwenheim-Skolem is cited in its "downward" form, which implies that any first-order theory with an infinite model also has a countable model; but here he's citing "upward" L-S, which guarantees the existence of models of arbitrarily greater cardinality.
L-S is logically independent of well-foundedness, and in any case the speaker appeals to some vague further principle about the conditions under which you "find yourself" (to be existing? to be existing at an exactly identifiable time?). The role of upward L-S seems to be to argue that if past time is infinite, the cardinality of that infinity is indeterminate and therefore so is your exact location in time.
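For reference, the upward direction being invoked here can be stated as follows (in the standard form for a countable language; nothing in this statement is specific to the argument in the fiction):

```latex
\textbf{Upward L\"owenheim--Skolem.}\quad
\text{If a first-order theory } T \text{ in a countable language has an
infinite model, then for every cardinal } \kappa \geq \aleph_0,\;
T \text{ has a model of cardinality } \kappa.
```

So if "past time" were modeled as an infinite first-order structure, the theorem would indeed say that nothing in the theory pins down the cardinality of that structure, which seems to be the hook for the speaker's claim about indeterminate location in time.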
Bear in mind that this is metaphysical technobabble from a work of fiction about beings who know more about reality than we do. Its primary job is to sound like an example of such knowledge. The author may or may not take it seriously.
Google can't find any reference to Skolem, Lowenheim-Skolem, Löwenheim-Skolem in the projectawful site...
an existential battle between human executive function and ourselves... eventually humanity loses its mind as the boundaries of reality become irreconcilable
This description is confusing, but I assume you're talking about a process in which decision-making in a human-AI hybrid ends up entirely in the AI part rather than the human part.
It's logical to worry about such a thing because AI is already faster than humans. However, if we actually knew what we were doing, perhaps AI superintelligence could be incorporated into an augmented human, in such a way that there is continuity of control. Wherever the executive function or the Cartesian theater is localized, maybe you could migrate it onto a faster substrate, or give it accelerated "reflexes" which mediate between human-speed conscious decision-making and faster-than-human superintelligent subsystems... But we don't know enough to do more than speculate at this point.
For the big picture, your items 1 and 2 could be joined by choice 3 (don't make AI) and non-choice 4 (the AI takes over and makes the decisions). I think we're headed for 4, personally, in which case you want to solve alignment in the sense that applies to an autonomous superintelligence.
If you ran this through an LLM, and asked for e.g. a summary suitable for a typical liberal arts graduate, or a one-paragraph summary suitable for someone with an eighth-grade reading level... maybe you'd even get something useful for most denizens of America's polarized and propagandized political and cultural landscape!
CEV is not meant to depend on the state of human society. It is supposed to be derived from "human nature", e.g. genetically determined needs, dispositions, norms and so forth, that are characteristic of our species as a whole. The quality of the extrapolation process is what matters, not the social initial conditions. You could be in "viatopia", and if your extrapolation theory is wrong, the output will be wrong. Conversely, you could be in a severe dystopia, and so long as you have the biological facts and the extrapolation method correct, you're supposed to arrive at the right answer.
I have previously made the related point that the outcome of CEV should not be different, whether you start with a saint or a sinner. So long as the person in question is normal Homo sapiens, that's supposed to be enough.
Similarly, CEV is not supposed to be about identifying and reconciling all the random things that the people of the world may want at any given time. It is supposed to identify a value system or decision procedure which is the abstract kernel of how the smarter and better informed version of the human race would want important decisions to be made, regardless of the details of circumstance.
This is, I argue, all consistent with the original intent of CEV. The problem is that neither the relevant facts defining human nature, nor the extrapolation procedure, are known or specified with any rigor. If we look at the broader realm of possible Value Extrapolation Procedures, there are definitely some "VEPs" in which the outcome depends crucially on the state of society, the individuals who are your prototypes, and/or even the whims of those individuals at the moment of extrapolation.
Furthermore, it is likely that individual genotypic variation, and also the state of culture, really can affect the outcome, even if you have identified the "right" VEP. Culture can impact human nature significantly, and so can genetic variation.
I think it's probably for the best that the original manifesto for CEV was expressed in these idealistic terms - that it was about extrapolating a universal human nature. But if "CEV theory" is ever to get anywhere, it must be able to deal with all these concrete questions.
(For examples of CEV-like alignment proposals that include dependence on neurobiological facts, see PRISM and metaethical.ai.)
I notice that I am confused. In what sense would, say, Larry Page or whoever own distant galaxies?
There is a recurring model of the future in these circles, according to which the AI race culminates in superintelligence, which then uses its intelligence advantage to impose its values on every part of the universe it can reach (thus frequent references to "the future lightcone" as what's at stake).
The basic mechanism of this universal dominion is usually self-replicating robot probes ("von Neumann machines"), which maintain fidelity to the purposes and commands of the superintelligence, spreading at the maximum possible fraction of lightspeed. It is often further argued that there must be no alien intelligence elsewhere in the universe, because if there was, it would already have launched such a universe-colonizing wave that would control this part of space already. (Thus a version of the Fermi paradox, "where is everybody?")
That there are no alien intelligences is possible in principle, it just requires that some of the numbers in the Drake equation are small enough. It is also possible to have more sophisticated models which do not assume that intelligence leads to aggressive universe colonization with probability 1, or in which there are multiple universe colonizers (Robin Hanson wrote an influential paper about the latter scenario, "Burning the Cosmic Commons").
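To make the "small enough numbers" point concrete, here is a toy Drake-style calculation. All parameter values below are illustrative placeholders chosen for the example, not estimates endorsed anywhere in this discussion; the point is only that shrinking a single factor by a few orders of magnitude is enough to empty the galaxy.

```python
# Toy Drake equation: N = R* · fp · ne · fl · fi · fc · L
# Every numeric value here is an illustrative placeholder, not a claim.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# "Optimistic" placeholder values: hundreds of civilizations.
optimistic = drake(R_star=10, f_p=0.5, n_e=2, f_l=0.5,
                   f_i=0.1, f_c=0.1, L=10_000)

# Shrink just one factor (f_i, the chance that life becomes
# intelligent) by six orders of magnitude, and we are alone.
pessimistic = drake(R_star=10, f_p=0.5, n_e=2, f_l=0.5,
                    f_i=1e-7, f_c=0.1, L=10_000)

print(optimistic)   # 500.0
print(pessimistic)  # on the order of 5e-4
```

This is the sense in which "no aliens" is possible in principle: the product form of the equation means any one sufficiently small factor dominates the result.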
I don't know the exact history of these ideas, but already in chapter 10 of Eric Drexler's 1986 "Engines of Creation", one finds a version of these arguments.
The idea that individual human beings end up owning galaxies is a version of the "superintelligence conquers the universe" scenario, in which Earth's superintelligence is either subordinate to its corporate creators, or follows some principled formula for assigning cosmic property rights to everyone alive at the moment of singularity (for example). Roko Mijic of basilisk fame provided an example of the latter in 2023. If you believe in Fedorovian resurrection via quantum archeology, you could even propose a scheme in which everyone who ever lived gets a share.
Your two main questions about ownership of distant galaxies (apart from alien rights) seem to be (1) how would ownership be enforced, and (2) what would the owner do with it? These scenarios generally suppose that the replicating robot fleets which plant their flags all over the universe won't deviate from their prime imperative. It's reasonable to suppose that they would eventually do so, and become a de-facto independent species of machine intelligence. But I suppose digital security people might claim that through sufficiently intense redundancy of internal decision-making and sufficiently strict protocols of mutual inspection, you could reduce the probability of successful defection from the prime imperative to a satisfactorily low number.
If you can swallow that, then Larry Page can have his intergalactic property rights reliably enforced across billions of years and light-years. But what would he do with a galaxy of his own? I think it's possible to imagine things to do, even just as a human being - e.g. you could go sightseeing in a billion solar systems, with von Neumann machines as your chauffeurs and security details - and if we suppose that Larry himself has transcended his humanity and become a bit of a godlike intelligence himself, then he may wish to enact any number of SF scenarios e.g. from Lem or Stapledon.
All this is staring into the unknown: a mix of trying to stay within the apparent limits implied by physics, while imagining what humans and posthumans would do with cosmic amounts of power. These scenarios have their internal logic, but I think it's unwise to believe too hard in them. (If you take anthropic arguments like the Self-Indication Assumption seriously, you can almost deduce that they are incorrect, since it should be very unlikely to find ourselves in such a uniquely privileged position at the dawn of time, though that does not in itself tell you what the wrong ingredient is.)
Some years back I proposed here, to little effect, that it would be wiser to take as one's baseline scenario just that the spawn of Earth will spread out into this solar system and use its resources in a transhuman way. That's already radical enough, and it doesn't make assumptions about cosmic demography or the long-term future.
Have you never heard it argued that "terminal values" in an AI are arbitrary?