Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

(this post has been written for the first Refine blog post day, at the end of the week of readings, discussions, and exercises about epistemology for doing good conceptual research)

this is the follow-up to the Insulated Goal-Program idea, in which i suggest doing alignment by giving an AI a program to run as its ultimate goal, the running of which would hopefully realize our values. in this post, i talk about what pieces of software could be used to put together an appropriate goal-program, as well as some examples of plans built out of them.

  • "ems": uploaded people, who could for example evaluate how much a given situation satisfies our values; if they are uploads of AI alignment researchers and engineers, they could also be put to work on alignment and AI software — all of this *inside* the goal-program.
  • "elves": neural net models, or patchworks of software likely containing those, designed to be a rough representation of our values, carry a rough subset of our skills, or be some other subset of the human mind. those might have to make do if running ems is either impossible due to for example brain scan technology being unavailable, or if running elves poses less of an S-risk than running ems in some situations.
  • collaborative environments, such as collaborative programming environments or full 3D virtual environments, for ems and/or elves to work in together. those are instrumental environments designed to let their users develop something.
  • "utopia infrastructure": pieces of software designed to robustly support beings living together in utopia, as i've previously designed for a video game idea (which i'm no longer working on). these are places designed for long-term (possibly forever-term) inhabitation by endless persons, under hopefully utopic conditions.
  • program searches: programs iterating through programspace, typically in order to find worlds or models or programs which match some criteria. just like "a bunch of ems and/or elves programming together", program searches can be used to produce more of the things in this list. that said, the programs a search finds can contain demons, which is something to look out for; a general program search utilizing its output for anything must either fully sanitize what it does use, or skip demonic programs to begin with.
  • observer programs: programs which consume a slice of computation (typically a world simulation) for examination, and maybe even editing, typically by an em or an elf.
  • a simulation of earth would be useful if it were somehow obtainable in reasonable computational time. it could serve to extract alignment researchers from it in order to spawn a simulation of them without having to figure out brain scanning; it could be used to create an alternate history where AI researchers are somehow influenced, possibly at an early date; it could also be used to recover the full population of earth in order to give them access to utopia once we have a satisfactory instance of it.
  • a dump of (as much as possible of) the internet, which could be useful both to locate the earth, and to re-extrapolate things like humans or earth or maybe specific persons.
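to make the program search piece above a bit more concrete, here is a minimal sketch of the search-filter-skip loop, in a toy universe where "programs" are just integer codes run by a toy step-limited interpreter (a Collatz-style trajectory stands in for a world simulation). `looks_demonic` and `matches_criteria` are hypothetical stand-ins for what are really very hard open problems; the point is only the structure: bound each run, skip non-halting or demonic programs outright, and collect the finds.

```python
from itertools import count

STEP_LIMIT = 1000  # bound each run so non-halting programs get skipped

def run_program(code, step_limit):
    """toy interpreter: a collatz-style trajectory stands in for a world simulation."""
    trace, n = [], code
    for _ in range(step_limit):
        trace.append(n)
        if n <= 1:
            return trace
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return None  # ran out of steps; treat as non-halting

def looks_demonic(trace):
    # hypothetical demon check; here: the trajectory blows up past a bound
    return max(trace) > 10_000

def matches_criteria(trace):
    # hypothetical stand-in for "a world with seemingly good contents"
    return len(trace) >= 20

def program_search(max_finds):
    """iterate through programspace, keeping finds that pass both filters."""
    finds = []
    for code in count(2):
        trace = run_program(code, STEP_LIMIT)
        if trace is None or looks_demonic(trace):
            continue  # skip demonic (or non-halting) programs to begin with
        if matches_criteria(trace):
            finds.append(code)
            if len(finds) >= max_finds:
                return finds
```

the important design choice is that demonic programs are filtered *before* their output is used for anything, rather than sanitized after the fact.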

here are some naive examples of outlines for goal-programs which seem like they could be okay:

  • a simulation of a bunch of researchers, with a lot of time to figure out alignment (as in the peerless).
  • a bunch of elves forever evaluating various light-cones of a program search for worlds, keeping ones with seemingly good contents and discarding ones with seemingly bad contents — although this idea is potentially quite vulnerable to demon-laden worlds.
  • a bunch of elves working to, using a copy of the internet, re-extrapolate ems which could then figure out AI alignment
  • any of these schemes, except with ems or elves checking at a level above that everything goes well, with the ability to abort or change plans
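the last outline above, "checking at a level above", can be sketched as a simple supervisory loop, assuming the inner plan is decomposed into resumable steps whose intermediate state the overseeing ems or elves can inspect. the step functions and the `overseer` here are hypothetical placeholders; only the control structure is the point.

```python
from enum import Enum

class Verdict(Enum):
    CONTINUE = "continue"
    ABORT = "abort"

def run_with_oversight(steps, overseer):
    """run each plan step, letting the overseer inspect state in between."""
    state = {}
    for step in steps:
        state = step(state)
        if overseer(state) is Verdict.ABORT:
            return None  # the overseer halts the plan before it completes
    return state

# toy plan: two steps, with an overseer that aborts on a flagged state
steps = [
    lambda s: {**s, "worlds_found": 3},
    lambda s: {**s, "evaluated": True},
]
overseer = lambda s: Verdict.ABORT if s.get("demonic") else Verdict.CONTINUE
result = run_with_oversight(steps, overseer)
```

a real version would presumably also let the overseer substitute a different plan rather than only abort, but even this minimal shape shows how the oversight layer sits outside the plan it checks.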

these feel like we could be getting somewhere in terms of figuring out actual goal-programs that could lead to valuable outcomes; at the very least, this seems like a valuable avenue of investigation. in addition, unlike an AGI, the many pieces of a goal-program can be individually tested, iterated on, etc. in the usual engineering fashion.

2 comments

It's important to note that the only person who can determine how much a situation satisfies their values, is that person. I think a lot of alignment proposals have as a subtle underlying assumption that someone (or some set of someones) is going to be making decisions for everyone else. I think a world where that is the case has already been lost. I'm not sure to what extent these suggestions are examples of that, but they feel similar to me.

Btw, your vision of utopia on your website sounds roughly identical to mine which I've been thinking about for roughly ten years now. I am bad at writing in an organized way (and I constantly improve my understanding of things) so I've never written it up, but it might be nice for you and I to talk. In particular I've lately been thinking deeply about the ethics of consent and the need for counterfactual people (particularly the unborn and the dead) to have enforced rights - and the structures you call "nonperson forces" (I'd call them egregores) I consider to possibly have moral rights as well. I would really love to be able to think about all this stuff with you.

This is definitely interesting. I have some specific issues I'll ask about if you haven't addressed them yourself in a few months.