carado

i blog at carado.moe.

Comments

i've done some work towards building that machinery (see eg here), but yes, there are still a bunch of things to be figured out, though i'm making progress in that direction (see the posts about blob location).

My own thinking would be that the counterfactual reasoning should be responsive to the system's overall estimates of how-humans-would-want-it-to-reason, in the same way that its prior needs to be an estimate of the human-endorsed prior, and values should approximate human-endorsed values.

are you saying this in the prescriptive sense, i.e. that we should want that property? i think if implemented correctly, accuracy is all we would really need, right? carrying human intent in those parts of the reasoning seems difficult and wonky and plausibly unnecessary to me, whereas straightforward utility maximization should work.

the counterfactuals might be defined wrong but they won't be "under-defined". but yes, they might locate the blob somewhere we don't intend to (or insert the counterfactual question in a way we don't intend to); i've been thinking a bunch about ways this could fail and how to overcome them (1, 2, 3).

on the other hand, if you're talking about the blob-locating math pointing to the right thing but the AI not making accurate guesses early enough as to what the counterfactuals would look like, i do think getting only eventual alignment is one of the potential problems, but i'm hopeful it gets there eventually, and maybe there are ways to check that it'll make good enough guesses even before we let it loose.

(cross-posted as a top-level post on my blog)

QACI and plausibly PreDCA rely on a true name of phenomena in the real world using solomonoff induction, and thus talk about locating them in a theoretical giant computation of the universe, from the beginning. it's reasonable to be concerned that there isn't enough compute for an aligned AI to actually do this. however, i have two responses:

  • isn't there enough compute? supposedly, our past lightcone is a lot smaller than our future lightcone, and quantum computers seem to work. this is evidence that we can, at least in theory, build within our future lightcone a quantum computer simulating our past lightcone. the major hurdle here would be "finding out" a fully explanatory "initial seed" of the universe, which could take exponential time, but also could maybe not.
  • we don't need to simulate the past lightcone. if you ask me what my neighbor was thinking yesterday at noon, the answer is that i don't know! the world might be way too complex to figure that out without simulating it and scanning his brain. however, i have a reasonable distribution over guesses. he was more likely to think about french things than korean things. he was more likely to think about his family than my family. et cetera. an aligned superintelligence can hold an increasingly refined distribution of guesses, and then maximize the expected utility of utility functions corresponding to each guess (a toy sketch of this follows below).
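
here is a minimal sketch of that last point. it assumes nothing about QACI's actual machinery: the actions, guesses, and probabilities below are made up, and the only point is that "maximize expected utility over a distribution of guessed utility functions" is a well-defined operation.

```python
# toy sketch: pick the action with the highest expected utility under a
# distribution of guesses about someone's utility function.
# everything here (actions, guesses, probabilities) is a made-up example.

def expected_utility(action, guesses):
    """guesses: list of (probability, utility_function) pairs summing to 1."""
    return sum(p * u(action) for p, u in guesses)

def best_action(actions, guesses):
    return max(actions, key=lambda a: expected_utility(a, guesses))

if __name__ == "__main__":
    # two toy guesses about what the neighbor values
    guesses = [
        (0.7, lambda a: {"cook french food": 1.0, "cook korean food": 0.2}[a]),
        (0.3, lambda a: {"cook french food": 0.3, "cook korean food": 0.9}[a]),
    ]
    print(best_action(["cook french food", "cook korean food"], guesses))
```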

sounds maybe kinda like a utopia design i've previously come up with, where you get your private computational garden and all interactions are voluntary.

that said, some values do need to reach into people's gardens: you can't create arbitrarily suffering moral patients, you might have to in some way be stopped from partaking in some molochianisms, etc.

i don't think determinism is incompatible with making decisions, just like nondeterminism doesn't mean my decisions are "up to randomness"; from my perspective, i can either choose to do action A or action B, and from my perspective i actually get to steer the world towards what those actions lead to.

put another way, i'm a compatibilist; i implement embedded agency.

put another way, yes i LARP, and this is a world that gets steered towards the values of agents who LARP, so yay.

what i mean here is "with regard to how much moral-patienthood we attribute to things in it (eg whether they're suffering), rather than secondary stuff we might care about like how much diversity we gain from those worlds".

Answer by carado, Mar 11, 2023

(this answer is cross-posted on my blog)

here is a list of problems which i seek to either resolve or get around, in order to implement my formal alignment plans, especially QACI:

  • formal inner alignment: in the formal alignment paradigm, "inner alignment" refers to the problem of building an AI which, when run, actually maximizes the formal goal we give it (in tractable time) rather than doing something else such as getting hijacked by an unaligned internal component of itself. because its goal is formal and fully general, it feels like building something that maximizes it should be much easier than the regular kind of inner alignment, and we could have a lot more confidence in the resulting system. (progress on this problem could be capability-exfohazardous, however!)
  • continuous alignment: given a utility function which is theoretically eventually aligned, in the sense that there exists a level of capabilities above which maximizing it has good outcomes, how do we bridge the gap from where we are to that level? will a system "accidentally" destroy all values before realizing it shouldn't have done that?
  • blob location: for QACI, how do we robustly locate pieces of data stored on computers encoded on top of bottom-level-physics turing-machine solomonoff hypotheses for the world? see 1, 2, 3 for details (and the toy sketch after this list).
  • observation data precision and computation substrate: related to the previous problem, how precisely does the prior we're using need to capture our world, for the intended instance of the blobs to be locatable? can we just find the blobs in the universal program — or, if P≠BQP, some universal quantum program? do we need to demand worlds to contain, say, a dump of wikipedia to count as ours? can we use the location of such a dump as a prior for the location of the blobs?
  • infrastructure design: what formal-math language will the formal goal be expressed in? what kind of properties should it have? should it include some kind of proving system, and in what logic? in QACI, will this also be the language for the user's answer? what kind of checksums should accompany the question and answer blobs? these questions are at this stage premature, but they will need some figuring out at some point if formal alignment is, as i currently believe, the way to go.
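
as a very rough illustration of what the blob location problem is even asking for, here is a toy sketch. it is not the actual QACI math: the Hypothesis objects stand in for real turing-machine hypotheses, their description lengths are made up, and the prior is just an unnormalized 2^-length weight.

```python
# toy sketch of blob location: given some hypotheses about the world, find
# where a known blob appears in each hypothesis's output, and weight each
# candidate location by a solomonoff-style length prior.
# the Hypothesis class and everything in __main__ are made-up stand-ins.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Hypothesis:
    name: str
    description_length: int      # stand-in for program length in bits
    run: Callable[[int], str]    # produce the first n characters of output

def prior(h: Hypothesis) -> float:
    return 2.0 ** -h.description_length   # unnormalized length penalty

def locate_blob(blob: str, hypotheses: list[Hypothesis], budget: int = 100) -> dict:
    """return a normalized distribution over (hypothesis, index) locations."""
    locations = {}
    for h in hypotheses:
        out = h.run(budget)
        start = 0
        while (i := out.find(blob, start)) != -1:
            locations[(h.name, i)] = prior(h)
            start = i + 1
    total = sum(locations.values())
    return {k: v / total for k, v in locations.items()} if total else {}

if __name__ == "__main__":
    hyps = [
        Hypothesis("simple-world", 10, lambda n: ("ab" * n)[:n]),
        Hypothesis("our-world?",   25, lambda n: ("noise QACI-blob noise " * n)[:n]),
    ]
    print(locate_blob("QACI-blob", hyps))
```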

i would love a world-saving-plan that isn't "a clever scheme" with "many moving parts" but alas i don't expect it's what we get. as clever schemes with many moving parts go, this one seems not particularly complex compared to other things i've heard of.

  1. That would be a bad thing because I value other things (such as culture, love, friendship, art, philosophy, diversity, freedom, me, my friends, my planet, etc) which are not covered by just "paperclips".

  2. See this, this, and maybe this.

Only a single training example needed through use of hypotheticals.

(to be clear, the question and answer serve less as "training data" meant to represent the user, and more as "IDs" or "coordinates" meant to locate the user in the past-lightcone.)

We need good inner alignment. (And with this, we also need to understand hypotheticals).

this is true, though i think we might not need a super complex framework for hypotheticals. i have some simple math ideas that i explore a bit here, and about which i might write a bunch more.

for failure modes like the user getting hit by a truck or spilling coffee, we can do things such as, at each step, asking not 1 cindy the question, but asking 1000 cindys 1000 slight variations on the question, and then maybe have some kind of convolutional network to curate their answers (such as ignoring garbled or missing output) and pass them to the next step, without ever relying on a small number of cindys except at the very start of this process.
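
a minimal sketch of that redundancy idea, with purely hypothetical placeholders for the "ask one counterfactual user" step, the garbling check, and the curation step (the paragraph above gestures at something smarter, like a learned curator, for that last part):

```python
# toy sketch: ask many slightly-varied copies of the question, drop garbled
# or missing answers, and aggregate the rest before the next step.
# fake_user, looks_garbled, and aggregate are made-up placeholders.

import random

def looks_garbled(answer: str) -> bool:
    return not answer or not answer.isprintable()

def step(ask, question: str, n: int = 1000) -> list[str]:
    """ask: a function simulating one counterfactual user answering one variant."""
    variants = [f"{question} (variant {i})" for i in range(n)]
    return [a for a in (ask(v) for v in variants) if not looks_garbled(a)]

def aggregate(answers: list[str]) -> str:
    # placeholder curation: plain majority vote over the surviving answers
    return max(set(answers), key=answers.count) if answers else ""

if __name__ == "__main__":
    def fake_user(variant: str) -> str:
        # 5% of the copies get "hit by a truck" and return nothing
        return "" if random.random() < 0.05 else "keep going"
    print(aggregate(step(fake_user, "what should the next question be?")))
```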

it is true that weird memes could take over the graph of cindys; i don't have an answer to that, apart from that it seems sufficiently unlikely to me that i still think this plan has promise.

Chaos theory. Someone else develops a paperclip maximizer many iterations in, and the paperclip maximizer realizes it's in a simulation, hacks into the answer channel and returns "make as many paperclips as possible" to the AI.

hmm. that's possible. i guess i have to hope this never happens on the question-interval, on any simulation day. alternatively, maybe the mutually-checking graph of 1000 cindys can help with this? (but probly not; clippy can just hack the cindys).

So all the virtual humans get saved on disk, and then can live in the utopia. Hey, we need loads of people to fill up the dyson sphere anyway.

yup. or, if the QACI user is me, i'm probly also just fine with those local deaths; not a big deal compared to an increased chance of saving the world. alternatively, instead of being saved on disk, they can also just be recomputed later since the whole process is deterministic.

I am not confident that your "make it complicated and personal data" approach at the root really stops all the aliens doing weird acausal stuff.

yup, i'm not confident either. i think there could be other schemes, possibly involving cryptography in some ways, to entangle the answer with a unique randomly generated signature key or something like that.
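
as an illustration of the rough shape such a scheme could take (not a worked-out proposal, and not part of QACI as currently specified), here is a sketch that tags the answer blob with an HMAC under a secret key generated at question time, so a purported answer can later be checked against that key:

```python
# toy sketch of "entangling the answer with a unique randomly generated key":
# generate a secret key when the question is posed, tag the answer blob with
# an HMAC under that key, and verify candidate answers against the tag.

import hmac, hashlib, secrets

def make_key() -> bytes:
    return secrets.token_bytes(32)          # unique random signature key

def tag_answer(key: bytes, answer: bytes) -> bytes:
    return hmac.new(key, answer, hashlib.sha256).digest()

def check_answer(key: bytes, answer: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(tag_answer(key, answer), tag)

if __name__ == "__main__":
    key = make_key()
    answer = b"the user's considered answer"
    tag = tag_answer(key, answer)
    print(check_answer(key, answer, tag))               # True
    print(check_answer(key, b"make paperclips", tag))   # False
```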
