Tamsin Leake

hi! i'm tammy :3

i research the QACI plan for formal-goal AI alignment at orthogonal.

check out my blog and my twitter.

Comments

the QACI target sort-of aims to be an implementation of CEV. There's also PreDCA and UAT listed on my old list of (formal) alignment targets.

something like that, yes; something like a dot-product, but of distributions over world-states, and those distributions are downstream of world-physics functions (rough sketch below).

note that i'm not sure if i get what your comment is asking, let me know if i'm failing to answer your question.
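(to make the dot-product idea above a bit more concrete, here's a minimal toy sketch; the finite state space and the two `physics_*` functions are made-up illustrations, not QACI's actual formalism:)

```python
# rough, hedged sketch of a "dot-product of distributions over world-states".
# the state space, distributions, and function names are illustrative stand-ins.

world_states = ["w0", "w1", "w2"]

def physics_a(w: str) -> float:
    """hypothetical world-physics function: probability it assigns to state w."""
    return {"w0": 0.6, "w1": 0.3, "w2": 0.1}[w]

def physics_b(w: str) -> float:
    """a second hypothetical world-physics function."""
    return {"w0": 0.5, "w1": 0.4, "w2": 0.1}[w]

# the "dot-product": how much mass the two distributions put on the same states
overlap = sum(physics_a(w) * physics_b(w) for w in world_states)
print(overlap)  # 0.43
```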

unsure if this applies, but pannenkoek2012 is known for having done an immense amount of research into the Super Mario 64 A Button Challenge; this is similar to, though not quite the same as, speedrun strategy research.

what do you mean by intensive property, and why do you think i don't want that?

that's fair, but if "amount of how much this matters"/"amount of how much this is real" is not "amount of how much you expect to observe things", then how could we possibly determine what it is? (see also this)

where did you get to in the post? i believe this is addressed afterwards.

even the very vague general notion that the government is regulating at all could maybe help make investment in AI riskier, which is a good thing.

the main risk i'm worried about is that it brings more attention to AI and causes more people to think of clever AI engineering tricks.

one solution to this problem is to simply never use that capability (running expensive computations) at all, or to not use it before the iterated counterfactual researchers have developed proofs that any expensive computation they run is safe, or before they have very slowly and carefully built dath-ilan-style corrigible aligned AGI.

nothing, fundamentally; the user has to be careful about what computation they invoke.

an approximate illustration of QACI:
