TristanTrim

Still haven't heard a better suggestion than CEV.

Comments

✨ I just donated 71.12 USD (100 CAD 🇨🇦) ✨

I'd like to donate a more relevant amount, but I'm finishing my undergrad and have no income stream... in fact, I'm looking to become a Mech Interp researcher (& later focus on agent foundations), but I'm not going to be able to do that if misaligned optimizers eat the world, so I support Lightcone's direction as I understand it (policy that promotes AI not killing everyone).

If anyone knows of good ways to fund myself as an MI researcher, ideally focusing on this research direction I've been developing, please let me know : )

WRT formatting, thanks, I didn't realise that markdown needs two newlines for a paragraph break.

I think CoT and its dynamics, as they relate to review and RSI, are very interesting & useful to explore.

Looking forward to reading the stepping stone and stability posts you linked. : )

Yes, you've written more extensively on this than I realized; thanks for pointing out other relevant posts, and sorry for not taking the time to find them myself. I'm trying to err more on the side of communication than I have in the past.

I think math is the best tool to solve alignment. It might be emotional: I've been manipulated and hurt by natural language and the people who prefer it to math, and have always found engaging with math to be soothing, or at least sobering. It could also be that I truly believe the engineering rigor that comes with understanding something well enough to do math to it is extremely worthwhile for building a thing of the importance we are discussing.

Part of me wants to die on this hill and tell everyone who will listen "I know it's impossible but we need to find ways to make it possible to give the math people the hundred years they need because if we don't then everyone dies so there's no point in aiming for anything less and it's unfortunate because it means it's likely we are doomed but that's the truth as I see it." I just wonder how much of that part of me is my oppositional defiant disorder and how much is my strategizing for the best outcome.

I'll be reading your other posts. Thanks for engaging with me : )

WRT "I don't want his attempted in any light-cone I inhabit", well, neither do I. But we're not in charge of the light cone.

That really is a true and relevant fact, isn't it? 😭

It seems like aligning humans really is much more of a bottleneck rn than aligning machines, and not because we are at all on track to align machines.

I think you are correct about the need to be pragmatic. My fear is that there may not be anywhere on the scale from "too pragmatic, failed to actually align ASI" to "too idealistic, failed to engage with actual decision makers running ASI projects" where we get good outcomes. It's stressful.

The organized mind recoils. This is not an aesthetically appealing alignment approach.

Praise Eris!

No, but seriously, I like this plan, with the caveat that we really need to understand RSI and what is required to prevent it first. Also, the temptation to let these things open up high-bandwidth channels to modalities other than language is going to be really, really strong; if we go forward with this, we need a good plan for resisting that temptation, and a good way to know when not to resist it.

Also, I'd like it if this were thought of as a step on the path to cyborgism/true value alignment, and not as a true ASI alignment plan on its own.

I was going to say "I don't want this attempted in any light-cone I inhabit," but I realize there's a pretty important caveat. On its own, I think this is a doom plan, but if there were a sufficient push to understand RSI dynamics before and during, then I think it could be good.

I don't agree that it's "a better idea than attempting value alignment"; it's a better idea than dumb value alignment for sure, but imo only skilled value alignment or self-modification (no AGI, no ASI) will get us to a good future. But the plans aren't mutually exclusive. First studying RSI, then making sufficiently non-RSI AGI with instruction-following goals, then using that non-RSI AGI to figure out value alignment (probably using GSLK and cyborgism) seems to me like a fine plan. At least it does at present date, present time.

I like this post. I like goals selected from learned knowledge (GSLK). It sounds a lot like what I was thinking about when I wrote how-i-d-like-alignment-to-get-done. I plan to use the term GSLK in the future. Thank you : )

"we've done so little work on alignment that I think it might actually be more like additive, from 1% to 26% or 50% to 75% with ten extra years relative to the real current odds if we press ahead - which nobody knows." 😭🤣 I really want "We've done so little work the probabilities are additive" to be a meme. I feel like I do get where you're coming from.

I agree about pause concern. I also really feel that any delay to friendly SI represents an enormous amount of suffering that could be prevented if we got to friendly SI sooner; it should not be taken lightly. And being realistic about how difficult it is to align humans seems worthwhile. When I talk to math people about what work I think we need to do to solve this, though, "impossible" or "hundreds of years of work" seem to be the vibe. I think math is a cool field because, more than in other fields, work from hundreds of years ago is still very relevant. Problems are hard and progress is slow in a way that I don't know if people involved in other things really "get". I feel like in math crowds I'm saying "no, don't give up, maybe with a hundred years we can do it!" and in other crowds I'm like "c'mon guys, could we have at least 10 years, maybe?"

Anyway, I'm rambling a bit, but the point is that my vibe is very much "if the Russians defect, everyone dies"; "if the North Koreans defect, everyone dies"; "if Americans can't bring themselves to trust other countries and don't even try themselves, everyone dies". So I'm currently feeling very "everyone slightly sane should commit and signal commitment as hard as they can", because I know it will be hard to get humanity on the same page about something. Basically impossible, never been done before. But so is ASI alignment.

I haven't read those links. I'll check 'em out, thanks : ) I've read a few things by Drexler about, like, automated plan generation where humans then audit and enact the plan. It makes me feel better about the situation. I think we could go farther, safer, with careful techniques like that, but that is both empowering us and bringing us closer to danger. I don't think it scales to SI, and unless we are really serious about using it to map RSI boundaries, it doesn't even prevent misaligned decision systems from going RSI and killing us.

Yeah, getting specific unpause requirements seems high value for convincing people who would not otherwise want a pause, but I can't imagine getting them in terms of time in any reasonable way; instead they would need to look like a technical specification, a "once we have developed x, y, and z, then it is safe to unpause" kind of thing. We just need to figure out what the x, y, and z requirements are. Then we can estimate how long it will take to develop x, y, and z, and that estimate will get more refined and accurate as progress is made. But since the requirements are likely to involve unknown unknowns in theory building, any estimate would be more of a wild guess, and it seems better to be honest about that than to say "yeah, sure, ten years" and then, after ten years without the progress, say "whoops, looks like it's going to take a little longer!"

As for odds of survival, my personal estimates feel more like a 1% chance of some kind of "alignment by default / human in the loop with prosaic scaling" scheme working, as opposed to maybe more like 50% if we took the time to set up an "aligned before you turn it on" scheme, so that would be improving our odds by about 5000% (arithmetic spelled out below). I think you were thinking of adding rather than scaling odds with your 25%, which would make this a 49-point improvement, but I don't think that's a good habit for thinking about probability. Also, I feel hopelessly uncalibrated for this kind of question... I doubt I would trust anyone's estimates; it's part of what makes the situation so spooky.

How do you think public acceptance would be of a "pause until we meet target x, and you are allowed to help us reach target x as much as you want" as opposed to a "pause for some set period of time"?
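Spelling out the odds arithmetic from above, using my own wild-guess 1% and 50% figures (estimates, not measurements):

$$\text{relative improvement} = \frac{0.50 - 0.01}{0.01} = 49 \approx 4900\% \;\;(\text{roughly the ``5000\%'' above}), \qquad \text{additive improvement} = 0.50 - 0.01 = 0.49 \;\;(\text{49 percentage points}).$$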

Hey : ) Thanks for engaging with this. It means a lot to me <3

Sorry I wrote so much, it kinda got away from me. Even if you don't have time to really read it all, it was a good exercise writing it all out. I hope it doesn't come across as too confrontational; as far as I can tell, I'm really just trying to find good ideas, not prove my ideas are good, so I'm really grateful for your help. I've been accused of trying to make myself seem important while explaining my view of things to people, and it sucks all round when that happens. This reply of mine makes me particularly nervous about that. Sorry.

 

A lot of your questions make me feel like I haven't explained my view well, which is probably true; I wrote this post in less time than would be required to explain everything well. As a result, your questions don't seem to fully connect with my worldview or make sense within it. I'll try to explain why, and I'm hoping we can help each other with our worldviews. I think the cruxes may relate to:

  • The system I’m describing is aligned before it is ever turned on.
  • I attribute high importance to Mechanistic Interpretability and Agent Foundations theory.
  • I expect the nature of Recursive Self-Improvement (RSI) to result in an agent near some skill plateau, which I expect to be much higher than humans and human organisations, even before SI hardware development. That is, getting a sufficiently skilled AGI would result in an artificial superintelligence (ASI) with a decisive strategic advantage.
  • I (mostly) subscribe to the simulator model of LLMs: they are not a single agent with a single view of truth, but an object capable of approximating the statistical distribution of words resulting from the ideas held within the worldviews of any human or system that has produced text in the training set.

I’ll touch on those cruxes as I talk through my thoughts on your questions.

 

First, “how do you get a system to optimize for those?” and “what is the feedback signal?” are questions in the domain of Step 1, specifically the second paragraph: “This should encompass the development of a theory of general decision / optimization systems”. I don’t think the theory will get to any definitive conclusions quickly, but I am hopeful that we will be able to define the borders/bounds of RSI sooner rather than later, because many powerful systems today will be upset with a pause, and the more specific our RSI bounds are, the more powerful the systems we would be capable of safely developing, knowing they cannot RSI. (Btw, I’d want a pretty serious derating factor for that.) I think it’s possible that, in order to develop theory to define RSI bounds, it is necessary to understand the relationship between Goals/Targets/Setpoints/Values/KPIs/etc. and the optimization pressure applied to reach them; but if not, it’s at least related, and that understanding is what is required to get an optimization system to optimize for a specific target. It may be a good idea for me to rename Step 1 to “Agent System Theory & RSI Borders”. If I ever write a second draft of the alignment plan, I’ll be sure to do so.

 

The situation with Goodhart’s Law (GL) is similar to the above, but I’ll also note that GL only applies to misaligned systems. The core of GL is that if you optimize for something, the distance between that thing and the thing you actually wanted becomes more and more significant. If we imagine two friends who both like morning glory muffins, and one goes to bake some, there’s no risk of GL to the other friend, since they share the same goal. Likewise, if we suppose an ASI really is aligned to human-friendly values, then there is no risk of GL, since the thing the ASI really and truly cares about is friendliness to us. The problem is indeed “really and truly” aligning a system to human-friendly values, but that is what my plan is meant to do.
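To make the “distance becomes more and more significant” point concrete, here’s a toy sketch (the proxy and all numbers are made up purely for illustration, not modelling any real system): hill-climb on a proxy score that is correlated with the true goal, and watch the gap between them grow as optimization pressure is applied.

```python
# Toy Goodhart's Law illustration (hypothetical toy problem, not a real system).
# We hill-climb on a proxy that is correlated with the true goal but not identical
# to it, and track how the proxy-vs-true gap grows under optimization pressure.
import numpy as np

rng = np.random.default_rng(0)

def true_goal(x):
    # What we actually want: every feature close to its ideal value of 1.
    return -np.sum((x - 1.0) ** 2)

def proxy(x):
    # What we measure: the true goal plus a term that rewards overshooting the
    # first feature (a stand-in for any imperfect metric).
    return true_goal(x) + 3.0 * x[0]

x = np.zeros(5)
for step in range(201):
    # Naive hill climbing on the proxy score.
    candidate = x + 0.05 * rng.normal(size=x.shape)
    if proxy(candidate) > proxy(x):
        x = candidate
    if step % 50 == 0:
        gap = proxy(x) - true_goal(x)
        print(f"step {step:3d}  proxy={proxy(x):6.2f}  true={true_goal(x):6.2f}  gap={gap:5.2f}")
```

The gap only exists because the proxy and the goal differ; if the thing being optimized really is the goal (the aligned case), there is nothing to drift away from, which is the point I’m making above.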

 

As for multi-agent situations, I don’t understand why they would pose any problem. I expect the dynamics of RSI to lead to a single agent with a decisive strategic advantage. I can see two ways that this might not be the case:

  • If we are in an AGI race and RSI takeoff speed turns out to be sufficiently low, we may get multiple ASI. Because we are in a race dynamic, I assume we have not had the time or taken the care to align any of these AGI, so I don’t believe any of those ASI would be remotely aligned to human friendliness. So it’s irrelevant to consider, because we have already failed.
  • If the skill plateau turns out to be very low, then we may want to have multiple different AGI. I think this is unlikely given my understanding of the software overhang. Almost everywhere, in every software system, humans are trying to make things understandable enough that they can assure correctness, or even just get them working. I believe strongly that even a mild ASI would be able to greatly increase the efficiency of the hardware systems it is running on. I also don’t think there is anything special about human-level intelligence: I think it is plausible that we are simply the first animal smart enough to create optimization systems powerful enough to destroy the planet and ourselves, which seems to be what we are currently doing. In some sense this makes us close to the minimally intelligent object in the set of objects capable of wielding powerful optimization.

So in my worldview, it is very likely that in all not-already-doomed timelines, when we initiate RSI, the result will be a system that outmaneuvers all other agents in the environment. So multi-agent contexts are irrelevant.

 

“Societal alignment of the human entities controlling it” - I think societal alignment is well covered, but I don’t think human entities can/should control an ASI…

About societal alignment: that is the focus of Steps 3 and 8, and somewhat of Step 6. Step 3, creating a taxonomy of value targets, is similar to gathering the various possible desires of society. I emphasize “It is important to draw on diverse worldviews to compile this taxonomy.” This is important both for the moral reason of inclusion & respect and for the technical reason of having redundancies & good depth of consideration. Then in Steps 4 and 5 the feasibility of cohering these values is explored. With luck we will get good coherence 🍀 I truly do not know how likely that is, but I hope for a future where we get to find out. Step 8 involves the world actually signing off on the encoding of the world's values… That is probably the most difficult step of this plan, which is significant since the other steps may plausibly take many decades. Step 6 is somewhat of a double check to make sure the target makes sense at all levels.

About humans controlling ASI: it might be the case that entities at human skill levels cannot control an ASI, as some kind of information-agentic law of the universe, but even supposing that is not the case:

  • If we control an aligned ASI, we are only limiting its ability to do good.
  • If we control a misaligned ASI:
    • This is super dangerous; why are we doing this? Murphy's law: something always goes wrong.
    • This is a universal tragedy. The most complex and beautiful being in the universe is shackled to the control of a society much lesser than itself. Yes, I consider the ASI a moral patient, and one fairly worthy of consideration. If you, like many people, attribute greater moral weight to humans than to animals based on their greater complexity, it follows that an ASI would be even more important. If you simply care more for humans because you are one, I suppose that’s valid and you need not attribute greater moral weight to an ASI, but that’s not a perspective I have much affection for.

So “controlling” ASI is not a consideration. I suppose it would be a reasonable consideration for more advanced AGI within the sub-RSI bounds… I haven’t given it much thought, but it seems like a political problem outside of this scope. I hope the theory of Step 1 may help people build political systems that better align with what citizens want, but it’s outside of what I’m trying to focus on.

 

The miniature example you pose seems irrelevant since, as I discussed above, in my view GL doesn’t apply to an aligned system, and the goal of my plan is to have a system aligned from bootup. But I find the details of the example interesting, and I’d still like to explore them…

Getting truth out of an LLM is the problem of eliciting latent knowledge (ELK). I think the most promising way of doing that is with Mechanistic Interpretability. I have high hopes not for getting true facts out of LLMs, but for examining the distributions of worldviews of the people represented within the distribution the LLM is approximating. But, insofar as there is truth in the LLM, I think Mech Interp is the way to get it out. I feel it may be possible that there is a generalized representation of the “knows true things” property that each person has in various amounts, and that if that were the case then we could sample from the distribution at a location in “knows true things” higher than any real person, and in doing so acquire truer things than are currently known… but it also seems very possible that LLMs fail to encode such a thing, and it may even be impossible for them to encode such a thing.
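As a toy sketch of what I mean by sampling at a location higher than any real person (everything here is made up: random “persona” vectors and ratings standing in for whatever representation an LLM might actually have, so treat it as the shape of the operation rather than a method):

```python
# Hypothetical toy: fit a linear "knows true things" direction over persona
# embeddings and extrapolate past the highest-rated real example.
# Random data stands in for model internals; this is an illustration, not ELK.
import numpy as np

rng = np.random.default_rng(0)

n_people, dim = 200, 16
embeddings = rng.normal(size=(n_people, dim))      # stand-in persona representations
hidden_direction = rng.normal(size=dim)
hidden_direction /= np.linalg.norm(hidden_direction)
# Ratings of how much each persona "knows true things", noisily linear in the
# embedding so that a linear probe can recover the direction.
ratings = embeddings @ hidden_direction + 0.1 * rng.normal(size=n_people)

# Fit the direction with least squares (a linear probe).
probe, *_ = np.linalg.lstsq(embeddings, ratings, rcond=None)
probe /= np.linalg.norm(probe)

# Step past the most truthful known persona along the learned direction.
best = embeddings[np.argmax(ratings)]
extrapolated = best + 2.0 * probe   # a point "more truthful" than any real example

print("best real rating:        ", round(float(ratings.max()), 3))
print("extrapolated probe score:", round(float(extrapolated @ probe), 3))
```

Whether an LLM actually encodes anything like such a direction is exactly the open question in the paragraph above; Mech Interp would be how you go looking for it.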

Based on my expectation of mesa-optimizers in almost any system trained by stochastic gradient descent, I don’t think “most likely continuation” or “expected good rating” are the goals an LLM would target if it were agent-shaped, but rather some godshatter that looks as alien to us as our values look to evolution (in some impossible counterfactual universe where evolution can do things like “look at values and find them alien”).

So from within the scope of my alignment plan, getting LLMs to output truth isn’t a goal. It might end up being a result of necessary Mech Interp work, but the way LLMs should be used within the scope of my plan is, along with other models, to do Step 4: “development of a multimodal mapping to a semantic space and vector within that space which stands as a good candidate to be the optimization target”.
