moridinamael


Comments

Great points. I would only add that I’m not sure the “atomic” propositions even exist. The act of breaking a real-world scenario into its “atomic” bits requires magic, meaning in this case a precise truncation of intuited-to-be-irrelevant elements.

Good point. You could also say that even having the intuition for which problems are worth the effort and opportunity cost of building decision trees, versus just "going with what feels best", is another bit of magic.

I probably should have listened to the initial feedback on this post, which said that it wasn't entirely clear what I actually meant by "magic" and that the framing was possibly more confusing than illuminating, but, oh well. I think that GPT-4 is magic in the same way that the human decision-making process is magic: both processes are opaque, we don't really understand how they work at a granular level, and we can't replicate them except in the most narrow circumstances.

One weakness of GPT-4 is it can't really explain why it made the choices it did. It can give plausible reasons why those choices were made, but it doesn't have the kind of insight into its motives that we do.

Short answer, yes, it means deferring to a black-box.

Longer answer, we don't really understand what we're doing when we do the magic steps, and nobody has succeeded in creating an algorithm to do the magic steps reliably. They are all open problems, yet humans do them so easily that it's difficult for us to believe that they're hard. The situation reminds me of the era when people thought that object recognition from images ought to be easy to do algorithmically, because we do it so quickly and effortlessly.

Maybe I'm misunderstanding your specific point, but the operations of "listing possible worlds" and "assigning utility to each possible world" are simultaneously "standard" in the sense that they are basic primitives of decision theory and "magic" in the sense that we haven't had any kind of algorithmic system that was remotely capable of doing these tasks until GPT-3 or -4.
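A minimal sketch of those two primitives, with an invented toy scenario and made-up numbers purely for illustration:

```python
# Sketch of the two "standard" decision-theory primitives: enumerating
# possible worlds, then assigning each a probability and a utility.
# The scenario and all numbers here are hypothetical.

possible_worlds = {
    # world: (probability, utility if we take the umbrella, utility if not)
    "rain":    (0.3, 8, -10),
    "no_rain": (0.7, 5, 10),
}

def expected_utility(action_index):
    """Sum probability-weighted utility across the enumerated worlds."""
    return sum(p * utils[action_index]
               for p, *utils in possible_worlds.values())

eu_umbrella = expected_utility(0)   # 0.3*8  + 0.7*5  = 5.9
eu_without  = expected_utility(1)   # 0.3*-10 + 0.7*10 = 4.0
best = "umbrella" if eu_umbrella > eu_without else "no umbrella"
```

The arithmetic is the easy, mechanizable part. The "magic" is everything the code takes as given: deciding that these two worlds are the relevant ones, and where the probability and utility numbers come from.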

I spent way too many years metaphorically glancing around the room, certain that I must be missing something that is obvious to everyone else. I wish somebody had told me that I wasn't missing anything, and these conceptual blank spots are very real and very important.

As for the latter bit, I am not really an Alignment Guy. The taxonomy I offer is very incomplete. I do think that the idea of framing the Alignment landscape in terms of "how does it help build a good decision tree? what part of that process does it address or solve?" has some potential.

So do we call it in favor of porby, or wait a bit longer for the ambiguity over whether we've truly crossed the AGI threshold to resolve?

That is probably close to what they would suggest if this weren't mainly just a metaphor for the weird ways that I've seen people thinking about AI timelines.

It might be a bit more complex than a simple weighted average because of discounting, but that would be the basic shape of the proper hedge.

These would be good ideas. I would remark that many people definitely do not understand what is happening when they naively aggregate, or average together, disparate distributions. Consider the simple example of the several Metaculus predictions for the date of AGI, or any other future event, and the way people tend to speak of the aggregated median dates. I would hazard that most people using Metaculus, or referencing the bio-anchors paper, think the way the King does, and believe that the computed median dates are a good reflection of when things will probably happen.
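The failure mode can be shown in a few lines. The forecaster camps and their dates below are invented for illustration:

```python
from statistics import median

# Two hypothetical forecaster camps: one expects the event around year 5,
# the other around year 50. All numbers are invented for illustration.
camp_soon = [5] * 100
camp_late = [50] * 100

pooled_median = median(camp_soon + camp_late)
# The pooled median lands at 27.5: a "middle of the road" date that
# not a single individual forecaster actually predicts.
```

The aggregate is a summary of disagreement, not a date anyone believes; treating it as "when it will probably happen" is the King's mistake.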

Generally, you should hedge. Devote some resources toward planting and some toward drought preparedness, allocated according to your expectations. In the story, the King trusts the advisors equally, and so should allocate toward each possibility equally, plus or minus some discounting. Just don't devote resources toward the fake "middle of the road" scenario that nobody actually expects.
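One way to sketch that hedge, with an assumed discount rate and invented credences and horizons, purely to show the shape of the allocation:

```python
# Hypothetical hedge: split a budget across the scenarios the advisors
# actually predict, weighted by credence, rather than funding a
# "middle of the road" scenario nobody believes in.
# All numbers below are invented for illustration.

budget = 100.0
credence = {"rain_in_5_years": 0.5, "drought_for_50_years": 0.5}
horizon = {"rain_in_5_years": 5, "drought_for_50_years": 50}

discount_rate = 0.01  # assumed annual discount rate, purely illustrative

# Down-weight each scenario by how far out its payoff lies, then
# renormalize so the allocations exhaust the budget.
weights = {s: p * (1 - discount_rate) ** horizon[s]
           for s, p in credence.items()}
total = sum(weights.values())
allocation = {s: budget * w / total for s, w in weights.items()}
```

With equal credences, discounting tilts the split modestly toward the nearer scenario; crucially, every unit of budget goes to a world some advisor actually predicts.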

If you are in a situation where you really can only do one thing or the other, with no capability to hedge, then I suppose it would depend on the details of the situation, but it would probably be best to "prepare now!" as you say.

"Slow vs. fast takeoff" is a false dichotomy. At least, the way that the distinction is being used rhetorically, in the present moment, implies that there are two possible worlds, one where AI develops slowly and steadily, and one where nothing much visibly happens and then suddenly, FOOM.

That's not how any of this works. It's permitted by reality that everything looks like a "slow takeoff" until some unknown capabilities threshold is reached, and then suddenly FOOM.
