by Nisan · 1 min read · 12th Sep 2021 · 7 comments
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Recent interviews with Eliezer:

People are fond of using the neologism "cruxy", but there's already a word for that: "crucial". Apparently this sense of "crucial" can be traced back to Francis Bacon.

The point of using a word like this is to point at a distinct habit of thought. If you use an existing word, that's unlikely to happen for listeners.

If you don't do that, you get a lot of motte-and-bailey issues.

A cruxy point doesn't have to be important, and the whole question being considered doesn't have to be important either. This is an unfortunate connotation of "crucial": when I'm pointing out that the sky is blue, I'm usually not saying that it's important that it's blue, or that it's important for this object-level argument to be resolved. It's only important to figure out what caused a simple mistake that's usually reliably avoided, and to keep channeling curiosity toward filling out the map, so that the parts that aren't wild conjecture aren't just the apparently useful ones.

I think it's relative. A crux is crucial to a question, whether or not the question is crucial to anything else. If you're pointing out that the sky is blue, that's only a crux if it's important to some misunderstanding or disagreement.

I'm with Nisan.  "Crucial" is simply the proper and common term that should be used instead of the backformation "cruxy".  

Agents who model each other can be modeled as programs with access to reflective oracles. I used to think the agents have to use the same oracle. But actually the agents can use different oracles, as long as each oracle can predict all the other oracles. This feels more realistic somehow.
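
One way to formalize the condition I mean, paraphrasing the usual reflective-oracle definition (e.g. Fallenstein–Taylor–Christiano), so details may be off: an oracle $O$ answers queries $(T, p)$, where $T$ is a probabilistic machine with oracle access and $p \in [0,1] \cap \mathbb{Q}$, and $O$ is reflective if

$$\Pr\big(T^{O} \text{ outputs } 1\big) > p \;\Rightarrow\; O(T, p) = 1, \qquad \Pr\big(T^{O} \text{ outputs } 0\big) > 1 - p \;\Rightarrow\; O(T, p) = 0.$$

The multi-oracle version would be: oracles $O_1, \dots, O_n$ such that each $O_i$ satisfies these conditions for machines $T$ that may query any of the $O_j$, i.e. $\Pr\big(T^{O_1, \dots, O_n} \text{ outputs } 1\big) > p \Rightarrow O_i(T, p) = 1$, and likewise for $0$. That's the sense in which each oracle "can predict all the other oracles".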

I'm not sure there's a functional difference between "same" and "different" oracles at this level of modeling.