Alex Amadori

Comments

The Company Man
Alex Amadori · 13d

Very well written horror story! Props :)

Omelas Is Perfectly Misread
Alex Amadori · 13d

Believe what?

That's kind of the point... people reject premises all the time. And I don't mean in the "this premise seems unrealistic" sense; I mean in the "I refuse to participate in this thought experiment!" sense. This happens even when the point wasn't to teach a lesson about the shape of reality, but about the shape of the reader's mind and how they respond to the thought experiment.

People just hate inspecting their own minds with a passion. It's also common for people not to trust you when you suggest a thought experiment if they can't see where it's going. It's very easy to get an anger reaction this way (literally a raised-voice, tense-muscles, elevated-heart-rate kind of reaction).

Anthropic CEO calls for RSI
Alex Amadori · 9mo

Recursive self-improvement.

Teaching ML to answer questions honestly instead of predicting human answers
Alex Amadori · 4y

For concreteness, let’s say that the world model requires a trillion (“N”) bits to specify, the intended head costs 10,000 bits, and the instrumental head costs 1,000 bits. If we just applied a simplicity prior directly, we expect to spend N + 1,000 bits to learn the instrumental model rather than N + 10,000 bits to learn the intended model. That’s what we want to avoid.

Not sure if I'm misunderstanding this, but it seems to me that if it takes 10,000 bits to specify the intended head and 1,000 bits to specify the instrumental head, that's because the world model (which we're assuming is accurate) considers humans who answer a question with a truthful and correct description of reality much rarer than humans who don't. Or at least that's the case on the training dataset. Since 10,000 − 1,000 = 9,000, in this context "much rarer" means 2^{9,000} times rarer.
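Spelling out the arithmetic behind that factor, reading the two head costs from the quoted passage as code lengths under the simplicity prior:

2^{-1,000} / 2^{-10,000} = 2^{10,000 − 1,000} = 2^{9,000}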

However, 

Now we have two priors over ways to use natural language: we can either sample the intended head at random from the simplicity prior (which we’ve said has probability 2^{-10,000} of giving correct usage), or we can sample the environment dynamics from the simplicity prior and then see how humans answer questions. If those two are equally good priors, then only 2^{-10,000} of the possible humans would have correct usage, so conditioning on agreement saves us 10,000 bits.

So if I understand correctly, the right number of bits saved here would be 9,000, not 10,000.

So now we spend (N/2 + 11,000) + (N/2 − 10,000) bits altogether, for a total of N + 1,000.

Unless I made a mistake, this would mean the total is N + 2,000, which is still more expensive than finding the instrumental head.
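Writing out the substitution, with the 10,000-bit saving in the quoted expression replaced by the 9,000 bits argued above:

(N/2 + 11,000) + (N/2 − 9,000) = N + 2,000 > N + 1,000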

Posts

47 · Three main views on the future of AI · 2mo · 1