If you find an Omega, then you are in an environment where Omega is possible. Perhaps we are all simulated and QM is optional. Maybe our brains are easily deterministic enough for Omega to make predictions, much as quantum mechanics ought, in some sense, to prevent predicting where a cannonball will fly, but in practice does not. Perhaps it's a hypothetical where we're AIs to begin with, so deterministic behavior is just to be expected.

I think the more relevant case is when the random noise is imperceptibly small. Of course you two-box if Omega's prediction is basically random.

… you don't think that pissing away credibility could weaken the arguments? I think presenting those particular arguments is more likely to do that than it is to work.

I read up to 3.1. The arguments in 3.1 are weak. It seems dubious that any AI would not be aware of the risks pertaining to disobedience. Persuasion to be corrigible seems to come too late: either it would already work, because the AI's goals were made sufficiently indirect that this question would be obvious and pressing, or it doesn't care about having 'correct' goals in the first place; I really don't see how persuasion would help. The arguments for the AI allowing itself to be turned off are especially weak, doubly so the MWI one.

What do you mean by 'natural experiment' here? And what was the moral, anyway?

I remember poking at that demo to try to actually get it to behave deceptively - with the rules as he laid them out, the optimal move was to do exactly what the humans wanted it to do!

I understand EY thinks that if you simulate enough neurons sufficiently well you get something that's conscious.

Without specifying the arrangements of those neurons? Of course it would if you copied the arrangement of neurons out of a real person, say, but that doesn't sound like what you meant.

I would really want a cite on that claim. It doesn't sound right.