Luke_A_Somers

Luke_A_Somers' Comments

Why Bayesians should two-box in a one-shot

If you find an Omega, then you are in an environment where Omega is possible. Perhaps we are all simulated and QM is optional. Maybe our brains are deterministic enough that Omega can make predictions, much as quantum mechanics ought in some sense to prevent predicting where a cannonball will fly, but in practice does not. Perhaps it's a hypothetical where we're AI to begin with, so deterministic behavior is just to be expected.

Why Bayesians should two-box in a one-shot

I think the more relevant case is when the random noise is imperceptibly small. Of course you two-box if it's basically random.

Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”

… you don't think that pissing away credibility could weaken the arguments? I think presenting those particular arguments is more likely to do that than to work.

Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”

I read up to 3.1. The arguments in 3.1 are weak. It seems dubious that any AI would not already be aware of the risks pertaining to disobedience. Persuading it to be corrigible seems too late: either this would already work because its goals were made sufficiently indirect that this question would be obvious and pressing, or it doesn't care to have 'correct' goals in the first place. I really don't see how persuasion would help. The arguments for allowing itself to be turned off are especially weak, doubly-especially the MWI one.

Fables grow around missed natural experiments

What do you mean by natural experiment, here? And what was the moral, anyway?

Toy model of the AI control problem: animated version

I remember poking at that demo to try to actually get it to behave deceptively. With the rules as he laid them out, the optimal move was to do exactly what the humans wanted it to do!

The Reality of Emergence

I understand EY thinks that if you simulate enough neurons sufficiently well you get something that's conscious.

Without specifying the arrangement of those neurons? Of course it should if you copy the arrangement of neurons out of a real person, say, but that doesn't sound like what you meant.

The Reality of Emergence

I would really want a citation on that claim. It doesn't sound right.