Comments

I don't know what caused it exactly, and it seems like I'm not rate-limited anymore.

If moderators started rate-limiting Nora Belrose or someone else whose work I thought was particularly good

I actually did get rate-limited today, unfortunately.

Unclear why this is supposed to be a scary result.

"If prompting a model to do something bad generalizes to it being bad in other domains, this is also evidence for the idea that prompting a model to do something good will generalize to it doing good in other domains" - Matthew Barnett

Yeah, I think Evan is basically opportunistically changing his position during that exchange, and has no real coherent argument.

I do think that Solomonoff-flavored intuitions motivate much of the credence people around here put on scheming. Apparently Evan Hubinger puts a decent amount of weight on them, since he kept bringing them up in our discussion in the comments on "Counting arguments provide no evidence for AI doom."

The strong version as defined by Yudkowsky... is pretty obvious IMO

I didn't expect you'd say that. In my view it's pretty obviously false. Knowledge and skills are not value-neutral, and some goals are a lot harder to instill into an AI than others because the relevant training data will be harder to come by. Eliezer is just not taking data availability into account whatsoever, because he's still fundamentally thinking about things in terms of GOFAI and brains in boxes in basements rather than deep learning. As Robin Hanson pointed out in the foom debate years ago, the key component of intelligence is "content." And content is far from value-neutral.

As I argue in the video, I actually think the definitions of "intelligence" and "goal" that you need to make the Orthogonality Thesis trivially true are bad, unhelpful definitions. So I think both that it's false and that, even if it were true, it would be trivial.

I'll also note that Nick Bostrom himself seems to be making the motte-and-bailey argument here, which seems pretty damning considering that his book was very influential and changed a lot of people's career paths, including my own.

Edit replying to an edit you made: I mean, the most straightforward reading of Chapters 7 and 8 of Superintelligence just commits a possibility-therefore-probability fallacy, in my opinion. Without that fallacy, there would be little need to even bring up the orthogonality thesis at all, because it's such a weak claim.

If it's spontaneous then yeah, I don't expect it to happen ~ever really. I was mainly thinking about cases where people intentionally train models to scheme.

What do you mean by "hugely edited"? What other things would you like us to change? If I were starting from scratch I would of course write the post differently, but I don't think it would be worth my time to make major post hoc edits; I would like to focus on follow-up posts.

Isn't Evan giving you what he thinks is a valid counting argument, i.e. a counting argument over parameterizations?

Where is the argument? If you run the counting argument in function space, it's at least clear why you might think there are "more" schemers than saints. But if you're going to say there are "more" params that correspond to scheming than there are saint-params, that looks like a substantive empirical claim that could easily turn out to be false.
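For what it's worth, here's a toy sketch of the distinction I have in mind (my own illustration, using a made-up quantized one-hidden-unit network; nothing here is from the actual counting-argument posts): enumerate every parameter setting and count how many of them realize each distinct input-output function. The counts come out wildly uneven, with most settings landing on a few trivial (constant) functions, which is why a count over functions and a count or measure over parameters need not agree.

```python
# Toy illustration: counting parameter settings is not the same as counting
# functions. Enumerate every weight assignment of a tiny quantized network
# and group assignments by the input-output function they implement.
from itertools import product
from collections import Counter

inputs = [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # tiny binary input domain
weight_vals = [-1, 0, 1]                        # quantized weights and biases

def net(w1, w2, b, v, c, x):
    """One ReLU hidden unit with a thresholded linear readout."""
    h = max(0, w1 * x[0] + w2 * x[1] + b)
    return 1 if v * h + c > 0 else 0

counts = Counter()
for w1, w2, b, v, c in product(weight_vals, repeat=5):
    f = tuple(net(w1, w2, b, v, c, x) for x in inputs)  # the induced function
    counts[f] += 1

print(f"{len(counts)} distinct functions from {len(weight_vals) ** 5} parameter settings")
print("parameter settings per function:", sorted(counts.values(), reverse=True))
```

The point isn't that this toy model says anything about scheming; it's that whether "more parameterizations" fall on one class of functions or another depends on the details of the parameterization, which is exactly the substantive empirical question.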
