RobinHanson's Comments

What can the principal-agent literature tell us about AI risk?

"models are brittle" and "models are limited" ARE the generic complaints I pointed to.

What can the principal-agent literature tell us about AI risk?

We have lots of models that are useful even when the conclusions follow pretty directly, such as supply and demand. The question is whether such models are useful, not whether they are simple.

What can the principal-agent literature tell us about AI risk?

There are THOUSANDS of critiques out there of the form "Economic theory can't be trusted because economic theory analyses make assumptions that can't be proven and are often wrong, and conclusions are often sensitive to assumptions." Really, this is a very standard and generic critique, and of course it is quite wrong, as such a critique can be equally made against any area of theory whatsoever, in any field.

What can the principal-agent literature tell us about AI risk?

The agency literature is there to model real agency relations in the world. Those real relations no doubt contain plenty of "unawareness". If models without unawareness were failing to capture and explain a big fraction of real agency problems, there would be plenty of scope for people to try to fill that gap via models that include it. The claim that this couldn't work because such models are limited seems just arbitrary and wrong to me. So either one must claim that AI-related unawareness is of a very different type or scale from ordinary human cases in our world today, or one must implicitly claim that unawareness modeling would in fact be a contribution to the agency literature. It seems to me a mild burden of proof sits on advocates for this latter case to in fact create such contributions.

What can the principal-agent literature tell us about AI risk?

"Hanson believes that the principal-agent literature (PAL) provides strong evidence against rents being this high."

I didn't say that. This is what I actually said:

"surely the burden of 'proof' (really argument) should lie on those say this case is radically different from most found in our large and robust agency literatures."

Don't Double-Crux With Suicide Rock

Uh, we are talking about holding people to MUCH higher rationality standards than the ability to parse philosophy arguments.

Characterising utopia

"At its worst, there might be pressure to carve out the parts of ourselves that make us human, like Hanson discusses in Age of Em."

To be clear, while some people do claim that such things might happen in an Age of Em, I'm not one of them. Of course I can't exclude such things in the long run; few things can be excluded in the long run. But that doesn't seem at all likely to me in the short run.

Don't Double-Crux With Suicide Rock

You are a bit too quick to allow the reader the presumption that they have more algorithmic faith than the other folks they talk to. Yes if you are super rational and they are not, you can ignore them. But how did you come to be confident in that description of the situation?

Another AI Winter?

Seems like you guys might have (or be able to create) a dataset on who makes what kind of forecasts, and who tends to be accurate or hyped re them. Would be great if you could publish some simple stats from such a dataset.
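
A minimal sketch of the kind of simple stats such a dataset could support, assuming hypothetical columns `forecaster`, `predicted_prob`, and `outcome` (all names illustrative, not from the original comment):

```python
# Hypothetical sketch: per-forecaster accuracy stats from a forecast dataset.
# Assumes columns: forecaster (str), predicted_prob (float in [0, 1]),
# and outcome (0 or 1 once the forecast resolves). Names are illustrative.
import pandas as pd

forecasts = pd.DataFrame({
    "forecaster": ["A", "A", "B", "B", "B"],
    "predicted_prob": [0.9, 0.2, 0.6, 0.7, 0.1],
    "outcome": [1, 0, 0, 1, 0],
})

# Brier score: mean squared error of probability forecasts (lower is better).
forecasts["brier"] = (forecasts["predicted_prob"] - forecasts["outcome"]) ** 2

stats = forecasts.groupby("forecaster").agg(
    n_forecasts=("brier", "size"),
    mean_brier=("brier", "mean"),
    mean_confidence=("predicted_prob", "mean"),  # rough proxy for hype
    base_rate=("outcome", "mean"),
)
print(stats)
```

Per-forecaster mean Brier score would measure accuracy, while comparing mean stated confidence against the actual base rate gives one rough proxy for who tends to be hyped.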

Another AI Winter?

To be clear, Foresight asked each speaker to offer a topic for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Instead, it is to say that this question seemed an unusual combination: verifiable within a year, and relevant to the chances on other topics I talked about.
