Is this not kosher? The minimum karma requirement seems like an anti-spam and anti-troll measure, with the unfortunate collateral damage of temporarily gating out some potentially good content. The post seems to me to be clearly good content, and my suggestion to MazeHatter in the open thread that it deserved its own thread was upvoted.

If that doesn't justify skirting the rule, I can remove the post.

I think you should post this as its own thread in Discussion.

This has been proposed before, and on LW is usually referred to as "Oracle AI". There's an entry for it on the LessWrong wiki, including some interesting links to various discussions of the idea. Eliezer has addressed it as well.

See also Tool AI, from the discussions between Holden Karnofsky and LW.

Interesting. I wonder to what extent this corrects for people's risk-aversion. Success is evidence against the riskiness of the action.
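A toy Bayesian update makes the point concrete (the numbers here are purely illustrative assumptions, not anything from the original discussion): observing success shifts probability away from the hypothesis that the action was risky.

```python
# Illustrative numbers only: a toy Bayesian update showing why observing
# success is evidence that an action was not especially risky.

p_risky = 0.5                 # assumed prior that the action was risky
p_success_if_risky = 0.5      # assumed chance of success for a risky action
p_success_if_safe = 0.9       # assumed chance of success for a safe action

p_success = (p_success_if_risky * p_risky
             + p_success_if_safe * (1 - p_risky))
p_risky_given_success = p_success_if_risky * p_risky / p_success

print(p_risky_given_success)  # ~0.36, down from the 0.5 prior
```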

Having circular preferences is incoherent, and being vulnerable to a money pump is a consequence of that.

I knew that if I had 0.95Y I would trade it for (0.95^2)Z, which I would trade for (0.95^3)X, so in effect I'd be trading 1X for (0.95^3)X, which I'm obviously not going to do.

This means that you won't, in fact, trade your X for 0.95Y. That in turn means that you do not actually value X at 0.9Y, and so the initially stated exchange rates are meaningless (or rather, they don't reflect your true preferences).

Your strategy requires you to refuse all trades at exchange rates below the money-pumpable threshold, and you'll end up only making trades at exchange rates that are non-circular.
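To make the arithmetic concrete, here is a minimal sketch (the 0.95 exchange rate and the three-good loop are just the illustrative setup from above): an agent that accepts every trade around the circular loop X → Y → Z → X ends up with strictly less X on each pass.

```python
# Illustrative only: accepting every trade at a 0.95 exchange rate around a
# circular preference loop compounds into a guaranteed loss of X.

rate = 0.95          # assumed per-trade exchange rate
holdings_x = 1.0     # start with 1 unit of X

for cycle in range(3):
    y = holdings_x * rate   # trade X for Y
    z = y * rate            # trade Y for Z
    holdings_x = z * rate   # trade Z back for X
    print(f"after cycle {cycle + 1}: {holdings_x:.4f} X")

# after cycle 1: 0.8574 X -- i.e. 0.95**3 of the original holding,
# which is why refusing the first trade is the only coherent policy.
```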

Judging from the comments this is receiving on Hacker News, this post is a mindkiller. HN is an audience more friendly to LW ideas than most, so this is a bad sign. I liked it, but unfortunately it's probably unsuitable for general consumption.

I know we've debated the "no politics" norm on LW many times, but I think a distinction should be made based on a post's target audience. In posts that aim to contribute to "raising the sanity waterline", I think we're shooting ourselves in the foot by invoking politics.

I like the combination of conciseness and thoroughness you've achieved with this.

There are a couple of specific parts I'll quibble about:

Therefore the next logical step is to use science to figure out how to replace humans by a better version of themselves, artificial general intelligence.

"The Automation of Science" section seems weaker to me than the others, perhaps even superfluous. I think the line I've quoted is the crux of the problem; I highly doubt that the development of AGI will be driven by any such motivations.

Will we be able to build an artificial general intelligence? Yes, sooner or later.

I assign a high probability to the proposition that we will be able to build AGI, but I think a straight "yes" is too strong here.

Out of curiosity, what are your current thoughts on the arguments you've laid out here?