How do we know that it's improved? Isn't it equally plausible that Franklin would be horrified because some things in our world are horrifying, and his own moral thinking was more rational than ours? Does moral thought get more rational all on its own? It seems it would be difficult for moderns to tell if moral thought were less rational than it used to be.
The point of the exercise (somewhat clearer in the full post) is not that every moral decision on which we differ with Ben Franklin represents a moral improvement, but that at least some... (read more)
What about Monte Carlo methods? There are many problems for which Monte Carlo integration is the most efficient method available.
Monte Carlo methods can't buy you any correctness. They are useful because they allow you to sacrifice an unnecessary bit of correctness in order to get a result in much less time on an otherwise intractable problem. They are also useful for simulating the effects of real-world randomness (or at least behavior you have no idea how to systematically predict).
So, for example, I used a Monte Carlo script to determine expected ... (read more)
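As a minimal sketch of the trade-off described above (the integrand and sample count here are illustrative, not from the original comment): estimate an integral by averaging the function at random points, accepting a small, quantifiable error in exchange for a fast answer.

```python
import random

def monte_carlo_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at n
    uniform random points -- trading a little accuracy for speed
    on problems where exact integration would be costly."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# The integral of x^2 over [0, 1] is exactly 1/3; the estimate is
# close but not exact -- a bit of correctness traded for tractability.
estimate = monte_carlo_integrate(lambda x: x * x, 0.0, 1.0)
```

The error shrinks like 1/sqrt(n), so each extra decimal digit of accuracy costs roughly a hundred times more samples; that is the sense in which the correctness you give up is "unnecessary" only when a rough answer suffices.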
Mike Plotz: I got the point of Eliezer's post, and I don't see why I'm wrong. Could you tell me more specifically than "for the reasons stated" why I'm wrong? And while you're at it, explain to me your optimal strategy in AnneC's variation of the game (you're shot if you get one wrong), assuming you can't effectively cheat.
In some games, your kind of strategy might work, but in this one it doesn't. From the problem statement, we are to assume the cards are replaced and reshuffled between trials, so that every trial has a 70% chance of being... (read more)
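A quick simulation makes the point about independent trials concrete (the strategy names and parameters are illustrative): when each card is independently the majority color with probability 0.7, always guessing that color wins about 70% of the time, while "probability matching" (guessing it only 70% of the time) wins only about 0.7×0.7 + 0.3×0.3 = 58%.

```python
import random

def play(strategy, trials=100_000, p_blue=0.7, seed=1):
    """Simulate guessing card colors where each trial is
    independently blue with probability p_blue."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        card_is_blue = rng.random() < p_blue
        if strategy == "always_blue":
            guess_blue = True
        else:  # "matching": guess blue only p_blue of the time
            guess_blue = rng.random() < p_blue
        correct += (guess_blue == card_is_blue)
    return correct / trials

always = play("always_blue")    # ~0.70
matching = play("matching")     # ~0.58
```

Under the variant where one wrong guess gets you shot, the same logic holds per trial: no mixed guess ever beats the majority guess on an independent 70/30 draw.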
So how about the following advice for Jo: try really hard to forget about "rationality", perhaps go see a hypnotist to get rid of your doubts about Christianity.
If it were really so, just how rational would rationality be?
As Eliezer has pointed out at least once before -- shouldn't the ideal rationalist be the one sitting on the giant heap of utility?
If X isn't true to your best estimate, it's got to be more ideal to recognize that and start figuring out how to deal with that, than to simply ignore that. Ignoring things doesn't make them go awa... (read more)
Human-level AI is still dangerous. Look how dangerous we are.
Consider that a human-level AI which is not friendly is likely to be far more unfriendly, or more difficult to bargain with, than any human. (The total space of possible value systems is far, far greater than the space of value systems inhabited by functioning humans.) If there are enough of them, then they can cause the same kind of problem that a hostile society could.
But it's worse than that. A sufficiently unfriendly AI would be like a sociopath or psychopath by human standards. But unlike indi... (read more)
Interesting. There's a paradox involving a game in which players successively take a single coin from a large pile of coins. At any time a player may choose instead to take two coins, at which point the game ends and all further coins are lost. You can prove by induction that if both players are perfectly selfish, they will take two coins on their first move, no matter how large the pile is.
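The induction described above can be checked mechanically (the function names are illustrative): with k coins left, taking two yields a certain 2, while taking one yields 1 plus whatever you get as the second mover in the (k−1)-coin subgame — which, by induction, is 0, because the opponent also takes two immediately.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def solve(k):
    """Backward induction on the coin game: returns
    (mover's payoff, other player's payoff, mover's action)
    under perfectly selfish play with k coins remaining."""
    if k == 0:
        return (0, 0, None)
    if k == 1:
        return (1, 0, "take one")
    # Option 1: take two coins; the game ends and the rest are lost.
    take_two = (2, 0)
    # Option 2: take one coin; roles swap on the remaining pile.
    opp, me, _ = solve(k - 1)
    take_one = (1 + me, opp)
    if take_one[0] > take_two[0]:
        return (*take_one, "take one")
    return (*take_two, "take two")

# Even with a large pile, the selfish first mover takes two at once:
payoff, _, action = solve(100)
```

No matter the pile size, the recursion bottoms out the same way, which is exactly why the conclusion feels paradoxical: almost all of the pile is provably left on the table.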
I'm pretty sure this proof only works if the coins are denominated in utilons.