Comments

How do we know that it's improved? Isn't it equally plausible that Franklin would be horrified because some things in our world are horrifying, and his own moral thinking was more rational than our own? Does moral thought get more rational all on its own? It seems as though it might be difficult for moderns to know if moral thought were less rational than it used to be.

The point of the exercise (somewhat clearer in the full post) is not that every moral decision on which we differ with Ben Franklin represents a moral improvement, but that at least some do, and there are many. So, there are many things about our world today that are, in fact, better than the world of the 1700s, and at least some of them would nonetheless shock or horrify someone like Ben Franklin, at least at first, even if he could ultimately be wholly convinced that they are an improvement.

So in designing any real utopia, we have to include things that are different enough to horrify us at first glance. We have to widen our scope of acceptable outcomes to include things that have a case for being better but would nonetheless horrify us. And that will, in fact, potentially include outcomes that hearken back to previous times, and things that Ben Franklin (or any other rational person of the past) might find more comforting than we would.

What about Monte Carlo methods? There are many problems for which Monte Carlo integration is the most efficient method available.

Monte Carlo methods can't buy you any correctness. They are useful because they allow you to sacrifice an unnecessary bit of correctness in order to get a result in a much shorter time on an otherwise intractable problem. They are also useful for simulating the effects of real-world randomness (or at least behavior you have no idea how to systematically predict).

So, for example, I used a Monte Carlo script to determine expected scale economies for print order flow in my business. Why? Because it's simple, and the behavior I am modeling is effectively random to me. I could get enough information to build a simulation that gives me 95% accuracy with a few hours of research and another few hours of programming time. Of course, somewhere out there is a non-randomized algorithm that could do a more accurate job with a faster run time, but the cost of discovering and coding it would be far more than a day's work, and 95% accuracy on a few dozen simulations was good enough for me to estimate more accurately than most of my competition, which is all that mattered. But Eliezer's point stands. Randomness didn't buy me any accuracy; it was a way of trading accuracy for development time.
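
To make that trade concrete, here is a minimal sketch of the kind of script described above. It is not the actual script; the order-size distribution and the cost curve are hypothetical stand-ins, and only the shape of the approach matters.

    import random

    def random_order_size():
        # Order sizes look effectively random from here; model them as lognormal.
        return max(1, int(random.lognormvariate(4.0, 1.0)))

    def order_cost(units):
        # Toy scale-economy curve: a fixed setup cost amortized over the print run.
        setup, per_unit = 50.0, 0.10
        return setup + per_unit * units

    def simulate_month(orders=200):
        sizes = [random_order_size() for _ in range(orders)]
        return sum(order_cost(s) for s in sizes) / sum(sizes)  # average cost per unit

    # A few dozen runs is enough to see where the average settles.
    runs = [simulate_month() for _ in range(50)]
    print("mean cost per unit: %.3f" % (sum(runs) / len(runs)))

The randomness here buys nothing except development time: a careful analysis of the order-size distribution would give a more exact answer, but the simulation is a few hours' work and accurate enough for the decision at hand.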

Mike Plotz: I got the point of Eliezer's post, and I don't see why I'm wrong. Could you tell me more specifically than "for the reasons stated" why I'm wrong? And while you're at it, explain to me your optimal strategy in AnneC's variation of the game (you're shot if you get one wrong), assuming you can't effectively cheat.

In some games, your kind of strategy might work, but in this one it doesn't. From the problem statement, we are to assume the cards are replaced and reshuffled between trials, so that every trial has an independent 70% chance of coming up blue and a 30% chance of coming up red.

In every single case, it is more likely that the next card is blue. Even in the game where you are shot if you get one wrong, you should still pick blue every time. The reason is that of all the possible combinations of cards chosen for the whole game, the combination that consists of all blue cards is the most likely one. It is more likely than any particular combination that includes a red card, because at every step a blue card is more likely than a red one. Picking red doesn't earn you credit for a red card appearing anywhere in the sequence; you have to pick it in the right spot if you want to live, and your chance of doing that in any particular spot is lower than your chance of correctly picking blue.
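
A quick calculation makes this concrete. Guessing blue every time survives a run of n cards with probability 0.7^n; every spot where you guess red instead multiplies your survival chance by 0.3 rather than 0.7, so any mixed strategy does worse. Here is a small simulation of that claim, assuming independent 70/30 draws:

    import random

    def survival_probability(guesses, p_blue=0.7, trials=100_000):
        # Estimate the chance of getting every guess right (i.e. never being shot).
        survived = 0
        for _ in range(trials):
            draws = ['blue' if random.random() < p_blue else 'red' for _ in guesses]
            if draws == guesses:
                survived += 1
        return survived / trials

    n = 10
    all_blue = ['blue'] * n
    mixed = ['blue'] * 7 + ['red'] * 3   # "probability matching": guess red 30% of the time

    print("all blue:", survival_probability(all_blue))  # ~0.7**10, about 0.028
    print("mixed:   ", survival_probability(mixed))     # ~0.7**7 * 0.3**3, about 0.002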

There are games where you adopt a strategy with greater variance in order to maximize the possibility of an unlikely win, rather than go for the highest expected value (within the game), because the best expected outcome is a loss. A classic example is the Hail Mary pass in football. Its expected outcome (in yards) is worse than just running a normal play, or teams would do it all the time. But if there are only 5 seconds on the clock and you need a touchdown, the normal play might win 1 in 1000 games, while the Hail Mary wins 1 in 50. But there is no difference in variance between choosing red or blue in the game described here, so that kind of strategy doesn't apply.
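
A toy version of that logic, with made-up numbers for the two kinds of play (the only point being that the higher-variance option can clear a must-have threshold more often despite a worse expected value):

    import random

    def normal_play():
        # Better expected gain (5 yards), but essentially never produces a huge play.
        return random.gauss(5, 4)

    def hail_mary():
        # Worse expected gain (60 * 0.02 = 1.2 yards), but sometimes wins outright.
        return 60 if random.random() < 0.02 else 0

    def win_rate(play, needed=60, trials=100_000):
        return sum(play() >= needed for _ in range(trials)) / trials

    print("normal play wins:", win_rate(normal_play))  # essentially never
    print("hail mary wins:  ", win_rate(hail_mary))    # about 2% of the time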

So how about the following advice for Jo: try really hard to forget about "rationality", perhaps go see a hypnotist to get rid of your doubts about Christianity.

If it were really so, just how rational would rationality be?

As Eliezer has pointed out at least once before -- shouldn't the ideal rationalist be the one sitting on the giant heap of utility?

If, by your best estimate, X isn't true, it's got to be better to recognize that and start figuring out how to deal with it than to simply ignore it. Ignoring things doesn't make them go away.

Whatever happens, she ought to come to a place where she believes what she believes because it's her best attempt to discover the truth.

Human-level AI is still dangerous. Look how dangerous we are.

Consider that a human-level AI which is not friendly is likely to be far more unfriendly, or more difficult to bargain with, than any human. (The total space of possible value systems is far, far greater than the space of value systems inhabited by functioning humans.) If there are enough of them, then they can cause the same kind of problem that a hostile society could.

But it's worse than that. A sufficiently unfriendly AI would be like a sociopath or psychopath by human standards. But unlike individual sociopaths among humans (who can become very powerful and do extraordinary damage; consider Stalin), they would not need to fake [human] sanity to work with others if there were a large community of like-minded unfriendly AIs. Indeed, if they were unfriendly enough and more comfortable with violence than, say, your typical European or American, the result could look a lot like the colonialism of the 15th-19th centuries, or the earlier migrations of more warlike populations, with all humans on the short end of the stick. And that's just looking at the human potential for collective violence. Surely the space of all human-level intelligences contains some that are more brutally violent than the worst of us.

Could we conceivably hold this off? Possibly, but it would be a big gamble, and unfriendliness would ensure that such a conflict was inevitable. If the AI were significantly more efficient than we are (in cost of upkeep and reproduction), that would be a huge advantage in any potential conflict. And it's hard to imagine an AI of strictly human level being commercially useful to build unless its efficiency is superior to ours.

Interesting. There's a paradox involving a game in which players successively take a single coin from a large pile of coins. At any time a player may choose instead to take two coins, at which point the game ends and all further coins are lost. You can prove by induction that if both players are perfectly selfish, they will take two coins on their first move, no matter how large the pile is.

I'm pretty sure this proof only works if the coins are denominated in utilons.
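
For what it's worth, here is a minimal backward-induction sketch of that argument, with payoffs counted directly in coins, which is exactly the assumption the reply above is pointing at:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best_move(pile):
        # Return (my_coins, opponent_coins, move) for a perfectly selfish player
        # facing `pile` coins, assuming the opponent plays the same way.
        if pile == 0:
            return (0, 0, None)
        if pile == 1:
            return (1, 0, 'take one')  # only one coin left, so taking two is impossible
        # Option 1: take two coins, ending the game and forfeiting the rest.
        take_two = (2, 0, 'take two')
        # Option 2: take one coin; the opponent then faces pile - 1 with roles swapped.
        opp_mine, opp_theirs, _ = best_move(pile - 1)
        take_one = (1 + opp_theirs, opp_mine, 'take one')
        # A selfish player maximizes only their own payoff.
        return max(take_two, take_one, key=lambda outcome: outcome[0])

    print(best_move(100))  # -> (2, 0, 'take two'): grab two coins immediately

The induction relies on each player's utility being exactly their own coin count; if the players value anything else (reputation, reciprocity, the other player's winnings), the argument needn't go through, which is the point of the reply above.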