OK, so all that makes sense and seems basically correct, but I don't see how you get from there to being able to map confidence across persons for a single question the same way you can across questions for a single person.

Adopting that terminology: I'm saying that typical Less Wrong users likely have similar understanding-the-question modules. Each module is right most of the time and wrong some of the time, so each user correctly applies the outside-view error to their estimates. But since the module is similar from person to person, the actual errors aren't evenly distributed across questions, so users will underestimate their accuracy on "easy" questions and overestimate it on "hard" ones, if easy and hard are determined afterwards by the percentage of people who got the answer correct.

Do you have some links to calibration training? I'm curious how they handle model error (the error when your model is totally wrong).

For question 10, for example, I'm guessing many more people would have gotten the correct answer if the question had been something like "Name the best-selling PC game, where best-selling counts units rather than gross revenue, box purchases rather than subscriptions, and excludes games packaged with other software?" instead of "What is the best-selling computer game of all time?". I'm guessing most people answered WoW, Solitaire/Minesweeper, or Tetris, each of which would be the correct answer if you removed one of those constraints.

But it seems hard to guess beforehand that the question you thought you were answering wasn't the question you were actually being asked! So you end up distributing that model error relatively evenly over all the questions, leaving you underconfident on the questions where your model was straightforward and correct, and overconfident on the ones that weren't as simple as they appeared.
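If it helps, here's a toy Python sketch of the effect I'm describing (my own illustration with made-up numbers; N_PEOPLE, P_KNOW, P_MISREAD, and the rest are assumed parameters, not anything from the calibration post):

```python
# Toy model: every answerer shares the same "understanding the question" module,
# so misreadings are correlated across people instead of spread evenly.
# All parameters below are made up purely for illustration.
import random

random.seed(0)

N_PEOPLE = 1000
N_QUESTIONS = 50
P_KNOW = 0.7        # chance a person knows the intended answer
P_MISREAD = 0.2     # per-question chance the shared module misreads the question
STATED_CONF = P_KNOW * (1 - P_MISREAD)  # each person's (individually calibrated) stated confidence

# Questions the shared understanding module misreads -- misread for everyone at once
tricky = [random.random() < P_MISREAD for _ in range(N_QUESTIONS)]

hits = [0] * N_QUESTIONS
for _ in range(N_PEOPLE):
    for q in range(N_QUESTIONS):
        knows = random.random() < P_KNOW
        # knowing the intended answer only helps if the question wasn't misread
        hits[q] += knows and not tricky[q]

for q, h in enumerate(hits):
    label = "hard (misread)" if tricky[q] else "easy (straightforward)"
    print(f"Q{q:02d} {label:22s} stated {STATED_CONF:.0%} vs actual {h / N_PEOPLE:.0%}")

overall = sum(hits) / (N_PEOPLE * N_QUESTIONS)
print(f"All questions together:        stated {STATED_CONF:.0%} vs actual {overall:.0%}")
```

On a typical run each simulated person is roughly calibrated overall (stated ~56%, actual ~56%), but per question the stated confidence is too low on the straightforward questions (~70% actual) and far too high on the misread ones (~0% actual), because everyone shares the same misreading.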

I've always believed that having an issue with utility monsters reflects either a lack of imagination or a bad definition of utility (if your definition of utility is "happiness", then a utility monster seems grotesque, but that's because your definition of utility is narrow and lousy).

We don't even need to stretch to create a utility monster. Imagine a spacecraft that's been damaged in deep space. There are four survivors: three are badly wounded and one is relatively unharmed. There's enough air for four humans to survive one day, or for one human to survive four days, and the closest rescue ship is three days away. After assessing the situation and verifying the air supply, the three wounded crewmembers sacrifice themselves so that the one can be rescued.

To quote Nozick, via Wikipedia: "Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility." That is exactly what happens on the spaceship, yet most people here would find it pretty reasonable. A real utility monster would look more like that than like some super-happy alien.

Realistically, Less Wrong is most concerned with epistemic rationality: the idea that having an accurate map of the territory is very important for actually reaching your instrumental goals. If you imagine for a second a world where epistemic rationality isn't that important, you don't really need a site like Less Wrong. There are nods to "instrumental rationality", but those come in the context of epistemic rationality getting you most of the way there and being the base you work from; otherwise there's no reason to be on Less Wrong instead of a site dedicated to the specific sub-area.

Also, lots of "building relationships with powerful people" is zero sum at best, since it resembles influence peddling more than gains from informal trade.

The stuff you want is called Jevity. It's a complete liquid diet used for feeding-tube patients (Roger Ebert, after his cancer, being one of the most famous), it can be consumed orally, and you can buy it in bulk from Amazon. It's been designed by people who are experts in nutrition and has been used for years by patients as their sole food source.

Of course, Jevity only claims to keep you alive and healthy as your only food source, not to trim your fat, sharpen your brain, etc. But I'm fairly sure that has more to do with ethics, a basic knowledge of the subject, and an understanding of the need for double-blind studies behind medical claims than with someone discovering the secrets of perfect health while forgetting iron and sulfur in their supplement.

The problem is that Objectivism was an Ayn Rand personality cult more than anything else, so you can't really get a coherent and complete philosophy out of it. Rothbard goes into quite a bit of detail about this in The Sociology of the Ayn Rand Cult.

http://www.lewrockwell.com/rothbard/rothbard23.html

Some highlights:

"The philosophical rationale for keeping Rand cultists in blissful ignorance was the Randian theory of "not giving your sanction to the Enemy." Reading the Enemy (which, with a few carefully selected exceptions, meant all non- or anti-Randians) meant "giving him your moral sanction," which was strictly forbidden as irrational. In a few selected cases, limited exceptions were made for leading cult members who could prove that they had to read certain Enemy works in order to refute them."

"The psychological hold that the cult held on the members may be illustrated by the case of one girl, a certified top Randian, who experienced the misfortune of falling in love with an unworthy non-Randian. The leadership told the girl that if she persisted in her desire to marry the man, she would be instantly excommunicated. She did so nevertheless, and was promptly expelled. And yet, a year or so later, she told a friend that the Randians had been right, that she had indeed sinned and that they should have expelled her as unworthy of being a rational Randian."

This is not to say Rand didn't have any valid insights. But because Rand really believed that the things she said were by definition rational, since she was rational (and, as a bonus, the only possible rational conclusion), there's a lot of junk and cruft in there, so there's no good reason to take on the whole label.

You could try "adulterating" the candy with something non-edible, like colored beads. It would fix the volume concerns, be easily adjustable, and possibly add a bit of variable reinforcement.