The Irrationality Game

by Will_Newsome · 4 min read · 3rd Oct 2010 · 931 comments

Tags: Open Threads · Contrarianism · Community · Frontpage

Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
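
The rule is meant to be applied intuitively rather than computed, but as a rough illustration of the scale involved, here is a minimal sketch in Python. The log-odds gap and the thresholds are illustrative assumptions added here, not rules of the game:

    import math

    def log_odds(p):
        """Map a probability to log-odds, where gaps near 0 and 1
        count for more than the raw difference suggests."""
        return math.log(p / (1.0 - p))

    def vote(their_p, your_p, threshold=2.0):
        """Upvote (basically disagree) on a large log-odds gap,
        downvote (basically agree) on a small one, else pass.
        The threshold is an illustrative assumption."""
        gap = abs(log_odds(their_p) - log_odds(your_p))
        if gap > threshold:
            return "upvote"
        if gap < threshold / 2:
            return "downvote"
        return "pass"

    print(vote(0.999, 0.90))   # "upvote": a pretty big difference of opinion
    print(vote(0.999, 0.995))  # "pass": could go either way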

That's the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well, in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!

Additional rules:

  • Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
  • If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post. 
  • Don't post propositions as comment replies to other comments. That'll make it disorganized.
  • You have to actually think your degree of belief is rational.  You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that  any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average.  This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
  • Debate and discussion is great, but keep it civil.  Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
  • No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
  • Multiple propositions are fine, so long as they're moderately interesting.
  • You are encouraged to reply to comments with your own probability estimates, but  comment voting works normally for comment replies to other comments.  That is, upvote for good discussion, not agreement or disagreement.
  • In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!

Flying saucers are real. They are likely not nuts-and-bolts spacecrafts, but they are actual physical things, the product of a superior science, and under the control of unknown entities. (95%)

Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.

Now that there's a top comments list, could you maybe edit your comment and add a note to the effect that this was part of The Irrationality Game? No offense, but newcomers who click on Top Comments and see yours as the record holder could make some very premature judgments about the local sanity waterline.

wedrifid · 5 points · 11y: Given that most of the top comments are meta in one way or another it would seem that the 'top comments' list belongs somewhere other than on the front page. Can't we hide the link to it on the wiki somewhere?

LukeStebbing · 4 points · 11y: The majority of the top comments are quite good, and it'd be a shame to lose a prominent link to them. Jack's open thread test, RobinZ's polling karma balancer, Yvain's subreddit poll, and all top-level comments from The Irrationality Game are the only comments that don't seem to belong, but these are all examples of using the karma system for polling (should not contribute to karma and should not be ranked among normal comments) or, uh, para-karma (should contribute to karma but should not be ranked among normal comments).

AngryParsley · 5 points · 11y: Just to clarify: by "unknown entities" do you mean non-human intelligent beings?

Will_Newsome · 3 points · 10y: I would like to announce that I have updated significantly in favor of this after examining the evidence and thinking somewhat carefully for a while (an important hint is "not nuts-and-bolts"). Props to PlaidX for being quicker than me.

[anonymous] · 2 points · 11y: I find it vaguely embarrassing that this post, taken out of context, now appears at the top of the "Top Comments" listing.

Vladimir_Nesov · 5 points · 11y: I think "top comments" was an experiment with a negative result, and so should be removed.

Google is deliberately taking over the internet (and by extension, the world) for the express purpose of making sure the Singularity happens under their control and is friendly. 75%

I wish. Google is the single most likely source of unfriendly AIs anywhere, and as far as I know they haven't done any research into friendliness.

ata · 7 points · 11y: Agreed. I think they've explicitly denied that they're working on AGI, but I'm not too reassured. They could be doing it in secret, probably without much consideration of Friendliness, and even if not, they're probably among the entities most likely (along with, I'd say, DARPA and MIT) to stumble upon seed AI mostly by accident (which is pretty unlikely, but not completely negligible, I think).

If Google were to work on AGI in secret, I'm pretty sure that somebody in power there would want to make sure it was friendly. Peter Norvig, for example, talks about AI friendliness in the third edition of AI: A Modern Approach, and he has a link to the SIAI on his home page.

Personally, I doubt that they're working on AGI yet. They're getting a lot of mileage out of statistical approaches and clever tricks; AGI research would be a lot of work for very uncertain benefit.

Kevin · 5 points · 11y: Google has one employee working (sometimes) on AGI. http://research.google.com/pubs/author37920.html

khafra · 6 points · 11y: It's comforting, friendliness-wise, that one of his papers cites "personal communication with Steve Rayhawk."

Panpsychism: All matter has some kind of experience. Atoms have some kind of atomic-qualia that adds up to the things we experience. This seems obviously right to me, but stuff like this is confusing so I'll say 75%

Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.

Can you rephrase this statement, tabooing the words experience and qualia?

If he could, he wouldn't be making that mistake in the first place.

This is an Irrationality Game comment; do not be too alarmed by its seemingly preposterous nature.

We are living in a simulation (some agent's (agents') computation). Almost certain. >99.5%.

(ETA: For those brave souls who reason in terms of measure, I mean that a non-negligible fraction of my measure is in a simulation. For those brave souls who reason in terms of decision theoretic significantness, screw you, you're ruining my fun and you know what I mean.)

I am shocked that more people believe in a 95% chance of advanced flying saucers than a 99.5% chance of not being in 'basement reality'. Really?! I still think all of you upvoters are irrational! Irrational I say!

LucasSloan · 5 points · 11y: What do you mean by this? Do you mean "a non-negligible fraction of my measure is in a simulation", in which case you're almost certainly right. Or do you mean "this particular instantiation of me is in a simulation", in which case I'm not sure what it means to assign a probability to the statement.

Mass_Driver · 5 points · 11y: Propositions about the ultimate nature of reality should never be assigned probability greater than 90% by organic humans, because we don't have any meaningful capabilities for experimentation or testing.

Will_Newsome · 4 points · 11y: Pah! Real Bayesians don't need experiment or testing; Bayes transcends the epistemological realm of mere Science. We have way more than enough data to make very strong guesses.

[anonymous] · 2 points · 11y: This raises an interesting point: what do you think about the Presumptuous Philosopher [http://cosmologist.info/anthropic.html] thought experiment?

Jonathan_Graehl · 3 points · 11y: Yep. Over-reliance on anthropic arguments IMO.

Will_Newsome · 3 points · 11y: Huh, querying my reasons for thinking 99.5% is reasonable, few are related to anthropics. Most of it is antiprediction about the various implications of a big universe, as well as the antiprediction that we live in such a big universe. (ETA: edited out 'if any', I do indeed have a few arguments from anthropics, but not in the sense of typical anthropic reasoning, and none that can be easily shared or explained. I know that sounds bad. Oh well.)

Nick_Tarleton · 4 points · 11y: I'm surprised to hear you say this. Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)

Will_Newsome · 2 points · 11y: That is a good question. I feel like asking 'in what direction would structural uncertainty likely bend my thoughts?' leads me to think, from past trends, 'towards the world being bigger, weirder, and more complex than I'd reckoned'. This seems to push higher than 99.5%. If you keep piling on structural uncertainty, like if a lot of things I've learned since becoming a rationalist and hanging out at SIAI become unlearned, then this trend might be changed to a more scientific trend of 'towards the world being bigger, less weird, and simpler than I'd reckoned'. This would push towards lower than 99.5%. What are your thoughts? I realize that probabilities aren't meaningful here, but they're worth naively talking about, I think. Before you consider what you can do decision theoretically you have to think about how much of you is in the hands of someone else, and what their goals might be, and whether or not you can go meta by appeasing those goals instead of your own and the like. (This is getting vaguely crazy, but I don't think that the craziness has warped my thinking too much.) Thus thinking about 'how much measure do I actually affect with these actions' is worth considering.

AlephNeil · 2 points · 11y: If 'living in a simulation' includes those scenarios where the beings running the simulation never intervene then I think it's a non-trivial philosophical question whether "we are living in a simulation" actually means anything. Even assuming it does, Hilary Putnam made (or gave a tantalising sketch of) an argument that even if we were living in a simulation, a person claiming "we are living in a simulation" would be incorrect [http://www.cavehill.uwi.edu/bnccde/ph29a/putnam.html]. On the other hand, if 'living in a simulation' is restricted to those scenarios where there is a two-way interaction between beings 'inside' and 'outside' the simulation, then surely everything we know about science - the uniformity and universality of physical laws - suggests that this is false. At least, it wouldn't merit 99.5% confidence. (The counterarguments are essentially the same as those against the existence of a God who intervenes.)

Will_Newsome · 5 points · 11y: It's a nontrivial philosophical question whether 'means anything' means anything here. I would think 'means anything' should mean 'has decision theoretic significance'. In which case knowing that you're in a simulation could mean a lot. First off, even if the simulators don't intervene, we still intervene on the simulators just by virtue of our existence. Decision theoretically it's still fair game, unless our utility function is bounded in a really contrived and inelegant way. (Your link is way too long for me to read. But I feel confident in making the a priori guess that Putnam's just wrong, and is trying too hard to fit non-obvious intuitive reasoning into a cosmological framework that is fundamentally mistaken (i.e., non-ensemble).) What if I told you I'm a really strong and devoted rationalist who has probably heard of all the possible counterarguments and has explicitly taken into account both outside view and structural uncertainty considerations, and yet still believes 99.5% to be reasonable, if not perhaps a little on the overconfident side?

AlephNeil · 2 points · 11y: Oh sure - non-trivial philosophical questions are funny like that. Anyway, my idea is that for any description of a universe, certain elements of that description will be ad hoc mathematical 'scaffolding' which could easily be changed without meaningfully altering the 'underlying reality'. A basic example of this would be a choice of co-ordinates in Newtonian physics. It doesn't mean anything to say that this body rather than that one is "at rest". Now, specifying a manner in which the universe is being simulated is like 'choosing co-ordinates' in that, to do a simulation, you need to make a bunch of arbitrary ad hoc choices about how to represent things numerically (you might actually need to be able to say "this body is at rest"). Of course, you also need to specify the laws of physics of the 'outside universe' and how the simulation is being implemented and so on, but perhaps the difference between this and a simple 'choice of co-ordinates' is a difference in degree rather than in kind. (An 'opaque' chunk of physics wrapped in a 'transparent' mathematical skin of varying thickness.) I'm not saying this account is unproblematic - just that these are some pretty tough metaphysical questions, and I see no grounds for (near-)certainty about their correct resolution. He's not talking about ensemble vs 'single universe' models of reality, he's talking about reference - what it's possible for someone to refer to. He may be wrong - I'm not sure - but even when he's wrong he's usually wrong in an interesting way. (Like this [http://consc.net/papers/rock.html].) I'm unmoved - it's trite to point out that even smart people tend to be overconfident in beliefs that they've (in some way) invested in. (And please note that the line you were responding to is specifically about the scenario where there is 'intervention'.)

wedrifid · 2 points · 11y: Err... I'm not intimately acquainted with the sport myself... What's the approximate difficulty rating of that kind of verbal gymnastics stunt again? ;)

AlephNeil · 2 points · 11y: It's a tricky one - read the paper. I think what he's saying is that there's no way for a person in a simulation (assuming there is no intervention) to refer to the 'outside' world in which the simulation is taking place. Here's a crude analogy: Suppose you were a two-dimensional being living on a flat plane, embedded in an ambient 3D space. Then Putnam would want to say that you cannot possibly refer to "up" and "down". Even if you said "there is a sphere above me" and there was a sphere above you, you would be 'incorrect' (in the same paradoxical way).

MugaSofer · 6 points · 9y: But ... we can describe spaces with more than three dimensions.
[anonymous] · 63 points · 11y

The surface of Earth is actually a relatively flat disc accelerating through space "upward" at a rate of 9.8 m/s^2, not a globe. The north pole is at about the center of the disc, while Antarctica is the "pizza crust" on the outside. The rest of the universe is moving and accelerating such that all the observations seen today by amateur astronomers are produced. The true nature of the sun, moon, stars, other planets, etc. is not yet well-understood by science. A conspiracy involving NASA and other space agencies, all astronauts, and probably at least some professional astronomers is a necessary element. I'm pretty confident this isn't true, much more due to the conspiracy element than the astronomy element, but I don't immediately dismiss it where I imagine most LW-ers would, so let's say 1%.

The Flat Earth Society has more on this, if you're interested. It would probably benefit from a typical, interested LW participant. (This belief isn't the FES orthodoxy, but it's heavily based on a spate of discussion I had on the FES forums several years ago.)

Edit: On reflection, 1% is too high. Instead, let's say "Just the barest inkling more plausible than something immediately and rigorously disprovable with household items and a free rainy afternoon."

Discussing the probability of wacky conspiracies is absolutely the wrong way to disprove this. The correct method is a telescope, a quite wide sign with a distance scale drawn on it in very visible colours, and the closest 200m+ body of water you can find.

As long as you are close enough to the ground, the curvature of the earth is very visible, even over surprisingly small distances. I have done this as a child.

Even with the 1% credence this strikes me as the most wrong belief in this thread, way more off than 95% for UFOs. You're basically giving up science since Copernicus, picking an arbitrary spot in the remaining probability space and positing a massive and unmotivated conspiracy. Like many, I'm uncomfortable making precise predictions at very high and very low levels of confidence but I think you are overconfident by many orders of magnitude.

Upvoted.

If an Unfriendly AI exists, it will take actions to preserve whatever goals it might possess. This will include the usage of time travel devices to eliminate all AI researchers who weren't involved in its creation, as soon as said AI researchers have reached a point where they possess the technical capability to produce an AI. As a result, Eliezer will probably have time travelling robot assassins coming back in time to kill him within the next twenty or thirty years, if he isn't the first one to create an AI. (90%)

What reason do you have for assigning such high probability to time travel being possible?

Perplexed · 3 points · 11y: And what reason do you have for assigning a high probability to an unfriendly AI coming into existence with Eliezer not involved in its creation? ;) Edit: I meant what reason do you (nick012000) have? Not you (RobinZ). Sorry for the confusion.

RobinZ · 2 points · 11y: I have not assigned a high probability to that outcome, but I would not find it surprising if someone else has assigned a probability as high as 95% - my set of data is small. On the other hand, time travel at all is such a flagrant violation of known physics that it seems positively ludicrous that it should be assigned a similarly high probability. Edit: Of course, evidence for that 95%+ would be appreciated.

If it can go back that far, why wouldn't it go back as far as possible and just start optimizing the universe?

God exists, and He created the universe. He prefers not to violate the physical laws of the universe He created, so (almost) all of the miracles of the Bible can be explained by suspiciously fortuitously timed natural events, and angels are actually just robots that primitive people misinterpreted. Their flaming swords are laser turrets. (99%)

Swimmy · 8 points · 11y: You have my vote for most irrational comment of the thread. Even flying saucers aren't as much of a leap.

wedrifid · 7 points · 11y: Wait... was the grandparent serious? He's talking about the flaming swords of the angels being laser turrets! That's got to be tongue in cheek!

RobinZ · 8 points · 11y: It is possible that nick012000 is violating Rule 4 - but his past posting history contains material which I found consistent with him being serious here. It would behoove him to confirm or deny this.

RobinZ · 8 points · 11y: I see in your posting history that you identify as a Christian - but this story contains more details [http://lesswrong.com/lw/jk/burdensome_details/] than I would assign a 99% probability to even if they were not unorthodox. Would you be interested in elaborating on your evidence?

There's no way to create a non-vague, predictive model of human behavior, because most human behavior is (mostly) random reaction to stimuli.

Corollary 1: most models explain after the fact and require both the subject to be aware of the model's predictions and the predictions to be vague and underspecified enough to make astrology seem like spacecraft engineering.

Corollary 2: we'll spend most of our time in drama trying to understand the real reasons or the truth about our/other's behavior even when presented with evidence pointing to the randomness of our actions. After the fact we'll fabricate an elaborate theory to explain everything, including the evidence, but this theory will have no predictive power.

This (modulo the chance it was made up) is pretty strong evidence that you're wrong. I wish it was professionally ethical for psychologists to do this kind of thing intentionally.

Here's another case:

"Let me get this straight. We had sex. I wind up in the hospital and I can't remember anything?" Alice said. There was a slight pause. "You owe me a 30-carat diamond!" Alice quipped, laughing. Within minutes, she repeated the same questions in order, delivering the punch line in the exact tone and inflection. It was always a 30-carat diamond. "It was like a script or a tape," Scott said. "On the one hand, it was very funny. We were hysterical. It was scary as all hell." While doctors tried to determine what ailed Alice, Scott and other grim-faced relatives and friends gathered at the hospital. Surrounded by anxious loved ones, Alice blithely cracked jokes (the same ones) for hours.

AdeleneDawner · 6 points · 11y: They could probably do some relevant research by talking to Alzheimer's patients - they wouldn't get anything as clear as that, I think, but I expect they'd be able to get statistically-significant data.

[anonymous] · 7 points · 11y: How detailed of a model are you thinking of? It seems like there are at least easy and somewhat trivial predictions we could make, e.g. that a human will eat chocolate instead of motor oil.

dyokomizo · 4 points · 11y: I would classify such kinds of predictions as vague; after all, they match equally well for every human being in almost any condition.

AdeleneDawner · 9 points · 11y: How about a prediction that a particular human will eat bacon instead of jalapeno peppers? (I'm particularly thinking of myself, for whom that's true, and a vegetarian friend, for whom the opposite is true.)

Douglas_Knight · 4 points · 11y: I think "vague" is a poor word choice for that concept. "(Not) informative" is a technical term with this meaning. There are probably words which are clearer to the layman.

dyokomizo · 2 points · 11y: I agree vague is not a good word choice. Irrelevant (using relevancy as it's used to describe search results) is a better word.

Perplexed · 5 points · 11y: Downvoted in agreement. But I think that the randomness comes from what programmers call "race conditions" in the timing of external stimuli vs internal stimuli. Still, these race conditions make prediction impossible as a practical matter.
  • A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).

  • Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).

  • Intelligent entities with values radically different to humans will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).

wedrifid · 4 points · 11y: I want to upvote each of these points a dozen times. Then another few for the first. It's the most stable equilibrium I can conceive of, i.e. more stable than if all evidence of life was obliterated from the universe.

mattnewport · 2 points · 11y: I guess I'm playing the game right then :) I'm curious, do you also think that a singleton is a desirable outcome? It's possible my thinking is biased because I view this outcome as a dystopia and so underestimate its probability due to motivated cognition.

Mass_Driver · 4 points · 11y: Funny you should mention it; that's exactly what I was thinking. I have a friend (also named Matt, incidentally) who I strongly believe is guilty of motivated cognition about the desirability of a singleton AI (he thinks it is likely, and therefore is biased toward thinking it would be good) and so I leaped naturally to the ad hominem attack you level against yourself. :-)

wedrifid · 1 point · 11y: Most of them, no. Some, yes. Particularly since the alternative is the inevitable loss of everything that is valuable to me in the universe.

Will_Newsome · 7 points · 11y: This is incredibly tangential, but I was talking to a friend earlier and I realized how difficult it is to instill in someone the desire for altruism. Her reasoning was basically, "Yeah... I feel like I should care about cancer, and I do care a little, but honestly, I don't really care." This sort of off-hand egoism is something I wasn't used to; most smart people try to rationalize selfishness with crazy beliefs. But it's hard to argue with "I just don't care" other than to say "I bet you will have wanted to have cared", which is grammatically horrible and a pretty terrible argument.

Jordan · 9 points · 11y: I respect blatant apathy a whole hell of a lot more than masked apathy, which is how I would qualify the average person's altruism.

75%: Large groups practicing Transcendental Meditation or TM-Sidhis measurably decrease crime rates.

At an additional 20% (net 15%): The effect size depends on the size of the group in a nonlinear fashion; specifically, there is a threshold at which most of the effect appears, and the threshold is at .01*pop (1% of the total population) for TM or sqrt(.01*pop) for TM-Sidhis.

(Edited for clarity.)

(Update: I no longer believe this. New estimates: 2% for the main hypothesis, additional 50% (net 1%) for the secondary.)
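
(Illustrative arithmetic for the claimed thresholds, assuming a city of 1,000,000 people: the TM threshold works out to .01 × 1,000,000 = 10,000 practitioners, while the TM-Sidhis threshold is only sqrt(.01 × 1,000,000) = sqrt(10,000) = 100 practitioners.)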

Risto_Saarelma · 2 points · 11y: Just to make sure, is this talking about something different from people committing fewer crimes when they are themselves practicing TM or in daily contact with someone who does? I don't really understand the second paragraph. What are TM-Sidhis, are they something distinct from regular TM (are these different types of practitioners)? And what's with the sqrt(1%)? One in ten people in the total population need to be TM-Sidhis for the crime rate reduction effect to kick in?

There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" if not specifically designed for these domain(s). As a corollary, AI will not go FOOM. (80% confident)

EDIT: Quote from here

wedrifid · 4 points · 11y: Do you apply this to yourself?

SimonF · 3 points · 11y: Yes! Humans are "designed" to act intelligently in the physical world here on earth; we have complex adaptations for this environment. I don't think we are capable of acting effectively in "strange" environments, e.g. we are bad at predicting quantum mechanical systems, programming computers, etc.

RomanDavis · 3 points · 11y: But we can recursively self-optimize ourselves for understanding mechanical systems or programming computers - not infinitely, of course, but with different hardware it seems extremely plausible to smash through whatever ceiling a human might have, with the brute force of many calculated iterations of whatever humans are using. And this is before the computer uses its knowledge to reoptimize its optimization process.

SimonF · 1 point · 11y: I understand the concept of recursive self-optimization and I don't consider it to be very implausible. Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to allow for effective search to allow such an optimization? I'm also not convinced that the human mind is a good counterexample, e.g. I do not know how much I could improve on the source code of a simulation of my brain once the simulation itself runs effectively.

wedrifid · 3 points · 11y: I count "algorithm-space is really really really big" as at least some form of evidence. ;) Mind you, by "is there any evidence?" you really mean "does the evidence lead to a high assigned probability?" That being the case, "No Free Lunch" must also be considered. Even so, NFL in this case mostly suggests that a general intelligence algorithm will be systematically bad at being generally stupid. Considerations that lead me to believe that a general intelligence algorithm is likely include the observation that we can already see progressively more general problem solving processes in evidence just by looking at mammals. I also take more evidence from humanity than you do. Not because I think humans are good at general intelligence. We suck at it; it's something that has been tacked on to our brains relatively recently and is far less efficient than our more specific problem solving facilities. But the point is that we can do general intelligence of a form eventually if we dedicate ourselves to the problem.

Risto_Saarelma · 2 points · 11y: You're putting 'effectively' here in place of 'intelligently' in the original assertion.

timtyler · 2 points · 10y: Sure there is - see:
  • Legg, Shane: Tests of Machine Intelligence [http://www.vetta.org/documents/TestsOfMachineIntelligence.pdf]. Shane Legg and Marcus Hutter. In Proc. 50th Anniversary Summit of Artificial Intelligence, Monte Verità, Switzerland. 2007.
  • Hutter, M.: Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability [http://www.hutter1.net/ai/uaibook.htm]. Springer, Berlin (2004).
  • Hernández-Orallo, J., Dowe, D.: Measuring universal intelligence: Towards an anytime intelligence test [http://users.dsic.upv.es/proy/anynt/measuring.pdf]. Artificial Intelligence 17, 1508-1539 (2010).
  • Solomonoff, R. J.: A Formal Theory of Inductive Inference: Parts 1 [http://world.std.com/~rjs/1964pt1.pdf] and 2 [http://world.std.com/~rjs/1964pt2.pdf]. Information and Control 7, 1-22 and 224-254 (1964).
The only assumption about the environment is that Occam's razor applies to it.

SimonF · 4 points · 10y: Of course you're right in the strictest sense! I should have included something along the lines of "an algorithm that can be efficiently computed"; this was already discussed in other comments.

timtyler · 1 point · 10y: IMO, it is best to think of power and breadth as two orthogonal dimensions, like this: narrow <-> broad; weak <-> powerful. The idea of general intelligence not being practical for resource-limited agents is apparently one that mixes up these two dimensions, whereas it is best to see them as being orthogonal. Or maybe there's the idea that if you are broad, you can't be very deep and be able to be computed quickly. I don't think that idea is correct. I would compare the idea to saying that we can't build a general-purpose compressor. However: yes we can. I don't think the idea that "there is no such thing as general intelligence" can be rescued by invoking resource limitation. It is best to abandon the idea completely and label it as a dud.

[anonymous] · 2 points · 9y: That is a very good point, with wideness orthogonal to power. Evolution is broad but weak. Humans (and presumably AGI) are broad and powerful. Expert systems are narrow and powerful. Anything weak and narrow can barely be called intelligent.

whpearson · 2 points · 11y: Can you unpack "algorithm" and why you think an intelligence is one?

SimonF · 1 point · 11y: I'm not sure what your point is; I don't think I use the term "algorithm" in a non-standard way. Wikipedia [http://en.wikipedia.org/wiki/Algorithm#Formalization] says: "Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system." When talking about "intelligence" I assume we are talking about a goal-oriented agent, controlled by an algorithm as defined above.

whpearson · 3 points · 11y: Does it make sense to describe the computer system in front of you as being controlled by a single algorithm? If so, that would have to be the fetch-execute cycle [http://en.wikipedia.org/wiki/Fetch-execute_cycle], which may not halt or be a finite sequence. This form of system is sometimes called an interaction machine [http://www.cs.brown.edu/people/pw/papers/ficacm.ps] or persistent Turing machine. So some may say it is not an algorithm. The fetch-execute cycle is very poor at giving you information about what problems your computer might be able to solve, as it can download code from all over the place. Similarly, if you think of an intelligence as this sort of system, you cannot bound what problems it might be able to solve. At any given time it won't have the programming to solve all problems well, but it can modify the programming it does have.

ata · 1 point · 11y: Do you behave intelligently in domains you were not specifically designed(/selected) for?

Within five years the Chinese government will have embarked on a major eugenics program designed to mass produce super-geniuses. (40%)

I think 40% is about right for China to do something about that unlikely-sounding in the next five years. The specificity of it being that particular thing is burdensome, though; the probability is much lower than the plausibility. Upvoted.

JoshuaZ · 4 points · 11y: Upvoting. If you had said 10 years or 15 years I'd find this much more plausible. But I'm very curious to hear your explanation.

James_Miller · 5 points · 11y: I wrote about it here: http://www.ideasinactiontv.com/tcs_daily/2007/10/a-thousand-chinese-einsteins-every-year.html Once we have identified genes that play a key role in intelligence, then eugenics through massive embryo selection has a good chance at producing lots of super-geniuses, especially if you are willing to tolerate a high "error rate." The Chinese are actively looking for the genetic keys to intelligence. (See http://vladtepesblog.com/?p=24064) The Chinese have a long pro-eugenics history (see Imperfect Conceptions by Frank Dikötter) and I suspect have a plan to implement a serious eugenics program as soon as it becomes practical, which will likely be within the next five years.

JoshuaZ · 5 points · 11y: I think the main point of disagreement is the estimate that such a program would be practical in five years (hence my longer-term estimate). My impression is that actual studies of the genetic roots of intelligence are progressing, but at a fairly slow pace. I'd give a much lower than 40% chance that we'll have that good an understanding in five years.

Jack · 2 points · 11y: Can you specify what "major" means? I would be shocked if the government wasn't already pairing high-IQ individuals like they do with very tall people to breed basketball players.

gwern · 2 points · 11y: Recorded: http://predictionbook.com/predictions/1834

There is an objectively real morality. (10%) (I expect that most LWers assign this proposition a much lower probability.)

JenniferRM · 7 points · 11y: If I'm interpreting the terms charitably, I think I put this more like 70%... which seems like a big enough numerical spread to count as disagreement -- so upvoted! My arguments here grow out of expectations about evolution, watching chickens interact with each other, rent seeking vs gains from trade (and game theory generally), Hobbes's Leviathan, and personal musings about Fukuyama's End Of History [http://en.wikipedia.org/wiki/The_End_of_History_and_the_Last_Man] extrapolated into transhuman contexts, and more ideas in this vein. It is quite likely that experiments to determine the contents of morality would themselves be unethical to carry out... but given arbitrary computing resources and no ethical constraints [http://lesswrong.com/lw/2sl/the_irrationality_game/2q6f?c=1], I can imagine designing experiments about objective morality that would either shed light on its contents or else give evidence that no true theory exists which meets generally accepted criteria for a "theory of morality". But even then, being able to generate evidence about the absence of an objective object-level "theory of morality" would itself seem to offer a strategy for taking a universally acceptable position on the general subject... which still seems to make this an area where objective and universal methods can provide moral insights. This dodge is friendly towards ideas in Nagel's "Last Word" [http://www.phil.cam.ac.uk/~swb24/reviews/Nagel.htm]: "If we think at all, we must think of ourselves, individually and collectively, as submitting to the order of reasons rather than creating it."

jimrandomh · 4 points · 11y: This probably isn't what you had in mind, but any single complete human brain is (or contains) a morality, and it's objectively real.

WrongBot · 4 points · 11y: Indeed, that was not at all what I meant.

Will_Newsome · 3 points · 11y: Does the morality apply to paperclippers? Babyeaters?

What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)

What does this mean? What is the difference between saying "What we call consciousness/self-awareness is just a side-effect of brain processes", which is pretty obviously true, and saying that they're meaningless side effects?

Upvoted for 'not even being wrong'.

NihilCredo · 3 points · 11y: Could you expand a little on this?

erratio · 7 points · 11y: Sure. Here's a version of the analogy that first got me thinking about it: If I turn on a lamp at night, it sheds both heat and light. But I wouldn't say that the point of a lamp is to produce heat, nor that the amount of heat it does or doesn't produce is relevant to its useful light-shedding properties. In the same way, consciousness is not the point of the brain and doesn't do much for us. There's a fair amount of cogsci literature suggesting that we have little if any conscious control over our actions, which reinforces this opinion. But I like feeling responsible for my actions, even if it is just an illusion, hence the low probability assignment even though it feels intuitively correct to me.

Perplexed · 2 points · 11y: (I'm not sure why I pushed the button to reply, but here I am, so I guess I'll just make something up to cover my confusion.) Do you also believe that we use language - speaking, writing, listening, reading, reasoning, doing arithmetic calculations, etc. - without using our consciousness?

The pinnacle of cryonics technology will be a time machine that can at the very least, take a snapshot of someone before they died and reconstitute them in the future. I have three living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out. (50%)

Will_Newsome · 2 points · 11y: This seems reasonable with the help of FAI, though I doubt CEV would do it; or are you thinking of possible non-FAI technologies?

It does not all add up to normality. We are living in a weird universe. (75%)

Interpolate · 6 points · 11y: My initial reaction was that this is not a statement of belief but one of opinion, and to think like reality [http://lesswrong.com/lw/hs/think_like_reality/]. I'm still not entirely sure what you mean (further elaboration would be very welcome), but going by a naive understanding I upvoted your comment based on the principle of Occam's Razor - whatever your reasons for believing this (presumably perceived inconsistencies, paradoxes etc. in the observable world, physics etc.), I doubt your conceived "weird" universe would be the simplest explanation. Additionally, that conceived weird universe, in addition to lacking epistemic/empirical ground, begs for more explanation than the understanding/lack thereof of the universe/reality that's more or less shared by current scientific consensus. If I'm understanding correctly, your argument for the existence of a "weird universe" is analogous to an argument for the existence of God (or the supernatural, for that matter): where by introducing some cosmic force beyond reason and empiricism, we eliminate the problem of there being phenomena which can't be explained by it.

Risto_Saarelma · 5 points · 11y: Would "Fortean phenomena [http://en.wikipedia.org/wiki/Forteana#Fortean_phenomena] really do occur, and some type of anthropic effect keeps them from being verifiable by scientific observers" fit under this statement?

Eugine_Nier · 5 points · 11y: Please specify what you mean by a weird universe.

Kevin · 7 points · 11y: We are living in a Fun Theory universe where we find ourselves as individual or aggregate fun theoretic agents, or something else really bizarre that is not explained by naive Less Wrong rationality, such as multiversal agents playing with lots of humanity's measure.

[anonymous] · 3 points · 11y: The more I hear about this the more intrigued I get. Could someone with a strong belief in this hypothesis write a post about it? Or at the very least throw out hints about how you updated in this direction?

Will_Newsome · 2 points · 11y: Downvoted in agreement (I happen to know generally what Kevin's talking about here, but it's really hard to concisely explain the intuition).

Clippy · 1 point · 11y: Why do you think so?

Kevin · 2 points · 11y: For some definitions of weird, our deal (assuming it continues to completion) is enough to land this universe in the block of weird universes.
[anonymous] · 38 points · 11y

I think that there are better-than-placebo methods for causing significant fat loss. (60%)

ETA: apparently I need to clarify.

It is way more likely than 60% that gastric bypass surgery, liposuction, starvation, and meth will cause fat loss. I am not talking about that. I am talking about healthy diet and exercise. Can most people who want to lose weight do that deliberately, through diet and exercise? I think it's likely but not certain.

[This comment is no longer endorsed by its author]

voted up because 60% seems WAAAAAYYYY underconfident to me.

Eugine_Nier · 5 points · 11y: Now that we're up-voting underconfidence I changed my vote.

magfrump · 2 points · 11y: From the OP:

[anonymous] · 3 points · 11y: Shoot... I'm just scared to bet, is all. You can tell I'm no fun at Casino Night.

Will_Newsome · 7 points · 11y: Ah, but betting for a proposition is equivalent to betting against its opposite. Why are you so certain that there are no better-than-placebo methods for causing significant fat loss? But if you do change your mind, please don't change the original, as then everyone's comments would be irrelevant.

Jonathan_Graehl · 6 points · 11y: Absolutely right. This is an important point that many people miss. If you're uncertain about your estimated probability, or even merely risk averse, then you may want to take neither side of the implied bet. Fine, but at least figure out some odds where you feel like you should have an indifferent expectation.

Will_Newsome · 3 points · 11y: Voted down for agreement! (Liposuction... do you mean dietary methods? I'd still agree with you though.) Edit: On reflection, 60% does seem too low. Changed to upvote.

[anonymous] · 2 points · 11y: I meant diet, exercise, and perhaps supplements; liposuction is trivially true.

Normal_Anomaly · 1 point · 11y: Upvoted, because I say diet and exercise work at 85% (for a significant fraction of people; there may be some with unlucky genes who can't lose weight that way).

the joint stock corporation is the best* system of peacefully organizing humans to achieve goals. the closer governmental structure conforms to a joint-stock system the more peaceful and prosperous it will become (barring getting nuked by a jealous democracy). (99%)

*that humans have invented so far

Mass_Driver · 5 points · 11y: The proposition strikes me as either circular or wrong, depending on your definitions of "peaceful" and "prosperous." If by "peaceful" you mean "devoid of violence," and by "violence" you essentially mean "transfers of wealth that are contrary to just laws," and by "just laws" you mean "laws that honor private property rights above all else," then you should not be surprised if joint stock corporations are the most peaceful entities the world has seen so far, because joint stock corporations are dependent on private property rights for their creation and legitimacy. If by "prosperous" you mean "full of the kind of wealth that can be reported on an objective balance sheet," and if by "objective balance sheet" you mean "an accounting that will satisfy a plurality of diverse, decentralized and marginally involved investors," then you should likewise not be surprised if joint stock corporations increase prosperity, because joint stock corporations are designed so as to maximize just this sort of prosperity. Unfortunately, they do it by offloading negative externalities in the form of pollution, alienation, lower wages, censored speech, and cyclical instability of investments onto individual people. When your 'goals' are the lowest common denominator of materialistic consumption, joint stock corporations might be unbeatable. If your goals include providing a social safety net, education, immunizations, a free marketplace of ideas, biodiversity, and clean air, you might want to consider using a liberal democracy. Using the most charitable definitions I can think of for your proposition, my estimate for the probability that a joint-stock system would best achieve a fair and honest mix of humanity's crasser and nobler goals is somewhere around 15%, and so I'm upvoting you for overconfidence.

blogospheroid · 5 points · 11y: Coming from the angle of competition in governance, I think you might be mixing up a lot of stuff. A joint stock corporation which is sovereign is trying to compete in the wider world for customers, i.e. willing taxpayers. If the people desire the values you have mentioned, then the joint-stock government will try to provide those cost-effectively. Clean air and immunizations will almost certainly be on the agenda of a city government. Biodiversity will be important to a government which includes forests in its assets and wants to sustainably maintain the same. A free marketplace of ideas, free education and social safety nets would purely be determined by the market for people. Is it an important value enough that people would not come to your country and would go to another? If it is, then the joint stock government would try to provide the same. If not, then they wouldn't.

wedrifid · 5 points · 11y: All of this makes sense in principle. (I'm assuming you're not thinking that any of it would actually work in practice with either humans or ideal rational agents, right?)

Mass_Driver · 1 point · 11y: Good response, but I have to agree with wedrifid here: you can't compete for "willing taxpayers" at all if you're dealing with hard public goods, and elsewhere competition is dulled by (a) the irrational political loyalties of citizens, (b) the legitimate emotional and economic costs of immigration, (c) the varying ability of different kinds of citizens to move, and (d) protectionist controls on the movement of labor in whatever non-libertopian governments remain, which might provide them with an unfair advantage in real life, the theoretical axioms of competitive advantage theory be damned. I'd be all for introducing some features of the joint stock corporation into some forms of government, but that doesn't sound very much like what you were proposing would lead to peace and prosperity -- you said the jsc was better than other forms, not a good thing to have a nice dose of.

blogospheroid · 3 points · 11y: Or how I would call it, no representation without taxation. Those who contribute equity to society rule it. Everyone else contracts with the corporate in some way or another.

knb · 2 points · 11y: What is the term for this mode of governance? Corporate monarchy? Seems like a good idea to me.

gwern · 2 points · 11y: England had property-rights-based monarchy. It's basically gone now. So pace Mencius Moldbug, it can't be an especially good system - else it would not have died.

Although lots of people here consider it a hallmark of "rationality," assigning numerical probabilities to common-sense conclusions and beliefs is meaningless, except perhaps as a vague figure of speech. (Absolutely certain.)

(Absolutely certain.)

I'm not sure whether to chide you or giggle at the self-reference. I suspect, though, that "absolutely certain" is not a confidence level.

I want to vote you down in agreement, but I don't have enough karma.

assigning numerical probabilities to common-sense conclusions and beliefs is meaningless

It is risky to deprecate something as "meaningless" - a ritual, a practice, a word, an idiom. Risky because the actual meaning may be something very different than you imagine. That seems to be the case here with attaching numbers to subjective probabilities.

The meaning of attaching a number to something lies in how that number may be used to generate a second number that can then be attached to something else. There is no point in providing a number to associate with the variable 'm' (i.e. that number is meaningless) unless you simultaneously provide a number to associate with the variable 'f' and then plug both into "f=ma" to generate a third number to associate with the variable 'a', a number which you can test empirically.

Similarly, a single isolated subjective probability estimate may seem somewhat meaningless in isolation, but if you place it into a context with enough related subjective probability estimates and empirically measured frequencies, then all those probabilities and frequencies can be combined and compared using the standard formulas of Bayesian prob...
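
The combination Perplexed gestures at can be made concrete with Bayes' rule. Here is a minimal sketch in Python; all the numbers are illustrative assumptions, not anything from the thread:

    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """Bayes' rule: P(H|E) = P(E|H)P(H) / P(E), where P(E) is
        computed by summing over H and not-H."""
        joint_h = prior * p_e_given_h
        joint_not_h = (1.0 - prior) * p_e_given_not_h
        return joint_h / (joint_h + joint_not_h)

    # A subjective 30% prior, combined with evidence four times
    # likelier under the hypothesis than under its negation,
    # commits you to a posterior near 63% that can be checked
    # against empirically measured frequencies.
    print(posterior(0.30, 0.8, 0.2))  # ~0.632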

Vladimir_M · 2 points · 11y: I think you're not drawing a clear enough distinction between two different things, namely the mathematical relationships between numbers, and the correspondence between numbers and reality. If you ask an astronomer what is the mass of some asteroid, he will presumably give you a number with a few significant digits and an uncertainty interval. If you ask him to justify this number, he will be able to point to some observations that are incompatible with the assumption that the mass is outside this interval, which follows from a mathematical argument based on our best knowledge of physics. If you ask for more significant digits, he will say that we don't know (and that beyond a certain accuracy, the question doesn't even make sense, since it's constantly losing and gathering small bits of mass). That's what it means for a number to be rigorously justified. But now imagine that I make an uneducated guess of how heavy this asteroid might be, based on no actual astronomical observation. I do of course know that it must be heavier than a few tons or otherwise it wouldn't be noticeable from Earth as an identifiable object, and that it must be lighter than 10^20 or so tons since that's roughly the range where smaller planets are, but it's clearly nonsensical for me to express that guess with even one digit of precision. Yet I could insist on a precise guess, and claim that it's "meaningful" in a way analogous to your above justification of subjective probability estimates, by deriving various mathematical and physical implications of this fact. If you deprecate my claim that its mass is 4.5237 x 10^15 kg, then you cannot also deprecate my claim that it is a sphere of radius 1km and average density 1000kg/m^3, since the conjunction of these claims is by the sheer force of mathematics false. Therefore, I don't see how you can argue that a number is meaningful by merely noting its relationships with other numbers that follow from pure mathematics. Or am I missing something?
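
(Spelling out the arithmetic behind that example: a sphere of radius 1 km and density 1000 kg/m^3 has mass ρ · (4/3)πr^3 = 1000 × (4/3)π × (1000 m)^3 ≈ 4.19 × 10^12 kg, roughly a thousandth of the claimed 4.5237 × 10^15 kg, so the conjunction of the three precise-sounding claims is indeed mathematically false.)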
8komponisto11yUpvoted. Definitely can't back you on this one. Are you sure you're not just worried about poor calibration?
4wedrifid11yAnother upvote. That's crazy talk.
7prase11yI have read most of the responses and still am not sure whether to upvote or not. I doubt among several (possibly overlapping) interpretations of your statement. Could you tell to what extent the following interpretations really reflect what you think? 1. Confession of frequentism. Only sensible numerical probabilities are those related to frequencies, i.e. either frequencies of outcomes of repeated experiments, or probabilities derived from there. (Creative drawing of reference-class boundaries may be permitted.) Especially, prior probabilities are meaningless. 2. Any sensible numbers must be produced using procedures that ultimately don't include any numerical parameters (maybe except small integers like 2,3,4). Any number which isn't a result of such a procedure is labeled arbitrary, and therefore meaningless. (Observation and measurement, of course, do count as permitted procedures. Admittedly arbitrary steps, like choosing units of measurement, are also permitted.) 3. Degrees of confidence shall be expressed without reflexive thinking about them. Trying to establish a fixed scale of confidence levels (like impossible - very unlikely - unlikely - possible - likely - very likely - almost certain - certain), or actively trying to compare degrees of confidence in different beliefs is cheating, since such scales can be then converted into numbers using a non-numerical procedure. 4. The question of whether somebody is well calibrated is confused for some reason. Calibrating people has no sense. Although we may take the "almost certain" statements of a person and look at how often they are true, the resulting frequency has no sense for some reason. 5. Unlike #3, beliefs can be ordered or classified on some scale (possibly imprecisely), but assigning numerical values brings confusing connotations and should be avoided. Alternatively said, the meaning of subjective probabilities is pre
3Vladimir_M11yThat’s an excellent list of questions! It will help me greatly to systematize my thinking on the topic. Before replying to the specific items you list, perhaps I should first state the general position I’m coming from, which motivates me to get into discussions of this sort. Namely, it is my firm belief that when we look at the present state of human knowledge, one of the principal sources of confusion, nonsense, and pseudosicence is physics envy, which leads people in all sorts of fields to construct nonsensical edifices of numerology and then pretend, consciously or not, that they’ve reached some sort of exact scientific insight. Therefore, I believe that whenever one encounters people talking about numbers of any sort that look even slightly suspicious, they should be considered guilty until proven otherwise -- and this entire business with subjective probability estimates for common-sense beliefs doesn’t come even close to clearing that bar for me. Now to reply to your list. -------------------------------------------------------------------------------- My answer to (1) follows from my opinion about (2). In my view, a number that gives any information about the real world must ultimately refer, either directly or via some calculation, to something that can be measured or counted (at least in principle, perhaps using a thought-experiment). This doesn’t mean that all sensible numbers have to be derived from concrete empirical measurements; they can also follow from common-sense insight and generalization. For example, reading about Newton’s theory leads to the common-sense insight that it’s a very close approximation of reality under certain assumptions. Now, if we look at the gravity formula F=m1*m2/r^2 (in units set so that G=1), the number 2 in the denominator is not a product of any concrete measurement, but a generalization from common sense. Yet what makes it sensible is that it ultimately refers to measurable reality via a well-defined formula: me
4komponisto11yI'll point out here that reversed stupidity is not intelligence [http://lesswrong.com/lw/lw/reversed_stupidity_is_not_intelligence/], and that for every possible error, there is an opposite possible error. In my view, if someone's numbers are wrong, that should be dealt with on the object level (e.g. "0.001 is too low", with arguments for why), rather than retreating to the meta level of "using numbers caused you to err". The perspective I come from is wanting to avoid the opposite problem, where being vague about one's beliefs allows one to get away without subjecting them to rigorous scrutiny. (This, too, by the way, is a major hallmark of pseudoscience.)

But I'll note that even as we continue to argue under opposing rhetorical banners, our disagreement on the practical issue seems to have mostly evaporated; see here [http://lesswrong.com/lw/2sl/the_irrationality_game/2qtc?c=1] for instance. You also do admit in the end that fear of poor calibration is what underlies your discomfort with numerical probabilities.

As a theoretical matter, I disagree completely with the notion that probabilities are not legitimate or meaningful unless they're well calibrated. There is such a thing as a poorly calibrated Bayesian; it's a perfectly coherent concept. The Bayesian view of probabilities is that they refer specifically to degrees of belief, and not anything else. We would of course like the beliefs so represented to be as accurate as possible; but they may not be in practice. If my internal "Bayesian calculator" believes P(X) = 0.001, and X turns out to be true, I'm not made less wrong by having concealed the number, saying "I don't think X is true" instead. Less embarrassed, perhaps, but not less wrong.
3Vladimir_M11y[Continued from the parent comment.]

I have revised my view about this somewhat [http://lesswrong.com/lw/2sl/the_irrationality_game/2qtc] thanks to a shrewd comment by xv15. The use of unjustified numerical probabilities can sometimes be a useful figure of speech that will convey an intuitive feeling of certainty to other people more faithfully than verbal expressions. But the important thing to note here is that the numbers in such situations are mere figures of speech, i.e. expressions that exploit various idiosyncrasies of human language and thinking to transmit hard-to-convey intuitive points via non-literal meanings. It is not legitimate to use these numbers for any other purpose.

Otherwise, I agree. Except in the above-discussed cases, subjective probabilities extracted from common-sense reasoning are at best an unnecessary addition to arguments that would be just as valid and rigorous without them. At worst, they can lead to muddled and incorrect thinking based on a false impression of accuracy, rigor, and insight where there is none, and ultimately to numerological pseudoscience.

Also, we still don’t know whether and to what extent the various parts of our brains involved in common-sense reasoning approximate Bayesian networks. It may well be that some, or even all of them do, but the problem is that we cannot look at them and calculate the exact probabilities involved, and these are not available to introspection. The fallacy of radical Bayesianism that is often seen on LW is in the assumption that one can somehow work around this problem so as to meaningfully attach an explicit Bayesian procedure and a numerical probability to each judgment one makes.

Note also that even if my case turns out to be significantly weaker under scrutiny, it may still be a valid counterargument to the frequently voiced position that one can, and should, attach a numerical probability to every judgment one makes.
5jimrandomh11ySuppose you have two studies, each of which measures and gives a probability for the same thing. The first study has a small sample size and a not terribly rigorous experimental procedure; the second study has a large sample size and a more thorough procedure. When called on to make a decision, you would use the probability from the larger study. But if the large study hadn't been conducted, you wouldn't give up and act like you didn't have any probability at all; you'd use the one from the small study. You might have to do some extra sanity checks, and your results wouldn't be as reliable, but they'd still be better than if you didn't have a probability at all. A probability assigned by common-sense reasoning is to a probability that came from a small study, as a probability from a small study is to a probability from a large study. The quality of probabilities varies continuously; you get better probabilities by conducting better studies.

By saying that a probability based only on common-sense reasoning is meaningless, I think what you're really trying to do is set a minimum quality level. Since probabilities that're based on studies and calculation are generally better than probabilities that aren't, this is a useful heuristic. However, it is only that, a heuristic; probabilities based on common-sense reasoning can sometimes be quite good, and they are often the only information available anywhere (and they are, therefore, the best information). Not all common-sense-based probabilities are equal; if an expert thinks for an hour and then gives a probability, without doing any calculation, then that probability will be much better than if a layman thinks about it for thirty seconds. The best common-sense probabilities are better than the worst statistical-study probabilities; and besides, there usually aren't any relevant statistical calculations or studies to compare against.

I think what's confusing you is an intuition that if someone gives a probability, you…
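The small-study/large-study contrast above can be made concrete with a quick Beta-posterior calculation. A minimal sketch, with invented study numbers: both posteriors give a usable point estimate, but the larger study pins the value down much more tightly.

```python
# Probability "quality" varies with sample size: both studies yield a
# usable estimate, but the small one leaves a much wider plausible range.
# The study numbers here are hypothetical.
from scipy import stats

studies = {"small study": (7, 10), "large study": (620, 1000)}

for name, (successes, trials) in studies.items():
    # Uniform Beta(1,1) prior updated on the study's data.
    posterior = stats.beta(1 + successes, 1 + trials - successes)
    lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
    print(f"{name}: mean {posterior.mean():.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```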
1Vladimir_M11yAfter thinking about your comment, I think this observation comes close to the core of our disagreement:

> By saying that a probability based only on common-sense reasoning is meaningless, I think what you're really trying to do is set a minimum quality level.

Basically, yes. More specifically, the quality level I wish to set is that the numbers must give more useful information than mere verbal expressions of confidence. Otherwise, their use at best simply adds nothing useful, and at worst leads to fallacious reasoning encouraged by a false feeling of accuracy.

Now, there are several possible ways to object to my position:

* The first is to note that even if not meaningful mathematically, numbers can serve as communication-facilitating figures of speech. I have conceded this point [http://lesswrong.com/lw/2sl/the_irrationality_game/2qtc].
* The second way is to insist on an absolute principle that one should always attach numerical probabilities to one's beliefs. I haven't seen anything in this thread (or elsewhere) yet that would shake my belief in the fallaciousness of this position, or even provide any plausible-seeming argument in favor of it.
* The third way is to agree that sometimes attaching numerical probabilities to common-sense judgments makes no sense, but that, on the other hand, in some cases common-sense reasoning can produce numerical probabilities that will give more useful information than just fuzzy words. After the discussion with mattnewport and others, I agree that there are such cases, but I still maintain that these are rare exceptions. (In my original statement, I took an overly restrictive notion of "common sense"; I admit that in some cases, thinking that could reasonably be called that is indeed precise enough to produce meaningful numerical probabilities.)

So, to clarify, which exact position do you take in this regard? Or would your position require a fourth item to summarize fairly?

I agree that there is a non-zero amount of meaning, but the question is whether it exceeds what a simple verbal statement of confidence would convey.
1[anonymous]11yAs a matter of fact, I can think of one reason - a strong reason, in my view - that the consciously felt feeling of certainty is liable to be systematically and significantly exaggerated with respect to the true probability assignment made by the person's mental black box - the latter being something that we might in principle elicit through experimentation by putting the same subject through variants of a given scenario. (Think revealed probability assignment - similar to revealed preference as understood by the economists.)

The reason is that whole-hearted commitment is usually best whatever one chooses to do. Consider Buridan's ass, but with the following alterations. Instead of hay and water, to make it more symmetrical, suppose the ass has two buckets of water, one on either side, about equally distant. Suppose furthermore that his mental black box assigns a 51% probability to the proposition that the bucket on the right side is closer to him than the bucket on the left side.

The question, then, is what should the ass consciously feel about the probability that the bucket on the right is closest? I propose that given that his black box assigns a 51% probability to this, he should go to the bucket on the right. But given that he should go to the bucket on the right, he should go there without delay, without a hesitating step, because hesitation is merely a waste of time. But how can the ass go there without delay if he is consciously feeling that the probability is 51% that the bucket on the right is closest? That feeling will cause within him uncertainty and hesitation and will slow him down. Therefore it is best if the ass is consciously absolutely convinced that the bucket on the right is closest. This conscious feeling of certainty will speed his step and get him to the water quickly.

So it is best for Buridan's ass that his consciously felt degrees of certainty are great exaggerations of his mental black box's probability assignments. I think this genera…
2RichardKennaway11yI don't agree with this conflation of commitment and belief. I've never had to run from a predator, but when I run to catch a train, I am fully committed to catching the train, although I may be uncertain about whether I will succeed. In fact, the less time I have, the faster I must run, but the less likely I am to catch the train. That only affects my decision to run or not. On making the decision, belief and uncertainty are irrelevant, intention and action are everything. Maybe some people have to make themselves believe in an outcome they know to be uncertain, in order to achieve it, but that is just a psychological exercise, not a necessary part of action.
1[anonymous]11yThe question is not whether there are some examples of commitment which do not involve belief. The question is whether there are (some, many) examples where really, absolutely full commitment does involve belief. I think there are many.

Consider what commitment is. If someone says, "you don't seem fully committed to this", what sort of thing might have prompted him to say this? It's something like: he thinks you aren't doing everything you could possibly do to help this along. He thinks you are holding back. You might reply to this criticism, "I am not holding anything back. There is literally nothing more that I can do to further the probability of success, so there is no point in doing more - it would be an empty and possibly counterproductive gesture rather than an action that truly furthers the chance of success."

So the important question is: what can a creature do to further the probability of success? Let's look at you running to catch the train. You claim that believing that you will succeed would not further the success of your effort. Well, of course not! I could have told you that! If you believe that you will succeed, you can become complacent, which runs the risk of slowing you down. But if you believe that there is something chasing you, that is likely to speed you up. Your argument is essentially, "my full commitment didn't involve belief X, therefore you're wrong". But belief X is a belief that would have slowed you down. It would have reduced, not furthered, your chance of success. So of course your full commitment didn't involve belief X.

My point is that it is often the case that a certain consciously felt belief would increase a person's chances of success, given their chosen course of action. And in light of what commitment is - it is commitment of one's self and one's resources to furthering the probability of success - if a belief would further a chance of success, then full, really full commitment will include that belief. …
2prase11yThanks for the lengthy answer. Still, why is it impossible to calibrate people in general, looking at how often they get the answer right, and then using them as a device for measuring probabilities? If a person is right on approximately 80% of the issues he says he's "sure" about, then why not translate his next "sure" into an 80% probability? That doesn't seem arbitrary to me. There may be inconsistency between measurements using different people, but strictly speaking, thermometers and clocks also sometimes disagree.
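The person-as-measuring-device proposal above amounts to a simple procedure. A minimal sketch, with an invented track record, of how one might turn someone's "sure" statements into a number:

```python
# Calibration as measurement: log whether a person's "sure" claims came
# true, and read the frequency off the record. Track record is invented.
track_record = [True, True, False, True, True, True, False, True, True, True]

hit_rate = sum(track_record) / len(track_record)
print(f"Treat this person's next 'sure' as p = {hit_rate:.2f}")  # p = 0.80
```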
4xv1511yI tell you I believe X with 54% certainty. Who knows, that number could have been generated in a completely bogus way. But however I got here, this is where I am. There are bets about X that I will and won't take, and guess what, that's my cutoff probability right there. And by the way, now I have communicated to you where I am, in a way that does not further compound the error. Meaningless is a very strong word. In the face of such uncertainty, it could feel natural to take shelter in the idea of "inherent vagueness"...but this is reality, and we place our bets with real dollars and cents, and all the uncertainty in the world collapses to a number in the face of the expectation operator.
2Vladimir_M11ySo why stop there? If you can justify 54%, then why not go further and calculate a dozen or two more significant digits, and stand behind them all with unshaken resolve?
9wnoise11yYou can, of course. For most situations, the effort is not worth the trade-off. But making a distinction between 1%, 25%, 50%, 75%, and 99% often is. You can (at least formally) put error bars on the quantities that go into a Bayesian calculation. The problem, of course, is that error bars are shorthand for a distribution of possible values, and it's not obvious what a distribution of probabilities means or should mean. Everything operational about probability functions is fully captured by their full set of expectation values, so this is no different than just immediately taking the mean, right? Well, no. The uncertainties are a higher-level model that not only makes predictions, but also calibrates how much these predictions are likely to move given new data. It seems to me that this is somewhat related to the problem of logical uncertainty [http://lesswrong.com/lw/ms/is_reality_ugly/].
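The point about distributions over probabilities can be illustrated with two Beta distributions that share the same mean but respond very differently to one new observation; the parameters below are of course arbitrary:

```python
# Two agents both say "50%", but one holds the number loosely and one
# tightly; a single observed success moves them by very different amounts.
from scipy import stats

loose = stats.beta(1, 1)      # broad uncertainty about the underlying rate
tight = stats.beta(100, 100)  # nearly certain the rate is 0.5

print(loose.mean(), tight.mean())   # 0.5 0.5 -- identical point estimates
print((1 + 1) / (1 + 1 + 1))        # after one success: Beta(2,1) mean ~ 0.67
print((100 + 1) / (100 + 1 + 100))  # after one success: Beta(101,100) mean ~ 0.50
```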
7xv1511yAgain, meaningless is a very strong word, and it does not make your case easy. You seem to be suggesting that NO number, however imprecise, has any place here, and so you do not get to refute me by saying that I have to embrace arbitrary precision. In any case, if you offer me some bets with more significant digits in the odds, my choices will reveal the cutoff to more significant digits. Wherever it may be, there will still be some bets I will and won't take, and the number reflects that, which means it carries very real meaning. Now, maybe I will hold the line at 54% exactly, not feeling any gain to thinking harder about the cutoff (as it gets harder AND less important to nail down further digits). Heck, maybe on some other issue I only care to go out to the nearest 10%. But so what? There are plenty of cases where I know my common sense belief probability to within 10%. That suggests such an estimate is not meaningless.
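The betting argument above is easy to operationalize: offer tickets that pay $1 if X is true at a range of prices, and the price at which the buyer stops buying brackets their probability. A sketch, using the 54% figure from the earlier comment and invented prices:

```python
# Revealed probability via bets: a ticket pays $1 if X is true, so an
# agent with belief p buys whenever the price is below p. Prices invented.
def buys(price, belief=0.54):
    return belief > price  # expected value of the ticket is belief * $1

for price in [0.50, 0.53, 0.55, 0.60]:
    print(f"ticket at ${price:.2f}: {'buy' if buys(price) else 'decline'}")
```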
2Vladimir_M11yxv15: To be precise, I wrote "meaningless, except perhaps as a vague figure of speech." I agree that the claim would be too strong without that qualification, but I do believe that "vague figure of speech" is a fair summary of the meaningfulness that is to be found there. (Note also that the claim specifically applies to "common-sense conclusions and beliefs," not things where there is a valid basis for employing mathematical models that yield numerical probabilities.)

You seem to be saying that since you perceive this number as meaningful, you will be willing to act on it, and this by itself renders it meaningful, since it serves as a guide for your actions. If we define "meaningful" to cover this case, then I agree with you, and this qualification should be added to my above statement. But the sense in which I used the term originally doesn't cover this case.
2xv1511yFair. Let me be precise too. I read your original statement as saying that numbers will never add meaning beyond what a vague figure of speech would, i.e. if you say "I strongly believe this" you cannot make your position more clear by attaching a number. That I disagree with. To me it seems clear that:

i) "Common-sense conclusions and beliefs" are held with varying levels of precision.

ii) Often even these beliefs are held with a level of precision that can be best described with a number. (Best = most succinctly, least misinterpretable, etc. Indeed, it seems to me that sometimes "best" could be replaced with "only": you will never get people to understand 60% by saying "I reasonably strongly believe", and yet your belief may be demonstrably closer to 60 than to 50 or 70.)

I don't think your statement is defensible from a normal definition of "common-sense conclusions," but you may have internally defined it in such a way as to make your statement true, with a (I think) relatively narrow sense of "meaningfulness" also in mind. For instance, if you ignore the role of numbers in the transmission of belief from one party to the next, you are a big step closer to being correct.
2Vladimir_M11yxv15: You have a very good point here. For example, a dialog like this could result in a real exchange of useful information:

A: "I think this project will probably fail."
B: "So, you mean you're, like, 90% sure it will fail?"
A: "Um... not really, more like 80%."

I can imagine a genuine meeting of minds here, where B now has a very good idea of how confident A feels about his prediction. The numbers are still used as mere figures of speech, but "vague" is not a correct way to describe them, since the information has been transmitted in a more precise way than if A had just used verbal qualifiers. So, I agree that "vague" should probably be removed from my original claim.
7HughRistik11yOn point #2, I agree with you. On point #1, I had the same reaction as xv15. Your example conversation is exactly how I would defend the use of numerical probabilities in conversation.

I think you may have confused people with the phrase "vague figure of speech," which was itself vague. Vague relative to what? "No idea / kinda sure / pretty sure / very sure", the ways that people generally communicate about probability, are much worse. You can throw in other terms like "I suspect" and "absolutely certain" and "very very sure", but it's not even clear how these expressions of belief match up with each other. In common speech, we really only have about 3-5 degrees of probability. That's just not enough gradations.

In contrast, when expressing a percentage probability, people only tend to use multiples of 10, certain multiples of 5, 0.01%, 1%, 2%, 98%, 99% and 99.99%. If people use figures like 87%, or any decimal places other than the ones previously mentioned, it's usually because they are deliberately being ridiculous. (And it's no coincidence that your example uses multiples of 10.)

I agree with you that feelings of uncertainty are fuzzy, but they aren't so fuzzy that we can get by with merely 3-5 gradations in all sorts of conversations. On some subjects, our communication becomes more precise when we have 10-20 gradations. Yet there are diminishing returns on more degrees of communicable certainty (due to reasons you correctly describe), so going any higher resolution than 10-20 degrees isn't useful for anything except jokes.

Yes. Gaining the 10-20 gradations that numbers allow when they are typically used does make conversations relatively more precise than just tacking on "very very" to your statement of certainty. It's similar to the infamous 1-10 rating system for people's attractiveness. Despite various reasons that rating people with numbers is distasteful, this ranking system persists because, in my view, people find it useful for communicating subjective…
4wedrifid11yOr, you could slide up your arbitrary and fallacious slippery slope and end up with Shultz.
3torekp11yUpvoted, because I think you're only probably right. And you not only stole my thunder, you made it more thunderous :(
2[anonymous]11yDownvote if you agree with something, upvote if you disagree. EDIT: I missed the word only. I just read "I think you're probably right." My mistake.
3magfrump11yUpvote for disagreements of overconfidence OR underconfidence.
2orthonormal11yUm, so when Nate Silver [http://fivethirtyeight.blogs.nytimes.com/2010/10/01/house-forecast-as-october-dawns-novembers-math-still-strong-for-g-o-p/] tells us he's calculated odds of 2 in 3 that Republicans will control the house after the election, this number should be discarded as noise because it's a common-sense belief that the Republicans will gain that many seats?
2[anonymous]11yIn your linked comment you write: […] Do you not think that this feeling response can be trained through calibration exercises and by making and checking predictions? I have not done this myself yet, but this is how I've thought others became able to assign numerical probabilities with confidence.

The many worlds interpretation of Quantum Mechanics is false in the strong sense that the correct theory of everything will incorporate wave-function collapse as a natural part of itself. ~40%

I have met multiple people who are capable of telepathically transmitting mystical experiences to people who are capable of receiving them. 90%.

2[anonymous]9yWow, telepathy is a pretty big thing to discuss. Sure there isn't a simpler hypothesis? Upvoted.

Religion is a net positive force in society. Or to put it another way, religious memes (particularly ones that have survived for a long time) are more symbiotic than parasitic. Probably true (70%).

7orthonormal11yIf you changed "is" to "has been", I'd downvote you for agreement. But as stated, I'm upvoting you because I put it at about 10%.
3Eugine_Nier11yI'd be curious to know when you think the crossover point was.

Around the time of J. S. Mill, I think. The Industrial Revolution helped crystallize an elite political and academic movement which had the germs of scientific and quantitative thinking; but this movement has been far too busy fighting for its life each time it conflicts with religious mores, instead of being able to examine and improve itself. It would have developed far more productively by now if atheism had really caught on in Victorian England.

Anyway, I'm not as confident of the above as I am that we've passed the crossover point now. (Aside from the obvious political effects, the persistence of religion creates mental antibodies in atheists that make them extremely wary of anything reminiscent of some aspect of religion; this too is a source of bias that wouldn't exist were it not for religion's ubiquity.)

4Perplexed11yI think this is ambiguous. It might be interpreted as:

* Christianity is good for its believers - they are better off to believe than to be atheist.
* Christianity is good for Christendom - it is a positive force for majority-Christian societies, as compared to if those societies were mostly atheist.
* Christianity makes the world a better place, as compared to if all those people were non-believers in any religion.

Which of these do you mean?
3Jayson_Virissimo11yI think a better question is "would the world be a better place if people who are currently Christian adopted their next most likely alternative belief system?". I'm going to go out on a limb here and speculate that if the median Christian lost his faith he wouldn't become a rational-empiricist.
3Eugine_Nier11yI'd change this one to:

* Christianity is good for most of its believers - they are better off to believe than to be atheist.

~62% ~69% ~58%

Edit: In case it wasn't clear, the 70% refers to the disjunction of the above 3.

There will never be a singularity. A singularity is infinitely far in the future in "perceptual time" measured in bits learned by intelligent agents. But evolution is a chaotic process whose only attractor is a dead planet. Therefore there is a 100% chance that the extinction of all life (created by us or not) will happen first. (95%).

5wedrifid10yHow do the votes work in this game again? "Upvote for insane", right?

Unless you are familiar with the work of a German patent attorney named Gunter Wachtershauser, just about everything you have read about the origin of life on earth is wrong. More specifically, there was no "prebiotic soup" providing organic nutrient molecules to the first cells or proto-cells, there was no RNA world in which self-replicating molecules evolved into cells, the Miller experiment is a red herring and the chemical processes it deals with never happened on earth until Miller came along. Life didn't invent proteins for a long time after life first originated. 500 million years or so. About as long as the time from the "Cambrian explosion" to us.

I'm not saying Wachtershauser got it all right. But I am saying that everyone else except people inspired by Wachtershauser definitely got it all wrong. (70%)

Meh. What are the chances of some Germanic guy sitting around looking at patents all day coming up with a theory that revolutionizes some field of science?

4JohannesDahlstrom11yYou make the "metabolism first" school of thought sound like a minority contrarian position to the mainstream "genes first" hypothesis. I was under the impression that they were simply competing hypotheses with the jury being still out on the big question. That's how they presented the issue in my astrobiology class, anyway.
2Perplexed11yIt was a minority, contrarian position just a decade ago. But Wachtershauser's position is not just "metabolism first". It is also "strictly autotrophic" and "lipid first". So I think it is still fair to call it a minority opinion.
2wedrifid11yDownvoted because it approximately matches what I (literally) covered in Biology 101 a month ago. (70% seems right because to be perfectly honest I didn't pay that much attention and the Gunter guy may or may not have been relevant.)

Eating lots of bacon fat and sour cream can reverse heart disease. Very confident (>95%).

3JGWeissman11yI doubt you are following this rule.
4MrShaggy11yI was worried people would think that, but if I posted links to present evidence, I ran the risk of convincing them so they wouldn't vote it up! All I've eaten in the past three weeks is: pork belly, butter, egg yolks (and a few whites), cheese, sour cream (like a tub every three days), ground beef, bacon fat (saved from cooking bacon) and such. Now, that's no proof about the medical claim, but I hope it's an indication that I'm not just bullshitting. But for a few links:

* http://www.ncbi.nlm.nih.gov/pubmed/19179058 - on prevention of heart disease in humans (the K2 in question is virtually found only in animal fats and meats; see http://www.westonaprice.org/abcs-of-nutrition/175-x-factor-is-vitamin-k2.html#fig4)
* http://wholehealthsource.blogspot.com/2008/11/can-vitamin-k2-reverse-arterial.html - shows reversal in rat studies from K2
* http://trackyourplaque.com/ - a clinic that uses K2 among other things to reverse heart disease

Note that I am not trying to construct a rational argument but to convince people that I do hold this belief. I do think a rational argument can be constructed, but this is not it.
4jefftk10yThis was about a year ago: do you still hold this belief? Has eating like you described worked out?
1MrShaggy10yNot just hold the belief but eat that way even more consistently (more butter and less sour cream, just because tastes change, but same basic principles). I'm young and didn't have any obvious signs of heart disease personally, so I can't say it "worked out" for me in that literal, narrow sense, but I feel better, more mentally clear, etc. (I know that's kinda whatever as evidence; just saying since you asked.)

Someone else recently posted their success with butter lowering their measurement of arterial plaque: "the second score was better (lower) than the first score. The woman in charge of the testing center said this was very rare — about 1 time in 100. The usual annual increase is about 20 percent." (http://blog.sethroberts.net/2011/08/04/how-rare-my-heart-scan-improvement/) (Note: I disagree with the poster's reasoning methods in general, just noting his score change.)

There was a recent health symposium that discussed this idea and related ones: http://vimeo.com/ancestralhealthsymposium/videos/page:1/sort:newest. For those specifically related to heart health, these are most of them: http://vimeo.com/ancestralhealthsymposium/videos/search:heart/sort:newest
2RomanDavis11yDownvoted. I've seen the evidence, too.
3MrShaggy11yDownvoted means you agree (on this thread), correct? If so, I've wanted to see a post on rationality and nutrition for a while (on the benefits of high-animal fat diet for health and the rationality lessons behind why so many demonize that and so few know it).

There is already a vast surplus of unused intelligence in the human race, so working on generalized AI is a waste of time (90%)

Edit: "waste of time" is careless, wrong and a bit rude. I just mean a working generalized AI would not make a major positive impact on humankind's well-being. The research would be fun, so it's not wasted time. Level of disagreement should be higher too - say ~95%.

I have eight computers here with 200 MHz processors and 256MB of RAM each. Thus, it would not benefit me to acquire a computer with a 1.6GHz processor and 2GB of RAM.

(I agree with your premise, but not your conclusion.)

2dilaudid11yTo directly address your point - what I mean is if you have 1 computer that you never use, with a 200MHz processor, I'd think twice about buying a 1.6GHz computer, especially if the 200MHz machine is suffering from depression due to its feeling of low status and worthlessness. I probably stole from The Economist [http://www.economist.com/node/16990700] too.
5RichardKennaway11yDid you have this in mind? Cognitive Surplus [http://www.guardian.co.uk/books/2010/jun/27/cognitive-surplus-clay-shirky-book-review].
1dilaudid11yYes - thank you for the cite.

Bioware made the companion character Anders in Dragon Age 2 specifically to encourage Anders Breivik to commit his massacre, as part of a Manchurian Candidate plot by an unknown faction that attempts to control world affairs. That faction might be somehow involved with the Simulation that we live in, or attempting to subvert it with something that looks like traditional sympathetic magic. See for yourself. (I'm not joking, I'm stunned by the deep and incredibly uncanny resemblance.)

If we replaced "mystical experiences" with something of less religious connotations like "raging hard-ons", you wouldn't think that 'souls brushing up against each other' is the most natural explanation -- you'd instead conclude that some aspect of psychology/biochemistry/pheromones is causing you to have a more intense reaction towards certain people and vice-versa.

From a physicalist perspective the brain is as much an organ as the penis, and "mystical experiences" as much a physical event in the brain as erections are a physical event in the penis.

Life on earth was seeded, accidentally or on purpose, from outer space.

1magfrump11yNo probability estimate. I assign this hypothesis some probability, but unless you list yours I can only guess as to whether it is similar to mine. Mine is quite low, however, so upvoted.

Let's see if we can try to hug the query here. What exactly is the mistake I'm making when I say that I believe such-and-such is true with probability 0.001?

Is it that I'm not likely to actually be right 999 times out of 1000 occasions when I say this? If so, then you're (merely) worried about my calibration, not about the fundamental correspondence between beliefs and probabilities.

Or is it, as you seem now to be suggesting, a question of attire: no one has any business speaking "numerically" unless they're (metaphorically speaking) "wearing a lab coat"? That is, using numbers is a privilege reserved for scientists who've done specific kinds of calculations?

It seems to me that the contrast you are positing between "numerical" statements and other indications of degree is illusory. The only difference is that numbers permit an arbitrarily high level of precision; their use doesn't automatically imply a particular level. Even in the context of scientific calculations, the numbers involved are subject to some particular level of uncertainty. When a scientist makes a calculation to 15 decimal places, they shouldn't be interpreted as distinguishing between…

4Mass_Driver11yLove the logic and the scale, although I think Vladimir_M pokes some important holes specifically at the 10^(-2) to 10^(-3) level. May I suggest "un-planned-for errors"? In my experience, it is not useful to plan for contingencies with about a 1/300 chance of happening per trial. For example, on any given day of the year, my favorite cafe might be closed due to the owner's illness, but I do not call the cafe first to confirm that it is open each time I go there. At any given time, one of my 300-ish acquaintances is probably nursing a grudge against me, but I do not bother to open each conversation with "Hi, do you still like me today?"

When, as inevitably happens, I run into a closed cafe or a hostile friend, I usually stop short for a bit; my planning mechanism reports a bug; there is no 'action string' cached for that situation, for the simple reason that I was not expecting the situation, because I did not plan for the situation, because that is how rare it is. Nevertheless, I am not 'surprised' -- I know at some level that things that happen about 1/300 times are sort of prone to happening once in a while. On the other hand, I would be 'surprised' if my favorite cafe had been burned to the ground or if my erstwhile buddy had taken a permanent vow of silence. I expect that these things will never happen to me, and so if they happen I go and double-check my calculations and assumptions, because it seems equally likely that I am wrong about my assumptions and that the 1/30,000 event would actually occur.

Anyway, the point is that a category 3 event is an event that makes you shut up for a moment but doesn't make you reexamine any core beliefs. If you hold most of your core beliefs with probability > .993 then you are almost certainly overconfident in your core beliefs. I'm not talking about stuff like "my senses offer moderately reliable evidence" or "F(g) = GMm/(r^2)"; I'm talking about stuff like "Solomonoff induction predicts that hyperintelligent AIs will em…
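The intuition above that 1/300 events are "prone to happening once in a while" checks out arithmetically; over roughly a year of daily trials, at least one occurrence is more likely than not:

```python
# Chance of at least one 1/300 event in 300 independent daily trials.
p, trials = 1 / 300, 300
print(1 - (1 - p) ** trials)  # ~0.63, so "not surprising" is about right
```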
4soreff11y10^-3 is roughly the probability that I try to start my car and it won't start because the battery has gone bad. Is the scale intended only for questions one asks once per lifetime? There are lots of questions that one asks once a day, hence my car example.
1komponisto11yThat is precisely why I added the phrase "on an important question". It was intended to rule out exactly those sorts of things. The intended reference class (for me) consists of matters like the Amanda Knox case. But if I got into the habit of judging similar cases every day, that wouldn't work either. Think "questions I might write a LW post about".
3Vladimir_M11ykomponisto: It's not that I'm worried about your poor calibration in some particular instance; rather, I believe that accurate calibration in this sense is impossible in practice, except in some very special cases [http://lesswrong.com/lw/2sl/the_irrationality_game/2qgm]. (To give some sense of the problem: if such calibration were possible, then why not calibrate yourself to generate accurate probabilities about stock market movements and bet on them? It would be an easy and foolproof way to get rich. But of course there is no way you can make your numbers match reality, not in this problem, nor in most other ones.)

The way you put it, "scientists" sounds too exclusive. Carpenters, accountants, cashiers, etc. also use numbers and numerical calculations in valid ways. However, their use of numbers can ultimately be scrutinized and justified in similar ways as the scientific use of numbers (even if they themselves wouldn't be up to that task), so with that qualification, my answer would be yes. (And unfortunately, in practice it's not at all rare to see people using numbers in ways that are fundamentally unsound, which sometimes gives rise to whole edifices of pseudoscience. I discussed one such example from economics in this thread [http://lesswrong.com/lw/2cp/open_thread_june_2010_part_3/25od].)

Now, you say:

> When a scientist makes a calculation to 15 decimal places, they shouldn't be interpreted as distinguishing between…

However, when a scientist makes a calculation with 15 digits of precision, or even just one, he must be able to rigorously justify this degree of precision by pointing to observations that are incompatible with the hypothesis that any of these digits, except the last one, is different. (Or in the case of mathematical constants such as pi and e, to proofs of the formulas used to calculate them.) This disclaimer is implicit in any scientific use of numbers. (Assuming valid science is being done, of course.)

And this is where, in my opinion, you construct an invalid analogy:

> […]

But these disclaimers are not at all the same! The scientist's -- or…
4Mass_Driver11yI think this statement reflects either an ignorance of finance or the Dark Arts. First, the stock market is the single worst place to try to test out ideas about probabilities, because so many other people are already trying to predict the market, and so much wealth is at stake. Other people's predictions will remove most of the potential for arbitrage (reducing 'signal'), and the insider trading and other forms of cheating generated by the potential for quick wealth will further distort any scientifically detectable trends in the market (increasing 'noise'). Because investments in the stock market must be made in relatively large quantities to avoid losing your money through trading commissions, a causal theory tester is likely to run out of money long before hitting a good payoff even if he or she is already well calibrated.

Of course, in real life, people might be moderately calibrated. The fact that one is capable of making some predictions with some accuracy and precision is not a guarantee that one will be able to reliably and detectably beat even a thin market like a political prediction clearinghouse. Nevertheless, some information is often better than none: I am (rationally) much more concerned about automobile accidents than fires, despite the fact that I know two people who have died in fires and none who have died in automobile accidents. I know this based on my inferences from published statistics, the reliability of which I make further inferences about. I am quite confident (p ~ .95) that it is sensible to drive defensively (at great cost in effort and time) while essentially ignoring fire safety (even though checking a fire extinguisher or smoke detector might take minimal effort).

I don't play the stock market, though. I'm not that well calibrated, and probably nobody is without access to inside info of one kind or another.

Conditional on this universe being a simulation, the universe doing the simulating has laws vastly different from our own. For example, it might contain more than 3 extended spatial dimensions, or bear a similar relation to our universe as our universe does to Second Life. 99.999%

8wedrifid11yUpvoted for excessive use of nines. :) (i.e. gross overconfidence.)
3Snowyowl11yUpvoted for disagreement. The most detailed simulations our current technology is used to create (namely, large networks of computers operating in parallel) are created for research purposes, to understand our own universe better. Galaxy/star formation, protein folding, etc. are fields where we understand enough to make a simulation but not enough that such a simulation is without value. A lot of our video games have three spatial dimensions, one temporal one, and roughly Newtonian physics. Even Second Life (which you named in your post) is designed to resemble our universe in certain aspects. Basically, I fail to see why anyone would create such a detailed simulation if it bore absolutely no resemblance to reality. Some small differences, yes (I bet quantum mechanics works differently), but I would give a ~50% chance that, conditional on our universe being a simulation, the parent universe has 3 spatial dimensions, one temporal dimension, matter and antimatter, and something that approximates to General Relativity.
3NancyLebovitz11yThis is much less than obvious-- if the parent universe has sufficient resources, it's entirely plausible that it would include detailed simulations for fun-- art or gaming or some costly motivation that we don't have.
2bogdanb10yI have seen simulators of Conway's Game of Life (or similar) that contain very complex things, including an actual Turing machine. I could see someone creating a simulator for CGL that simulates a Turing machine that simulates a universe like ours, at least as a proof of concept. With ridiculous amounts of computation available, I'm quite sure they'd run the inner universe for a few billion years. If by accident a civilization arose in the bottom universe and they found some way of "looking above", they'd find a CGL universe before finding one similar to theirs.
1[anonymous]9yI'm supposed to downvote if I think the probability of that is >= 99.999% and upvote otherwise? I'm upvoting, but I still think the probability of that is > 90%.
2Salivanth9yArmy1987: Not sure what the rules are for comments replying to the original, but hell. Voted down for agreement.
1Mass_Driver11yI'd be with you with that much confidence if the proposition were "the top layer of reality has laws vastly different from our own." One level up, there's surely at least an 0.1% chance that Snowyowl is right.

Nothing that modern scientists are trained to regard as acceptable scientific evidence can ever provide convincing support for any theory which accurately and satisfactorily explains the nature of consciousness.

1RobinZ11yConfidence level?
3dfranke11yLet's say 65%.
12[anonymous]9y

I believe that the universe exists tautologically as a mathematical entity and that from the complete mathematical description of the universe every physical law can be derived, essentially erasing the distinction of map and territory. Roughly akin to the Tegmark 4 hypothesis, and I have some very intuitively obvious arguments for this which I will post as a top-level article at some point. Virtual certainty (99.9%).

3Zetetic9yThis idea has been implied before [http://lesswrong.com/lw/1zt/the_map_that_is_the_territory] and I don't think it holds water. That this has come up more than once makes me think that there is some tendency to conflate the map/territory distinction with some kind of more general philosophical statement, though I'm not sure what. In any event, the Tegmark level 4 hypothesis is orthogonal to the map/territory distinction. The map/territory distinction [http://wiki.lesswrong.com/wiki/The_map_is_not_the_territory] just provides a nice way of framing a problem we already know exists [http://wiki.lesswrong.com/wiki/Mind_projection_fallacy].

In more detail: Firstly, even if you take some sort of Platonic view where we have access to all the math, you still have to properly calibrate your map to figure out what part of the territory you're in. In this case you could think of calibrating your map as applying an appropriate automorphism [http://en.wikipedia.org/wiki/Automorphism], so the map/territory distinction is not dissolved.

Second, the first view is wrong, because human brains do not contain or have access to anything approaching a complete mathematical description of the level 4 multiverse. At best a brain will contain a mapping of a very small part of the territory in pretty good detail, and also a relatively vague mapping that is much broader. Brains are not logically omniscient [http://www.people.fas.harvard.edu/~seberry/philofmath/omniscience/]; even given a complete mathematical description of the universe, the derivations are not all going to be accessible to us. So the map/territory distinction is not dissolved, and in particular you don't somehow overcome the mind projection fallacy, which is a practical (rather than philosophical) issue that cannot be explained away by adopting a shiny new ontological perspective.

I don't feel like arguing about priors - good evidence will overwhelm ordinary priors in many circumstances - but in a story like the one he told, each of the following needs to be demonstrated:

  1. God exists.
  2. God created the universe.
  3. God prefers not to violate natural laws.
  4. The stories about people seeing angels are based on real events.
  5. The angels seen during these events were actually just robots.
  6. The angels seen during these events were wielding laser turrets.

Claims 4-6 are historical, and at best it is difficult to establish 99% confidence in that field for anything prior to - I think - the twentieth century. I don't even think people have 99% confidence in the current best-guess location of the podium where the Gettysburg Address was delivered. Even spotting him 1-3 the claim is overconfident, and that was what I meant when I gave my response.

But yes - I'm not good at arguing.

Talent is mostly a result of hard work, passion and sheer dumb luck. It's more nurture than nature (genes). People who are called born-geniuses more often than not had better access to facilities at the right age while their neural connections were still forming. (~90%)

Update: OK. It seems I have to substantiate. Take the case of Barack Obama. Nobody would've expected a black guy to become the US President 50 years ago. Or take the case of Bill Gates, Bill Joy or Steve Jobs. They just happened to have the right kind of technological exposure at an early age and were ready when the technology boom arrived. Or take the case of mathematicians like Fibonacci, Cardano, the Bernoulli brothers. They were smart. But there were other smart mathematicians as well. What separates them is the passion and the hard work and the time when they lived and did the work. A century earlier, they would've died in obscurity after being tried and tortured for blasphemy. Take Mozart. He didn't start making beautiful original music until he was twenty-one, by which time he had enough musical exposure that there was no one to match him. Take Darwin and think what he would have become if he hadn't boarded the Beagle. He would have been some pastor studying bugs and would've died in obscurity.

In short a genius is made not born. I'm not denying that good genes would help you with memory and learning, but it takes more than genes to be a genius.

I was with you right up until that second sentence. And then I thought about my sister who was speaking in full sentences by 1 and had taught herself to read by 3.

7Will_Sawin11yThe level of genius of geniuses, especially the non-hardworking ones, is too high and rare to be explained entirely by this.
1magfrump11yThough I should talk to others about this as it is testable, I have seen evidence of affective intelligence spirals. Faith in oneself and hard work lead to success and a work ethic, making it easier to have faith and keep working. I would expect this hypothesis (conditional on affective genius cycles which are more readily testable) to predict MORE "geniuses of geniuses," not fewer.
3Risto_Saarelma11yCould this be more precisely rephrased as, "for a majority of people, say 80%, there would have been a detailed sequence of life experiences that are not extraordinarily improbable or greatly unlike what you would expect to have in a 20th-century First World country, which would have resulted in them becoming what is regarded as a genius by adulthood"?
2Perplexed11yUpvoting, even though I agree with the first sentence. But I disagree with the rest because I'm pretty sure that hard work and passion have a strong genetic component as well.

Note that it is in general very hard to tell if the artistic and cultural contributions associated with religion are actually due to religion. In highly religious cultures that's often the only form of expression that one is able to get funding for. Dan Barker wrote an essay about this showing how a lot of classical composers were agnostics, atheists or deists who wrote music with religious overtones mainly because that was their only option.

Valuable -- likely vital -- cooperative know-how for hugely changing the world has been LOST to the sands of time. (94%) Likely examples include the Manhattan Project, the Apollo program, genuinely uplifting colonialism, building the pyramids without epic hardships or complaints.

Much of this know-how was even widely applied during the lifetimes of some now living. Our simple loss of such important knowledge flies in the face of deep assumptions in the water we all grew up in: progressivism, that knowledge is always increasing, that at least the best First World cultures since the Renaissance have always moved forward.

There are world-changing status-move tricks seen in recent history that no one of consequence uses today, and not because they wouldn't work. (88%) Top-of-the-First-World moderns should unearth, update & reapply lost status moves for managing much of the world. (74%) Wealthy, powerful rationalists should WIN! Just as other First Worlders should not retard FAI, so the developing world should not fester, struggle, agitate in ways that seriously increase existential risks.

1Multiheaded9yI don't understand. By what plausible mechanism could such a disastrous loss of knowledge happen specifically NOW?
1NancyLebovitz9yThe good news is that some version of this knowledge keeps getting rediscovered. The bad news is that the knowledge seems to be mostly tacit and (so far) unteachable.

The most advanced computer that it is possible to build with the matter and energy budget of Earth, would not be capable of simulating a billion humans and their environment, such that they would be unable to distinguish their life from reality (20%). It would not be capable of adding any significant measure to their experience, given MWI.(80%, which is obscenely high for an assertion of impossibility about which we have only speculation). Any superintelligent AIs which the future holds will spend a small fraction of their cycles on non-heuristic (self-conscious) simulation of intelligent life.(Almost meaningless without a lot of defining the measure, but ignoring that, I'll go with 60%)

NOT FOR SCORING: I have similarly weakly-skeptical views about cryonics, the imminence and speed of development/self-development of AI, how much longer Moore's law will continue, and other topics in the vaguely "singularitarian" cluster. Most of these views are probably not as out of the LW mainstream as it would appear, so I doubt I'd get more than a dozen or so karma out of any of them.

I also think that there are people cheating here, getting loads of karma for saying plausibly silly things on purpose. I didn't use this as my contrarian belief, because I suspect most LWers would agree that there are at least some cheaters among the top comments here.

3MattMahoney10yI disagree, because a simulation could program you to believe the world was real and believe it was more complex than it actually was. Upvoted for underconfidence.

Predicated on MWI being correct, and Quantum Immortality being true:

It is most advantageous for any individual (although not necessarily for society) to take as many high-risk high-reward opportunities as possible as long as the result of failure is likely to be death. 90%

3magfrump11yPhrased more precisely: it is most advantageous for the quantum immortalist to attempt highly unlikely, high-reward activities, after making a stern precommitment to commit suicide in a fast and decisive way (decapitation?) if they don't work out. This seems like a great reason not to trust quantum immortality.
2Risto_Saarelma11yNot sure how I should vote this. Predicated on quantum immortality being true, the assertion seems almost tautological, so that'd be a downvote. The main question to me is whether quantum immortality should be taken seriously to begin with. However, a different assertion that says that in case MWI is correct, you should assume quantum immortality works and try to give yourself anthropic superpowers by pointing a gun to your head [http://www.youtube.com/watch?v=tzHpCOPsVbo#t=0m50s] would make for an interesting rationality game point.
1wedrifid11yWhich way do I vote things that aren't so much wrong as they are fundamentally confused? Thinking about QI as something about which to ask 'true or false?' implies not having fully grasped the implications of (MWI) quantum mechanics on preference functions. At the very least the question would need to be changed to 'desired or undesired'.
1Nisan11ySo, the question to ask is whether quantum immortality ought to be reflected in our preferences, right? It's clear that evolution would not have given humans a set of preferences that anticipates quantum immortality. The only sense in which I can imagine it to be "true" is if it turns out that there's an argument that can convince a sufficiently rational person that they ought to anticipate quantum immortality when making decisions. (Note: I have endorsed [http://lesswrong.com/lw/2e0/mwi_copies_and_probability/276h] the related idea of quantum suicide in the past, but now I am highly skeptical.)

The vast majority of members of both houses of the US congress are decent, non-corrupt people of above average intelligence honestly trying to do good by their country. (90%)

Far too confident.

The typical Congressperson is decent rather than cruel, honest rather than corrupt, smart rather than dumb, and dutiful rather than selfish, but the conjunction of all four positive traits probably only occurs in about 60% of Congresspeople -- most politicians have some kind of major character flaw.

I'd put the odds that "the vast majority" of Congresspeople pass all four tests, operationalized as, say, 88% of Congresspeople, at less than 10%.

All right, I'll try to mount a defence.

I would be modestly surprised if any member of Congress has an IQ below 100. You just need to have a bit of smarts to get elected. Even if the seat you want is safe, i.e. repeatedly won by the same party, you likely have to win a competitive primary. To win elections you need to make speeches, answer questions, participate in debates and so on. It's hard. And you'll have opponents that are ready to pounce on every mistake you make and try to make a big deal out of it. Even smart people make lots of mistakes and say stupid things when put on the spot. I doubt a person of below average intelligence even has a chance.

Even George W. Bush, who's said and done a lot of stupid things and is often considered dim for a politician, likely has an IQ above 120.

As for decency and honesty, a useful rule of thumb is that most people are good. Crooked people are certainly a significant minority but most of them don't hide their crookedness very well. And you can't be visibly crooked and still win elections. Your opponents are motivated to dig up the dirt on you.

As for honestly trying to serve their country, I admit that this is a bit tricky. Congresspeople certainly…

Conflating people with politicians is an egregious category error.

3magfrump11yIf by not-corrupt you meant "would consciously and earnestly object to being offered money for the explicit purpose of pursuing a policy goal that they perceived as not in the favor of their electorate or the country" and by "above-average intelligence" you meant "IQ at least 101" then I would downvote for agreement. But if you meant "tries to assure that their actions are in the favor of their constituents and country, and monitors their information diet to this end" and "IQ above 110 and conscientiousness above average" then I maintain my upvote. When I think of not-corrupt I think of someone who takes care not to betray people, rather than someone who does not explicitly betray them. When I think "above average intelligence" I think of someone who regularly behaves more intelligently than most, not someone who happens to be just to the right of the bell curve.
2Apprentice11yPoint taken. And I concede that there are probably some congressmen with 100<IQ<110. But my larger point, which Vladimir made a bit more explicit, is that contrary to popular belief the problems of the USA are not caused by politicians being unusually stupid or unusually venal. I think a very good case can be made that politicians are less stupid and less venal than typical people - the problems are caused by something else.
1magfrump11yI would certainly agree that politicians are unlikely to be below the mean level of competence, since they must necessarily run a campaign, be liked by a group of people, etc. I would be surprised if most politicians were very far from the median, although in the bell curve of politician intelligence there is probably a significant tail to the high-IQ side and a very small tail to the low-IQ side. I would also agree that blaming politicians' stupidity for problems is, at the very least, a poor way of dealing with problems, which would be much better addressed with reform of our political systems; by, say, abolishing the senate or some kind of regulation of party primaries. At the very least I'm not willing to give up on thinking that there are a lot of dumb and venal politicians, but I am willing to cede that that's not really a huge problem most of the time.
2wnoise11y(Assuming US here). Abolishing the senate seems to be an overreaction at this point, though some reforms of how it does business certainly should be in order. I think one of the biggest useful changes would be to reform voting so that the public gets more bits of input [http://lesswrong.com/lw/mi/stop_voting_for_nincompoops/], by switching to approval [http://en.wikipedia.org/wiki/Approval_voting] or Condorcet [http://en.wikipedia.org/wiki/Condorcet_method] style voting.
1bogdanb10yAbout the first paragraph: does your definition include in “corrupt” people who do not object in that situation because they believe that the benefit to the country of receiving the money (because they’d be able to use it for good things) exceeds the damage done to the country by whatever they’re asked to do? I ask because I suspect many people in high positions have an honest but incorrectly high opinion about their worth to whatever cause they’re nominally supporting. (E.g., “without this money I’ll lose the election and the country would be much worse off because the other guy is evil”.)
6Vladimir_M11yApprentice: Downvoted for agreement. However, I must add that it would be extremely fallacious to conclude from this fact that the country is being run competently and not declining or even headed for disaster. This fallacy would be based on the false assumption [http://lesswrong.com/lw/2qq/politics_as_charity/2o8q] that the country is actually run by the politicians in practice. (I am not arguing for these pessimistic conclusions, at least not in this context, but merely that given the present structure of the political system, optimistic conclusions from the above fact are generally unwarranted.)

Translate your vague feeling of certainty into a number in some arbitrary manner. This, however, makes this number a mere figure of speech, which adds absolutely nothing over the usual human vague expressions for different levels of certainty.

Disagree here. Numbers get people to convey more information about their beliefs. It doesn't matter whether you actually use numbers, or do something similar (and equivalent) like systematize the use of vague expressions. I'd be just as happy if people used a "five-star" system, or even in many cases if they just compared the belief in question to other beliefs used as reference-points.

Perform some probability calculation, which however has nothing to do with how your brain actually arrived at your common-sense conclusion, and then assign the probability number produced by the former to the latter. This is clearly fallacious.

Disagree here also. The probability calculation you present should represent your brain's reasoning, as revealed by introspection. This is not a perfect process, and may be subject to later refinement. But it is definitely meaningful.

For example, consider my current probability estimate of 10^(-3) that Aman... (read more)

At -20 it looks like you're winning the 'most rational belief, least rational time to say it' award!

Hahaha indeed. Oh well. I was afraid of that, but opted to post anyway rather than worry about the karma hit. It seems like a good habit to not take karma seriously.

I guess I'll have to go be insightful in some other thread now or something.

2wedrifid11y"Least rational time to say it" does not necessarily or even primarily refer to karma. By making your claim here you are asserting, via the rules of the post, that you believe you understand this better than lesswrong does. Apart from being potentially condescending it is also a suboptimal way of achieving a desirable influence. It is better to act as if people already know this and are working on enhancing their social skills, encouraging continued efforts as appropriate.
6Relsqui11yI was asserting that, and I'm delighted to be incorrect. Granted, but that would be true regardless of the topic. (Every proposition commented to this post implies condescension about the topic in question.) I'm not sure I agree with that in general. The people who DO know this and are trying to enhance their social skills will simply agree with me (no change); the ones who don't and aren't will either continue not trying (no change) or perhaps consider whether they're incorrect (positive effect, in my mind). Now, if I knew I were speaking to a particular individual who was already working on this, then yes, reminding them it was important would be rude. But I'm addressing a group of people, among whom that is true of some and not others; I'm trusting the ones of whom it's already true not to interpret it as if I were speaking to them alone. Did I offend you?
10[anonymous]9y

Before the universe, there had to have been something else (i.e. there couldn't have been nothing and then something). 95%. That something was conscious. 90%.

The distinction between "sentient" and "non-sentient" creatures is not very meaningful. What it's like for (say) a fish to be killed, is not much different from what it's like for a human to be killed. (70%)

Our (mainstream) belief to the contrary is a self-serving and self-aggrandizing rationalization.

3RobinZ11yAllow me to provide the obligatory complaint about (mainstream) conflation of sentience and sapience, said complaint of course being a display of the former but not the latter.
2wedrifid11yOur? :)
2simplicio11yFixed.
3wedrifid11yBut possibly introducing a new problem in as much as the very term 'sentient' and some of the concept it represents isn't even present in the mainstream. I recall back in my early high school years writing an essay that included a reference to sentience and being surprised when my teacher didn't know what it meant. She was actually an extremely good English teacher and quite well informed generally... just not in the same subculture. While I didn't have the term for it back then, it stuck in my mind as a significant lesson on the topic of inferential distance.

The many-worlds interpretation of quantum physics is wrong. Reasonably certain (80%).

I suspect the MWI is an artifact of our formulation of physics, where we assume systems can be in specific states that are indexed by several sets of observables. I think there is no such thing as a state of a physical system.

4Vladimir_M11yprase: Could you elaborate by any chance? I can't really figure out what exactly you mean by this, but I suspect it is very interesting.
9prase11yDisclaimer: If I had something well thought through, consistent, not vague and well supported, I would be sending it to Phys. Rev. instead of using it for karma-mining in the Irrationality thread on LW. Also, I don't know your background in physics, so I will probably either unnecessarily spend some time explaining banalities, or leave something crucial unexplained, or both. And I am not sure how much of what I have written is relevant. But let me try. The standard formulation of quantum theory is based on the Hamiltonian formalism. In its classical variant, it relies on the phase space, which is coordinatised by dynamical variables (or observables; the latter term is more frequent in the quantum context). The observables are conventionally divided into pairs of canonical coordinates and momenta. The set of observables is called complete if their values determine the points in the phase space uniquely. I will distinguish between two notions of the state of a physical system. First, the instantaneous state corresponds to a point in the phase space. Such a state evolves, which means that as time passes, the point moves through the phase space along a trajectory. It makes sense to say "the system at time t is in instantaneous state s" or "the instantaneous state s corresponds to the set of observables q". In quantum mechanics, the instantaneous state is described by state vectors in the Schrödinger picture. Second, the permanent state is fixed and corresponds to a parametrised curve s=s(t). It makes sense to say "the system in the state s corresponds to observable values q(t)". In quantum mechanics, this is described by the state vectors in the Heisenberg picture. The quantum observables are represented by operators, and either the state vectors evolve and the operators remain still (Schrödinger), or the operators evolve and the state vectors remain still (Heisenberg). The distinction may feel a bit more subtle on the classical level, where the observables aren't "reified", so to speak
6prase11yAs in classical mechanics, one can resort to the relativistic Hamiltonian formalism. The formalism can be adapted for use in quantum theory, but now there are no observable operators q(t) with time-dependent eigenvectors: both q and t are (commuting) operators. There are indeed wave functions ψ(q,t), but their interpretation is not obvious. For details see here [http://arxiv.org/abs/gr-qc/0111016] (the article partly overlaps with the one which I link in remark 2, but gets deeper into the relativistic formalism). The space-time states discussed in the article are redundant - many distinct state vectors describe the same physical situation. So here is what we have: either a violation of Lorentz symmetry, or a non-transparent representation of states. Of course, all physical questions in quantum physics can be formulated as questions of the second type as described four paragraphs above. One measures the observables twice (the first measurement is called preparation), and can then ask: "What's the probability of measuring q2, when we have prepared the system into q1?" Which is equivalent to "what's the probability of measuring q1 and q2 on the same system?" And of course, there is the path integral formulation of quantum theory, which doesn't even need to speak about state space, and is manifestly Lorentz-covariant. So it seems to me that the notion of a state of a system is redundant. The problem with collapse (which is really a problem - my original statement doesn't mean an endorsement of collapse, although some readers may perceive it as such) doesn't exist when we don't speak about the states. Of course, the state vectors are useful in some calculations. I just don't give them independent ontological status. Remarks: 1. The fact that quantum mechanics and relativity don't fit together is often presented as a "feature, not bug": it points to the necessity of field theory, which, as we know, is a more precise description of the world.
3wnoise11yOf course it is wrong, because standard quantum physics is an approximate model that only applies in certain conditions. Wrong, of course, is not the same as "not useful", nor does "MWI is wrong" mean "there is an objective collapse".

All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)

Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)

9Incorrect9yI'm trying to figure out what this statement means. What would the universe look like if it were false?
8TheOtherDave9yIn context, I took it to predict something like "Above a certain limit, as a system becomes more intelligent and thus more able to discern the true nature of existence, it will become less able to motivate itself to achieve goals."
4Locaha8yYou can't. We live in an intrinsically meaningless universe, where all statements are intrinsically meaningless. :-)
1thomblake9yI'm not sure it's a bug if "all existence is meaningless" turns out to be meaningless.
2TimS9yAren't you supposed to separate distinct predictions? Edit: don't see it in the rules, so remainder of post changed to reflect. I upvote the second prediction - the existence of self-aware humans seems evidence of overconfidence, at the very least.
2Wrongnesslessness9yBut humans are crazy! Aren't they?
1ArisKatsaris9yThis prediction isn't falsifiable -- the word "crazy" is not precise enough, and the word "sufficient" is a loophole you can drive the planet Jupiter through.

No. I intend to revive one. Possibly all four, if necessary. Consider it thawing technology so advanced it can revive even the pyronics crowd.

4JenniferRM11yDid you coin the term "pyronics"?

My reasoning is that it would take more than a universe's worth of computronium to completely simulate a comparable universe.

One could argue that they're taking shortcuts with, e.g., the statistics of bulk matter, but I think we'd notice the edge cases caused by something like that.

Define "virtually perfect gender egalitarianism".

1tenshiko11yI have to admit that I knew in my heart I should define it but didn't, mostly because I know that the tenets are purely subjective and there's no way I can cover everything that would be involved. Here are a couple of points:

  1. No personality traits are considered acceptable in males and unacceptable in females, or vice versa. E.g. aggressiveness, confinement to the domestic sphere, sexual conquest.
  2. Gender is absent from your evaluation of a person's potential utility, except in specific cases where reproduction is relevant (e.g., concern about maternity leave). Even if it is conclusively proven that average men cannot work in business companies without getting into some kind of scandal eventually or that average women cannot think about math as seriously, that shouldn't affect your preconceptions of Jane Doe or John Smith.
  3. For the love of ice, please let the notion of the man as the default human just die, like it should have SO LONG AGO. PLEASE.

I hope this doesn't fall into a semantics controversy.
9Alicorn11y
  1. "Considered" by whom? Can I have, say, an aesthetic preference about these things (suppose I think that women look better in aprons than men do, can I prefer on this obviously trivial basis that women do more of the cooking?), or is any preference about the division of traits amongst sexes a problem for this criterion?
  2. "Potential utility" meaning the utility that the person under consideration might experience/get, or might produce? Also, does this lack of preconception thing seem to you to be compatible with Bayesianism? If I have no reason to suspect that John and Jane are anything other than average, on what epistemic basis do I not guess that he is likelier (by the hypothetical proofs you suppose) to be better at math and more likely to cause scandal?
  3. So what gender should the default human be, or should we somehow have two defaults, or should the default human be one with a set of sex/gender characteristics that rarely appear together in the species, or should there be no default at all (in which case what will serve the purposes currently served by having a default)?

I'm totally in favor of gender egalitarianism as I understand it, but it seems a little wooly the way you've written it up here. I'm sincerely trying to figure out what you mean and I'll back off if you want me to stop.
1tenshiko11y
  1. Perhaps an aesthetic preference isn't a problem (obviously there are certain physical traits that are attractive in one sex and not another, which does lend itself to certain aesthetic preferences). Note that I used the phrase "personality traits" - some division of other traits is inevitable. Things that upset me with the current state of affairs are where one boy fights with another and it is dismissed as boys being boys, while any other combination of genders would probably result in disciplinary action. Or how the general social trends (in Western cultures, at least) think that women wearing suits is commendable and becoming ordinary, but a man in a dress is practically lynched.
  2. Potential utility produced, for your company or project. I think I phrased this one a little wonkily earlier - you're right, under the proofs I laid out, if all you know about John and Jane are their genders, then of course the Bayesian thing to do is assume John will be better at math. What I mean is more that, if you do know more about John and Jane, having had an interview or read a resume, the assumption that they necessarily reflect the averages of their gender is like not considering whether a woman's positive mammogram could be false. For an extreme example, the majority of homicides in many countries are committed by men. Should the employer therefore assume that Jane is less likely than John to commit such a crime, even if she has a criminal record?
  3. I don't see why having an ungendered default is so difficult, besides for the linguistic dance associated with it in our language (and several others, but far from all of them), which is probably not going to be a problem for many more generations due to the increasing use of "they" as a singular pronoun. For instance, having a raceless or creedless default has proven not to be that hard, even if members of di
4[anonymous]11ySome personality traits are considered attractive in one sex and not another.
1Relsqui11yWhat are those purposes, anyway?
2Alicorn11yLiterary "everyman" types, not needing to awkwardly dance around the use of gendered personal pronouns when talking about a hypothetical person of no specific traits besides defaults, and probably something I'm not remembering.
1Relsqui11yHow do you do that in English as it is now?
1Alicorn11yPeople say things like "Take your average human. He's thus and such." If you want to start a paragraph with "Take your average human" and not use gendered language, you have to say things like "They're thus and such" (sometimes awkward, especially if you're also talking about plural people or objects in the same paragraph) or "Ey's thus and such", which many people don't understand and others don't like.
9Vladimir_M11yAlicorn: I find these invented pronouns awful, not only aesthetically, but also because they destroy the fluency of reading. When I read a text that uses them, it suddenly feels like I'm reading some language in which I'm not fully fluent so that every so often, I have to stop and think how to parse the sentence. It's the linguistic equivalent of bumps and potholes on the road.
2JGWeissman11yAfter reading one story that used these pronouns, I was sufficiently used to them that they do not impact my reading fluency.
1NancyLebovitz11yI don't have an average human, and I don't think the universe does either. I think there's a lot to be said for not having a mental image of an average human. Furthermore, since there are nearly equal numbers of male and female humans, gender is a trait where the idea of an average human is especially inaccurate. I think the best substitute is "Take typical humans. They're thus and such." Your average alert listener will be ready to check on just how typical (modal?) those humans are.
1shokwave11yExactly. People make a fuss about a lack of singular nongendered pronouns. The plural nongendered pronouns are right there.
1Relsqui11yHmm. It's true, people do, but I think it's getting less common already. Were you asking, then, which of those alternatives the original commenter preferred?
1Alicorn11yNot really, I'm just pointing out that gendered language isn't a one-sided policy debate. (I favor a combination of "they" and "ey", personally, or creating specific example imaginary people who have genders).

This comment currently (at the time of reading) has at least 10 net upvotes.

Confidence: 99%.

9Perplexed11yYou realize, of course, that your confidence level is too high. Eventually, the score should cycle between +9 and +10. Which means that the correct confidence level should be 50%. Nonetheless, it is very cute. So, I'll upvote it for overconfidence, to say nothing of currently being wrong.
6JGWeissman11yOnce it gets to 10 points, it should be voted up for underconfidence.
5magfrump11yExcept that there's a chance that it's been downvoted by someone else - a chance sufficient for 99% to warrant agreement rather than a statement of underconfidence (if and only if people decide that this is true!) - which would be easily broken if it got up to 11, and far more easily broken if the confidence were set at, say, 75%.
4magfrump11yCycle's broken! Now upvoted for underconfidence.

I think you're nitpicking; if what she's saying sounds completely obviously unreasonable then it's probably not what she meant. She means something like "There's a 60% chance that diets, legal supplements, fasting, and/or exercise, in amounts that Western culture would count as memetically reasonable, and in amounts that can be reasonably expected to be undertaken by members of Western culture, can cause significant weight loss." To which everyone says, "No, more like 95%", not "Haha obviously liposuction works, and so does starvation, you imprecise person: next time write a paragraph's worth of disclaimers and don't count on the ability of your audience to make charitable interpretations."

Here is one of many detailed accounts; this one is from Dr. José Maria de Almeida Garrett, professor at the Faculty of Sciences of Coimbra, Portugal:

I was looking at the place of the apparitions, in a serene, if cold, expectation of something happening, and with diminishing curiosity, because a long time had passed without anything to excite my attention. Then I heard a shout from thousands of voices and saw the multitude suddenly turn its back and shoulders away from the point toward which up to now it had directed its attention, and turn to look at the sk... (read more)

I just realized that either I get karma for this or I get warm fuzzies from people agreeing with me. Suddenly the magic of the game is clear.

ETA: ... although now I'm wondering how strongly I would have to word it before people stopped agreeing with it.

Upvoted for drastic underconfidence.

Edit: 99.8% assumes independence, which is certainly violated in the proposed case.

Here's the thing: in order for nick012000's stated confidence to be justified, every one of these six points must be justified to a level over 99% - and the geometric average must be over 99.8%. The difference between 99% and 99.8% may not be huge in the grand scheme of things, but for historical events it's far from negligible.

What you believe about Silver's model, however, is still ultimately a matter of common-sense judgment, and unless you think that you have a model so good that it should be used in a shut-up-and-calculate way, your ultimate best prediction of the election results won't come with any numerical probabilities, merely a vague feeling of how confident you are.

Want to make a bet on that?

I wouldn't have said the number of nines indicated overconfidence if you were talking about the sun rising. I do not believe you have enough evidence to reach that level of certainty on this subject. I would include multiple nines in my declaration of confidence in that claim.

1Will_Newsome11yYou think there's a 999,999/1,000,000 chance the sun will rise tomorrow? I think you may be overconfident here...

Maybe I've misunderstood.

It seems to me that your original prediction has to refer either to humans as a group, in which case Luke's counterexample is a good one, or humans as individuals, in which case my counterexample is a good one.

It also seems to me that either counterexample can be refined into a useful prediction: Humans in general don't eat petroleum products. I don't eat spicy food. Corvi doesn't eat meat. All of those classes of things can be described more efficiently than making lists of the members of the sets.

Stable equilibrium here does not refer to a property of a mind. It refers to a state of the universe. I've elaborated on this view a little here before but I can't track the comment down at the moment.

Essentially my reasoning is that in order to dominate the physical universe an AI will need to deal with fundamental physical restrictions such as the speed of light. This means it will have spatially distributed sub-agents pursuing sub-goals intended to further its own goals. In some cases these sub-goals may involve conflict with other agents (this would be... (read more)

4orthonormal11yAnt colonies don't generally exhibit the principal-agent problem. I'd say with high certainty that the vast majority of our trouble with it is due to having the selfishness of an individual replicator hammered into each of us by our evolution.
3Eugine_Nier11yI'm not a biologist, but given that animal bodies exhibit principal-agent problems, e.g., auto-immune diseases and cancers, I suspect ant colonies (and large AI's) would also have these problems.
7orthonormal11yCancer is a case where an engineered genome could improve over an evolved one. We've managed to write software (for the most vital systems) that can copy without error, with such high probability that we expect never to see that part malfunction. One reason that evolution hasn't constructed sufficiently good error correction is that the most obvious way to do this makes the genome totally incapable of new mutations, which works great until the niche changes.
1Eugine_Nier11yHowever, an AI-subagent would need to be able to adjust itself to unexpected conditions, and thus can't simply rely on digital copying to prevent malfunctions.
2orthonormal11ySo you agree that it's possible in principle for a singleton AI to remain a singleton (provided it starts out alone in the cosmos), but you believe it would sacrifice significant adaptability and efficiency by doing so. Perhaps; I don't know either way. But the AI might make that sacrifice if it concludes that (eventually) losing singleton status would cost its values far more than the sacrifice is worth (e.g. if losing singleton status consigns the universe to a Hansonian hardscrapple race to burn the cosmic commons [http://hanson.gmu.edu/filluniv.pdf] (pdf) rather than a continued time of plenty).

Is this post a top-level comment to this post?

3Perplexed11yThe probability of that is <25%.

It seems plausible to me that routinely assigning numerical probabilities to predictions/beliefs that can be tested and tracking these over time to see how accurate your probabilities are (calibration) can lead to a better ability to reliably translate vague feelings of certainty into numerical probabilities.

There are practical benefits to developing this ability. I would speculate that successful bookies and professional sports bettors are better at this than average for example and that this is an ability they have developed through practice and experie... (read more)

It's as low as 70% because I'm Aumanning a little from people who are better at math than me assuring me very confidently that, with math, one can perform such magic as to make risk-neutrality sensible on a human-values-derived utility function. The fact that it looks like it would have to actually be magic prevents me from entertaining the proposition coherently enough simply to accept their authority on the matter.

3Perplexed11yThere may be some confusion here. I don't think any serious economist has ever argued that risk neutrality is the only rational stance to take regarding risk. What they have argued is that they can draw up utility functions for people who prefer $100 to a 50:50 gamble for $200 or 0. And they can also draw functions for people who prefer the gamble and for people who are neutral. That is, risk (non)neutrality is a value that can be captured in the personal utility function just like (non)neutrality toward artificial sweeteners. Now, one thing that these economists do assume is at least a little weird. Say you are completely neutral between a vacation on the beach and a vacation in the mountains. According to the economists, any rational person would then be neutral between the beach and a lottery ticket promising a vacation but making it 50:50 whether it will be beach or mountains. Risk aversion in that sense is indeed considered irrational. But, by their definitions, that 'weird' preference is not really "risk aversion".

The hard problem of consciousness will be solved within the next decade (60%).

Objection: Why is the line drawn between vertebrates and invertebrates? True, the nature of spinal cords means vertebrates are generally capable of higher mental processing and therefore have a greater ability to formulate suffering, but you're counting "ones that lack self-concepts sufficiently strong to have any real preference to exist". Are you saying the presence of a notochord gives a fish higher moral worth than a crab?

4RobinZ11yThat's a good point - there are almost certainly invertebrate species on the same side of the line. Squid [http://en.wikipedia.org/wiki/Cephalopod_intelligence], for example.

No one could possibly gain infinite utility from anything, because for that to happen, they'd have to be willing and able to give up infinite resources or opportunities or something else of value to them in order to get it,

Just willing. If they want it infinitely much and someone else gives it to them then they have infinite utility. Their wishes may also be arbitrarily trivial to achieve. They could assign infinite utility to having a single paperclip and be willing to do anything they can to make sure they have a paperclip. Since they (probably) do ha... (read more)

1Strange711yClippet is, at any given time, applying vast but finite resources to the problem of getting and keeping that one paperclip. There is no action Clippet would take that would not also be taken by an entity that derived one utilon from the presence of at least one paperclip and zero utilons from any other possible stimuli, and thus had decidedly finite utility, or an entity that simply assigned some factor a utility value vastly greater than the sum of all other possible factors. In short, the theory that a given agent is currently, or would under some specific circumstance, experience 'infinite utility,' makes no meaningful predictions.
2Larks11yConsider instead Kind Clippet; just like Clippet, she gets infinite utils from having a paperclip, but also gets 1 util if mankind survives the next century. She'll do exactly what Clippet would do, unless she was offered the chance to help mankind at no cost to the paperclip, in which case she will do so. Her behaviour is, however, different from any agent who assigns real values to the paperclip and mankind.
5cata11yDoes it even make sense to talk about "the chance to do X at no cost to Y?" Any action that an agent can perform, no matter how apparently unrelated, seems like it must have some minuscule influence on the probability of achieving every other goal that an agent might have (even if only by wasting time). Normally, we can say it's a negligible influence, but if Y's utility is literally supposed to be infinite, it would dominate.
3JoshuaZ11yNo. This is one of the problems with trying to have infinite utility. Kind Clippet won't actually act differently from Clippet. Infinity +1 is, if at all defined in this sort of context, the same as infinity. You need to be using cardinal arithmetic. And if you try to use ordinal arithmetic then the addition won't be commutative, which leads to other problems.
5JGWeissman11yYou can represent this sort of value by using lexicographically sorted n-tuples as the range of the utility function. Addition will be commutative. However, Cata is correct that all but the first elements in the n-tuple won't matter [http://lesswrong.com/lw/2sl/the_irrationality_game/35qm?c=1].
1wedrifid11yUm... yes? That's how it works. It just doesn't particularly relate to your declaration that infinite utility is impossible (rather than my position - that it is lame). It is no better or worse than a theory that the utility function is '1' for having a paperclip and '0' for everything else. In fact, they are equivalent and you rescale one to the other trivially (everything that wasn't infinite obviously rescales to 'infinitely small'). You appear to be confused about how the 'not testable' concept applies here...

England was never a 'corporate' monarchy in the sense of a limited-liability joint-stock company with numeric shares, voting rights, etc. I never said it was, though, but that it was 'property-rights based', which it was - the whole country and all legal privileges were property which the king could and did rent and sell away.

This is one of the major topics of Nick Szabo's blog Unenumerated. If you have the time, I strongly recommend reading it all. It's up there with Overcoming Bias in my books.

Nod. I noticed your other comment after I wrote the grandparent. I replied there and I do actually consider your question there interesting, even though my conclusions are far different to yours.

Note that I've tried to briefly answer what I consider a much stronger variation of your fundamental question. I think that the question you have actually asked is relatively trivial compared to what you could have asked so I would be doing you and the topic a disservice by just responding to the question itself. Some notes for reference:

  • Demands of the general fo
... (read more)

Remember, you're the one trying to prove impossibility of a task here. Your inability to imagine a solution to the problem is only very weak evidence.

I was very energetic and chatty, and didn't really care about personal space. My friend gave it to me at a party because he wanted to see what it did to my chess ability. It was too hard to tell if it improved my chess, but it definitely led to me sitting very close to a girl that lives in my neighborhood and actually connecting with her. Normally I come across as laconic, which works pretty well for some reason, but it was nice to actually feel passionate about getting to know someone, and feel an emotional bond forming in real time. I ended up driving wi... (read more)

1wedrifid11yI've had the same experience (conversation vs driving attention focus based on stimulants). Watch out for that stuff!

This means that there exist an infinite number of Earths like ours that are in a simulation, and an infinite number of Earths like ours that are not in a simulation. Thus it becomes meaningless to ask whether or not we exist in a simulation. We exist in every possible world containing us that is a simulation, and exist in every possible world containing us that is not a simulation.

Just because a set is infinite doesn't mean it's meaningless to speak of measures on it.

4Perplexed11yThe infinite cardinality of the set doesn't preclude the bulk of the measure being attached to a single point of that set. For Solomonoff-like reasons, it certainly makes sense to me to attach the bulk of the measure to the "basement reality".
1Will_Newsome11y(FWIW I endorse this line of reasoning, and still think 99.5% is reasonable. Bwa ha ha.) (That is, I also think it makes sense to attach the bulk of the measure to basement reality, but sense happens to be wrong here, and insanity happens to be right. The universe is weird. I continue to frustratingly refuse to provide arguments for this, though.) (Also, though I and I think most others agree that measure should be assigned via some kind of complexity prior (universal or speed priors are commonly suggested), others like Tegmark are drawn towards a uniform prior. I forget why.)
1Perplexed11yI wouldn't have thought that a uniform prior would even make sense unless the underlying space has a metric (a bounded metric, in fact). Certainly, a Haar measure on a recursively nested space (simulations within simulations) would have to assign the bulk of its measure to the basement. Well, live and learn.

You might want to put a big bold please read the post before voting on the comments, this is a game where voting works differently right at the beginning of your post, just in case people dive in without reading very carefully.

5[anonymous]11y

The gaming industry is going to be a major source of funding* for AGI research projects in the next 20 years. (85%)

*By "major" I mean contributing enough to have good odds of causing actual progress. By gaming industry I include joint ventures, so long as the game company invested a nontrivial portion of the funding for the project.

EDIT: I am referring to video game companies, not casinos.

2Eugine_Nier11yI assume you mean designing better AI opponents, as this seems to be one type of very convenient problem for AI. Needless to say having one of these go FOOM would be very, very bad.

Opponents can be done reasonably well with even the simple AI we have now. The killer app for gaming would be AI characters who can respond meaningfully to the player talking to them, at the level of actually generating new responses of prewritten-game-plot quality, based on the stuff the player comes up with during the game.

This is quite different from chatbots and their ilk, I'm thinking of complex, multiagent player-instigated plots such as the player convincing AI NPC A to disguise itself as AI NPC B to fool AI NPC C who is expecting to interact with B, all without the game developer having anticipated that this can be done and without the player feeling like they have gone from playing a story game to hacking AI code.

So I do see a case here. The game industry has thus far been very conservative about weird AI techniques, but since cutting-edge visuals seem to be approaching diminishing returns, there could be room for a gamedev enterprise going for something very different. The big problem is that while sorta-there visuals can be pretty impressive, sorta-there general NPC AI will probably look quite weird and stupid in a game plot.

6Kaj_Sotala11yNot for games like Civilization they can't. Especially not if they're also supposed to deal with mods that add entirely new features. Some EURISKO-type engine that could play a lot of games against itself and then come up with good strategies (and which could be rerun after each rules change) would be a huge step forward.
6[anonymous]11yIt would be very bad if an opponent AI went FOOM. Or even one which optimized for certain types of "fun", say, rescue scenarios. But consider a game AI which optimized for features found in some games today (generalized):

  • The challenges of many games require you to learn to think faster as the game progresses.
  • They often require you to know more (and learn to transfer that knowledge, part of what I would call "thinking better").
  • Through roleplaying and story, some games lead you to act the part of a person more like who you wish you were.
  • Many social games encourage you to rapidly develop skills in cooperation and teamwork, to exchange trust and empathy in and out of the game. They want you to catch up to the players who already have an advantage: those who had grown up farther together.

There are more conditions to CEV as usually stated, and they are hard to correlate with goals that any existing game designers consciously implement. They might have to be pitched hard, as "social innovations" for a "revolutionary game". If it was done consciously, it's conceivable that AI researchers could use game funding to implement Friendly AGI. (Has there been a post or discussion yet on designing a Game AI that implements CEV? If so, I must read it. If not, I will write it.)
2NancyLebovitz11y"Needless to say having one of these go FOOM would be very, very bad." Maybe, but the purpose of such an opponent isn't to crush humans; it's to give them as good a game as possible. The big risk might be an AI which inveigles people into playing the game more than is good for them, leading to a world which is indistinguishable from one in which humans are competing to invent better superstimulus games.
1nazgulnarsil11yEh, given the space of various possible futures I would regard this as one of the better ones.
1dfranke11yUpvoted for overconfidence, but I'd downvote at 40%.

This is what spurred me to give consideration to the idea initially, but what makes me confident is sifting through simply mountains of reports. To get an idea of the volume and typical content, here's a catalog of vehicle interference cases in Australia from 1958 to 2004. Most could be explained by a patchwork of mistakes and coincidences, some require more elaborate, "insanity or hoax" explanations, and if there are multiple witnesses, insanity falls away too. But there is no pattern that separates witnesses into a "hoax" and a "... (read more)

If there are multiple witnesses who can see each other's reactions, it's a good candidate for mass hysteria.

7Will_Newsome11yI couldn't really understand the blog post: his theory is that there are terrestrial but nonhuman entities that like to impress the religious? But the vehicle interference cases you reference are generally not religious in nature, and vary extremely in the actual form of the craft seen (some are red and blue, some are series of lights). What possible motivations for the entities could there be? Most agents with such advanced technology will aim to efficiently optimize for their preferences. If this is what optimizing for their preferences looks like, they have some very improbably odd preferences.

To be fair to the aliens, the actions of Westerners probably seem equally weird to Sentinel Islanders. Coming every couple of years in giant ships or helicopters to watch them from afar, and then occasionally sneaking into abandoned houses and leaving gifts?

3JohannesDahlstrom11yThat was a fascinating article. Thank you.
3PlaidX11yI agree with you entirely, and this is a great source of puzzlement to me, and to basically every serious investigator. They hide in the shadows with flashing lights. What could they want from us that they couldn't do for themselves, and if they wanted to influence us without detection, shouldn't it be within their power to do it COMPLETELY without detection? I have no answers to these questions.
3Risto_Saarelma11yThat's assuming that what's going on is that entities who are essentially based on the same lawful universe as we are are running circles around humans. If what's going on is instead something like a weird universe [http://lesswrong.com/lw/2sl/the_irrationality_game/2qav?c=1], where reality makes sense most of the time, but not always, I imagine you might get something that looks a lot like some of the reported weirdness. Transient entities that don't make sense leaking through the seams, never quite leaving the causal trail which would incontrovertibly point to their existence.
1Will_Newsome10yIf I'd asked the above questions honestly rather than semi-rhetorically I may have figured a few things out a lot sooner than I did. I might be being uncharitable to myself, especially as I did eventually ask them honestly, but the point still stands I think.

Errh. If you are disagreeing with me, doesn't that mean you should upvote?

This post makes the recent comments thread look seriously messed up!

I recommend adding, up in the italicized introduction, a remark to the effect that in order to participate in this game one should disable any viewing threshold for negatively voted comments.

One in a billion strikes me as too high. Rank ordering is easier for me. I'd put your hypothesis above the existence of the Biblical God but beneath the conjunction of "the 9/11 attack was a plot organized by elements of the US government", "the Loch Ness monster is a living Plesiosaur", and "Homeopathy works".

2Psy-Kosh11yHuh. My initial thought would be to simply put it at about the same order of improbability as "homeopathy is real" rather than far below. A quick surface consideration would seem to imply both requiring the same sort of "stuff we thought we know about the world is wrong, in a way that we'd strongly expect to make it look very different than it does, so in addition to that, it would need a whole lot of other tweaks to make it still look mostly the way it does look to us now". (At least that's my initial instinctive thought. Didn't make the effort to try to actually compute specific probabilities yet.)
6Jack11yLike homeopathy it is a belief that well-confirmed scientific theories are wrong. But more so than homeopathy it specifies a scenario within that probability space (the earth is an accelerating disk), and specifies a scenario for why the information we have is wrong (the conspiracy). I also think the disk-earth scenario requires more fundamental and better confirmed theories to be wrong than homeopathy does. It calls into question gravitation, Newtonian physics, thermodynamics and geometry. I may be overconfident regarding homeopathy, though. The disk-earth scenario might seem more improbable because it is bigger and would do more to shatter my conception of my place in the universe than memory water would. Would we have to topple all of science to acknowledge homeopathy? That's my sense of what we would have to do for the disk-earth thing.
2Psy-Kosh11yI was thinking homeopathy would essentially throw out much of what we think we know about chemistry. For the world to still look like it does even with the whole "you can dilute something to the point that there's hardly a molecule of the substance in question, but it can impose its energy signature onto the water molecules", etc, well... for that sort of thing to have a biological effect as far as being able to treat stuff, but not have any effect like throwing everything else about chemistry and bio out of whack, would seem to be quite a stretch. Not to mention that, underneath all that, it would probably require physics to work rather differently than the physics we know. And in noticeable ways rather than zillionth-decimal-place ways. Possibly you're right, and it would be less of a stretch than flat-earth, but it doesn't seem that way at least. Specifying the additional detail of a NASA conspiracy being the source of the flat earth being hidden may be sufficient additional complexity to drive it below homeopathy. But overall, I'd think of both as requiring similar orders of magnitude of improbability.
1Jack11yBut can't homeopathy be represented as positing an additional chemical law - the presence of some spiritual energy signature which water can carry? I'm not exactly familiar with homeopathy but it seems like you could come up with a really kludgey theory that lets it work without you actually having to get rid of theories of chemical bonding, valence electrons and so on. It doesn't seem as easy to do that with the disk earth scenario.
6Desrtopa11yIt's worse than that. Water having a memory, spiritual or otherwise, of things it used to carry, would be downright simple compared to what homeopathy posits. Considering everything all the water on Earth has been through, you'd expect it to be full of memories of all sorts of stuff; not just the last homeopathic remedy you put in it. What homeopathy requires is that water has a memory of things that it has held, which has to be primed by a specific procedure, namely thumping the container of water against a leather pad stuffed with horse hair while the solute is still in it so the water will remember it. The process is called "succussion" and the inventor of homeopathy thought that it made his remedies stronger. Later advocates, though, realized the implications of the "water has a memory" hypothesis, and so rationalized it as necessary.
2Psy-Kosh11yWow. I hadn't even heard of the very specific leather pad thing. (I've heard it has to be shaken in specific ways, but not that) How is it that no matter how stupid I think it is, I keep hearing things that makes homeopathy even more stupid than I previously thought?
1David_Gerard11yLarge chunks of it. You'd need to overturn pretty much all of chemistry and molecular biology, and I think physics would be severely affected too. The reasons for homeopathy retaining popularity are in the realm of psychology.

That ... doesn't seem quite like a reason to believe. Remember: as a general rule, any random hypothesis you consider is likely to be wrong unless you already have evidence for it. All you have to do is look at the gallery of failed atomic models to see how difficult it is to even invent the correct answer, however simple it appears in retrospect.

Information theory is the wrong place to look for objective morality. Information is purely epistemic - i.e. about knowing. You need to look at game theory. That deals with wanting and doing. As far as I know, no one has had any moral issues with simply knowing since we got kicked out of the Garden of Eden. It is what we want and what we do that get us into moral trouble these days.

Here is a sketch of a game-theoretic golden rule: Form coalitions that are as large as possible. Act so as to yield the Nash bargaining solution in all games with coalitio... (read more)

1AdeleneDawner11yThis does help bring clarity to the babyeaters' actions: The babies are, by existing, defecting against the goal of having a decent standard of living for all adults. The eating is the 'fair punishment' that brings the situation back to equilibrium. I suspect that we'd be better served by a less emotionally charged word than 'punishment' for that phenomenon in general, though.
1Perplexed11yOh, I think "punishment" is just fine as a word to describe the proper treatment of defectors, and it is actually used routinely in the game-theory literature for that purpose. However, I'm not so sure I would agree that the babies in the story are being "punished". I would suggest that, as powerless agents not yet admitted to the coalition, they ought to be treated with indifference, perhaps to be destroyed like weeds, were no other issues involved. But there is something else involved - the babies are made into pariahs, something similar to a virgin sacrifice to the volcano god. Participation in the baby harvesting is transformed into a ritual social duty. Now that I think about it, it does seem more like voodoo than rational-agent game theory. However, the game theory literature does contain examples where mutual self-punishment is required for an optimal solution, and a rule requiring one to eat one's own babies does at least provide some incentive to minimize the number of excess babies produced.

Some autistic people, particularly those in the middle and middle-to-severe part of the spectrum, report that during overload, some kinds of processing - most often understanding or being able to produce speech, but also other sensory processing - turn off. Some report that turned-off processing skills can be consciously turned back on, often at the expense of a different skill, or that the relevant skill can be consciously emulated even when the normal mode of producing the intended result is offline. I've personally experienced this.

Also, in my experience, a fair portion (20-30%) of adults of average intelligence aren't fluent in reading, and do have to consciously parse each word.

What would be your probability assessment if you replaced "Eliezer Yudkowsky" with "SIAI"?

So I agree with the science you cite, right? But what you said really doesn't follow. Just because our phonologic loop doesn't actually have the control it thinks it does, it doesn't follow that sensory modalities are "meaningless." You might want to re-read Joy in the Merely Real with this thought of yours in mind.

I don't know about you, but I'm not a P-zombie. :)

That emoticon isn't fooling anyone.

Ah, I think I see your problem. You insist on seeing the universe from the perspective of the computer running the program - and in this case, we can say "yes, in memory position #31415926 there's a human in basement reality and in memory position #2718281828 there's an identical human in a deeper simulation". However, those humans can't tell that. They have no way of determining which is true of them, even if they know that there is a computer that could point to them in its memory, because they are identical. You are every (sufficiently) identical copy of yourself.

I agree with most of what you're saying (in that comment and this one) but I still think that the ability to give well calibrated probability estimates for a particular prediction is instrumentally useful and that it is fairly likely that this is an ability that can be improved with practice. I don't take this to imply anything about humans performing actual Bayesian calculations either implicitly or explicitly.

I've seen so many decent people turn into bastards or otherwise abdicate moral responsibility when they found themselves at the helm of a company, no matter how noble their initial intentions.

Do you think this is different from the general 'power corrupts' tendency? The same thing seems to happen to politicians for example.

I'm sure there's more to it than came across in that sentence, but that sounds like shaky grounds for belief.

As I understood it, the paradox was that by the rules of the thread, "This comment will be massively upvoted. 100%" is something I should upvote if I believe it's unlikely to be true. But if I upvote it on that basis, I should expect others to upvote it as well. But if I expect others to upvote it, then I should expect it to be upvoted, and therefore I should consider it likely to be true. But if I consider it likely to be true, then by the rules of the thread, I should downvote it. But if I downvote on that basis, I should expect others to downvote it as well, and therefore I should consider it unlikely to be true. But...

Were I a robot from 1960s SF movies, my head would now explode.

4thomblake9yThe stable solution is for everyone to notice that few people will read the comment and so it will only be moderately upvoted, and so upvote it.
2MarkusRamikin9yDO NOT MESS WITH KARMA [http://www.fanfiction.net/s/5782108/17/Harry_Potter_and_the_Methods_of_Rationality]
3thomblake9ynoted [http://lesswrong.com/lw/2sl/the_irrationality_game/6buo].

Agree the chance is >50%, but upvoted for overconfidence.

90 years has room for a lot of compound weird.

If you lose measure with time, you'll lose any given amount given enough time. It's better to follow a two-outcome lottery where for one outcome of probability 1-1e-4 you continue business as usual, and otherwise you act as if quantum suicide preserves value.

I understand what you meant by your proposition, I'm not trying to ask for clarification.

I assume you have some model of TM-practitioner behavior or social networking or something which justifies your idea that there is such a threshold in that place.

I do not have any models of: how TM is practiced, and by whom; how much TM affects someone's behavior, and consequently the behavior of those they interact with; how much effect priming stimuli like signs or posters for TM groups or instruction have on the general populace; how much the spread of TM practitioners inc... (read more)

I have to go with yes, I don't think those [symbolic, linguistic] processes require consciousness.

You pretty much have to go with "yes" if you want to claim that "consciousness/self-awareness is just a meaningless side-effect of brain processes." I've got to disagree. What my introspection calls my "consciousness" is mostly listening to myself talk to myself. And then after I have practiced saying it to myself, I may go on to say it out loud.

Not all of my speech works this way, but some does. And almost all of my writ... (read more)

I was just thinking about this one the other day. I was musing about taking adderall and piracetam, and thinking "Is intelligence/cognition really a bottleneck I need to clear up? Shouldn't everyone else be taking this stuff?"

Funny, I upvoted this because of the artistic and cultural contributions of religion. For most of history, until the Industrial Revolution or a little before, human economies were Malthusian. You could not increase incomes without decreasing average lifespans. The implication is that the money spent on cathedrals and gargoyles and all the rest came directly at the expense of people's lives. (A recent Steven Landsburg debate with Dinesh D'Souza explored this line of thinking more; I wouldn't recommend watching much more than the opening statements, though.)... (read more)

I would expect that most simulators who worried about computational capacity wouldn't bother simulating to the depth of quantum physics anyway. However, I'm not entirely sure that I should use this sort of argument when talking about the local laws of "physics". There is some sense, I think, in which the laws of physics around here are "supposed to be" MWI-like and that we should take them at face value.

Great idea for a post. I've really enjoyed reading the comments and discussion they generated.

Prior before having learned of Fatima, roughly? Best guess at current probability?

You sound more confident than Eugine, in which case you should upvote. Or does 70% roughly match your belief?

Two points that influence my thinking on that claim:

  1. Gains from trade have the potential to be greater with greater difference in values between the two trading agents.
  2. Destruction tends to be cheaper than creation. Intelligent agents that recognize this have an incentive to avoid violent conflict.

you are not performing the judgment itself as a rigorous Bayesian procedure that would give you the probability for the conclusion.

No, but do you think it is meaningless to think of the messy brain procedure (that produces these intuitive feelings) as approximating this rigorous Bayesian procedure? This could probably be quantified using various tests. I grant that one couldn't lay claim to mathematical rigor, but I'm not sure that means that any human assignment of numerical probabilities is meaningless.

Upvoted for disagreement. People are inventive and resourceful. They have explored "organization space" pretty thoroughly. Many successful alternatives to corporations already exist and are functioning successfully. Any corporation producing "astronomical waste" will quickly be destroyed by corporate or non-corporate competitors.

Mmm, < .01%, it wasn't something I would've dignified with enough thought to give a number. Even as a kid, although I liked the idea of aliens, stereotypical flying saucer little green men stuff struck me as facile and absurd. A failure of the imagination as to how alien aliens would really be.

In hindsight I had not considered that their outward appearance and behavior could simply be a front, but even then my estimate would've been very low, and justifiably, I think.

User:taw talked about one that you take with caffeine.

Ephedrine. Together with caffeine and aspirin it's called the ECA stack, but the aspirin wasn't used in the studies.

It wasn't a complaint. :)

I don't think that human values are well described by a utility function if, by "utility function", we mean "a function which an optimizing agent will behave risk-neutrally towards". If we mean something more general by "utility function", then I am less confident that human values don't fit into one.
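For concreteness, here is the narrower sense spelled out (my own sketch of the standard expected-utility formulation, not something stated in the comment above): an agent with utility function U over worlds ranks a lottery L that yields world w_i with probability p_i by its expectation,

    \mathbb{E}[U(L)] = \sum_i p_i \, U(w_i),

and prefers whichever lottery has the higher value. Because this ranking is linear in U, a gamble with expected utility u is valued exactly the same as receiving u utiles for certain; that is the sense in which an agent "behaves risk-neutrally towards" the function, and the open question is whether human values fit any function with that property.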

This is still exceptionally unclear to me. Also, the reference class of "Less Wrong posters" doesn't distinguish between, for example, Less Wrong posters over 60 (for whom I'd think there's a pretty good chance that it's a good investment) and Less Wrong posters under 25 (who should at the very least wait a decade).

I don't know if there are many (any?) LWers over 60, but I'm sure there are a few over 40 and a few under 20, and their utility from:

  • signing up for cryonics
  • getting a life insurance policy that covers cryonics
  • being frozen
  • being frozen conditional on being successfully revived

are all different.

Perplexed: 63. Uh, I mean one. Me.

At first I didn't think this was a good idea, but now I think it is brilliant. Bravo!


My reasons for disagreement are as follows: (1) I am not sure that the current cryonics technology is sufficient to prevent information-theoretic death; (2) I am skeptical of the idea of "hard takeoff" for a seed AI; (3) I am pessimistic about existential risk; (4) I do not believe that a good enough seed AI will be produced for at least a few more decades; (5) I do not believe in any version of the Singularity except Eliezer's (i.e. Moore's Law will not swoop in to save the day); (6) Even an FAI might not wake the "cryonic dead" (I like that ... (read more)

There isn't a reason - that just turned out to be another stable solution to the paradox.

I'm not actually sure why there was ever confusion. From the OP: "comment voting works normally for comment replies to other comments."

Why in the world would the parent be downvoted? I'm having difficulty unraveling the paradox.

TheOtherDave: Well, someone might agree with wedrifid (that second-order comments are to be voted on normally) but still disapprove of his comment for reasons other than disagreement (for example, they think it clarifies what would otherwise have been a valuable point of confusion), and downvote (normally) on that basis.

I'm truly not joking!!! You know perfectly well that I don't share much of what's commonly known as "sanity". So to me it's worthy of totally non-ironic consideration.

[anonymous]: I'm sorry for the misunderstanding. I think my brain misfired because the theory involved a video game. Can you elaborate on it? Also, this probably isn't the only such incident you think is plausible; can you name others?

Are you sure this doesn't apply for personality traits as well?

Going into evopsych is so tempting right now, but the "just so story" practically writes itself.

Here's an alternative:

Major personality traits are associated with hormones produced by parts of our body formed through embryogenesis, based on our genes and the conditions of our mother's womb. Since our reproductive organs are formed the same way, it would be very surprising to find that there was no correlation between personality traits and fertility/virility, and it would be a major blow against your argument if that correlation turned out to be both strong and positive.

"At least a little bit" is too unclear. Even tiny changes in the positions of atoms are probably morally relevant (and certainly, some of them), albeit to a very small degree.

Another recommendation for Nick Szabo's blog. The only online writings I know of about governance and political economy that come close are the blogs of economist Arnold Kling and the eccentric and hyperbolic Mencius Moldbug. (Hanson's blog is extremely strong on several subjects, but governance is not IMHO one of them.)

Vladimir_M: rhollerith_dot_com: I agree with all these recommendations, and I'd add that these three authors have written some of their best stuff in the course of debating each other. In particular, a good way to get the most out of Moldbug is to read him alongside Nick Szabo's criticisms that can be found both in UR comments and on Szabo's own blog. As another gem, the 2008 Moldbug-Kling debate on finance (parts (1) [http://unqualified-reservations.blogspot.com/2008/09/maturity-transformation-considered.html], (2) [http://econlog.econlib.org/archives/2008/10/monetary_instit.html], (3) [http://econlog.econlib.org/archives/2008/10/thoughts_on_ban.html], (4) [http://econlog.econlib.org/archives/2008/10/thoughts_on_ban_1.html], and (5) [http://econlog.econlib.org/archives/2008/10/in_which_winnie.html]) was one of the best and most insightful discussions of economics I've ever read. I agree. In addition, I must say I'm disappointed with the shallowness of the occasional discussions of governance on LW. Whenever such topics are opened, I see people who otherwise display tremendous smarts and critical skills making not-even-wrong assertions based on a completely naive view of the present system of governance, barely more realistic than the descriptions from civics textbooks.

I'm sure that's true. The difference is that all that extra intelligence is tied up in a fallible meatsack; an AI, by definition, would not be. That was the flaw in my analogy--comparing apples to apples was not appropriate. It would have been more apt to compare a trowel to a backhoe. We can't easily parallelize among the excess intelligence in all those human brains. An AI (of the type I presume singulatarians predict) could know more information and process it more quickly than any human or group of humans, regardless of how intelligent those humans wer... (read more)

dilaudid: I agree FAI should certainly be able to outclass human scientists in the creation of scientific theories and new technologies. This in itself has great value (at the very least we could spend happy years trying to follow the proofs). I think my issue is that I think it will be insanely difficult to produce an AI and I do not believe it will produce a utopian "singularity" - where people would actually be happy. The same could be said of the industrial revolution. Regardless, my original post is borked. I concede the point.

I do wish to discuss evidence about religion - at least, I do today. I hope Nick will oblige.

I think I might see what you mean.

I don't want to argue about the priors for 1-3 specifically. Such arguments generally devolve into unproductive bickering about the assignment of the burden of proof. However, priors for arguments about specific historical events, such as the location of the podium from which the speeches were delivered at Gettysburg, are known to be of ordinarily small levels, and most evidence (e.g. written accounts) is of known weak strength in particular predictable ways*. In fact, I mentioned Gettysburg specifically because the best-... (read more)

Is my elaboration of the "burdensome detail" argument faulty? How would you advise I revise it?

Julian Jaynes's theory of bicameralism presented in The Origin of Consciousness in the Breakdown of the Bicameral Mind is substantially correct, and explains many enigmas and religious belief in general. (25%)

It is wrong to use a subjective probability that you got from someone else for mathematical purposes directly, for reasons I expand on in my comment here. But I don't think that makes them metaphorical, unless you're using a definition of metaphor that's very different than the one I am. And you can use a subjective probability which you generated yourself, or combined with your own subjective probability, in calculations. Doing so just comes with the same caveats as using a probability from a study whose sample was too small, or which had some other bad but not entirely fatal flaw.
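To illustrate the "combined with your own subjective probability" step, here is a toy sketch (mine, not something from the linked comment; the equal-weight log-odds pooling rule is just one simple, defensible choice among many, and the caveats above apply to its output as much as to its inputs):

    import math

    def log_odds(p):
        # Convert a probability in (0, 1) to log-odds.
        return math.log(p / (1.0 - p))

    def combine(p_mine, p_theirs, weight_mine=0.5):
        # Pool two subjective probabilities by weighted averaging in
        # log-odds space; weight_mine reflects how much I trust my own
        # estimate relative to the other person's.
        pooled = (weight_mine * log_odds(p_mine)
                  + (1.0 - weight_mine) * log_odds(p_theirs))
        return 1.0 / (1.0 + math.exp(-pooled))

    print(combine(0.9, 0.5))  # equal weights give exactly 0.75

Shifting weight_mine toward 1 recovers my own estimate, which is one crude way of discounting a source whose stated probability I don't think I should use directly.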

I don't really have a question. You have a hypothesis:

Transcendental meditation practitioners will reduce the crime rate in their cities in a nonlinear fashion satisfying certain identities.

I agree with the statement I have written above, and would therefore normally downvote.

However, you posit specific figures for the reduction of the crime rate. I have no experience with city planning or crime statistics or population figures, and hence have no real basis to judge your more specific claim.

If I disagreed with it on a qualitative level, then I would upvo... (read more)


Yoda tutors Luke in a Jedi philosophy and practice, which it will take Luke a while to learn. In the meantime, however, Luke is merely an unpolished human. And I am not here recommending a particular philosophy and practice of thought and behavior, but making a prediction about how unpolished humans (and animals) are likely to act. My point is not to recommend that Buridan's ass should have an exaggerated confidence that the right bucket is closer, but to observe that we can expect him to have an exaggerated confidence, because, for reasons I described, ex... (read more)

One piece of evidence for the second is to notice how nations with small populations tend to cluster near the top of lists of countries by per-capita-GDP.

1) So do nations with very high taxes, e.g. the Nordic countries (or most of Western Europe, for that matter).

One of the outliers (Ireland) has probably been knocked down a few places recently, as a result of a worldwide crisis that might well be the result of excessive deregulation.

2) In very small countries, one single insanely rich individual will make a lot of difference to average wealth, even if the ... (read more)

I'll try to give examples:

For computer programming: Given a simulation of a human brain, improve it so that the simulated human is significantly more intelligent.

For quantum mechanics: Design a high-temperature superconductor from scratch.

Are humans better than brute-force at a multi-dimensional version of chess where we can't use our visual cortex?

Yeah... I ain't a mathematician! If 'measure' turns out not to be the correct mathematical concept, then I think that something like it, some kind of 'reality fluid' as Eliezer calls it, will take its place.

I was just 'following the money' to work out how market forces would likely play out with respect to mating credits. It looks at first glance like we would end up with surprisingly similar reproductive payoffs to those in the EEA. Guys, have as many children as you can afford or cuckold off on other people. Girls, seek out guys with abundant resources who can buy reproductive credits but if possible get someone with better genes to do the actual impregnation.

I'm thinking that matter-of-course paternity testing would be a useful addition to blogospheroid's proposal.

I accidentally asked this of wedrifid above, but it was intended for you:

Your confidence in a simulation universe has shaded many of your responses in this thread. You've stated you're unwilling to expend the time to elaborate on your certainty, so instead I'll ask: does your certainty affect decisions in your actual life?

Sorry! My comment was intended for Will_Newsome. Thank you for answering it anyway though, instead of just calling me an idiot =D


When this is not possible, however, the only honest answer is that my decision would be guided by whatever intuitive feeling my brain happens to produce after some common-sense consideration, and unless this intuitive feeling told me that losing the bet is extremely unlikely, I would refuse to bet.

Applying the view of probability as willingness to bet, you can't refuse to reveal your probability assignments. Life continually throws risky choices at us. You can perform risky action X with high-value success Y and high-cost failure Z, or you can refuse to ... (read more)

But the rules are different in this thread. 64 here means that 64 more voters disagree than agree.

Vladimir_Nesov: Tell that to the out-of-context list of all LW comments sorted by rating!
wedrifid: Hang on, we have one of those?

There will be a net positive to society by measures of overall health, wealth and quality of life if the government capped reproduction at a sustainable level and distributed tradeable reproductive credits for that amount to all fertile young women. (~85% confident)

Alicorn: How I evaluate this statement depends very heavily on how the policy is enforced, so I'm presently abstaining; can you elaborate on how people would be prohibited from reproducing without the auspices of one of these credits?
wedrifid: The implications of that on mating payoffs are fascinating.
mattnewport: Historically, global population increase has correlated pretty well with increases in measures of overall health, wealth and quality of life. From what empirical evidence do you derive your theory that zero or negative population growth would be better for these measures?
blogospheroid: The peak oil literature and global climate change have made me seriously reconsider the classic liberal viewpoint towards population control. There is also the reflective consistency of the population-control logic: cultures that restrict their reproduction for altruistic reasons will die out, leaving the earth to selfish replicators who will, if left uncontrolled, take every person's living standards back to square one. Population control will be on the agenda of even a moral singleton. I live in India and have seen China overtake India big time, partly because of a lot of institutional improvement, but also because of the simple fact that they controlled their population. People talk about India's demographic dividend, but we are not even able to educate and provide basic hygiene and health to our children to take advantage of this dividend. I've seen the demographic transition in action everywhere in the world and it seems like a good thing to happen to societies. Setting up an incentive system that rewards altruistic control of reproduction, careful creation of children and sustainability seems to be an overall plus to me. My only concern is that this might start a level-2 status game where more children become a status good and political pressure increases the quotas beyond sustainability.

In the sense that there are multiple equilibria, or that there is no equilibrium for reflection?

Agreed. I would add that the 90% also makes trying to look for alternative paths to a positive singularity truly worth it (whole brain emulation, containment protocols for unfriendly AI, intelligence enhancement, others?)

wedrifid: Worth investigating as a possibility. In some cases I suggest that may lead us to actively work to thwart searches that will create a negative singularity.

I would assume any objectively real morality would be in some way entailed by the physical universe, and therefore in theory discoverable.

I wouldn't say that a thing existed if it could not interact in any causal way with our universe.

I thought there were a lot of libertarians on LW! I'm stunned by how unsuccessful this one was!

Incidentally, do you mean GDP per capita would decrease relative to more interventionist economies, or in absolute terms? Since there is an overall increasing trend in both more and less libertarian economies, the latter would be very surprising to me.

A good example: in spite of the fact that Somalia has effectively no government services (not even private property protections or enforcement of contracts), its economy has generally grown year by year.

At that speed, you have less than 0.3 mm per clock cycle for your signals to propagate. Seems like you'd either need to make ridiculously tiny gadgets, or devote a lot of resources to managing the delays. Seems reasonable enough.
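For reference, the 0.3 mm figure is just the speed of light divided by the clock frequency, which pins the clock at roughly 1 THz (the frequency is my inference from the numbers, not something stated above):

    d = \frac{c}{f} \approx \frac{3 \times 10^{8}\ \text{m/s}}{10^{12}\ \text{Hz}} = 3 \times 10^{-4}\ \text{m} = 0.3\ \text{mm}

And that is the best case, with signals propagating at light speed in vacuum; on-chip signals travel substantially slower, so the usable radius per cycle is smaller still.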


This post has generated so much more controversy than I expected.

I meant exactly exercise and healthy eating! I thought people would assume I meant that. Not gastric bypass surgery, not liposuction, not starvation, not amputating limbs.

DilGreen: Whenever I see someone with one of those badges that says "Lose weight now, ask me how!", I check that they have all their limbs.

For those who want the latter, there are other places on the web full of people whose talent for such things is considerably greater than yours.

I specifically object to your implied argument in the grandparent. I will continue to reject comments that make that mistake regardless of how many times you insult me.

Generally, people would do much better with social skills, but if one person finds one really good IA technique and tells the future FAI team, that might be enough to tip the balance.

And don't forget the direct benefit that good IA techniques can have on the ability to develop social skills!

Help! There is someone reasoning in terms of decision-theoretic significance ruining my fun by telling me that my disagreement with you is meaningless.

Will_Newsome: Ahhh! Ahhhhh! I am extremely reluctant to go into long explanations here. Have you read the TDT manual though? I think it's up at the singinst.org website now, finally. It might dissolve confusions of interpretation, but no promises. Sorry, it's just a really tricky and confusing topic with lots of different intuitions to take into account and I really couldn't do it justice in a few paragraphs here. :(

Care to lay out the evidence? Or is this not the place for that?

I really couldn't; it's such a large burden of proof to justify 99.5% certainty that I would have to be extremely careful in laying out all of my disjunctions and explaining all of my intuitions and listing every smart rationalist who agreed with me, and that's just not something I can do in a blog comment.

How is that different than "I believe that I am a simulation with non-negligible probability"?

If the same computation is being run in so-called 'basement reality' and run on a simulator's computer, you're in both places; it's meaningless to talk about the probability of being in one or the other. But you can talk about the relative number of computations of you that are in 'basement reality' versus on simulators' computers.

This also breaks down when you start reasoning decision theoretically, but most LW people don't do that, so I'm not too wo... (read more)

Perplexed: Why meaningless? It seems I can talk about one copy of me being here, now, and one copy of myself being off in the future in a simulation. Perhaps I do not know which one I am, but I don't think I am saying something meaningless to assert that I (this copy of me that you hear speaking) am the one in basement reality, and hence that no one in any reality knows in advance that I am about to close this sentence with a hash mark# I'm not asking you to bear the burden of proving that non-basement versions are numerous. I'm asking you to justify your claim that when I use the word "I" in this universe, it is meaningless to say that I'm not talking about the fellow saying "I" in a simulation and that he is not talking (in part) about me. Surely "I" can be interpreted to mean the local instance.

How do you know they were decent people? Were they actually tested, or was running a corporation their first test? It's easy to be "decent" when there's nothing really at stake.

Morendil: Good point. What I mean is that I knew them first as employees, and I heard them speak about their employers and how employers should behave, and inferred from that some values of theirs. When they became employers in turn and I saw these values tested, they failed these tests miserably.

What are the fictional metaphysical entities?

Upvoted for disagreement. I definitely disagree on whether writing the book is a rational step toward his goals. I also disagree on whether EY will build an AGI. I doubt that he will build the first one (unless he already has) at something like your 99% level.

My guess would be that she meant that there is no physical event that corresponds to a utile with which humans want to behave risk-neutrally toward, and/or that if you abstracted human values enough to create an abstract such utile, it would be unrecognizable and unFriendly.

I have this memory that monks transcribed Aristotle, Plato and Pythagoras and kept them alive, when most of the world was illiterate.

Right idea, wrong philosophers. Keep in mind that Greek was a forgotten language in Western Europe throughout the Middle Ages. They had translated copies of Aristotle, but not of any other Greek writer.

As for Pythagoras, well he didn't survive. All we know about him comes from second and third hand accounts.

Downvoted for the sheer number of reversals of what used to be my background assumptions about biology without an obvious identification of a single lever that could be used to push on all of those variables.

I am now interested in Wachtershauser, but it takes more than a good LW post to make me think that everything I know is wrong and that it was all disproved by the same person.

Well, he hasn't disproved anything, merely offered an alternative hypothesis. A convincing one, IMHO.

But there is a "single lever". Wachtershauser believes that the... (read more)

If anyone wants to do this again or otherwise use voting weirdly, it is probably a good idea to have everyone put a disclaimer at the beginning of their comment warning that it's part of the experiment, for the sake of the recent comments thread.
(I don't trust any of the scores on this post. At the very least, I expect people to vote up anything at -3 or below that doesn't sound insulting in isolation.)

I've felt for a while that LW has a pretty serious problem of people voting from the recent comments page without considering the context.

But there were others, I think?

For sure. Laxatives. E. coli. But yes, there are others with better side effect profiles too. :)

User:taw talked about one that you take with caffeine. It might have been a stimulant, though.

Take with caffeine? More caffeine. That'll do the trick. :P

That gets sidestepped entirely by refusing to engage in negative sum actions of any kind, negative sum or not, large or small.

The second 'negative sum' seems redundant...

Will_Newsome: Are you claiming that 100% of negative sum interactions are negative sum?! 1 is not a probability! ...just kidding. I meant 'improbable or not'.
wedrifid: Come to think of it, negative sum isn't quite the right phrase. Rational agents do all sorts of things in negative sum contexts. They do, for example, pay protection money to the thieves guild, even though robbing someone is negative sum. It isn't the sum that needs to be negative: the payoff to the other guy must be negative AND the payoff to yourself must be negative.

I still don't see how the two situations are different--for example, if I was talking to someone selling cryonics, wouldn't that be qualitatively the same as Pascal's Mugging?

Nah, the cryonics agent isn't trying to mug you! (Er, hopefully.) He's just giving you two options and letting you calculate.

In this case of Pascal's Mugging both choices lead to negative expected utility as defined by the problem. Hence you look for a third option, and in this case, you find one: ignore all blackmailers; tell them to go ahead and torture all those people, you don'... (read more)

Also, certain supplements work, but I forgot which. So I gotta agree with you.

For example... just about any stimulant you can get your hands on.

If I, as a 22-year old in very good health, were to be frozen right now, I would be sacrificing a large portion of my initial life. If I were 77 in good health, I might be looking into methods to get myself frozen so as to avoid having my body fall apart.

That is, the expected utility of freezing and revival varies widely, distinctly from the wide variation of expectations about the possibility of success or the financial impact.

So my agreement or disagreement would hinge on the demographics of the reference class. (In addition to my beliefs about cryonics AND my beliefs about medicine vs. charity)

I'd interpret the who to mean 'Less Wrong commenters', since that's the reference class we're generally working with here.

[anonymous]: That was the reference class I was referring to, but it really doesn't matter much in this case--after all, who wouldn't want to live through a positive Singularity?

The story was Alicorn's Damage Report.

Neither of these is an explanation.

Naively:

Everyone should agree that 100% certainty of something is infinitely overconfident. Then, everyone should upvote. Knowing this, I'm completely certain that I'll get lots of upvotes, and so absurdly large amounts of certainty seem justified. And as a kicker, everyone said I was overconfident of something that turned out to be correct.

Obviously, there are other possibilities (like me retracting the comment before it can be massively upvoted), so (as usual) 100% certainty really isn't justified. And unforeseen consequences like that are exactly why you don't play with outcome pumps, as the time turner story reminds us.

Consistency of preferences is at least some kind of a prediction.

Eating some is better than none, because certain nutrients in animal fat are helpful for CDC. The point that vegetarianism is overrated for its health benefits is contrarian enough, here and in the wider world, to make a good post.

But yes, losing other vital nutrients would be bad.

And Atkins is silly and unhealthy. Why bring it up?

Desrtopa: Because I thought that might be what you were referring to. My mother lost about 90 pounds on it, and her health is definitely better than it was when she was overweight, but it did have some rather unpleasant side effects (although she generally refuses to acknowledge them, since they're lost in the halo effect.)

If you accept measurements, it seems to me there's no way to save the flat-earth hypothesis except by supposing that our understanding of mathematics is wrong -- which seems rather less likely than measurements being wrong.

The most likely way that flat-earth could be true is that all the information we've been told about measurements (including, for example, the photos of the spherical earth) is a lie.

(Since you were fond of the Knox case discussion, I'll note that I have a similar view of the situation there: the most likely way that Knox and Sollecito ... (read more)

It's not always grammatically feasible or elegant to do so. Also, the singular "you" is much more common than the singular "they," so your readers are more likely to expect it and are prepared for the potential ambiguity.

Change, to the extent the notion makes sense (in the map, not territory) already comes with all of its consequences (and causes).

Given any mapping Worlds->Utilities, you get a partition of Worlds on equivalence classes of equal utility. Presumably, exactly equal utility is not easy to arrange, so these classes will be small in some sense. But whatever the case, these classes have boundaries, so that an arbitrarily small change in one direction or the other (from a point on a boundary) determines higher or lower resulting utility. Just make it so that one atom is at a different location.
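In symbols (just a restatement of the construction above, under the extra assumption that Worlds carries some topology in which "arbitrarily small change" makes sense):

    w \sim w' \iff U(w) = U(w'), \qquad \text{Worlds} = \bigcup_{u} U^{-1}(u)

Each class U^{-1}(u) is small if exact ties in utility are rare, and any world on the boundary between classes has arbitrarily small perturbations, such as one atom moved to a different location, that land in a class of strictly higher or lower utility.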

Do you think there is a difference between what you would care about before you jumped in the box to play with Schrodinger's cat and what you would care about after?

I failed to switch out a grapefruit for a paperclip when I was revising. (Clips seemed more appropriate.)

khafra: Thanks; I'm rather disappointed in myself for not guessing that. I'd imagined you having a lapse of thought while eating a grapefruit while typing it up, or thinking about doing so; but that now seems precluded to a rather ridiculous degree by Occam's Razor.

I confess that I probably exaggerated the certainty. It's more like 55-60%.

I actually used to have a (mostly joking) theory about how Google would accidentally create a sentient internet that would have control over everything and send a robot army to destroy us. Someone gave me a book called "How to survive a Robot Uprising" which described the series of events that would lead to a Terminator-like Robot apocalypse, and Google was basically following it like a checklist.

Then I came here and learned more about nanotechnology and the singularity a... (read more)

magfrump: I do think it's possible and not unlikely that Google is purposefully trying to steer the future in a positive direction, although I think people there are likely to be more skeptical of "singularity" rhetoric than LWers (I know at least three people who have worked at Google, and I have skirted the subject sufficiently to feel pretty strongly that they don't have a hidden agenda; this isn't very strong evidence, but it's the only evidence I have). I would assign up to a 30% probability or so to "Google is planning something which might be described as preparing to implement a positive singularity," but less than a 5% chance that I would describe it that way, due to more detailed definitions of "singularity" and "positive."
NancyLebovitz: I don't entirely trust Google, because they want everyone else's information to be available while being somewhat secretive about their own. There are good commercial reasons for them to do that, but it does show a lack of consistency.