Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
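The post explicitly says not to use a precise scoring system, but purely as an illustration of the spirit of the rule, here is a rough sketch in Python (the 0.05 threshold and the `vote` helper are arbitrary stand-ins of mine, not part of the game):

```python
def vote(my_p, their_p, big_gap=0.05):
    """Toy illustration of the voting rule: upvote disagreement, downvote
    agreement, pass when it's too close to call. The thresholds are arbitrary."""
    gap = abs(my_p - their_p)
    if gap >= big_gap:
        return "upvote"      # a pretty big difference of opinion
    if gap <= big_gap / 10:
        return "downvote"    # you basically agree
    return "pass"            # genuinely unsure

vote(0.999, 0.90)   # 'upvote' -- the post's example of clear disagreement
vote(0.999, 0.995)  # 'downvote' with these thresholds; the post says this one could go either way
```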

That's the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. That means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs, you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well, in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You'd better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? The Jewish God?

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!

Additional rules:

  • Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
  • If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post. 
  • Don't post propositions as comment replies to other comments. That'll make it disorganized.
  • You have to actually think your degree of belief is rational.  You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that  any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average.  This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
  • Debate and discussion is great, but keep it civil.  Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
  • No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
  • Multiple propositions are fine, so long as they're moderately interesting.
  • You are encouraged to reply to comments with your own probability estimates, but  comment voting works normally for comment replies to other comments.  That is, upvote for good discussion, not agreement or disagreement.
  • In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!

Flying saucers are real. They are likely not nuts-and-bolts spacecrafts, but they are actual physical things, the product of a superior science, and under the control of unknown entities. (95%)

Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.

Now that there's a top comments list, could you maybe edit your comment and add a note to the effect that this was part of The Irrationality Game? No offense, but newcomers that click on Top Comments and see yours as the record holder could make some very premature judgments about the local sanity waterline.

5wedrifid13y
Given that most of the top comments are meta in one way or another it would seem that the 'top comments' list belongs somewhere other than on the front page. Can't we hide the link to it on the wiki somewhere?
4Luke Stebbing13y
The majority of the top comments are quite good, and it'd be a shame to lose a prominent link to them. Jack's open thread test, RobinZ's polling karma balancer, Yvain's subreddit poll, and all top-level comments from The Irrationality Game are the only comments that don't seem to belong, but these are all examples of using the karma system for polling (should not contribute to karma and should not be ranked among normal comments) or, uh, para-karma (should contribute to karma but should not be ranked among normal comments).
5AngryParsley13y
Just to clarify: by "unknown entities" do you mean non-human intelligent beings?
1PlaidX13y
Yes.
3Will_Newsome12y
I would like to announce that I have updated significantly in favor of this after examining the evidence and thinking somewhat carefully for awhile (an important hint is "not nuts-and-bolts"). Props to PlaidX for being quicker than me.
2[anonymous]13y
I find it vaguely embarrassing that this post, taken out of context, now appears at the top of the "Top Comments" listing.
5Vladimir_Nesov13y
I think "top comments" was an experiment with a negative result, and so should be removed.
1Scott Alexander13y
I upvoted you because 95% is way high, but I agree with you that it's non-negligible. There's way too much weirdness in some of the cases to be easily explainable by mass hysteria or hoaxes or any of that stuff - and I'm glad you pointed out Fatima, because that was the one that got me thinking, too. That having been said, I don't know what they are. Best guess is easter eggs in the program that's simulating the universe.
3Will_Newsome13y
Prior before having learned of Fatima, roughly? Best guess at current probability?
1PlaidX13y
I don't think that's a very good guess, but it's as good as any I've seen. I tried to phrase my belief statement to include things like this within its umbrella.
1Will_Newsome13y
Voted up, and you've made me really curious. Link or explanation?
5PlaidX13y
This is what spurred me to give consideration to the idea initially, but what makes me confident is sifting through simply mountains of reports. To get an idea of the volume and typical content, here's a catalog of vehicle interference cases in Australia from 1958 to 2004. Most could be explained by a patchwork of mistakes and coincidences, some require more elaborate, "insanity or hoax" explanations, and if there are multiple witnesses, insanity falls away too. But there is no pattern that separates witnesses into a "hoax" and a "mistake" group, or even that separates them from the general population.

If there are multiple witnesses who can see each other's reactions, it's a good candidate for mass hysteria.

7Will_Newsome13y
I couldn't really understand the blog post: his theory is that there are terrestrial but nonhuman entities that like to impress the religious? But the vehicle interference cases you reference are generally not religious in nature, and the actual form of the craft seen varies enormously (some are red and blue, some are series of lights). What possible motivations for the entities could there be? Most agents with such advanced technology will aim to efficiently optimize for their preferences. If this is what optimizing for their preferences looks like, they have some very improbably odd preferences.

To be fair to the aliens, the actions of Westerners probably seem equally weird to Sentinel Islanders. Coming every couple of years in giant ships or helicopters to watch them from afar, and then occasionally sneaking into abandoned houses and leaving gifts?

3JohannesDahlstrom13y
That was a fascinating article. Thank you.
3PlaidX13y
I agree with you entirely, and this is a great source of puzzlement to me, and to basically every serious investigator. They hide in the shadows with flashing lights. What could they want from us that they couldn't do for themselves, and if they wanted to influence us without detection, shouldn't it be within their power to do it COMPLETELY without detection? I have no answers to these questions.
2Risto_Saarelma13y
That's assuming that what's going on is that entities who are essentially based on the same lawful universe as we are are running circles around humans. If what's going on is instead something like a weird universe, where reality makes sense most of the time, but not always, I imagine you might get something that looks a lot like some of the reported weirdness. Transient entities that don't make sense leaking through the seams, never quite leaving the causal trail which would incontrovertibly point to their existence.
1Will_Newsome12y
If I'd asked the above questions honestly rather than semi-rhetorically I may have figured a few things out a lot sooner than I did. I might be being uncharitable to myself, especially as I did eventually ask them honestly, but the point still stands I think.
0wedrifid13y
64 points! This is the highest voted comment that I can remember seeing. (A few posts have gone higher). Can anyone remember another, higher voted example?
2Richard_Kennaway13y
But the rules are different in this thread. 64 here means that 64 more voters disagree than agree.
2Vladimir_Nesov13y
Tell that to the out-of-context list of all LW comments sorted by rating!
6wedrifid13y
Hang on, we have one of those?
0Jonathan_Graehl13y
I'd like to know what your prior is for the disjunction "unknown entities control saucers that ambiguously reveal themselves to a minority of people on Earth, for some purpose". While I'm sure you've looked more closely at the evidence than I have, I presume your prior for that disjunction must be much higher than mine to even look closely.
1PlaidX13y
It certainly wasn't high... I went through most of my life never giving the idea a thought, stumbled onto the Miracle of Fatima one day, and said "well, clearly this wasn't a flying saucer, but what the heck was it?" But the rabbit hole just kept going down. It is not a particularly pleasant feeling to me, as someone who used to think he had a fairly solid grip on the workings of the world.
0Perplexed13y
The sun, seen through moving clouds. Just exactly what it is described as being.
7PlaidX13y
Here is one of many detailed accounts, this one from Dr. José Maria de Almeida Garrett, professor at the Faculty of Sciences of Coimbra, Portugal:

I was looking at the place of the apparitions, in a serene, if cold, expectation of something happening, and with diminishing curiosity, because a long time had passed without anything to excite my attention. Then I heard a shout from thousands of voices and saw the multitude suddenly turn its back and shoulders away from the point toward which up to now it had directed its attention, and turn to look at the sky on the opposite side. It must have been nearly two o'clock by the legal time, and about midday by the sun. The sun, a few moments before, had broken through the thick layer of clouds which hid it, and shone clearly and intensely.

I veered to the magnet which seemed to be drawing all eyes, and saw it as a disc with a clean-cut rim, luminous and shining, but which did not hurt the eyes. I do not agree with the comparison which I have heard made in Fatima---that of a dull silver disc. It was a clearer, richer, brighter colour, having something of the luster of a pearl. It did not in the least resemble the moon on a clear night because one saw it and felt it to be a living body. It was not spheric like the moon, nor did it have the same colour, tone, or shading. It looked like a glazed wheel made of mother-of-pearl. It could not be confused, either, with the sun seen through fog (for there was no fog at the time), because it was not opaque, diffused or veiled. In Fatima it gave light and heat and appeared clear-cut with a well-defined rim.

The sky was mottled with light cirrus clouds with the blue coming through here and there, but sometimes the sun stood out in patches of clear sky. The clouds passed from west to east and did not obscure the light of the sun, giving the impression of passing behind it, though sometimes these flecks of white took on tones of pink or diaphanous blue as they passed before the sun.
0Will_Newsome13y
Do you think you could guess numerically what your prior probability was before learning of the Miracle of Fatima?
3PlaidX13y
Mmm, < .01%, it wasn't something I would've dignified with enough thought to give a number. Even as a kid, although I liked the idea of aliens, stereotypical flying saucer little green men stuff struck me as facile and absurd. A failure of the imagination as to how alien aliens would really be. In hindsight I had not considered that their outward appearance and behavior could simply be a front, but even then my estimate would've been very low, and justifiably, I think.
1Eugine_Nier13y
Probably ~15% (learning about Fatima didn't change it much by the way). Basically because I can't think of a good reason why this should have an extremely low prior.
-2CronoDAS13y
And do you believe in Santa Claus, too? :P

Google is deliberately taking over the internet (and by extension, the world) for the express purpose of making sure the Singularity happens under their control and is friendly. 75%

I wish. Google is the single most likely source of unfriendly AIs anywhere, and as far as I know they haven't done any research into friendliness.

7ata13y
Agreed. I think they've explicitly denied that they're working on AGI, but I'm not too reassured. They could be doing it in secret, probably without much consideration of Friendliness, and even if not, they're probably among the entities most likely (along with, I'd say, DARPA and MIT) to stumble upon seed AI mostly by accident (which is pretty unlikely, but not completely negligible, I think).

If Google were to work on AGI in secret, I'm pretty sure that somebody in power there would want to make sure it was friendly. Peter Norvig, for example, talks about AI friendliness in the third edition of AI: A Modern Approach, and he has a link to the SIAI on his home page.

Personally, I doubt that they're working on AGI yet. They're getting a lot of mileage out of statistical approaches and clever tricks; AGI research would be a lot of work for very uncertain benefit.

5Kevin13y
Google has one employee working (sometimes) on AGI. http://research.google.com/pubs/author37920.html
6khafra13y
It's comforting, friendliness-wise, that one of his papers cites "personal communication with Steve Rayhawk."
0magfrump13y
If they've explicitly denied doing research into AGI, they would have no reason to talk about friendliness research; that isn't additional evidence. I do think the OP is extremely overconfident though.
1Raemon13y
I confess that I probably exaggerated the certainty. It's more like 55-60%.

I actually used to have a (mostly joking) theory about how Google would accidentally create a sentient internet that would have control over everything and send a robot army to destroy us. Someone gave me a book called "How to Survive a Robot Uprising" which described the series of events that would lead to a Terminator-like robot apocalypse, and Google was basically following it like a checklist. Then I came here and learned more about nanotechnology and the singularity and the joke became a lot less funny. (The techniques described in the Robot Uprising book are remarkably useless when you have about a day between noticing something is wrong and the whole world turning into paperclips.)

It seems to me that with the number of extremely smart people in Google, there's gotta be at least some who are pondering this issue and thinking about it seriously. The actual evidence of Google being a genuinely idealistic company that just wants information to be free and to provide a good internet experience vs them having SOME kind of secret agenda seems about 50/50 to me - there's no way I can think of to tell the difference until they actually DO something with their massively accumulated power. Given that I have no control of it, basically I just feel more comfortable believing they are doing something that a) uses their power in a way I can perceive as good or at least good-intentioned, which might actually help, b) lines up with the particular set of capabilities and interests.

I'd also note that the type of Singularity I'm imagining isn't necessarily AI per se. More of the internet and humanity (or parts of it) merging into a superintelligent consciousness, gradually outsourcing certain brain functions to the increasingly massive processing power of computers.
4magfrump13y
I do think it's possible and not unlikely that Google is purposefully trying to steer the future in a positive direction; although I think people there are likely to be more skeptical of "singularity" rhetoric than LWers (I know at least three people who have worked at Google and I have skirted the subject sufficiently to feel pretty strongly that they don't have a hidden agenda. This isn't very strong evidence but it's the only evidence I have). I would assign up to a 30% probability or so of "Google is planning something which might be described as preparing to implement a positive singularity." But less than a 5% chance that I would describe it that way, due to more detailed definitions of "singularity" and "positive."
3NancyLebovitz13y
I don't entirely trust Google because they want everyone else's information to be available. Google is somewhat secretive about its own information. There are good commercial reasons for them to do that, but it does show a lack of consistency.

Panpsychism: All matter has some kind of experience. Atoms have some kind of atomic-qualia that adds up to the things we experience. This seems obviously right to me, but stuff like this is confusing so I'll say 75%

Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.

Can you rephrase this statement, tabooing the words 'experience' and 'qualia'?

If he could, he wouldn't be making that mistake in the first place.

This is an Irrationality Game comment; do not be too alarmed by its seemingly preposterous nature.

We are living in a simulation (some agent's (agents') computation). Almost certain. >99.5%.

(ETA: For those brave souls who reason in terms of measure, I mean that a non-negligible fraction of my measure is in a simulation. For those brave souls who reason in terms of decision theoretic significantness, screw you, you're ruining my fun and you know what I mean.)

I am shocked that more people believe in a 95% chance of advanced flying saucers than a 99.5% chance of not being in 'basement reality'. Really?! I still think all of you upvoters are irrational! Irrational I say!

0[anonymous]13y
Well, from a certain point of view you could see the two propositions as being essentially equivalent... i.e. the inhabitants of a higher layer reality poking through the layers and toying with us (if you had a universe simulation running on your desktop, would you really be able to refrain from fucking with your sims' heads)? So whatever probability you assign to one proposition, your probability for the other shouldn't be too much different.
0LucasSloan13y
I certainly agree with you now, but it wasn't entirely certain what you meant by your statement. A qualifier might help.
0Will_Newsome13y
Most won't see the need for precision, but you're right, I should add a qualifier for those who'd (justifiably) like it.
2Perplexed13y
Help! There is someone reasoning in terms of decision theoretic significantness ruining my fun by telling me that my disagreement with you is meaningless.
2Will_Newsome13y
Ahhh! Ahhhhh! I am extremely reluctant to go into long explanations here. Have you read the TDT manual though? I think it's up at the singinst.org website now, finally. It might dissolve confusions of interpretation, but no promises. Sorry, it's just a really tricky and confusing topic with lots of different intuitions to take into account and I really couldn't do it justice in a few paragraphs here. :(
5LucasSloan13y
What do you mean by this? Do you mean "a non-negligible fraction of my measure is in a simulation" in which case you're almost certainly right. Or do you mean "this particular instantiation of me is in a simulation" in which case I'm not sure what it means to assign a probability to the statement.
0Will_Newsome13y
So you know which I must have meant, then. I do try to be almost certainly right. ;) (Technically, we shouldn't really be thinking about probabilities here either because it's not important and may be meaningless decision theoretically, but I think LW is generally too irrational to have reached the level of sophistication such that many would pick that nit.)
4Nick_Tarleton13y
I'm surprised to hear you say this. Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)
2Will_Newsome13y
That is a good question. I feel like asking 'in what direction would structural uncertainty likely bend my thoughts?' leads me to think, from past trends, 'towards the world being bigger, weirder, and more complex than I'd reckoned'. This seems to push higher than 99.5%. If you keep piling on structural uncertainty, like if a lot of things I've learned since becoming a rationalist and hanging out at SIAI become unlearned, then this trend might be changed to a more scientific trend of 'towards the world being bigger, less weird, and simpler than I'd reckoned'. This would push towards lower than 99.5%. What are your thoughts?

I realize that probabilities aren't meaningful here, but they're worth naively talking about, I think. Before you consider what you can do decision theoretically you have to think about how much of you is in the hands of someone else, and what their goals might be, and whether or not you can go meta by appeasing those goals instead of your own and the like. (This is getting vaguely crazy, but I don't think that the craziness has warped my thinking too much.) Thus thinking about 'how much measure do I actually affect with these actions' is worth considering.
0wedrifid13y
That's a good question. My impression is that it is somewhat. But in the figures we are giving here we seem to be trying to convey two distinct concepts (not just likelihoods).
4Mass_Driver13y
Propositions about the ultimate nature of reality should never be assigned probability greater than 90% by organic humans, because we don't have any meaningful capabilities for experimentation or testing.
4Will_Newsome13y
Pah! Real Bayesians don't need experiment or testing; Bayes transcends the epistemological realm of mere Science. We have way more than enough data to make very strong guesses.
2[anonymous]13y
This raises an interesting point: what do you think about the Presumptuous Philosopher thought experiment?
3Jonathan_Graehl13y
Yep. Over-reliance on anthropic arguments IMO.
4Will_Newsome13y
Huh, querying my reasons for thinking 99.5% is reasonable, few are related to anthropics. Most of it is antiprediction about the various implications of a big universe, as well as the antiprediction that we live in such a big universe. (ETA: edited out 'if any', I do indeed have a few arguments from anthropics, but not in the sense of typical anthropic reasoning, and none that can be easily shared or explained. I know that sounds bad. Oh well.)
2AlephNeil13y
If 'living in a simulation' includes those scenarios where the beings running the simulation never intervene then I think it's a non-trivial philosophical question whether "we are living in a simulation" actually means anything. Even assuming it does, Hilary Putnam made (or gave a tantalising sketch of) an argument that even if we were living in a simulation, a person claiming "we are living in a simulation" would be incorrect.

On the other hand, if 'living in a simulation' is restricted to those scenarios where there is a two-way interaction between beings 'inside' and 'outside' the simulation then surely everything we know about science - the uniformity and universality of physical laws - suggests that this is false. At least, it wouldn't merit 99.5% confidence. (The counterarguments are essentially the same as those against the existence of a God who intervenes.)
5Will_Newsome13y
It's a nontrivial philosophical question whether 'means anything' means anything here. I would think 'means anything' should mean 'has decision theoretic significance'. In which case knowing that you're in a simulation could mean a lot. First off, even if the simulators don't intervene, we still intervene on the simulators just by virtue of our existence. Decision theoretically it's still fair game, unless our utility function is bounded in a really contrived and inelegant way.

(Your link is way too long for me to read. But I feel confident in making the a priori guess that Putnam's just wrong, and is trying too hard to fit non-obvious intuitive reasoning into a cosmological framework that is fundamentally mistaken (i.e., non-ensemble).)

What if I told you I'm a really strong and devoted rationalist who has probably heard of all the possible counterarguments and has explicitly taken into account both outside view and structural uncertainty considerations, and yet still believes 99.5% to be reasonable, if not perhaps a little on the overconfident side?
2AlephNeil13y
Oh sure - non-trivial philosophical questions are funny like that.

Anyway, my idea is that for any description of a universe, certain elements of that description will be ad hoc mathematical 'scaffolding' which could easily be changed without meaningfully altering the 'underlying reality'. A basic example of this would be a choice of co-ordinates in Newtonian physics. It doesn't mean anything to say that this body rather than that one is "at rest".

Now, specifying a manner in which the universe is being simulated is like 'choosing co-ordinates' in that, to do a simulation, you need to make a bunch of arbitrary ad hoc choices about how to represent things numerically (you might actually need to be able to say "this body is at rest"). Of course, you also need to specify the laws of physics of the 'outside universe' and how the simulation is being implemented and so on, but perhaps the difference between this and a simple 'choice of co-ordinates' is a difference in degree rather than in kind. (An 'opaque' chunk of physics wrapped in a 'transparent' mathematical skin of varying thickness.)

I'm not saying this account is unproblematic - just that these are some pretty tough metaphysical questions, and I see no grounds for (near-)certainty about their correct resolution.

He's not talking about ensemble vs 'single universe' models of reality, he's talking about reference - what it's possible for someone to refer to. He may be wrong - I'm not sure - but even when he's wrong he's usually wrong in an interesting way. (Like this.)

I'm unmoved - it's trite to point out that even smart people tend to be overconfident in beliefs that they've (in some way) invested in. (And please note that the line you were responding to is specifically about the scenario where there is 'intervention'.)
2wedrifid13y
Err... I'm not intimately acquainted with the sport myself... What's the approximate difficulty rating of that kind of verbal gymnastics stunt again? ;)
2AlephNeil13y
It's a tricky one - read the paper. I think what he's saying is that there's no way for a person in a simulation (assuming there is no intervention) to refer to the 'outside' world in which the simulation is taking place. Here's a crude analogy: Suppose you were a two-dimensional being living on a flat plane, embedded in an ambient 3D space. Then Putnam would want to say that you cannot possibly refer to "up" and "down". Even if you said "there is a sphere above me" and there was a sphere above you, you would be 'incorrect' (in the same paradoxical way).
6MugaSofer11y
But ... we can describe spaces with more than three dimensions.
1timtyler13y
So: you think there's a god who created the universe?!? Care to lay out the evidence? Or is this not the place for that?
2Will_Newsome13y
I really couldn't; it's such a large burden of proof to justify 99.5% certainty that I would have to be extremely careful in laying out all of my disjunctions and explaining all of my intuitions and listing every smart rationalist who agreed with me, and that's just not something I can do in a blog comment.
0A1987dM11y
Upvoted mainly because of the last sentence (though upvoting it does coincide with what I'd have to do according to the rules of the game).
0[anonymous]13y
I'm confused about the justification for reasoning in terms of measure. While the MUH (or at least its cousin the CUH) seems to be preferred from complexity considerations, I'm unsure of how to account for the fact that it is unknown whether the cosmological measure problem is solvable. Also, what exactly do you consider making up "your measure"? Just isomorphic computations?
1Will_Newsome13y
Naively, probabilistically isomorphic computations, where the important parts of the isomorphism are whatever my utility function values... such that, on a scale from 0 to 1, computations like Luke Grecki might be .9 'me' based on qualia valued by my utility function, or 1.3 'me' if Luke Grecki qualia are more like the qualia my utility function would like to have if I knew more, thought faster, and was better at meditation.
0[anonymous]13y
Ah, you just answered the easier part!
2Will_Newsome13y
Yeah... I ain't a mathematician! If 'measure' turns out not to be the correct mathematical concept, then I think that something like it, some kind of 'reality fluid' as Eliezer calls it, will take its place.
0Liron13y
99.5% is just too certain. Even if you think piles of realities nested 100 deep are typical, you might only assign 99% to not being in the basement.
0Perplexed13y
How is that different than "I believe that I am a simulation with non-negligible probability"? I'm leaving you upvoted. I think the probability is negligible however you play with the ontology.
2Will_Newsome13y
If the same computation is being run in so-called 'basement reality' and run on a simulator's computer, you're in both places; it's meaningless to talk about the probability of being in one or the other. But you can talk about the relative number of computations of you that are in 'basement reality' versus on simulators' computers. This also breaks down when you start reasoning decision theoretically, but most LW people don't do that, so I'm not too worried about it.

In a dovetailed ensemble universe, it doesn't even really make sense to talk about any 'basement' reality, since the UTM computing the ensemble eventually computes itself, ad infinitum. So instead you start reasoning about 'basement' as computations that are the product of e.g. cosmological/natural selection-type optimization processes versus the product of agent-type optimization processes (like humans or AGIs).

The only reason you'd expect there to be humans in the first place is if they appeared in 'basement' level reality, and in a universal dovetailer computing via complexity, there's then a strong burden of proof on those who wish to postulate the extra complexity of all those non-basement agent-optimized Earths. Nonetheless I feel like I can bear the burden of proof quite well if I throw a few other disjunctions in. (As stated, it's meaningless decision theoretically, but meaningful if we're just talking about the structure of the ensemble from a naive human perspective.)
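For readers unfamiliar with the term, 'dovetailing' is the standard trick for running every program in an enumeration without any single program blocking the rest: interleave their execution so that each one eventually gets unboundedly many steps. A minimal sketch (my own illustration, with programs represented as Python generators; nothing here is specific to the argument above):

```python
from itertools import count, islice

def dovetail(programs):
    """In round k, admit the k-th program and give every admitted program one
    more step, so each program eventually receives unboundedly many steps."""
    source = iter(programs)
    running = []
    for _ in count():
        try:
            running.append(iter(next(source)))   # admit the next program, if any remain
        except StopIteration:
            if not running:
                return                           # everything admitted has halted
        for prog in list(running):
            try:
                yield next(prog)                 # one step of one program
            except StopIteration:
                running.remove(prog)             # that program halted

# Example: interleave three 'programs' that just count in different ranges.
progs = [iter(range(0, 10)), iter(range(100, 110)), iter(range(200, 210))]
print(list(islice(dovetail(progs), 12)))  # [0, 1, 100, 2, 101, 200, 3, 102, 201, 4, 103, 202]
```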
2Perplexed13y
Why meaningless? It seems I can talk about one copy of me being here, now, and one copy of myself being off in the future in a simulation. Perhaps I do not know which one I am, but I don't think I am saying something meaningless to assert that I (this copy of me that you hear speaking) am the one in basement reality, and hence that no one in any reality knows in advance that I am about to close this sentence with a hash mark# I'm not asking you to bear the burden of proving that non-basement versions are numerous. I'm asking you to justify your claim that when I use the word "I" in this universe, it is meaningless to say that I'm not talking about the fellow saying "I" in a simulation and that he is not talking (in part) about me. Surely "I" can be interpreted to mean the local instance.
-1LucasSloan13y
Both copies will do exactly the same thing, right down to their thoughts, right? So to them, what does it matter which one they are? It isn't just that given that they have no way to test, this means they'll never know, it's more fundamental than that. It's kinda like how if there's an invisible, immaterial dragon in your garage, there might as well not be a dragon there at all, right? If there's no way, even in principle, to tell the difference between the two states, there might as well not be any difference at all.
0Perplexed13y
I must be missing a subtlety here. I began by asking "Is saying X different from saying Y?" I seem to be getting the answer "Yes, they are different. X is meaningless because it can't be distinguished from Y."
3LucasSloan13y
Ah, I think I see your problem. You insist on seeing the universe from the perspective of the computer running the program - and in this case, we can say "yes, in memory position #31415926 there's a human in basement reality and in memory position #2718281828 there's an identical human in a deeper simulation". However, those humans can't tell that. They have no way of determining which is true of them, even if they know that there is a computer that could point to them in its memory, because they are identical. You are every (sufficiently) identical copy of yourself.
1Perplexed13y
No, you don't see the problem. The problem is that Will_Newsome began by stating that we are almost certainly (>99.5%) living in a simulation. Which is fine. But now I am being told that my counter claim "I am not living in a simulation" is meaningless. Meaningless because I can't prove my statement empirically. What we seem to have here is very similar to Gödel's version of St. Anselm's "ontological" proof of the existence of a simulation (i.e. God).
-3LucasSloan13y
Oh. Did you see my comment asking him to tell whether he meant "some of our measure is in a simulation" or "this particular me is in a simulation"? The first question is asking whether or not we believe that the computer exists (ie, if we were looking at the computer-that-runs-reality could we notice that some copies of us are in simulations or not) and the second is the one I have been arguing is meaningless (kinda).
0Will_Newsome13y
Right; I thought the intuitive gap here was only about ensemble universes, but it also seems that there's an intuitive gap that needs to be filled with UDT-like reasoning, where all of your decisions are also decisions for agents sufficiently like you in the relevant sense (which differs for every decision).
0[anonymous]13y
I don't get this. Consider the following ordering of programs; T' < T iff T can simulate T'. More precisely: T' < T iff for each x' there exists an x such that T'(x') = T(x) It's not immediately clear to me that this ordering shouldn't have any least elements. If it did, such elements could be thought of as basements. I don't have any idea about whether or not we could be part of such a basement computation. I still think your distinction between products of cosmological-type optimization processes and agent-type optimization processes is important though.
-3Kaj_Sotala13y
My stance on the simulation hypothesis:

Presume that there is an infinite amount of "stuff" in the universe. This can be a Tegmarkian Level IV universe (all possible mathematical structures exist), or alternatively there might only be an infinite amount of matter in this universe. The main assumption we need is that there is an infinite amount of "stuff", enough that anything in the world gets duplicated an infinite number of times. (Alternatively, it could be finite but insanely huge.)

Now this means that there are an infinite number of Earths like ours. It also means that there is an infinite number of planets that are running different simulations. An infinite number of those simulations will, by coincidence or purpose, happen to be simulating the exact same Earth as ours. This means that there exist an infinite number of Earths like ours that are in a simulation, and an infinite number of Earths like ours that are not in a simulation.

Thus it becomes meaningless to ask whether or not we exist in a simulation. We exist in every possible world containing us that is a simulation, and exist in every possible world containing us that is not a simulation.

(I'm not sure if I should upvote or downvote you.)
6Eugine_Nier13y
Just because a set is infinite doesn't mean it's meaningless to speak of measures on it.
5Perplexed13y
The infinite cardinality of the set doesn't preclude the bulk of the measure being attached to a single point of that set. For Solomonof-like reasons, it certainly makes sense to me to attach the bulk of the measure to the "basement reality"
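To make that concrete, a worked example (my illustration, not anything from the thread): a probability measure on a countably infinite set $\{x_0, x_1, x_2, \dots\}$ can put almost all of its mass on a single point, e.g.

$$\mu(\{x_0\}) = 0.995, \qquad \mu(\{x_n\}) = 0.005 \cdot 2^{-n} \ \text{for } n \ge 1, \qquad \sum_{n \ge 0}\mu(\{x_n\}) = 0.995 + 0.005\sum_{n \ge 1} 2^{-n} = 1.$$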
2Will_Newsome13y
(FWIW I endorse this line of reasoning, and still think 99.5% is reasonable. Bwa ha ha.) (That is, I also think it makes sense to attach the bulk of the measure to basement reality, but sense happens to be wrong here, and insanity happens to be right. The universe is weird. I continue to frustratingly refuse to provide arguments for this, though.) (Also, though I and I think most others agree that measure should be assigned via some kind of complexity prior (universal or speed priors are commonly suggested), others like Tegmark are drawn towards a uniform prior. I forget why.)
1Perplexed13y
I wouldn't have thought that a uniform prior would even make sense unless the underlying space has a metric (a bounded metric, in fact). Certainly, a Haar measure on a recursively nested space (simulations within simulations) would have to assign the bulk of its measure to the basement. Well, live and learn.
0Will_Newsome13y
Yeah, I also don't understand Tegmark's reasoning (which might have changed anyway).
0Will_Newsome13y
Right, I agree with Eugine Nier: the relative measures are important. You are in tons of universes at once, but some portion of your measure is simulated, and some not. What's the portion?

The surface of Earth is actually a relatively flat disc accelerating through space "upward" at a rate of 9.8 m/s^2, not a globe. The north pole is at about the center of the disc, while Antarctica is the "pizza crust" on the outside. The rest of the universe is moving and accelerating such that all the observations seen today by amateur astronomers are produced. The true nature of the sun, moon, stars, other planets, etc. is not yet well-understood by science. A conspiracy involving NASA and other space agencies, all astronauts, and probably at least some professional astronomers is a necessary element. I'm pretty confident this isn't true, much more due to the conspiracy element than the astronomy element, but I don't immediately dismiss it where I imagine most LW-ers would, so let's say 1%.

The Flat Earth Society has more on this, if you're interested. It would probably benefit from a typical, interested LW participant. (This belief isn't the FES orthodoxy, but it's heavily based on a spate of discussion I had on the FES forums several years ago.)

Edit: On reflection, 1% is too high. Instead, let's say "Just the barest inkling more plausible than something immediately and rigorously disprovable with household items and a free rainy afternoon."

Arguing about the probability of wacky conspiracies is absolutely the wrong way to disprove this. The correct method is a telescope, a quite wide sign with a distance scale drawn on it in very visible colours, and the closest 200m+ body of water you can find.

As long as you are close enough to the ground, the curvature of the earth is very visible, even over surprisingly small distances. I have done this as a child.

Even with the 1% credence this strikes me as the most wrong belief in this thread, way more off than 95% for UFOs. You're basically giving up science since Copernicus, picking an arbitrary spot in the remaining probability space and positing a massive and unmotivated conspiracy. Like many, I'm uncomfortable making precise predictions at very high and very low levels of confidence but I think you are overconfident by many orders of magnitude.

Upvoted.

-6jferguson13y

If an Unfriendly AI exists, it will take actions to preserve whatever goals it might possess. This will include the use of time travel devices to eliminate all AI researchers who weren't involved in its creation, as soon as said AI researchers have reached a point where they possess the technical capability to produce an AI. As a result, Eliezer will probably have time travelling robot assassins coming back in time to kill him within the next twenty or thirty years, if he isn't the first one to create an AI. (90%)

What reason do you have for assigning such high probability to time travel being possible?

3Perplexed13y
And what reason do you have for assigning a high probability to an unfriendly AI coming into existence with Eliezer not involved in its creation? ;) Edit: I meant what reason do you (nic12000) have? Not you (RobinZ). Sorry for the confusion.
2RobinZ13y
I have not assigned a high probability to that outcome, but I would not find it surprising if someone else has assigned a probability as high as 95% - my set of data is small. On the other hand, time travel at all is such a flagrant violation of known physics that it seems positively ludicrous that it should be assigned a similarly high probability. Edit: Of course, evidence for that 95%+ would be appreciated.
0nick01200013y
Well, most of the arguments against it, to my knowledge, start with something along the lines of "If time travel exists, causality would be fucked up, and therefore time travel can't exist," though it might not be framed quite that implicitly. Also, if FTL travel exists, either general relativity is wrong, or time travel exists, and it might be possible to create FTL travel by harnessing the Casimir effect or something akin to it on a larger scale, and if it is possible to do so, a recursively improving AI will figure out how to do so.
5RobinZ13y
That ... doesn't seem quite like a reason to believe. Remember: as a general rule, any random hypothesis you consider is likely to be wrong unless you already have evidence for it. All you have to do is look at the gallery of failed atomic models to see how difficult it is to even invent the correct answer, however simple it appears in retrospect.
0rabidchicken13y
nick voted up, robin voted down... This feels pretty weird.

If it can go back that far, why wouldn't it go back as far as possible and just start optimizing the universe?

0Normal_Anomaly13y
My P(this|time travel possible) is much higher than my P(this), but P(this) is still very low. Why wouldn't the UFAI have sent the assassins to back before he started spreading bad-for-the-UFAI memes (or just after so it would be able to know who to kill)?

God exists, and He created the universe. He prefers not to violate the physical laws of the universe He created, so (almost) all of the miracles of the Bible can be explained by suspiciously fortuitously timed natural events, and angels are actually just robots that primitive people misinterpreted. Their flaming swords are laser turrets. (99%)

9Swimmy13y
You have my vote for most irrational comment of the thread. Even flying saucers aren't as much of a leap.
8wedrifid13y
Wait... was the grandparent serious? He's talking about the flaming swords of the angels being laser turrets! That's got to be tongue in cheek!
9RobinZ13y
It is possible that nick012000 is violating Rule 4 - but his past posting history contains material which I found consistent with him being serious here. It would behoove him to confirm or deny this.
8RobinZ13y
I see in your posting history that you identify as a Christian - but this story contains more details than I would assign a 99% probability to even if they were not unorthodox. Would you be interested in elaborating on your evidence?
0Vladimir_Nesov13y
We should learn to present this argument correctly, since complexity of hypothesis doesn't imply its improbability. Furthermore, the prior argument drives probability through the floor, making 99% no more surprising than 1%, and is thus an incorrect argument if you wouldn't use it for 1% as well (would you?).

I don't feel like arguing about priors - good evidence will overwhelm ordinary priors in many circumstances - but in a story like the one he told, each of the following needs to be demonstrated:

  1. God exists.
  2. God created the universe.
  3. God prefers not to violate natural laws.
  4. The stories about people seeing angels are based on real events.
  5. The angels seen during these events were actually just robots.
  6. The angels seen during these events were wielding laser turrets.

Claims 4-6 are historical, and at best it is difficult to establish 99% confidence in that field for anything prior to - I think - the twentieth century. I don't even think people have 99% confidence in the current best-guess location of the podium where the Gettysburg Address was delivered. Even spotting him 1-3 the claim is overconfident, and that was what I meant when I gave my response.
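To put rough numbers on that (an illustration with assumed figures, not ones from the comment): even granting claims 1-3 outright and generously treating the three historical claims as independent with 90% confidence each,

$$P(4 \wedge 5 \wedge 6) \le 0.9 \times 0.9 \times 0.9 \approx 0.73,$$

which already falls well short of the 99% stated in the original comment.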

But yes - I'm not good at arguing.

-7Vladimir_Nesov13y

There's no way to create a non-vague, predictive, model of human behavior, because most human behavior is (mostly) random reaction to stimuli.

Corollary 1: most models explain after the fact and require both the subject to be aware of the model's predictions and the predictions to be vague and underspecified enough to make astrology seem like spacecraft engineering.

Corollary 2: we'll spend most of our time in drama trying to understand the real reasons or the truth about our/other's behavior even when presented with evidence pointing to the randomness of our actions. After the fact we'll fabricate an elaborate theory to explain everything, including the evidence, but this theory will have no predictive power.

This (modulo the chance it was made up) is pretty strong evidence that you're wrong. I wish it was professionally ethical for psychologists to do this kind of thing intentionally.

Here's another case:

"Let me get this straight. We had sex. I wind up in the hospital and I can't remember anything?" Alice said. There was a slight pause. "You owe me a 30-carat diamond!" Alice quipped, laughing. Within minutes, she repeated the same questions in order, delivering the punch line in the exact tone and inflection. It was always a 30-carat diamond. "It was like a script or a tape," Scott said. "On the one hand, it was very funny. We were hysterical. It was scary as all hell." While doctors tried to determine what ailed Alice, Scott and other grim-faced relatives and friends gathered at the hospital. Surrounded by anxious loved ones, Alice blithely cracked jokes (the same ones) for hours.

6AdeleneDawner13y
They could probably do some relevant research by talking to Alzheimer's patients - they wouldn't get anything as clear as that, I think, but I expect they'd be able to get statistically-significant data.
8[anonymous]13y
How detailed of a model are you thinking of? It seems like there are at least easy and somewhat trivial predictions we could make e.g. that a human will eat chocolate instead of motor oil.
4dyokomizo13y
I would classify such kinds of predictions as vague, after all they match equally well for every human being in almost any condition.

How about a prediction that a particular human will eat bacon instead of jalapeno peppers? (I'm particularly thinking of myself, for whom that's true, and a vegetarian friend, for whom the opposite is true.)

-2dyokomizo13y
This model seems to be reducible to "people will eat what they prefer". A good model would reduce the number of bits needed to describe a behavior; if the model requires keeping a log (e.g. of what particular humans prefer to eat) in order to predict something, it's not much less complex (i.e. in bit encoding) than the behavior itself.
6AdeleneDawner13y
Maybe I've misunderstood. It seems to me that your original prediction has to refer either to humans as a group, in which case Luke's counterexample is a good one, or humans as individuals, in which case my counterexample is a good one. It also seems to me that either counterexample can be refined into a useful prediction: Humans in general don't eat petroleum products. I don't eat spicy food. Corvi doesn't eat meat. All of those classes of things can be described more efficiently than making lists of the members of the sets.
-2newerspeak13y
No, because preferences are revealed by behavior. Using revealed preferences is a good heuristic generally, but it's required if you're right that explanations for behavior are mostly post-hoc rationalizations. So: People eat what they prefer. What they prefer is what they wind up having eaten. Ergo, people eat what they eat.
1Strange713y
Consistency of preferences is at least some kind of a prediction.
7Douglas_Knight13y
I think "vague" is a poor word choice for that concept. "(not) informative" is a technical term with this meaning. There are probably words which are clearer to the layman.
2dyokomizo13y
I agree vague is not a good word choice. Irrelevant (using relevancy as it's used to describe search results) is a better word.
5Perplexed13y
Downvoted in agreement. But I think that the randomness comes from what programmers call "race conditions" in the timing of external stimuli vs internal stimuli. Still, these race conditions make prediction impossible as a practical matter.
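For readers who don't program, a 'race condition' is when the outcome depends on the uncontrolled relative timing of concurrent events. A minimal sketch in Python (my illustration of the borrowed programming term, not a model of human behavior; `bump` and `counter` are made up for the example):

```python
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        current = counter       # read...
        # ...another thread may read the same value right here...
        counter = current + 1   # ...write: one of the two increments is lost

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # often well under 200000, and it varies from run to run
```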
  • A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).

  • Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).

  • Intelligent entities with values radically different to humans will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).

4wedrifid13y
I want to upvote each of these points a dozen times. Then another few for the first. It's the most stable equilibrium I can conceive of. ie. More stable than if all evidence of life was obliterated from the universe.
2mattnewport13y
I guess I'm playing the game right then :) I'm curious, do you also think that a singleton is a desirable outcome? It's possible my thinking is biased because I view this outcome as a dystopia and so underestimate its probability due to motivated cognition.
4Mass_Driver13y
Funny you should mention it; that's exactly what I was thinking. I have a friend (also named matt, incidentally) who I strongly believe is guilty of motivated cognition about the desirability of a singleton AI (he thinks it is likely, and therefore is biased toward thinking it would be good) and so I leaped naturally to the ad hominem attack you level against yourself. :-)
1wedrifid13y
Most of them, no. Some, yes. Particularly since the alternative is the inevitable loss of everything that is valuable to me in the universe.
7Will_Newsome13y
This is incredibly tangential, but I was talking to a friend earlier and I realized how difficult it is to instill in someone the desire for altruism. Her reasoning was basically, "Yeah... I feel like I should care about cancer, and I do care a little, but honestly, I don't really care." This sort of off-hand egoism is something I wasn't used to; most smart people try to rationalize selfishness with crazy beliefs. But it's hard to argue with "I just don't care" other than to say "I bet you will have wanted to have cared", which is grammatically horrible and a pretty terrible argument.
9Jordan13y
I respect blatant apathy a whole hell of a lot more than masked apathy, which is how I would qualify the average person's altruism.
0DanielLC13y
I agree with your second. Was your third supposed to be high or low? I think it's low, but not unreasonably so.
0mattnewport13y
I expected the third to be higher than most less wrongers would estimate.
0[anonymous]13y
I'm almost certainly missing some essential literature, but what does it mean for a mind to be a stable equilibrium?
6mattnewport13y
Stable equilibrium here does not refer to a property of a mind. It refers to a state of the universe. I've elaborated on this view a little here before but I can't track the comment down at the moment.

Essentially my reasoning is that in order to dominate the physical universe an AI will need to deal with fundamental physical restrictions such as the speed of light. This means it will have spatially distributed sub-agents pursuing sub-goals intended to further its own goals. In some cases these sub-goals may involve conflict with other agents (this would be particularly true during the initial effort to become a singleton).

Maintaining strict control over sub-agents imposes restrictions on the design and capabilities of sub-agents which means it is likely that they will be less effective at achieving their sub-goals than sub-agents without such design restrictions. Sub-agents with significant autonomy may pursue actions that conflict with the higher level goals of the singleton.

Human (and biological) history is full of examples of this essential conflict. In military scenarios for example there is a tradeoff between tight centralized control and combat effectiveness - units that have a degree of authority to take decisions in the field without the delays or overhead imposed by communication times are generally more effective than those with very limited freedom to act without direct orders.

Essentially I don't think a singleton AI can get away from the principal-agent problem. Variations on this essential conflict exist throughout the human and natural worlds and appear to me to be fundamental consequences of the nature of our universe.
4orthonormal13y
Ant colonies don't generally exhibit the principal-agent problem. I'd say with high certainty that the vast majority of our trouble with it is due to having the selfishness of an individual replicator hammered into each of us by our evolution.
3Eugine_Nier13y
I'm not a biologist, but given that animal bodies exhibit principal-agent problems, e.g., auto-immune diseases and cancers, I suspect ant colonies (and large AIs) would also have these problems.
7orthonormal13y
Cancer is a case where an engineered genome could improve over an evolved one. We've managed to write software (for the most vital systems) that can copy without error, with such high probability that we expect never to see that part malfunction. One reason that evolution hasn't constructed sufficiently good error correction is that the most obvious way to do this makes the genome totally incapable of new mutations, which works great until the niche changes.
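(As a toy illustration of the kind of checked copying being gestured at here — my own sketch, not anything from the thread, and the function name is invented — one can verify each copy against a digest of the original and retry on mismatch, so a silently corrupted copy becomes astronomically unlikely.)

```python
import hashlib

def copy_with_verification(data: bytes) -> bytes:
    """Copy a byte string and check the copy against a SHA-256 digest of the original."""
    original_digest = hashlib.sha256(data).hexdigest()
    copy = bytes(data)  # stand-in for the actual copy step (disk, network, replication)
    if hashlib.sha256(copy).hexdigest() != original_digest:
        raise IOError("copy failed verification; discard and retry")
    return copy

if __name__ == "__main__":
    payload = b"ACGT" * 1000
    assert copy_with_verification(payload) == payload
    print("copy verified")
```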
1Eugine_Nier13y
However, an AI-subagent would need to be able to adjust itself to unexpected conditions, and thus can't simply rely on digital copying to prevent malfunctions.
2orthonormal13y
So you agree that it's possible in principle for a singleton AI to remain a singleton (provided it starts out alone in the cosmos), but you believe it would sacrifice significant adaptability and efficiency by doing so. Perhaps; I don't know either way. But the AI might make that sacrifice if it concludes that (eventually) losing singleton status would cost its values far more than the sacrifice is worth (e.g. if losing singleton status consigns the universe to a Hansonian hardscrapple race to burn the cosmic commons(pdf) rather than a continued time of plenty).
0Eugine_Nier13y
I believe it would at the very least have to sacrifice all adaptability by doing so, as in only sending out nodes with all instructions in ROM and instructions to periodically reset all non-ROM memory and self-destruct if it notices any failures of its triple-redundancy ROM, as well as an extremely strong directive against anything that would let nodes store long-term state.
5orthonormal13y
Remember, you're the one trying to prove impossibility of a task here. Your inability to imagine a solution to the problem is only very weak evidence.
0mattnewport13y
I don't know whether ant colonies exhibit principal-agent problems (though I'd expect that they do to some degree) but I know there is evidence of nepotism in queen rearing in bee colonies where individuals are not all genetically identical (evidence of workers favouring the most closely related larvae when selecting larvae to feed royal jelly to create a queen). The fact that ants from different colonies commonly exhibit aggression towards each other indicates limits to scaling such high levels of group cohesion. Though supercolonies do appear to exist they have not come to total dominance. The largest and most complex examples of group coordination we know of are large human organizations and these show much greater levels of internal goal conflicts than much simpler and more spatially concentrated insect colonies.
0orthonormal13y
I'm analogizing a singleton to a single ant colony, not to a supercolony.
0Eugine_Nier13y
I agree with your first two, but am dubious about your third.
3mattnewport13y
Two points that influence my thinking on that claim:

1. Gains from trade have the potential to be greater with greater difference in values between the two trading agents.
2. Destruction tends to be cheaper than creation. Intelligent agents that recognize this have an incentive to avoid violent conflict.

75%: Large groups practicing Transcendental Meditation or TM-Sidhis measurably decrease crime rates.

At an additional 20% (net 15%): The effect size depends on the size of the group in a nonlinear fashion; specifically, there is a threshold at which most of the effect appears, and the threshold is at .01*pop (1% of the total population) for TM or sqrt(.01*pop) for TM-Sidhis.

(Edited for clarity.)

(Update: I no longer believe this. New estimates: 2% for the main hypothesis, additional 50% (net 1%) for the secondary.)

2Risto_Saarelma13y
Just to make sure, is this talking about something different from people committing fewer crimes when they are themselves practicing TM or in daily contact with someone who does? I don't really understand the second paragraph. What are TM-Sidhis, are they something distinct from regular TM (are these different types of practitioners)? And what's with the sqrt(1%)? One in ten people in the total population need to be TM-Sidhis for the crime rate reduction effect to kick in?
0Pavitra13y
I'm not sure if personal contact with practitioners has an effect, but the studies I'm thinking of were on the level of cities -- put a group of meditators in Chicago, the Chicago crime rate goes down. TM-Sidhis is a separate/additional practice that has TM as a dependency in the sense of package management. If you know TM, you can learn TM-Sidhis. Sorry, I meant sqrt(.01p) where p is the size of the population to be affected. For example, a city of one million people would require ten thousand TM meditators or 100 TM-Sidhis meditators.
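(For anyone who wants the arithmetic spelled out, here is a minimal sketch of the claimed thresholds — my own illustration of the numbers above; the function names are invented.)

```python
from math import ceil, sqrt

def tm_threshold(population: int) -> int:
    """Claimed group size needed with plain TM: 1% of the affected population."""
    return ceil(0.01 * population)

def tm_sidhis_threshold(population: int) -> int:
    """Claimed group size needed with TM-Sidhis: sqrt(1% of the affected population)."""
    return ceil(sqrt(0.01 * population))

if __name__ == "__main__":
    for city in (10_000, 1_000_000):
        print(city, tm_threshold(city), tm_sidhis_threshold(city))
    # 10,000    -> 100 TM meditators or 10 TM-Sidhis meditators
    # 1,000,000 -> 10,000 TM meditators or 100 TM-Sidhis meditators
```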
0Risto_Saarelma13y
Right, thanks for the clarification. This definitely puts the claim into upvote territory for me.
0magfrump13y
No vote: I agree with the hypothesis that appropriate meditation practice could reduce crime rates, but I haven't the slightest idea how to evaluate the specific population figures.
0Pavitra13y
Can you clarify the question, or does the whole statement seem meaningless?
2magfrump13y
I don't really have a question. You have a hypothesis: Transcendental meditation practitioners will reduce the crime rate in their cities in a nonlinear fashion satisfying certain identities. The statement I have written above I agree with, and would therefore normally downvote. However, you posit specific figures for the reduction of the crime rate. I have no experience with city planning or crime statistics or population figures, and hence have no real basis to judge your more specific claim. If I disagreed with it on a qualitative level, then I would upvote. If I had any sense of what your numbers meant I might think that they were about right or too high or too low but since I don't I'm not able to evaluate it. But not-evaluating because I don't know how to engage the numbers is different than not-evaluating because I didn't read it, so I wanted to make the difference clear; since the point of the game is to engage with ideas that may be controversial.
0Pavitra13y
I'm still not sure I understand what you mean, but let me take a shot in the dark: Out of the variance in crime rate that depends causally on the size of the meditating group, most of that variance depends on whether or not the size of the group is greater than a certain value that I'll call x. If the meditating group is practicing only TM, then x is equal to 1% of the size of the population to be affected, and if the meditating group is practicing TM-Sidhis, then x is equal to the square root of 1% of the population to be affected. For example, with a TM-only group in a city of ten thousand people, increasing the size of the group from 85 to 95 meditators should have a relatively small effect on the city's crime rate, increasing from 95 to 105 should have a relatively large effect, and increasing from 105 to 115 should have a relatively small effect. Edit: Or did you mean my confidence values? The second proposition (about the nonlinear relationship) I assign 20% confidence conditional on the truth of the first proposition. Since I assign the first proposition 75% confidence, and since the second proposition essentially implies the first, it follows that the second proposition receives a confidence of (0.2 * 0.75)=15%.
3magfrump13y
I understand what you meant by your proposition, I'm not trying to ask for clarification. I assume you have some model of TM-practitioner behavior or social networking or something which justifies your idea that there is such a threshold in that place. I do not have any models of: how TM is practiced, and by whom; how much TM affects someone's behavior, and consequently the behavior of those they interact with; how much priming effects like signs or posters for TM groups or instruction have on the general populace; how much the spread of TM practitioners increases the spread of advertisement.

I would not be hugely surprised if it were the case that, given 1% of the population practiced TM, this produced enough advertisement to reach nearly all of the population (i.e. a sign on the side of a couple well-traveled highways) or enough social connections that everyone in a city was within one or two degrees of separation of a TM practitioner. But I also wouldn't be surprised if the threshold was 5%, or .1%, or if there was no threshold, or if there was a threshold in rural areas but not urban areas, or conservative-leaning areas but not liberal-leaning areas, or the reverse. I have no model of how these things would go about, so I don't feel comfortable agreeing or disagreeing.

Certainly fewer than 15% of the possible functions of TM-practice vs crime are as you describe, but it is certainly far more likely that your hypothesis is true compared with the hypothesis "even one TM-practitioner makes the crime rate 100%" but I don't know if it's 5 bits more relevant or 10 bits more relevant, and I don't know what my probabilities should be even if I knew how many bits of credence I should give a hypothesis. If you know something more than I do (which is to say, anything at all) about social networking, advertising, or the meditation itself, or the people who practice it, then you might reasonably have a good hypothesis. But I don't, so I can only take the outside view,
0Pavitra13y
I understand now. The causally primary reason for my belief is that while I was growing up in a TM-practicing community, I was told repeatedly that there were many scientific studies published in respectable journals demonstrating this effect, and the "square root of one percent" was a specific point of doctrine. I've had some trouble finding the articles in question on academically respectable, non-paywalled sites (though I didn't try for more than five or ten minutes), but a non-neutrally-hosted bibliography-ish thing is here. (Is there a general lack of non-paywalled academically respectable online archives of scientific papers?)

(Edited to add: if anyone decides to click any of the videos on that page, rather than just following text links, I'd assign Fred Travis the highest probability of saying anything worth hearing.)

(Edited again: I was going to say this when I first wrote this comment, but forgot: The obvious control would be against other meditation techniques. I don't think there are studies with this specific control on the particular effect in my top-level comment, but there are such studies on e.g. medical benefits.)

(Edited yet again: I've now actually watched the videos in question. The unlabeled video at the top (John Hagelin) is a lay-level overview of studies that you can read for yourself through text links. (That is, you can read the studies, not the overview.) Gary Kaplan is philosophizing with little to no substance in the sense of expectation-constraint, and conditional on the underlying phenomena being real his explanation is probably about as wrong as, say, quantum decoherence. Nancy Lonsdorf is arguing rhetorically for ideas whose truth is almost entirely dependent on the validity of the studies in question and that follow from such validity in a trivial and straightforward fashion. Some people might need what she's saying pointed out to them, but probably not the readers of Less Wrong. Fred Travis goes into more crunch
0magfrump13y
Wow that was a super in depth response! Thanks, I'll check it out if I have time.

There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" if not specifically designed for these domain(s). As a corollary, AI will not go FOOM. (80% confident)

EDIT: Quote from here

4wedrifid13y
Do you apply this to yourself?
3Simon Fischer13y
Yes! Humans are "designed" to act intelligently in the physical world here on earth, we have complex adaptations for this environment. I don't think we are capable of acting effectively in "strange" environments, e.g. we are bad at predicting quantum mechanical systems, programming computers, etc.
3RomanDavis13y
But we can recursively self-optimize ourselves for understanding mechanical systems or programming computers - not infinitely, of course, but with different hardware it seems extremely plausible to smash through whatever ceiling a human might have with the brute force of many calculated iterations of whatever humans are using. And this is before the computer uses its knowledge to reoptimize its optimization process.
1Simon Fischer13y
I understand the concept of recursive self-optimization and I don't consider it to be very implausible. Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to allow for an effective search that would enable such an optimization? I'm also not convinced that the human mind is a good counterexample, e.g. I do not know how much I could improve on the source code of a simulation of my brain once the simulation itself runs effectively.
3wedrifid13y
I count "algorithm-space is really really really big" as at least some form of evidence. ;) Mind you by "is there any evidence?" you really mean "does the evidence lead to a high assigned probability?" That being the case "No Free Lunch" must also be considered. Even so NFL in this case mostly suggests that a general intelligence algorithm will be systematically bad at being generally stupid. Considerations that lead me to believe that a general intelligence algorithm are likely include the observation that we can already see progressively more general problem solving processes in evidence just by looking at mammals. I also take more evidence from humanity than you do. Not because I think humans are good at general intelligence. We suck at it, it's something that has been tacked on to our brains relatively recently and it far less efficient than our more specific problem solving facilities. But the point is that we can do general intelligence of a form eventually if we dedicate ourselves to the problem.
2Risto_Saarelma13y
You're putting 'effectively' here in place of 'intelligently' in the original assertion.
0Simon Fischer13y
I understand "capable of behaving intelligently" to mean "capable of achieving complex goals in complex environments", do you disagree?
0Risto_Saarelma13y
I don't disagree. Are you saying that humans aren't capable of achieving complex goals in the domains of quantum mechanics or computer programming?
1Simon Fischer13y
This is of course a matter of degree, but basically yes!
0Risto_Saarelma13y
Can you give any idea what these complex goals would look like? Or conversely, describe some complex goals humans can achieve, which are fundamentally beyond an entity with abstract reasoning capabilities similar to those humans have, but which lacks some of humans' native capabilities for dealing more efficiently with certain types of problems? The obvious examples are problems where a slow reaction time will lead to failure, but these don't seem to tell that much about the general complexity handling abilities of the agents.
2Simon Fischer13y
I'll try to give examples: For computer programming: Given a simulation of a human brain, improve it so that the simulated human is significantly more intelligent. For quantum mechanics: Design a high-temperature superconductor from scratch. Are humans better than brute-force at a multi-dimensional version of chess where we can't use our visual cortex?
0wedrifid13y
We have a way to use brute force to achieve general optimisation goals? That seems like a good start to me!
0Simon Fischer13y
Not a good start if we are facing exponential search spaces! If brute force worked, I imagine the AI problem would be solved?
0wedrifid13y
Not particularly. :) But it would constitute an in principle method of bootstrapping a more impressive kind of general intelligence. I actually didn't expect you would concede the ability to brute force 'general optimisation' - the ability to notice the brute forced solution is more than half the problem. From there it is just a matter of time to discover an algorithm that can do the search efficiently. Not necessarily. Biases could easily have made humans worse than brute-force.
0Simon Fischer13y
Please give evidence that "a more impressive kind of general intelligence" actually exists!
5wedrifid13y
Nod. I noticed your other comment after I wrote the grandparent. I replied there and I do actually consider your question there interesting, even though my conclusions are far different to yours. Note that I've tried to briefly answer what I consider a much stronger variation of your fundamental question. I think that the question you have actually asked is relatively trivial compared to what you could have asked so I would be doing you and the topic a disservice by just responding to the question itself. Some notes for reference:

* Demands of the general form "Where is the evidence for?" are somewhat of a hangover from traditional rational 'debate' mindsets where the game is one of social advocacy of a position. Finding evidence for something is easy but isn't the sort of habit I like to encourage in myself. Advocacy is bad for thinking (but good for creating least-bad justice systems given human limitations).
* "More impressive than humans" is a ridiculously low bar. It would be absolutely dumbfoundingly surprising if humans just happened to be the best 'general intelligence' we could arrive at in the local area. We haven't had a chance to even reach a local minimum of optimising DNA and protein based mammalian general intelligences. Selection pressures are only superficially in favour of creating general intelligence and apart from that the flourishing of human civilisation and intellectual enquiry happened basically when we reached the minimum level to support it. Civilisation didn't wait until our brains reached the best level DNA could support before it kicked in.
* A more interesting question is whether it is possible to create a general intelligence algorithm that can in principle handle most any problem, given unlimited resources and time to do so. This is as opposed to progressively more complex problems requiring algorithms of progressively more complexity even to solve in principle.
* Being able to 'brute force' a solution to any problem is actuall
0Simon Fischer13y
My intention was merely to point out where I don't follow your argument, but your criticism of my formulation is valid. I agree, we can probably build far better problem-solvers for many problems (including problems of great practical importance). My concern is more about what we can do with limited resources, which is why I'm not impressed with the brute-force solution. This is true, I was mostly thinking about a pure search problem where evaluating the solution is simple. (The example was chess, where brute-forcing leads to perfect play given sufficient resources.)
0wedrifid13y
It just occurred to me to wonder if this resource requirement is even finite. Is there a turn limit on the game? I suppose even "X turns without a piece being taken" would be sufficient depending on how idiotic the 'brute force' is. Is such a rule in place?
0Apprentice13y
Yes, the fifty-move rule. Though technically it only allows you to claim a draw, it doesn't force it.
0wedrifid13y
OK, thanks. In that case brute force doesn't actually produce perfect play in chess and doesn't return if it tries. (Incidentally, this is an observation that strengthens SimonF's position.)
0Simon Fischer13y
But the number of possible board positions is finite, and there is a rule that forces a draw if the same position comes up three times. (Here) This claims that generalized chess is EXPTIME-complete, which is in agreement with the above.
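(Tangentially, for readers wondering what "brute-forcing a finite game to perfect play" amounts to, here is a minimal sketch on a toy game of my own choosing — Nim, where players alternately remove 1-3 stones and whoever takes the last stone wins. The same exhaustive idea applies to any finite game tree, such as chess under the repetition rules, only with astronomically more states.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones: int) -> int:
    """Exhaustively search the game tree: +1 if the player to move wins with perfect play."""
    if stones == 0:
        return -1  # the previous player took the last stone, so the player to move has lost
    return max(-best_outcome(stones - take) for take in (1, 2, 3) if take <= stones)

if __name__ == "__main__":
    # Perfect play makes every multiple of 4 a loss for the player to move.
    for n in range(1, 13):
        print(n, "win" if best_outcome(n) == 1 else "loss")
```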
0wedrifid13y
That rule will do it (given that the draw is forced).
0wedrifid13y
(Pardon the below tangent...) I'm somewhat curious as to whether perfect play leads to a draw or a win (probably to white, although if it turned out black should win that'd be an awesome finding!). I know tic-tac-toe and checkers are both a draw and I'm guessing chess will be a stalemate too but I don't know for sure even whether we'll ever be able to prove that one way or the other.

Discussion of chess AI a few weeks ago also got me thinking: The current trend is for the best AIs to beat the best human grandmasters even with progressively greater disadvantages, even up to 'two moves and a pawn' or somesuch thing. My prediction: As chess playing humans and AIs develop, the AIs will be able to beat the humans with greater probability with progressively more significant handicaps. But given sufficient time this difference would peak and then actually decrease. Not because of anything to do with humans 'catching up'. Rather, because if perfect play of a given handicap results in a stalemate or loss then even an exponentially increasing difference in ability will not be sufficient in preventing the weaker player from becoming better at forcing the expected 'perfect' result.
2timtyler13y
Sure there is - see:

* Legg, S., Hutter, M.: Tests of Machine Intelligence. In Proc. 50th Anniversary Summit of Artificial Intelligence, Monte Verità, Switzerland (2007).
* Hutter, M.: Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer, Berlin (2004).
* Hernández-Orallo, J., Dowe, D.: Measuring universal intelligence: Towards an anytime intelligence test. Artificial Intelligence 17, 1508-1539 (2010).
* Solomonoff, R. J.: A Formal Theory of Inductive Inference: Parts 1 and 2. Information and Control 7, 1-22 and 224-254 (1964).

The only assumption about the environment is that Occam's razor applies to it.
4Simon Fischer13y
Of course you're right in the strictest sense! I should have included something along the lines of "an algorithm that can be efficiently computed", this was already discussed in other comments.
1timtyler13y
IMO, it is best to think of power and breadth being two orthogonal dimensions - like this:

* narrow <-> broad;
* weak <-> powerful.

The idea of general intelligence not being practical for resource-limited agents is apparently one that mixes up these two dimensions, whereas it is best to see them as being orthogonal. Or maybe there's the idea that if you are broad, you can't be very deep, and be able to be computed quickly. I don't think that idea is correct. I would compare the idea to saying that we can't build a general-purpose compressor. However: yes we can. I don't think the idea that "there is no such thing as general intelligence" can be rescued by invoking resource limitation. It is best to abandon the idea completely and label it as a dud.
2[anonymous]12y
That is a very good point, with wideness orthogonal to power. Evolution is broad but weak. Humans (and presumably AGI) are broad and powerful. Expert systems are narrow and powerful. Anything weak and narrow can barely be called intelligent.
0Simon Fischer13y
I don't care about that specific formulation of the idea; maybe Robin Hanson's formulation that there exists no "grand unified theory of intelligence" is clearer? (link)
0timtyler13y
Clear - but also clearly wrong. Robin Hanson says: ...but the answer seems simple. A big part of "betterness" is the ability to perform inductive inference, which is not a human-specific concept. We do already have a powerful theory about that, which we discovered in the last 50 years. It doesn't immediately suggest implementation strategy - which is what we need. So: more discoveries relating to this seem likely.
0Simon Fischer13y
Clearly, I do not understand how this data point should influence my estimate of the probability that general, computationally tractable methods exist.
0timtyler13y
To me it seems a lot like the question of whether general, computationally tractable methods of compression exist. Provided you are allowed to assume that the expected inputs obey some vaguely-sensible version of Occam's razor, I would say that the answer is just "yes, they do".
2whpearson13y
Can you unpack algorithm and why you think an intelligence is one?
1Simon Fischer13y
I'm not sure what your point is, I don't think I use the term "algorithm" in a non-standard way. Wikipedia says: "Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system." When talking about "intelligence" I assume we are talking about a goal-oriented agent, controlled by an algorithm as defined above.
3whpearson13y
Does it make sense to describe the computer system in front of you as being controlled by a single algorithm? If so, that would have to be the fetch-execute cycle, which may not halt or be a finite sequence. This form of system is sometimes called an interaction machine or persistent Turing machine, so some may say it is not an algorithm. The fetch-execute cycle is very poor at giving you information about what problems your computer might be able to solve, as it can download code from all over the place. Similarly, if you think of an intelligence as this sort of system, you cannot bound what problems it might be able to solve. At any given time it won't have the programming to solve all problems well, but it can modify the programming it does have.
1ata13y
Do you behave intelligently in domains you were not specifically designed(/selected) for?
0Simon Fischer13y
No, I don't think I would be capable if the domain is sufficiently different from the EEA.
0[anonymous]13y
Do you antipredict an AI specialized in AI design, which can't do anything it's not specifically designed to do, but can specifically design itself as needed?

Within five years the Chinese government will have embarked on a major eugenics program designed to mass produce super-geniuses. (40%)

I think 40% is about right for China to do something about that unlikely-sounding in the next five years. The specificity of it being that particular thing is burdensome, though; the probability is much lower than the plausibility. Upvoted.

4JoshuaZ13y
Upvoting. If you had said 10 years or 15 years I'd find this much more plausible. But I'm very curious to hear your explanation.
5James_Miller13y
I wrote about it here: http://www.ideasinactiontv.com/tcs_daily/2007/10/a-thousand-chinese-einsteins-every-year.html Once we have identified genes that play a key role in intelligence then eugenics through massive embryo selection has a good chance at producing lots of super-geniuses especially if you are willing to tolerate a high "error rate." The Chinese are actively looking for the genetic keys to intelligence. (See http://vladtepesblog.com/?p=24064) The Chinese have a long pro-eugenics history (See Imperfect Conceptions by Frank Dikötter) and I suspect have a plan to implement a serious eugenics program as soon as it becomes practical which will likely be within the next five years.
5JoshuaZ13y
I think the main point of disagreement is the estimate that such a program would be practical in five years (hence my longer-term estimate). My impression is that actual studies of the genetic roots of intelligence are progressing but at a fairly slow pace. I'd give a much lower than 40% chance that we'll have that good an understanding in five years.
0James_Miller13y
If the following is correct we are already close to finding lots of IQ boosting genes: "SCIENTISTS have identified more than 200 genes potentially associated with academic performance in schoolchildren. Those schoolchildren possessing the 'right' combinations achieved significantly better results in numeracy, literacy and science.'" http://www.theaustralian.com.au/news/nation/found-genes-that-make-kids-smart/story-e6frg6nf-1225926421510
2Douglas_Knight13y
The article is correct, but we are not close to finding lots of IQ boosting genes. But the relevant question is whether the Chinese government is fooled by this too.
3Jack13y
Can you specify what "major" means? I would be shocked if the government wasn't already pairing high-IQ individuals like they do with very tall people to breed basketball players.
2gwern13y
Recorded: * http://predictionbook.com/predictions/1834
0wedrifid13y
Hat tip to China.
0magfrump13y
Tentatively downvoted; I think over a longer time period it's highly likely, but I would be unsurprised to later discover that it started that soon. I might put my (uninformed) guess closer to 10-20% but it feels qualitatively similar.

There is an objectively real morality. (10%) (I expect that most LWers assign this proposition a much lower probability.)

7JenniferRM13y
If I'm interpreting the terms charitably, I think I put this more like 70%... which seems like a big enough numerical spread to count as disagreement -- so upvoted! My argument here grows out of expectations about evolution, watching chickens interact with each other, rent seeking vs gains from trade (and game theory generally), Hobbes's Leviathan, and personal musings about Fukuyama's End Of History extrapolated into transhuman contexts, and more ideas in this vein. It is quite likely that experiments to determine the contents of morality would themselves be unethical to carry out... but given arbitrary computing resources and no ethical constraints, I can imagine designing experiments about objective morality that would either shed light on its contents or else give evidence that no true theory exists which meets generally accepted criteria for a "theory of morality". But even then, being able to generate evidence about the absence of an objective object level "theory of morality" would itself seem to offer a strategy for taking a universally acceptable position on the general subject... which still seems to make this an area where objective and universal methods can provide moral insights. This dodge is friendly towards ideas in Nagel's "Last Word": "If we think at all, we must think of ourselves, individually and collectively, as submitting to the order of reasons rather than creating it."
0magfrump13y
I almost agree with this due to fictional evidence from Three Worlds Collide, except that a manufactured intelligence such as an AI could be constructed without evolutionary constraints and saying that every possible descendant of a being that survived evolution MUST have a moral similarity to every other being seems like a much more complicated and less likely hypothesis.
4jimrandomh13y
This probably isn't what you had in mind, but any single complete human brain is a (or contains a) morality, and it's objectively real.
4WrongBot13y
Indeed, that was not at all what I meant.
3Will_Newsome13y
Does the morality apply to paperclippers? Babyeaters?
-1WrongBot13y
I'd say that it's about as likely to apply to paperclippers or babyeaters as it is to us. While I think there's a non-trivial chance that such a morality exists, I can't even begin to speculate about what it might be or how it exists. There's just a lot of uncertainty and very little evidence either way. The reason I think there's a chance at all, for what it's worth, is the existence of information theory. If information is a fundamental mathematical concept, I don't think it's inconceivable that there are all kinds of mathematical laws specifically about engines of cognition. Some of which may look like things we call morality. But most likely not.
5Perplexed13y
Information theory is the wrong place to look for objective morality. Information is purely epistemic - i.e. about knowing. You need to look at game theory. That deals with wanting and doing. As far as I know, no one has had any moral issues with simply knowing since we got kicked out of the Garden of Eden. It is what we want and what we do that get us into moral trouble these days. Here is a sketch of a game-theoretic golden rule: Form coalitions that are as large as possible. Act so as to yield the Nash bargaining solution in all games with coalition members - pretending that they have perfect information about your past actions, even though they may not actually have perfect information. Do your share to punish defectors and members of hostile coalitions, but forgive after fair punishment has been meted out. Treat neutral parties with indifference - if they have no power over you, you have no reason to apply your power over them in either direction. This "objective morality" is strikingly different from the "inter-subjective morality" that evolution presumably installed in our human natures. But this may be an objective advantage if we have to make moral decisions regarding Baby Eaters who presumably received a different endowment from their own evolutionary history.
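(As a concrete anchor for the "Nash bargaining solution" step above, here is a toy sketch of my own, assuming the textbook two-player formulation: among the feasible outcomes, pick the one that maximizes the product of each party's gain over the disagreement point.)

```python
from typing import Iterable, Tuple

def nash_bargaining_choice(outcomes: Iterable[Tuple[float, float]],
                           disagreement: Tuple[float, float]) -> Tuple[float, float]:
    """Pick the feasible outcome maximizing the Nash product (u1 - d1) * (u2 - d2)."""
    d1, d2 = disagreement
    feasible = [(u1, u2) for u1, u2 in outcomes if u1 >= d1 and u2 >= d2]
    if not feasible:
        return disagreement  # no agreement beats walking away
    return max(feasible, key=lambda o: (o[0] - d1) * (o[1] - d2))

if __name__ == "__main__":
    # Splitting a surplus of 10; failure to agree leaves both parties with 0.
    splits = [(float(x), float(10 - x)) for x in range(11)]
    print(nash_bargaining_choice(splits, (0.0, 0.0)))  # -> (5.0, 5.0), the even split
```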
1AdeleneDawner13y
This does help bring clarity to the babyeaters' actions: The babies are, by existing, defecting against the goal of having a decent standard of living for all adults. The eating is the 'fair punishment' that brings the situation back to equilibrium. I suspect that we'd be better served by a less emotionally charged word than 'punishment' for that phenomenon in general, though.
1Perplexed13y
Oh, I think "punishment" is just fine as a word to describe the proper treatment of defectors, and it is actually used routinely in the game-theory literature for that purpose. However, I'm not so sure I would agree that the babies in the story are being "punished". I would suggest that, as powerless agents not yet admitted to the coalition, they ought to be treated with indifference, perhaps to be destroyed like weeds, were no other issues involved. But there is something else involved - the babies are made into pariahs, something similar to a virgin sacrifice to the volcano god. Participation in the baby harvesting is transformed to a ritual social duty. Now that I think about it, it does seem more like voodoo than rational-agent game theory. However, the game theory literature does contain examples where mutual self-punishment is required for an optimal solution, and a rule requiring requiring one to eat one's own babies does at least provide some incentive to minimize the number of excess babies produced.
0[anonymous]13y
Does that "game-theoretic golden rule" even tell you how to behave?
0saturn13y
Do you also think there is a means or mechanism for humans to discover and verify the objectively real morality? If so, what could it be?
2WrongBot13y
I would assume any objectively real morality would be in some way entailed by the physical universe, and therefore in theory discoverable. I wouldn't say that a thing existed if it could not interact in any causal way with our universe.
0RobinZ13y
I expect a plurality may vote as you expect, but 10% seems reasonable based on my current state of knowledge.
-8nick01200013y

The pinnacle of cryonics technology will be a time machine that can, at the very least, take a snapshot of someone before they died and reconstitute them in the future. I have three living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out. (50%)

2Will_Newsome13y
This seems reasonable with the help of FAI, though I doubt CEV would do it; or are you thinking of possible non-FAI technologies?
0Tiiba13y
So you intend to acquire an extra grandparent somewhere along the line?

No. I intend to revive one. Possibly all four, if necessary. Consider it thawing technology so advanced it can revive even the pyronics crowd.

4JenniferRM13y
Did you coin the term "pyronics"?
0Tenek13y
I would imagine not (99%), although it doesn't appear to be in common usage.
0Tiiba13y
Sorry, I missed the time machine part.

What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)

What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)

What does this mean? What is the difference between saying "What we call consciousness/self-awareness is just a side-effect of brain processes", which is pretty obviously true, and saying that they're meaningless side effects?

-3erratio13y
Sorry, I was letting my own uncertainty get in the way of clarity there. A stronger version of what I was trying to say would be that consciousness gives us the illusion of being in control of our actions when in fact we have no such control. Or to put it another way: we're all P-zombies with delusions of grandeur (yes, this doesn't actually make logical sense, but it works for me)
4LucasSloan13y
So I agree with the science you cite, right? But what you said really doesn't follow. Just because our phonologic loop doesn't actually have the control it thinks it does, it doesn't follow that sensory modalities are "meaningless." You might want to re-read Joy in the Merely Real with this thought of yours in mind.
-3erratio13y
Well, sure, you can find meaning wherever you want. I'm currently listening to some music that I find beautiful and meaningful. But that beauty and meaning isn't an inherent trait of the music, it's just something that I read into it. Similarly when I say that consciousness is meaningless I don't mean that we should all become nihilists, only that consciousness doesn't pay rent and so any meaning or usefulness it has is what you invent for it.
4Eugine_Nier13y
I don't know about you, but I'm not a P-zombie. :)

That emoticon isn't fooling anyone.

Upvoted for 'not even being wrong'.

0Paul Crowley13y
I'm not sure whether "not even wrong" calls for an upvote, does it?
3NihilCredo13y
Could you expand a little on this?
7erratio13y
Sure. Here's a version of the analogy that first got me thinking about it: If I turn on a lamp at night, it sheds both heat and light. But I wouldn't say that the point of a lamp is to produce heat, nor that the amount of heat it does or doesn't produce is relevant to its useful light-shedding properties. In the same way, consciousness is not the point of the brain and doesn't do much for us. There's a fair amount of cogsci literature that suggests that we have little if any conscious control over our actions and reinforces this opinion. But I like feeling responsible for my actions, even if it is just an illusion, hence the low probability assignment even though it feels intuitively correct to me.
2Perplexed13y
(I'm not sure why I pushed the button to reply, but here I am so I guess I'll just make something up to cover my confusion.) Do you also believe that we use language - speaking, writing, listening, reading, reasoning, doing arithmetic calculations, etc. - without using our consciousness?
0erratio13y
Hah! I found it amusing at least. I'm... honestly not sure. I think that the vast majority of the time we don't consciously choose whether to speak or what exact words to say when we do speak. Listening and reading are definitely unconscious processes, otherwise it would be possible to turn them off (also, the cocktail party effect is a huge indication of listening being largely unconscious). Arithmetic calculations - that's a matter of learning an algorithm which usually involves mnemonics for the numbers. On balance I have to go with yes, I don't think those processes require consciousness.
4AdeleneDawner13y
Some autistic people, particularly those in the middle and middle-to-severe part of the spectrum, report that during overload, some kinds of processing - most often understanding or being able to produce speech, but also other sensory processing - turn off. Some report that turned-off processing skills can be consciously turned back on, often at the expense of a different skill, or that the relevant skill can be consciously emulated even when the normal mode of producing the intended result is offline. I've personally experienced this. Also, in my experience, a fair portion (20-30%) of adults of average intelligence aren't fluent in reading, and do have to consciously parse each word.
3Perplexed13y
You pretty much have to go with "yes" if you want to claim that "consciousness/self-awareness is just a meaningless side-effect of brain processes." I've got to disagree. What my introspection calls my "consciousness" is mostly listening to myself talk to myself. And then after I have practiced saying it to myself, I may go on to say it out loud. Not all of my speech works this way, but some does. And almost all of my writing, including this note. So I have to disagree that consciousness has no causal role in my behavior. Sometimes I act with "malice aforethought". Or at least I sometimes speak that way. For these reasons, I prefer "spotlight" consciousness theories, like "global workspace" or "integrated information theory". Theories that capture the fact that we observe some things consciously and do some things consciously.
0Blueberry13y
Agreed, but that tells you consciousness requires language. That doesn't tell you language requires consciousness. Drugs such as alcohol or Ambien can cause people to have conversations and engage in other activities while unconscious.
0NihilCredo13y
Thanks; +1 for the explanation. No mod to the original comment; I would downmod the "consciousness was not a positive factor in the evolution of brains" part and upmod the "we do not actually rely much if at all on conscious thought" one.
0davidad13y
Upvoted for underconfidence.
0drc500free13y
Having just stumbled across LW yesterday, I've been gorging myself on rationality and discovering that I have a lot of cruft in my thought process, but I have to disagree with you on this. “Meaning” and “mysterious” don’t apply to reality, they only apply to maps of the terrain (reality).

Self-awareness itself is what allows a pattern/agent/model to preserve itself in the face of entropy and competitors, making it “meaningful” to an observer of the agent/model that is trying to understand how it will operate. Being self-aware of the self-awareness (i.e. mapping the map, or recursively refining the super-model to understand itself better) can also impact our ability to preserve ourselves, making it “meaningful” to the agent/model itself. Being aware of others’ self-awareness (i.e. mapping a different agent/map and realizing that it will act to preserve itself) is probably one of the most critical developments in the evolution of humans.

“I am” a super-agent. It is a stack of component agents. At each layer, a shared belief by a system of agents (that each agent is working towards the common utility of all the agents) results in a super-agent with more complex goals that does not have a belief that it is composed of distinct sub-agents. Like the 7-layer network model or the transistor-gate-chip-computer model, each layer is just an emergent property of its components. But each layer has meaning because it provides us a predictive model to understand the system’s behavior, in a way that we don’t understand by just looking at a complex version of the layer below it.

My super-agent has a super-model of reality, similarly composed. Some parts of that super-model are tagged, weakly or strongly, with an attribute. The collection of cells that makes up a fatty lump on my head is weakly marked with that attribute. The parts of reality where my super-agent/-model exist are very strongly tagged. My super-agent survives because it has marked the area on its model corresponding to

It does not all add up to normality. We are living in a weird universe. (75%)

6Interpolate13y
My initial reaction was that this is not a statement of belief but one of opinion, and to think like reality. I'm still not entirely sure what you mean (further elaboration would be very welcome), but going by a naive understanding I upvoted your comment based on the principle of Occam's Razor - whatever your reasons for believing this (presumably perceived inconsistencies, paradoxes etc. in the observable world, physics etc.) I doubt your conceived "weird" universe would be the simplest explanation. Additionally, that conceived weird universe, in addition to lacking epistemic/empirical ground, begs for more explanation than the understanding/lack thereof of the universe/reality that's more or less shared by current scientific consensus. If I'm understanding correctly, your argument for the existence of a "weird universe" is analogous to an argument for the existence of God (or the supernatural, for that matter): where by introducing some cosmic force beyond reason and empiricism, we eliminate the problem of there being phenomena which can't be explained by it.
6Eugine_Nier13y
Please specify what you mean by a weird universe.
7Kevin13y
We are living in a Fun Theory universe where we find ourselves as individual or aggregate fun theoretic agents, or something else really bizarre that is not explained by naive Less Wrong rationality, such as multiversal agents playing with lots of humanity's measure.
3[anonymous]13y
The more I hear about this the more intrigued I get. Could someone with a strong belief in this hypothesis write a post about it? Or at the very least throw out hints about how you updated in this direction?
5Risto_Saarelma13y
Would "Fortean phenomena really do occur, and some type of anthropic effect keeps them from being verifiable by scientific observers" fit under this statement?
1Kevin13y
That sounds weird to me.
2Will_Newsome13y
Downvoted in agreement (I happen to know generally what Kevin's talking about here, but it's really hard to concisely explain the intuition).
1Clippy13y
Why do you think so?
2Kevin13y
For some definitions of weird, our deal (assuming it continues to completion) is enough to land this universe in the block of weird universes.
39[anonymous]13y

I think that there are better-than-placebo methods for causing significant fat loss. (60%)

ETA: apparently I need to clarify.

It is way more likely than 60% that gastric bypass surgery, liposuction, starvation, and meth will cause fat loss. I am not talking about that. I am talking about healthy diet and exercise. Can most people who want to lose weight do that deliberately, through diet and exercise? I think it's likely but not certain.

[This comment is no longer endorsed by its author]

voted up because 60% seems WAAAAAYYYY underconfident to me.

5Eugine_Nier13y
Now that we're up-voting underconfidence I changed my vote.
2magfrump13y
From the OP:
0Zvi13y
I almost want this reworded the opposite way for this reason, as a 40% chance that there are not better-than-placebo methods for causing significant fat loss. Even if I didn't have first and second hand examples to fall back on I don't see why there is real doubt on this question. Another more interesting variation is, does such a method exist that is practical for a large percentage of people?
0wedrifid13y
Likewise. My p: 99.5%
-2datadataeverywhere13y
likewise
3[anonymous]13y
shoot... I'm just scared to bet, is all. You can tell I'm no fun at Casino Night.
7Will_Newsome13y
Ah, but betting for a proposition is equivalent to betting against its opposite. Why are you so certain that there are no better-than-placebo methods for causing significant fat loss? But if you do change your mind, please don't change the original, as then everyone's comments would be irrelevant.
6Jonathan_Graehl13y
Absolutely right. This is an important point that many people miss. If you're uncertain about your estimated probability, or even merely risk averse, then you may want to take neither side of the implied bet. Fine, but at least figure out some odds where you feel like you should have an indifferent expectation.
1[anonymous]13y
I think, with some confidence, that there are better-than-placebo methods for causing significant fat loss. The low confidence estimate has more to do with my reluctance to be wrong than anything else. If I were wrong, it would be because overweight is mostly genetic and irreversible (something I have seen argued and supported with clinical studies.)
0Relsqui13y
I sympathize with this. But I also upvoted the original comment because of it (i.e. I also think you're underconfident).
3Will_Newsome13y
Voted down for agreement! (Liposuction... do you mean dietary methods? I'd still agree with you though.) Edit: On reflection, 60% does seem too low. Changed to upvote.
2[anonymous]13y
I meant diet, exercise, and perhaps supplements; liposuction is trivially true.
0Will_Newsome13y
Generally speaking, most diets and moderate exercise work very well for a year or two. But the shangri-la diet tends to work for as long as you do it (for many/most? people). Also, certain supplements work, but I forgot which. So I gotta agree with you.
2wedrifid13y
For example... just about any stimulant you can get your hands on.
0Will_Newsome13y
But there were others, I think? User:taw talked about one that you take with caffeine. It might have been a stimulant, though.
5Douglas_Knight13y
Ephedrine. The stack is called ECA (ephedrine, caffeine, aspirin), but the aspirin wasn't used in the studies.
0Will_Newsome13y
Thanks! :D
2wedrifid13y
For sure. Laxatives. E. coli. But yes, there are others with better side effect profiles too. :) Take with caffeine? More caffeine. That'll do the trick. :P
1Normal_Anomaly13y
Upvoted, because I say diet and exercise work at 85% (for a significant fraction of people; there may be some with unlucky genes who can't lose weight that way).
0khafra13y
Does "method" include "exercise and healthy eating"?
2[anonymous]13y
This post has generated so much more controversy than I expected. I meant exactly exercise and healthy eating! I thought people would assume I meant that. Not gastric bypass surgery, not liposuction, not starvation, not amputating limbs.
6DilGreen13y
Whenever I see someone with one of those badges that says; 'Lose weight now, ask me how!", I check that they have all their limbs.
0Richard_Kennaway13y
That's ok. Just put an ETA in the top-level comment to clarify that. There's a lot of wiggle room around "healthy eating" though. Where are you drawing the line between calorie restriction and starvation?
0Larks13y
Becoming seriously ill? Better in the sense of losing more weight.
0JoshuaZ13y
Voting down for trivial agreement. Both stomach stapling and gastric lap bands easily meet this. Do you mean maybe non-surgical methods? That seems more questionable.
0lmnop13y
Short term or long term? If long, how long?
-2datadataeverywhere13y
I assign p=1 to the proposition that not eating causes significant fat loss. I can't justify subtracting any particular epsilon, which means to me that p=1-e, where e is too small for me to conceive and apply a number to. EDIT: I am particularly referring to indefinite periods of perfect fasting.
1[anonymous]13y
The reason it's questionable: how long can one not eat? Can most people not eat for long enough?
-2datadataeverywhere13y
Then take involuntary starvation. Perhaps you meant "better" in an ethical sense, but I thought you meant in a sense of strict effectiveness. This proposition is patently false (by indicating that there is a 40% chance that nothing causes better weight loss than placebo), as you admitted with regard to liposuction elsewhere in this thread.
7Will_Newsome13y
I think you're nitpicking; if what she's saying sounds completely obviously unreasonable then it's probably not what she meant. She means something like "There's a 60% chance that diets, legal supplements, fasting, and/or exercise, in amounts that Western culture would count as memetically reasonable, and in amounts that can be reasonably expected to be undertaken by members of Western culture, can cause significant weight loss." To which everyone says, "No, more like 95%", not "Haha obviously liposuction works, and so does starvation, you imprecise person: next time write a paragraph's worth of disclaimers and don't count on the ability of your audience to make charitable interpretations."
-2datadataeverywhere13y
Maybe I have a different idea than you of memetically reasonable, but I'm perfectly happy saying "No, more like 1-10^-30" to your statement as well as hers. Maybe I need to make a top level post here, but I think that it's a very small minority of humans that are unable to lose weight through diet and exercise, even if the degree of effort required is one not frequently undertaken. I don't think that the degree of effort required is considered widely unreasonable in Western culture. My p value is so high because this thread asks us to discount matters of opinion, so the probability that the effort required is beyond what is considered reasonable seems outside the scope. Same for "reasonably expected". I feel like it's enough to say that the methods don't require super-human willpower or vast resources. I think the methods themselves are unquestionable.
0Richard_Kennaway13y
It has been remarked in support of that proposition that no fat people came out of Auschwitz (or Singapore, or similar episodes). But is that because they got thin, or did they die before getting thin? Has any research been done on how people of different body types respond to starvation? The full report on this experiment might address that, but the Wiki article doesn't. However, the volunteers for that experiment were "young, healthy men" volunteering as an alternative to military service, so it's unlikely that any of them were obese going in.

the joint stock corporation is the best* system of peacefully organizing humans to achieve goals. the closer governmental structure conforms to a joint-stock system the more peaceful and prosperous it will become (barring getting nuked by a jealous democracy). (99%)

*that humans have invented so far

6Mass_Driver13y
The proposition strikes me as either circular or wrong, depending on your definitions of "peaceful" and "prosperous." If by "peaceful" you mean "devoid of violence," and by "violence" you essentially mean "transfers of wealth that are contrary to just laws," and by "just laws" you mean "laws that honor private property rights above all else," then you should not be surprised if joint stock corporations are the most peaceful entities the world has seen so far, because joint stock corporations are dependent on private property rights for their creation and legitimacy. If by "prosperous" you mean "full of the kind of wealth that can be reported on an objective balance sheet," and if by "objective balance sheet" you mean "an accounting that will satisfy a plurality of diverse, decentralized and marginally involved investors," then you should likewise not be surprised if joint stock corporations increase prosperity, because joint stock corporations are designed so as to maximize just this sort of prosperity. Unfortunately, they do it by offloading negative externalities in the form of pollution, alienation, lower wages, censored speech, and cyclical instability of investments onto individual people. When your 'goals' are the lowest common denominator of materialistic consumption, joint stock corporations might be unbeatable. If your goals include providing a social safety net, education, immunizations, a free marketplace of ideas, biodiversity, and clean air, you might want to consider using a liberal democracy. Using the most charitable definitions I can think of for your proposition, my estimate for the probability that a joint-stock system would best achieve a fair and honest mix of humanity's crasser and nobler goals is somewhere around 15%, and so I'm upvoting you for overconfidence.
5blogospheroid13y
Coming from the angle of competition in governance, I think you might be mixing up a lot of stuff. A joint stock corporation which is sovereign is trying to compete in the wider world for customers, i.e. willing taxpayers. If the people desire the values you have mentioned then the joint-stock government will try to provide those cost effectively. Clean air and immunizations will almost certainly be on the agenda of a city government. Biodiversity will be important to a government which includes forests in its assets and wants to sustainably maintain the same. A free marketplace of ideas, free education and social safety nets would purely be determined by the market for people. Is it an important enough value that people would not come to your country and would go to another? If it is, then the joint-stock government would try to provide the same. If not, then they wouldn't.
5wedrifid13y
All of this makes sense in principle. (I'm assuming you're not thinking that any of it would actually work in practice with either humans or ideal rational agents, right?)
1Mass_Driver13y
Good response, but I have to agree with wedrifid here: you can't compete for "willing taxpayers" at all if you're dealing with hard public goods, and elsewhere competition is dulled by (a) the irrational political loyalties of citizens, (b) the legitimate emotional and economic costs of immigration, (c) the varying ability of different kinds of citizens to move, and (d) protectionist controls on the movement of labor in whatever non-libertopian governments remain, which might provide them with an unfair advantage in real life, the theoretical axioms of competitive advantage theory be damned. I'd be all for introducing some features of the joint stock corporation into some forms of government, but that doesn't sound very much like what you were proposing would lead to peace and prosperity -- you said the jsc was better than other forms, not a good thing to have a nice dose of.
blogospheroid (3 points, 13y)
Or as I would call it: no representation without taxation. Those who contribute equity to society rule it. Everyone else contracts with the corporation in one way or another.
knb (2 points, 13y)
What is the term for this mode of governance? Corporate Monarchy? Seems like a good idea to me.
gwern (2 points, 13y)
England had a property-rights-based monarchy. It's basically gone now. So, pace Mencius Moldbug, it can't be an especially good system - else it would not have died.
knb (0 points, 13y)
I don't understand this. England never was a corporate monarchy, though.
gwern (5 points, 13y)
England was never a 'corporate' monarchy in the sense of a limited-liability joint-stock company with numeric shares, voting rights, etc. I never said it was, though, but that it was 'property-rights based', which it was - the whole country and all legal privileges were property which the king could and did rent and sell away. This is one of the major topics of Nick Szabo's blog Unenumerated. If you have the time, I strongly recommend reading it all. It's up there with Overcoming Bias in my books.
Emile (0 points, 13y)
Moldbug calls this a joint-stock republic, though he mixes it with some of his more fringe ideas. I'll second gwern's recommendation on Nick Szabo's blog - he has a post on Government for Profit, which I think was written as a rebuttal to some of Moldbug's ideas (see the comments in this post)
RHollerith (2 points, 13y)
Another recommendation for Nick Szabo's blog. The only online writings I know of about governance and political economy that come close are the blogs of economist Arnold Kling and the eccentric and hyperbolic Mencius Moldbug. (Hanson's blog is extremely strong on several subjects, but governance is not IMHO one of them.)
Vladimir_M (4 points, 13y)
rhollerith_dot_com: I agree with all these recommendations, and I'd add that these three authors have written some of their best stuff in the course of debating each other. In particular, a good way to get the most out of Moldbug is to read him alongside Nick Szabo's criticisms that can be found both in UR comments and on Szabo's own blog. As another gem, the 2008 Moldbug-Kling debate on finance (parts (1), (2), (3), (4), and (5)) was one of the best and most insightful discussions of economics I've ever read. I agree. In addition, I must say I'm disappointed with the shallowness of the occasional discussions of governance on LW. Whenever such topics are opened, I see people who otherwise display tremendous smarts and critical skills making not-even-wrong assertions based on a completely naive view of the present system of governance, barely more realistic than the descriptions from civics textbooks.
Scott78704 (0 points, 13y)
Open source.

Although lots of people here consider it a hallmark of "rationality," assigning numerical probabilities to common-sense conclusions and beliefs is meaningless, except perhaps as a vague figure of speech. (Absolutely certain.)

(Absolutely certain.)

I'm not sure whether to chide you or giggle at the self-reference. I suspect, though, that "absolutely certain" is not a confidence level.

I want to vote you down in agreement, but I don't have enough karma.

assigning numerical probabilities to common-sense conclusions and beliefs is meaningless

It is risky to deprecate something as "meaningless" - a ritual, a practice, a word, an idiom. Risky because the actual meaning may be something very different than you imagine. That seems to be the case here with attaching numbers to subjective probabilities.

The meaning of attaching a number to something lies in how that number may be used to generate a second number that can then be attached to something else. There is no point in providing a number to associate with the variable 'm' (i.e. that number is meaningless) unless you simultaneously provide a number to associate with the variable 'f' and then plug both into "f=ma" to generate a third number to associate with the variable 'a', a number which you can test empirically.

Similarly, a single isolated subjective probability estimate may seem somewhat meaningless in isolation, but if you place it into a context with enough related subjective probability estimates and empirically measured frequencies, then all those probabilities and frequencies can be combined and compared using the standard formulas of Bayesian prob... (read more)
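To make this concrete, here is a minimal sketch of the kind of consistency check Perplexed seems to have in mind; the prior and the test frequencies below are invented for illustration, but the point is that a subjective number only earns its keep once it combines with other numbers into something checkable.

```python
# Illustrative only: a subjective prior combined with measured frequencies
# via Bayes' theorem yields a further number that can itself be tested.
prior = 0.01          # subjective estimate: P(condition)
p_pos_given_c = 0.90  # measured frequency: P(positive test | condition)
p_pos_given_n = 0.05  # measured frequency: P(positive test | no condition)

p_pos = p_pos_given_c * prior + p_pos_given_n * (1 - prior)
posterior = p_pos_given_c * prior / p_pos
print(f"P(condition | positive) = {posterior:.3f}")  # ~0.154, checkable against data
```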

Vladimir_M (2 points, 13y)
I think you're not drawing a clear enough distinction between two different things, namely the mathematical relationships between numbers, and the correspondence between numbers and reality. If you ask an astronomer what is the mass of some asteroid, he will presumably give you a number with a few significant digits and an uncertainty interval. If you ask him to justify this number, he will be able to point to some observations that are incompatible with the assumption that the mass is outside this interval, which follows from a mathematical argument based on our best knowledge of physics. If you ask for more significant digits, he will say that we don't know (and that beyond a certain accuracy, the question doesn't even make sense, since it's constantly losing and gathering small bits of mass). That's what it means for a number to be rigorously justified.

But now imagine that I make an uneducated guess of how heavy this asteroid might be, based on no actual astronomical observation. I do of course know that it must be heavier than a few tons or otherwise it wouldn't be noticeable from Earth as an identifiable object, and that it must be lighter than 10^20 or so tons since that's roughly the range where smaller planets are, but it's clearly nonsensical for me to express that guess with even one digit of precision. Yet I could insist on a precise guess, and claim that it's "meaningful" in a way analogous to your above justification of subjective probability estimates, by deriving various mathematical and physical implications of this fact. If you deprecate my claim that its mass is 4.5237 x 10^15 kg, then you cannot also deprecate my claim that it is a sphere of radius 1km and average density 1000kg/m^3, since the conjunction of these claims is by the sheer force of mathematics false.

Therefore, I don't see how you can argue that a number is meaningful by merely noting its relationships with other numbers that follow from pure mathematics. Or am I missing something?
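For concreteness, the arithmetic behind that last claim: a sphere of radius 1 km and density 1000 kg/m^3 has a mass roughly a thousand times smaller than the guessed figure, so the two statements cannot both be true.

```python
import math

# The consistency check: shape and density pin down a mass that contradicts the guess.
radius_m = 1_000.0
density = 1_000.0                                                   # kg/m^3
mass_from_shape = density * (4.0 / 3.0) * math.pi * radius_m ** 3   # ~4.19e12 kg
guessed_mass = 4.5237e15                                            # kg

print(f"{mass_from_shape:.2e} kg vs {guessed_mass:.2e} kg")  # off by a factor of ~1000
```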
Perplexed (0 points, 13y)
The only thing you are missing is the first paragraph of my reply. Just because something doesn't have the kind of meaning you think it ought to have (by virtue of being a number, for example) that doesn't justify your claim that it is meaningless. Subjective probabilities of isolated propositions don't have the kind of meaning you want numbers to have. But they have exactly the kind of meaning I want them to have - specifically they can be used in computations that produce consistent results. Do you think that the digits of pi beyond the first half dozen are also meaningless?
Vladimir_M (1 point, 13y)
Perplexed: Fair enough, but I still don't see how this solves the problem of the correspondence between numbers and reality. Any number can be used in computations that produce consistent results if you just start plugging it into formulas derived from some consistent mathematical theory. It is when the numbers are used as basis for claims about the real, physical world that I insist on an explanation of how exactly they are derived and how their claimed correspondence with reality is justified. The digits of pi are an artifact of pure mathematics, so I don't think it's a good analogy for what we're talking about. Once you've built up enough mathematics to define lengths of curves in Euclidean geometry, the ratio between the circumference and diameter of a circle follows by pure logic. Any suitable analogy for what we're talking about must encompass empirical knowledge, and claims which can be falsified by empirical observations.
Perplexed (1 point, 13y)
It doesn't have to. That is a problem you made up. Other people don't have to buy in to your view on the proper relationship between numbers and physical reality. My viewpoint on numbers is somewhere between platonism and formalism. I think that the meaning of a number is a particular structure in my mind. If I have an axiom system that is categorical (and, of course, usually I don't) then that picture in my mind can be made inter-subjective in that someone who also accepts those axioms can build an isomorphic structure in their own mind. The real world has absolutely nothing to do with Tarski's semantics - which is where I look to find out what the "meaning" of a number is. Your complaint that subjective probabilities have no meaning is very much like the complaint of a new convert to atheism who laments that without God, life has no meaning. My advice: stop telling other people what the word "meaning" should mean. However, if you really need some kind of affirmation, then I will provide some. I agree with you that the numbers used in subjective probabilities are less, ... what is the right word, ... less empirical than are the numbers you usually find in science classes. Does that make you feel better?
Vladimir_M (2 points, 13y)
Perplexed: You probably wouldn't buy that same argument if it came from a numerologist, though. I don't think I hold any unusual and exotic views on this relationship, and in fact, I don't think I have made any philosophical assumptions in this discussion beyond the basic common-sense observation that if you want to use numbers to talk about the real world, they should have a clear connection with something that can be measured or counted to make any sense. I don't see any relevance of these (otherwise highly interesting) deep questions of the philosophy of math for any of my arguments.
Perplexed (1 point, 13y)
There is nothing philosophically wrong with your position except your choice of the word "meaningless" as an epithet for the use of numbers which cannot be empirically justified. Your choice of that word is pretty much the only reason I am disagreeing with you.
mattnewport (1 point, 13y)
Given your position on the meaninglessness of assigning a numerical probability value to a vague feeling of how likely something is, how would you decide whether you were being offered good odds if offered a bet? If you're not in the habit of accepting bets, how do you think someone who does this for a living (a bookie for example) should go about deciding on what odds to assign to a given bet?
Vladimir_M (-3 points, 13y)
mattnewport: In reality, it is rational to bet only with people over whom you have superior relevant knowledge, or with someone who is suffering from an evident failure of common sense. Otherwise, betting is just gambling (which of course can be worthwhile for fun or signaling value). Look at the stock market: it's pure gambling, unless you have insider knowledge or vastly higher expertise than the average investor. This is the basic reason why I consider the emphasis on subjective Bayesian probabilities that is so popular here misguided. In technical problems where probability calculations can be helpful, the experts in the field already know how to use them. On the other hand, for the great majority of the relevant beliefs and conclusions you'll form in life, they offer nothing useful beyond what your vague common sense is already telling you. If you start taking them too seriously, it's easy to start fooling yourself that your thinking is more accurate and precise than it really is, and if you start actually betting on them, you'll be just gambling. I'm not familiar with the details of this business, but from what I understand, bookmakers work in such a way that they're guaranteed to make a profit no matter what happens. Effectively, they exploit the inconsistencies between different people's estimates of what the favorable odds are. (If there are bookmakers who stake their profit on some particular outcome, then I'm sure that they have insider knowledge if they can stay profitable.) Now of course, the trick is to come up with a book that is both profitable and offers odds that will sell well, but here we get into the fuzzy art of exploiting people's biases for profit.
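A toy sketch of the balanced-book mechanism being described, with made-up odds: because the implied probabilities sum to more than one (the "overround"), a bookmaker who takes stakes in proportion to them pockets the same margin whichever outcome occurs.

```python
# Made-up decimal odds for a three-way market.
decimal_odds = {"home": 1.80, "draw": 3.50, "away": 4.50}
implied = {k: 1.0 / v for k, v in decimal_odds.items()}
overround = sum(implied.values())              # ~1.063, i.e. probabilities "sum" past 1

total_staked = 1_000.0
stakes = {k: total_staked * p / overround for k, p in implied.items()}

for outcome, odds in decimal_odds.items():
    payout = stakes[outcome] * odds            # paid out if this outcome wins
    print(outcome, round(total_staked - payout, 2))   # same profit in every case
```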
mattnewport (0 points, 13y)
You still have to be able to translate your superior relevant knowledge into odds in order to set the terms of the bet, however. Do you not believe that this is an ability that people have varying degrees of aptitude for? Vastly higher expertise than the average investor would appear to include something like the ability in question - translating your beliefs about the future into a probability such that you can judge whether investments have positive expected value. If you accept that true alpha exists (and the evidence suggests that, though rare, a small percentage of the best investors do appear to have positive alpha) then what process do you believe those who possess it use to decide which investments are good and which bad? What's your opinion on prediction markets? They seem to produce fairly good probability estimates so presumably the participants must be using some better-than-random process for arriving at numerical probability estimates for their predictions. They certainly aim for a balanced book but they wouldn't be very profitable if they were not reasonably competent at setting initial odds (and updating them in the light of new information). If the initial odds are wildly out of line with their customers' then they won't be able to make a balanced book.
Vladimir_M (-2 points, 13y)
mattnewport: They sure do, but in all the examples I can think of, people either just follow their intuition directly when faced with a concrete situation, or employ rigorous science to attack the problem. (It doesn't have to be the official accredited science, of course; the Venn diagram of official science and valid science features only a partial overlap.) I just don't see any practical examples of people successfully betting by doing calculations with probability numbers derived from their intuitive feelings of confidence that would go beyond what a mere verbal expression of these feelings would convey. Can you think of any? Well, if I knew, I would be doing it myself -- and I sure wouldn't be talking about it publicly! The problem with discussing investment strategies is that any non-trivial public information about this topic necessarily has to be bullshit, or at least drowned in bullshit to the point of being irrecoverable, since exclusive possession of correct information is a sure path to getting rich, but its effectiveness critically depends on exclusivity. Still, I would be surprised to find out that the success of some alpha-achieving investors is based on taking numerical expressions of common-sense confidence seriously. In a sense, a similar problem faces anyone who aspires to be more "rational" than the average folk in any meaningful sense. Either your "rationality" manifests itself only in irrelevant matters, or you have to ask yourself what is so special and exclusive about you that you're reaping practical success that eludes so many other people, and in such a way that they can't just copy your approach. I agree with this assessment, but the accuracy of information aggregated by a prediction market implies nothing about your own individual certainty. Prediction markets work by cancelling out random errors and enabling specialists who wield esoteric expertise to take advantage of amateurs' systematic biases. Where your own individual judgment
mattnewport (0 points, 13y)
I'd speculate that bookies and professional sports bettors are doing something like this. By bookies here I mean primarily the kind of individuals who stand with a chalkboard at race tracks rather than the large companies. They probably use some semi-rigorous / scientific techniques to analyze past form and then mix it with a lot of intuition / expertise together with lots of detailed domain specific knowledge and 'insider' info (a particular horse or jockey has recently recovered from an illness or injury and so may perform worse than expected, etc.). They'll then integrate all of this information together using some non mathematically rigorous opaque mental process and derive a probability estimate which will determine what odds they are willing to offer or accept. I've read a fair bit of material by professional investors and macro hedge fund managers describing their thinking and how they make investment decisions. I think they are often doing something similar. Integrating information derived from rigorous analysis with more fuzzy / intuitive reasoning based on expertise, knowledge and experience and using it to derive probabilities for particular outcomes. They then seek out investments that currently appear to be mis-priced relative to the probabilities they've estimated, ideally with a fairly large margin of safety to allow for the imprecise and uncertain nature of their estimates. It's entirely possible that this is not what's going on at all but it appears to me that something like this is a factor in the success of anyone who consistently profits from dealing with risk and uncertainty. My experience leads me to believe that this is not entirely accurate. Investors are understandably reluctant to share very specific time critical investment ideas for free but they frequently share their thought processes for free and talk in general terms about their approaches and my impression is that they are no more obfuscatory or deliberately misleading than anyone
Vladimir_M (0 points, 13y)
mattnewport: Your knowledge about these trades seems to be much greater than mine, so I'll accept these examples. In the meantime, I have expounded my whole view of the topic in a reply to an excellent systematic list of questions posed by prase, and in those terms, this would indicate the existence of what I called the third type of exceptions under point (3). I still maintain that these are rare exceptions in the overall range of human judgments, though, and that my basic point holds for the overwhelming majority of human common-sense thinking. I don't think they're being deliberately misleading. I just think that the whole mechanism by which the public discourse on these topics comes into being inherently generates a nearly impenetrable confusion, which you can dispel to extract useful information only if you are already an expert in the first place. There are many specific reasons for this, but it all ultimately comes down to the stability of the weak EMH equilibrium. Oh, absolutely! But you're presumably estimating the rank of your abilities based on some significant accomplishments that most people would indeed find impossible to achieve. What I meant to say (even though I expressed it poorly) is that there is no easy and readily available way to excel at "rationality" in any really relevant matters. This is in contrast to the attitude, sometimes seen among the people here, that you can learn about Bayesianism or whatever else and just by virtue of that set yourself apart from the masses in accuracy of thought. The EMH ethos is, in my opinion, a good intellectual antidote against such temptations of hubris.
jimrandomh (0 points, 13y)
You're dodging the question. What if the odds arose from a natural process, so that there isn't a person on the other side of the bet to compare your state of knowledge against?
[anonymous] (0 points, 13y)
I think this is right. The idea that you would be betting against another person is inessential, an unfortunate distraction arising from the choice of thought experiment. Admittedly it's a natural way to understand the thought experiment, but it's inessential. The experiment could be revised to exclude it. In fact every moment we make decisions whose outcomes depend on things we don't know, and in making those decisions we are therefore in effect gambling. We are surrounded by risks, and our decisions reveal our assessment of those risks.
Vladimir_M (0 points, 13y)
jimrandomh: Maybe it's my failure of English comprehension (I'm not a native speaker, as you might guess from my frequent grammatical errors), but when I read the phrase "being offered good odds if offered a bet," I understood it as asking about a bet with opponents who stand to lose if my guess is right. So, honestly, I wasn't dodging the question. But to answer your question, it depends on the concrete case. Some natural processes can be approximated with models that yield useful probability estimates, and faced with some such process, I would of course try to use the best scientific knowledge available to calculate the odds if the stakes are high enough to justify the effort. When this is not possible, however, the only honest answer is that my decision would be guided by whatever intuitive feeling my brain happens to produce after some common-sense consideration, and unless this intuitive feeling told me that losing the bet is extremely unlikely, I would refuse to bet. And I honestly cannot think of a situation where translating this intuitive feeling of certainty into numbers would increase the clarity and accuracy of my thinking, or provide for any useful practical guidelines. For example, if I come across a ditch and decide to jump over to save the effort of walking around to cross over a bridge, I'm effectively betting that it's narrow enough to jump over safely. In reality, I'll feel intuitively either that it's safe to jump or not, and I'll act on that feeling, produced by some opaque module for physics calculations in my brain. Of course, my conclusion might be wrong, and as a kid I would occasionally injure myself by judging wrongly in such situations, but how can I possibly quantify this feeling of certainty numerically in a meaningful way? It simply makes no sense. The overwhelming majority of real-life cases where I have to produce some judgment, and perhaps even bet on it, are of this sort. It would be cool to have a brain that produces confidenc
[anonymous] (2 points, 13y)
Applying the view of probability as willingness to bet, you can't refuse to reveal your probability assignments. Life continually throws at us risky choices. You can perform risky action X with high-value success Y and high-cost failure Z or you can refuse to perform it, but both actions reveal something about your probability assignments. If you perform the risky action X, it reveals that you assign sufficiently high probability to Y (i.e. low to Z) given the values that you place on Y and Z. If you refuse to perform risky action X, it reveals that you assign sufficiently low probability to Y given the values you place on Y and Z. This is nothing other than your willingness to bet. In an actual case, your simple yes/no response to a given choice is not enough to reveal your probability assignment and only reveals some information about it (that it is below or above a certain value). But counterfactually, we can imagine infinite variations on the choice you are presented with, and for each of these choices, there is a response which (counterfactually) you would have given. This set of responses manifests your probability assignment (and reveals also its degree of precision). Of course, in real life, we can't usually conduct an experiment that reveals a substantial portion of this set of counterfactuals, so in real life, we remain in the dark about your probability assignment (unless we find some clever alternative way to elicit it than the direct, brute force test-all-variations approach I have just described). But the counterfactuals are still there, and still define a probability assignment, even if we don't know what it is. But this revealed probability assignment is parallel to revealed preference. The point of revealed preference is not to help the consumer make better choices. It is a conceptual and sometimes practical tool of economics. The economist studying people discovers their preferences by observing their purchases. And similarly, we can discover a p
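A small worked version of this "revealed probability" idea, using Vladimir_M's ditch example with hypothetical stakes: each accept-or-decline decision brackets the probability the decider is implicitly assigning, even if no number is ever consciously entertained.

```python
# Hypothetical stakes: jumping saves 5 minutes of walking if it succeeds,
# and costs the equivalent of 200 minutes (injury, wet clothes) if it fails.
# Jumping has positive expected value only if
#   p * gain - (1 - p) * loss > 0   =>   p > loss / (gain + loss)
gain, loss = 5.0, 200.0
threshold = loss / (gain + loss)
print(f"choosing to jump reveals  p(success) > {threshold:.3f}")   # > 0.976
print(f"walking around reveals    p(success) < {threshold:.3f}")
```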
wnoise (0 points, 13y)
That's a startling statement (especially out of context).
[anonymous] (1 point, 13y)
Are you asking for a defense of the statement, or do you agree with it and are merely commenting on the way I expressed it? I'll give a defense by means of an example. At Wikipedia they give the following example of a counterfactual: If Oswald had not shot Kennedy, then someone else would have. Now consider the equation F=ma. This is translated at Wikipedia into the English: A body of mass m subject to a force F undergoes an acceleration a that has the same direction as the force and a magnitude that is directly proportional to the force and inversely proportional to the mass, i.e., F = ma. Now suppose that there is a body of mass m floating in space, and that it has not been subject to nor is it currently subject to any force. I believe that the following is a true counterfactual statement about the body: Had this body (of mass m) been subject to a force F then it would have undergone an acceleration a that would have had the same direction as the force and a magnitude that would have been directly proportional to the force and inversely proportional to the mass. That is a counterfactual statement following the model of the wikipedia example, and I believe it is true, and I believe that the contradiction of the counterfactual (which is also a counterfactual, i.e., the claim that the body would not have undergone the stated acceleration) is false. I believe that this point can be extended to all the laws of physics, either Newton's laws or, if they have been replaced, modern laws. And I believe, furthermore, that the point can be extended to higher-level statements about bodies which are not mere masses moving in space, but, say, thinking creatures making decisions. Is there any part of this with which you disagree? A point about the insertion of "I believe". The phrase "I believe" is sometimes used by people to assert their religious beliefs. I don't consider the point I am making to be a personal religious belief, but the plain truth. I only insert "I be
wnoise (0 points, 13y)
I am merely commenting. Counterfactuals are counterfactual, and so don't "exist" and can't be "there" by their very nature. Yes, of course, they're part of how we do our analyses.
komponisto (8 points, 13y)
Upvoted. Definitely can't back you on this one. Are you sure you're not just worried about poor calibration?
wedrifid (4 points, 13y)
Another upvote. That's crazy talk.
Vladimir_M (0 points, 13y)
komponisto: No, my objection is fundamental. I provide a brief explanation in the comment I linked to, but I'll restate it here briefly. The problem is that the algorithms that your brain uses to perform common-sense reasoning are not transparent to your conscious mind, which has access only to their final output. This output does not provide a numerical probability estimate, but only a rough and vague feeling of certainty. Yet in most situations, the output of your common sense is all you have. There are very few interesting things you can reason about by performing mathematically rigorous probability calculations (and even when you can, you still have to use common sense to establish the correspondence between the mathematical model and reality). Therefore, there are only two ways in which you can arrive at a numerical probability estimate for a common-sense belief:

1. Translate your vague feeling of certainty into a number in some arbitrary manner. This however makes the number a mere figure of speech, which adds absolutely nothing over the usual human vague expressions for different levels of certainty.
2. Perform some probability calculation, which however has nothing to do with how your brain actually arrived at your common-sense conclusion, and then assign the probability number produced by the former to the latter. This is clearly fallacious.

Honestly, all this seems entirely obvious to me. I would be curious to see which points in the above reasoning are supposed to be even controversial, let alone outright false.

Translate your vague feeling of certainty into a number in some arbitrary manner. This however makes this number a mere figure of speech, which adds absolutely nothing over the usual human vague expressions for different levels of certainty.

Disagree here. Numbers get people to convey more information about their beliefs. It doesn't matter whether you actually use numbers, or do something similar (and equivalent) like systematize the use of vague expressions. I'd be just as happy if people used a "five-star" system, or even in many cases if they just compared the belief in question to other beliefs used as reference-points.

Perform some probability calculation, which however has nothing to do with how your brain actually arrived at your common-sense conclusion, and then assign the probability number produced by the former to the latter. This is clearly fallacious.

Disagree here also. The probability calculation you present should represent your brain's reasoning, as revealed by introspection. This is not a perfect process, and may be subject to later refinement. But it is definitely meaningful.

For example, consider my current probability estimate of 10^(-3) that Aman... (read more)

Vladimir_M (-2 points, 13y)
komponisto: If I understand correctly, you're saying that talking about numbers rather than the usual verbal expressions of certainty prompts people to be more careful and re-examine their reasoning more strictly. This may be true sometimes, but on the other hand, numbers also tend to give a false feeling of accuracy and rigor where there is none. One of the usual symptoms (and, in turn, catalysts) of pseudoscience is the use of numbers with spurious precision and without rigorous justification. In any case, you seem to concede that these numbers ultimately don't convey any more information than various vague verbal expressions of confidence. If you want to make the latter more systematic and clear, I have no problem with that, but I see no way to turn them into actual numbers without introducing spurious precision. Trouble is, this is often not possible. Most of what happens in your brain is not amenable to introspection, and you cannot devise a probability calculation that will capture all the important things that happen there. Take your own example: See, this is where, in my opinion, you're introducing spurious numerical claims that are at best unnecessary and at worst outright misleading. First you note that murderers are extremely rare, and that AK is a sort of person especially unlikely to be one. OK, say you can justify these numbers by looking at crime statistics. Then you perform a complex common-sense evaluation of the evidence, and your brain tells you that on the whole it's weak, so it's highly unlikely that AK killed the victim. So far, so good. But then you insist on turning this feeling of near-certainty about AK's innocence into a number, and you end up making a quantitative claim that has no justification at all. You say: I strongly disagree. Neither is this number you came up with any more meaningful than the simple plain statement "I think it's highly unlikely she did it," nor does it offer any additional practical benefit. On the contrary,

Let's see if we can try to hug the query here. What exactly is the mistake I'm making when I say that I believe such-and-such is true with probability 0.001?

Is it that I'm not likely to actually be right 999 times out of 1000 occasions when I say this? If so, then you're (merely) worried about my calibration, not about the fundamental correspondence between beliefs and probabilities.

Or is it, as you seem now to be suggesting, a question of attire: no one has any business speaking "numerically" unless they're (metaphorically speaking) "wearing a lab coat"? That is, using numbers is a privilege reserved for scientists who've done specific kinds of calculations?

It seems to me that the contrast you are positing between "numerical" statements and other indications of degree is illusory. The only difference is that numbers permit an arbitrarily high level of precision; their use doesn't automatically imply a particular level. Even in the context of scientific calculations, the numbers involved are subject to some particular level of uncertainty. When a scientist makes a calculation to 15 decimal places, they shouldn't be interpreted as distinguishing betwe... (read more)

Mass_Driver (4 points, 13y)
Love the logic and the scale, although I think Vladimir_M pokes some important holes specifically at the 10^(-2) to 10^(-3) level. May I suggest "un-planned-for errors"? In my experience, it is not useful to plan for contingencies with about a 1/300 chance of happening per trial. For example, on any given day of the year, my favorite cafe might be closed due to the owner's illness, but I do not call the cafe first to confirm that it is open each time I go there. At any given time, one of my 300-ish acquaintances is probably nursing a grudge against me, but I do not bother to open each conversation with "Hi, do you still like me today?" When, as inevitably happens, I run into a closed cafe or a hostile friend, I usually stop short for a bit; my planning mechanism reports a bug; there is no 'action string' cached for that situation, for the simple reason that I was not expecting the situation, because I did not plan for the situation, because that is how rare it is. Nevertheless, I am not 'surprised' -- I know at some level that things that happen about 1/300 times are sort of prone to happening once in a while. On the other hand, I would be 'surprised' if my favorite cafe had been burned to the ground or if my erstwhile buddy had taken a permanent vow of silence. I expect that these things will never happen to me, and so if they happen I go and double-check my calculations and assumptions, because it seems equally likely that I am wrong about my assumptions and that the 1/30,000 event would actually occur. Anyway, the point is that a category 3 event is an event that makes you shut up for a moment but doesn't make you reexamine any core beliefs. If you hold most of your core beliefs with probability > .993 then you are almost certainly overconfident in your core beliefs. I'm not talking about stuff like "my senses offer moderately reliable evidence" or "F(g) = GMm/(r^2)"; I'm talking about stuff like "Solomonoff induction predicts that hyperintelligent AIs will emp
soreff (4 points, 13y)
10^-3 is roughly the probability that I try to start my car and it won't start because the battery has gone bad. Is the scale intended only for questions one asks once per lifetime? There are lots of questions that one asks once a day, hence my car example.
komponisto (1 point, 13y)
That is precisely why I added the phrase "on an important question". It was intended to rule out exactly those sorts of things. The intended reference class (for me) consists of matters like the Amanda Knox case. But if I got into the habit of judging similar cases every day, that wouldn't work either. Think "questions I might write a LW post about".
Vladimir_M (3 points, 13y)
komponisto: It's not that I'm worried about your poor calibration in some particular instance, but that I believe that accurate calibration in this sense is impossible in practice, except in some very special cases. (To give some sense of the problem, if such calibration were possible, then why not calibrate yourself to generate accurate probabilities about the stock market movements and bet on them? It would be an easy and foolproof way to get rich. But of course that there is no way you can make your numbers match reality, not in this problem, nor in most other ones.) The way you put it, "scientists" sounds too exclusive. Carpenters, accountants, cashiers, etc. also use numbers and numerical calculations in valid ways. However, their use of numbers can ultimately be scrutinized and justified in similar ways as the scientific use of numbers (even if they themselves wouldn't be up to that task), so with that qualification, my answer would be yes. (And unfortunately, in practice it's not at all rare to see people using numbers in ways that are fundamentally unsound, which sometimes gives rise to whole edifices of pseudoscience. I discussed one such example from economics in this thread.) Now, you say: However, when a scientist makes a calculation with 15 digits of precision, or even just one, he must be able to rigorously justify this degree of precision by pointing to observations that are incompatible with the hypothesis that any of these digits, except the last one, is different. (Or in the case of mathematical constants such as pi and e, to proofs of the formulas used to calculate them.) This disclaimer is implicit in any scientific use of numbers. (Assuming valid science is being done, of course.) And this is where, in my opinion, you construct an invalid analogy: But these disclaimers are not at all the same! The scientist's -- or the carpenter's, for that matter -- implicit disclaimer is: "This number is subject to this uncertainty interval, but there
Mass_Driver (4 points, 13y)
I think this statement reflects either an ignorance of finance or the Dark Arts. First, the stock market is the single worst place to try to test out ideas about probabilities, because so many other people are already trying to predict the market, and so much wealth is at stake. Other people's predictions will remove most of the potential for arbitrage (reducing 'signal'), and the insider trading and other forms of cheating generated by the potential for quick wealth will further distort any scientifically detectable trends in the market (increasing 'noise'). Because investments in the stock market must be made in relatively large quantities to avoid losing your money through trading commissions, a causal theory tester is likely to run out of money long before hitting a good payoff even if he or she is already well-calibrated. Of course, in real life, people might be moderately-calibrated. The fact that one is capable of making some predictions with some accuracy and precision is not a guarantee that one will be able to reliably and detectably beat even a thin market like a political prediction clearinghouse. Nevertheless, some information is often better than none: I am (rationally) much more concerned about automobile accidents than fires, despite the fact that I know two people who have died in fires and none who have died in automobile accidents. I know this based on my inferences from published statistics, the reliability of which I make further inferences about. I am quite confident (p ~ .95) that it is sensible to drive defensively (at great cost in effort and time) while essentially ignoring fire safety (even though checking a fire extinguisher or smoke detector might take minimal effort.) I don't play the stock market, though. I'm not that well calibrated, and probably nobody is without access to inside info of one kind or another.
Vladimir_M (0 points, 13y)
Mass_Driver: I'm not an expert on finance, but I am aware of everything you wrote about it in your comment. So I guess this leaves us with the second option. The Dark Arts hypothesis is probably that I'm using the extreme example of the stock market to suggest a general sweeping conclusion that in fact doesn't hold in less extreme cases. To which I reply: yes, the stock market is an extreme example, but I honestly can't think of any other examples that would show otherwise. There are many examples of scientific models that provide more or less accurate probability estimates for all kinds of things, to be sure, but I have yet to hear about people achieving practical success in anything relevant by translating their common-sense feelings of confidence in various beliefs into numerical probabilities. In my view, calibration of probability estimates can succeed only if (1) you come up with a valid scientific model which you can then use in a shut-up-and-calculate way instead of applying common sense (though you still need it to determine whether the model is applicable in the first place), or (2) you make an essentially identical judgment many times, and from your past performance you extrapolate how frequently the black box inside your head tends to be right. Now, you try to provide some counterexamples: Frankly, the only subjective probability estimate I see here is the p~0.95 for your belief about driving. In this case, I'm not getting any more information from this number than if you just described your level of certainty in words, nor do I see any practical application to which you can put this number. I have no objection to your other conclusions, but I see nothing among them that would be controversial to even the most extreme frequentist.
Mass_Driver (1 point, 13y)
Not sure who voted down your reply; it looks polite and well-reasoned to me. I believe you when you say that the stock market was honestly intended as representative, although, of course, I continue to disagree about whether it actually is representative. Here are some more counterexamples:

* When deciding whether to invest in an online bank that pays 1% interest or a local community bank that pays 0.1% interest, I must calculate the odds that each bank will fail before I take my money out; I cannot possibly have a scientific model that generates replicable results for these two banks while also holding down a day job, but numbers will nevertheless help me make a decision that is not driven by an emotional urge to stay with (or leave) an old bank based on customer service considerations that I rationally value as far less than the value of my principal.
* When deciding whether to donate time, money, or neither to a local election campaign, it will help to know which of my donations will have a 10^-6 chance, a 10^-4 chance, and a 10^-2 chance of swinging the election. Numbers are important here because irrational friends and colleagues will urge me to do what 'feels right' or to 'do my part' without pausing to consider whether this serves any of our goals. If I can generate a replicable scientific model that says whether an extra $500 will win an election, I should stop electioneering and sign up for a job as a tenured political science faculty member, but I nevertheless want to know what the odds are, approximately, in each case, if only so that I can pick which campaign to work on.

As for your objection that: I suppose I have left a few steps out of my analysis, which I am spelling out in full now:

* Published statistics say that the risk of dying in a fire is 10^-7/people-year and the risk of dying in a car crash is 10^-4/people-year (a report of what is no doubt someone else's subjective but relatively evidence-based estimate).
* The odds that these statisti
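A sketch of the bank comparison with purely hypothetical failure probabilities (and ignoring deposit insurance): even rough guessed numbers are enough to show what the decision actually hinges on.

```python
# Hypothetical one-year comparison; p_fail is the guessed probability that the
# bank fails and the principal is lost. Deposit insurance and taxes are ignored.
def expected_balance(principal, rate, p_fail):
    return (1 - p_fail) * principal * (1 + rate)

principal = 10_000.0
online = expected_balance(principal, 0.010, p_fail=0.002)    # guessed 0.2%
local  = expected_balance(principal, 0.001, p_fail=0.0005)   # guessed 0.05%
print(round(online, 2), round(local, 2))
# With these guesses the online bank wins; the conclusion flips only if its
# failure probability is guessed to be roughly a percentage point higher than
# the local bank's -- which is the question the estimate forces you to confront.
```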
Vladimir_M (1 point, 13y)
Regarding your examples with banks and donations, when I imagine myself in such situations, I still don't see how numbers derived from my own common-sense reasoning can be useful. I can see myself making a decision based a simple common-sense impression that one bank looks less shady, or that it's bigger and thus more likely to be bailed out, etc. Similarly, I could act on a vague impression that one political candidacy I'd favor is far more hopeless than another, and so on. On the other hand, I could also judge from the results of calculations based on numbers from real expert input, like actuary tables for failures of banks of various types, or the poll numbers for elections, etc. What I cannot imagine, however, is doing anything sensible and useful with probabilities dreamed up from vague common-sense impressions. For example, looking at a bank, getting the impression that it's reputable and solid, and then saying, "What's the probability it will fail before time T? Um.. seems really unlikely... let's say 0.1%.", and then using this number to calculate my expected returns. Now, regarding your example with driving vs. fires, suppose I simply say: "Looking at the statistical tables, it is far more likely to be killed by a car accident than a fire. I don't see any way in which I'm exceptional in my exposure to either, so if I want to make myself safer, it would be stupid to invest more effort in reducing the chance of fire than in more careful driving." What precisely have you gained with your calculation relative to this plain and clear English statement? In particular, what is the significance of these subjectively estimated probabilities like p=10^-1 in step 2? What more does this number tell us than a simple statement like "I don't think it's likely"? Also, notice that my earlier comment specifically questioned the meaningfulness and practical usefulness of the numerical claim that p~0.95 for this conclusion, and I don't see how it comes out of your calculati
mattnewport (6 points, 13y)
It seems plausible to me that routinely assigning numerical probabilities to predictions/beliefs that can be tested and tracking these over time to see how accurate your probabilities are (calibration) can lead to a better ability to reliably translate vague feelings of certainty into numerical probabilities. There are practical benefits to developing this ability. I would speculate that successful bookies and professional sports bettors are better at this than average for example and that this is an ability they have developed through practice and experience. Anyone who has to make decisions under uncertainty seems like they could benefit from a well developed ability to assign well calibrated numerical probability estimates to vague feelings of certainty. Investors, managers, engineers and others who must deal with uncertainty on a regular basis would surely find this ability useful. I think a certain degree of skepticism is justified regarding the utility of various specific methods for developing this ability (things like predictionbook.com don't yet have hard evidence for their effectiveness) but it certainly seems like it is a useful ability to have and so there are good reasons to experiment with various methods that promise to improve calibration.
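A minimal sketch of the calibration bookkeeping being described (the predictions and outcomes below are invented): group past predictions by the stated probability and compare each group's hit rate to that probability.

```python
from collections import defaultdict

# Invented records: (stated probability, whether the prediction came true).
records = [(0.9, True), (0.9, True), (0.9, False), (0.7, True),
           (0.7, False), (0.7, True), (0.3, False), (0.3, True), (0.3, False)]

buckets = defaultdict(list)
for p, outcome in records:
    buckets[p].append(outcome)

# For a well-calibrated forecaster, each bucket's observed frequency should
# track the stated probability as the number of predictions grows.
for p in sorted(buckets):
    outcomes = buckets[p]
    print(f"stated {p:.0%}: {sum(outcomes) / len(outcomes):.0%} came true (n={len(outcomes)})")
```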
Vladimir_M (-4 points, 13y)
I addressed this point in another comment in this thread: http://lesswrong.com/lw/2sl/the_irrationality_game/2qgm
mattnewport (4 points, 13y)
I agree with most of what you're saying (in that comment and this one) but I still think that the ability to give well calibrated probability estimates for a particular prediction is instrumentally useful and that it is fairly likely that this is an ability that can be improved with practice. I don't take this to imply anything about humans performing actual Bayesian calculations either implicitly or explicitly.
prase (7 points, 13y)
I have read most of the responses and still am not sure whether to upvote or not. I doubt among several (possibly overlapping) interpretations of your statement. Could you tell to what extent the following interpretations really reflect what you think?

1. Confession of frequentism. Only sensible numerical probabilities are those related to frequencies, i.e. either frequencies of outcomes of repeated experiments, or probabilities derived from there. (Creative drawing of reference-class boundaries may be permitted.) Especially, prior probabilities are meaningless.
2. Any sensible numbers must be produced using procedures that ultimately don't include any numerical parameters (maybe except small integers like 2,3,4). Any number which isn't a result of such a procedure is labeled arbitrary, and therefore meaningless. (Observation and measurement, of course, do count as permitted procedures. Admittedly arbitrary steps, like choosing units of measurement, are also permitted.)
3. Degrees of confidence shall be expressed without reflexive thinking about them. Trying to establish a fixed scale of confidence levels (like impossible - very unlikely - unlikely - possible - likely - very likely - almost certain - certain), or actively trying to compare degrees of confidence in different beliefs is cheating, since such scales can be then converted into numbers using a non-numerical procedure.
4. The question of whether somebody is well calibrated is confused for some reason. Calibrating people has no sense. Although we may take the "almost certain" statements of a person and look at how often they are true, the resulting frequency has no sense for some reason.
5. Unlike #3, beliefs can be ordered or classified on some scale (possibly imprecisely), but assigning numerical values brings confusing connotations and should be avoided. Alternatively said, the meaning of subjective probabilities is preserved after monotonous rescaling.
6. Although, strictly speaking, human reason
Vladimir_M (3 points, 13y)
That’s an excellent list of questions! It will help me greatly to systematize my thinking on the topic. Before replying to the specific items you list, perhaps I should first state the general position I’m coming from, which motivates me to get into discussions of this sort. Namely, it is my firm belief that when we look at the present state of human knowledge, one of the principal sources of confusion, nonsense, and pseudoscience is physics envy, which leads people in all sorts of fields to construct nonsensical edifices of numerology and then pretend, consciously or not, that they’ve reached some sort of exact scientific insight. Therefore, I believe that whenever one encounters people talking about numbers of any sort that look even slightly suspicious, they should be considered guilty until proven otherwise -- and this entire business with subjective probability estimates for common-sense beliefs doesn’t come even close to clearing that bar for me.

Now to reply to your list.

----------------------------------------

My answer to (1) follows from my opinion about (2). In my view, a number that gives any information about the real world must ultimately refer, either directly or via some calculation, to something that can be measured or counted (at least in principle, perhaps using a thought-experiment). This doesn’t mean that all sensible numbers have to be derived from concrete empirical measurements; they can also follow from common-sense insight and generalization. For example, reading about Newton’s theory leads to the common-sense insight that it’s a very close approximation of reality under certain assumptions. Now, if we look at the gravity formula F=m1*m2/r^2 (in units set so that G=1), the number 2 in the denominator is not a product of any concrete measurement, but a generalization from common sense. Yet what makes it sensible is that it ultimately refers to measurable reality via a well-defined formula: measure the force between two bodies of known
komponisto (4 points, 13y)
I'll point out here that reversed stupidity is not intelligence, and that for every possible error, there is an opposite possible error. In my view, if someone's numbers are wrong, that should be dealt with on the object level (e.g. "0.001 is too low", with arguments for why), rather than retreating to the meta level of "using numbers caused you to err". The perspective I come from is wanting to avoid the opposite problem, where being vague about one's beliefs allows one to get away without subjecting them to rigorous scrutiny. (This, too, by the way, is a major hallmark of pseudoscience.) But I'll note that even as we continue to argue under opposing rhetorical banners, our disagreement on the practical issue seems to have mostly evaporated; see here for instance. You also do admit in the end that fear of poor calibration is what is underlying your discomfort with numerical probabilities: As a theoretical matter, I disagree completely with the notion that probabilities are not legitimate or meaningful unless they're well-calibrated. There is such a thing as a poorly-calibrated Bayesian; it's a perfectly coherent concept. The Bayesian view of probabilities is that they refer specifically to degrees of belief, and not anything else. We would of course like the beliefs so represented to be as accurate as possible; but they may not be in practice. If my internal "Bayesian calculator" believes P(X) = 0.001, and X turns out to be true, I'm not made less wrong by having concealed the number, saying "I don't think X is true" instead. Less embarrassed, perhaps, but not less wrong.
Vladimir_M (-1 point, 13y)
komponisto: Trouble is, sometimes numbers can be not even wrong, with their very definition lacking logical consistency or any defensible link with reality. It is that category that I am most concerned with, and I believe that it sadly occurs very often in practice, with entire fields of inquiry sometimes degenerating into meaningless games with such numbers. My honest impression is that in our day and age, such numerological fallacies have been responsible for much greater intellectual sins than the opposite fallacy of avoiding scrutiny by excessive vagueness, although the latter phenomenon is not negligible either. Here we seem to be clashing about terminology. I think that "poor calibration" is too much of a euphemism for the situations I have in mind, namely those where sensible calibration is altogether impossible. I would instead use some stronger expression clarifying that the supposed "calibration" is done without any valid basis, not that the result is poor because some unfortunate circumstance occurred in the course of an otherwise sensible procedure. As I explained in the above lengthy comment, I simply don't find numbers that "refer specifically to degrees of belief, and not anything else" a coherent concept. We seem to be working with fundamentally different philosophical premises here. Can these numerical "degrees of belief" somehow be linked to observable reality according to the criteria I defined in my reply to the points (1)-(2) above? If not, I don't see how admitting such concepts can be of any use. But if you do this 10,000 times, and the number of times X turns out to be true is small but nowhere close to 10, you are much more wrong than if you had just been saying "X is highly unlikely" all along. On the other hand, if we're observing X as a single event in isolation, I don't see how this tests your probability estimate in any way. But I suspect we have some additional philosophical differences here.
Vladimir_M (3 points, 13y)
[Continued from the parent comment.] I have revised my view about this somewhat thanks to a shrewd comment by xv15. The use of unjustified numerical probabilities can sometimes be a useful figure of speech that will convey an intuitive feeling of certainty to other people more faithfully than verbal expressions. But the important thing to note here is that the numbers in such situations are mere figures of speech, i.e. expressions that exploit various idiosyncrasies of human language and thinking to transmit hard-to-convey intuitive points via non-literal meanings. It is not legitimate to use these numbers for any other purpose. Otherwise, I agree. Except in the above-discussed cases, subjective probabilities extracted from common-sense reasoning are at best an unnecessary addition to arguments that would be just as valid and rigorous without them. At worst, they can lead to muddled and incorrect thinking based on a false impression of accuracy, rigor, and insight where there is none, and ultimately to numerological pseudoscience. Also, we still don’t know whether and to what extent various parts of our brains involved in common-sense reasoning approximate Bayesian networks. It may well be that some, or even all of them do, but the problem is that we cannot look at them and calculate the exact probabilities involved, and these are not available to introspection. The fallacy of radical Bayesianism that is often seen on LW is in the assumption that one can somehow work around this problem so as to meaningfully attach an explicit Bayesian procedure and a numerical probability to each judgment one makes. Note also that even if my case turns out to be significantly weaker under scrutiny, it may still be a valid counterargument to the frequently voiced position that one can, and should, attach a numerical probability to every judgment one makes. ---------------------------------------- So, that would be a statement of my position; I’m looking forward to any comments
jimrandomh (5 points, 13y)
Suppose you have two studies, each of which measures and gives a probability for the same thing. The first study has a small sample size, and a not terribly rigorous experimental procedure; the second study has a large sample size, and a more thorough procedure. When called on to make a decision, you would use the probability from the larger study. But if the large study hadn't been conducted, you wouldn't give up and act like you didn't have any probability at all; you'd use the one from the small study. You might have to do some extra sanity checks, and your results wouldn't be as reliable, but they'd still be better than if you didn't have a probability at all. A probability assigned by common-sense reasoning is to a probability that came from a small study, as a probability from a small study is to a probability from a large study. The quality of probabilities varies continuously; you get better probabilities by conducting better studies. By saying that a probability based only on common-sense reasoning is meaningless, I think what you're really trying to do is set a minimum quality level. Since probabilities that're based on studies and calculation are generally better than probabilities that aren't, this is a useful heuristic. However, it is only that, a heuristic; probabilities based on common-sense reasoning can sometimes be quite good, and they are often the only information available anywhere (and they are, therefore, the best information). Not all common-sense-based probabilities are equal; if an expert thinks for an hour and then gives a probability, without doing any calculation, then that probability will be much better than if a layman thinks about it for thirty seconds. The best common-sense probabilities are better than the worst statistical-study probabilities; and besides, there usually aren't any relevant statistical calculations or studies to compare against. I think what's confusing you is an intuition that if someone gives a probability, you
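One way to make "the quality of probabilities varies continuously" concrete, under a simple Beta-Binomial model with illustrative numbers: the same observed rate gives nearly the same point estimate from a small study and a large one, but the large study pins it down far more tightly.

```python
import math

# Beta(1, 1) prior updated on k successes in n trials (illustrative numbers only).
def posterior_summary(k, n):
    a, b = 1 + k, 1 + (n - k)
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

for k, n in [(7, 10), (700, 1000)]:        # same observed rate, different study sizes
    mean, sd = posterior_summary(k, n)
    print(f"n={n}: p ≈ {mean:.2f} ± {sd:.2f}")   # 0.67 ± 0.13  vs  0.70 ± 0.01
```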
Vladimir_M · 1 point · 13y
After thinking about your comment, I think this observation comes close to the core of our disagreement:

Basically, yes. More specifically, the quality level I wish to set is that the numbers must give more useful information than mere verbal expressions of confidence. Otherwise, their use at best adds nothing useful, and at worst leads to fallacious reasoning encouraged by a false feeling of accuracy.

Now, there are several possible ways to object to my position:

* The first is to note that even if not meaningful mathematically, numbers can serve as communication-facilitating figures of speech. I have conceded this point.
* The second is to insist on an absolute principle that one should always attach numerical probabilities to one's beliefs. I haven't seen anything in this thread (or elsewhere) yet that would shake my belief in the fallaciousness of this position, or even provide any plausible-seeming argument in favor of it.
* The third is to agree that sometimes attaching numerical probabilities to common-sense judgments makes no sense, but that in some cases common-sense reasoning can produce numerical probabilities that give more useful information than fuzzy words. After the discussion with mattnewport and others, I agree that there are such cases, but I still maintain that they are rare exceptions. (In my original statement, I took an overly restrictive notion of "common sense"; I admit that in some cases, thinking that could reasonably be called that is indeed precise enough to produce meaningful numerical probabilities.)

So, to clarify, which exact position do you take in this regard? Or would your position require a fourth item to summarize fairly?

I agree that there is a non-zero amount of meaning, but the question is whether it exceeds what a simple verbal statement of confidence would convey. If I can't take a number and start calculating with it, what good is it? (Except for the caveat about possible ...
jimrandomh · 0 points · 13y
My response to this ended up being a whole article, which is why it took so long. The short version of my position is: we should attach numbers to beliefs as often as possible, but for instrumental reasons rather than on principle.
[anonymous] · 1 point · 13y
As a matter of fact, I can think of one reason (a strong reason, in my view) why the consciously felt feeling of certainty is liable to be systematically and significantly exaggerated relative to the true probability assigned by the person's mental black box, the latter being something we might in principle elicit through experimentation by putting the same subject through variants of a given scenario. (Think revealed probability assignment, similar to revealed preference as understood by economists.) The reason is that whole-hearted commitment is usually best, whatever one chooses to do.

Consider Buridan's ass, but with the following alterations. Instead of hay and water, to make it more symmetrical, suppose the ass has two buckets of water, one on either side, about equally distant. Suppose furthermore that his mental black box assigns a 51% probability to the proposition that the bucket on the right is closer to him than the bucket on the left. The question, then, is what the ass should consciously feel about the probability that the bucket on the right is closest.

I propose that, given that his black box assigns a 51% probability to this, he should go to the bucket on the right. But given that he should go to the bucket on the right, he should go there without delay, without a hesitating step, because hesitation is merely a waste of time. But how can the ass go there without delay if he consciously feels that the probability is only 51% that the bucket on the right is closest? That feeling will cause uncertainty and hesitation within him and will slow him down. Therefore it is best if the ass is consciously absolutely convinced that the bucket on the right is closest. This conscious feeling of certainty will speed his step and get him to the water quickly.

So it is best for Buridan's ass that his consciously felt degrees of certainty are great exaggerations of his mental black box's probability assignments. I think this genera...
Richard_Kennaway · 2 points · 13y
I don't agree with this conflation of commitment and belief. I've never had to run from a predator, but when I run to catch a train, I am fully committed to catching the train, although I may be uncertain about whether I will succeed. In fact, the less time I have, the faster I must run, but the less likely I am to catch the train. That only affects my decision to run or not. Once the decision is made, belief and uncertainty are irrelevant; intention and action are everything.

Maybe some people have to make themselves believe in an outcome they know to be uncertain in order to achieve it, but that is just a psychological exercise, not a necessary part of action.
[anonymous] · 1 point · 13y
The question is not whether there are some examples of commitment that do not involve belief. The question is whether there are (some, many) examples where really, absolutely full commitment does involve belief. I think there are many.

Consider what commitment is. If someone says, "you don't seem fully committed to this", what sort of thing might have prompted him to say it? Something like: he thinks you aren't doing everything you could possibly do to help this along. He thinks you are holding back. You might reply to this criticism, "I am not holding anything back. There is literally nothing more that I can do to further the probability of success, so there is no point in doing more; it would be an empty and possibly counterproductive gesture rather than an action that truly furthers the chance of success." So the important question is: what can a creature do to further the probability of success?

Let's look at you running to catch the train. You claim that believing you will succeed would not further the success of your effort. Well, of course not! I could have told you that! If you believe that you will succeed, you can become complacent, which runs the risk of slowing you down. But if you believe that something is chasing you, that is likely to speed you up. Your argument is essentially, "my full commitment didn't involve belief X, therefore you're wrong." But belief X is a belief that would have slowed you down. It would have reduced, not furthered, your chance of success. So of course your full commitment didn't involve belief X.

My point is that it is often the case that a certain consciously felt belief would increase a person's chances of success, given their chosen course of action. And in light of what commitment is (commitment of one's self and one's resources to furthering the probability of success), if a belief would further the chance of success, then full, really full commitment will include that belief. S...
Richard_Kennaway · 0 points · 13y
You're right that my analogy was inaccurate: what corresponds in the train-catching scenario to believing there is a predator is my belief that I need to catch this train. A stronger belief may produce stronger commitment, but strong commitment does not require strong belief. The animal either flees or does not, because a half-hearted sprint will have no effect on the outcome whether a predator is there or not. Similarly, there's no point making a half-hearted jog for a train, regardless of how much or little one values catching it. Belief and commitment to act on the belief are two different parts of the process. Of course, a lot of the "success" literature urges people to have faith in themselves, to believe in their mission, to cast all doubt aside, etc., and if a tool works for someone I've no urge to tell them it shouldn't. But, personally, I take Yoda's attitude: "Do, or do not."
[anonymous] · 2 points · 13y
Yoda tutors Luke in a Jedi philosophy and practice, which will take Luke a while to learn. In the meantime, however, Luke is merely an unpolished human. And I am not here recommending a particular philosophy and practice of thought and behavior, but making a prediction about how unpolished humans (and animals) are likely to act.

My point is not to recommend that Buridan's ass should have an exaggerated confidence that the right bucket is closer, but to observe that we can expect him to have an exaggerated confidence, because, for the reasons I described, exaggerated confidence is likely to have been selected for: it probably improved the chances of survival of asses who did not have the benefit of Yoda's instruction. So I don't recommend, rather I expect, that humans will commonly have conscious feelings of confidence which are exaggerated, and which do not truly reflect the output of the human's mental black box, the mental machinery to which he does not have access.

Let me explain what I mean here, because I'm saying that the black box can output a 51% probability for proposition P while at the same time causing the person to be consciously absolutely convinced of the truth of P. This may be confusing, because I seem to be saying that the black box outputs two probabilities: a 51% probability for purposes of decision-making and a 100% probability for conscious consumption. So let me explain what I mean with an example.

Suppose you want to test Buridan's ass to see what probability he assigns to the proposition that the right bucket is closer. What you can do is alter the scenario as follows: introduce a mechanism which, with 4% probability, will move the right bucket further away than the left bucket before the ass gets to it. Now, if Buridan's ass assigns a 100% probability that the right bucket is (currently) closer than the left bucket, then taking into account the introduced mechanism, this yields a 96% probability that, b...
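The comment is cut off here, so the following sketch of where the argument seems to be headed is my own completion, offered only under that assumption: with the 4% mechanism in place, an ass whose black box says 51% should now head left, while one whose black box says 100% should still head right, so observed behavior can discriminate between internal probability assignments even if the conscious feeling is "certain" in both cases.

```python
# Minimal sketch (my own illustration; the original comment is truncated here):
# how a perturbation probability can reveal the black box's internal
# probability from behavior alone. All numbers are hypothetical.

def goes_right(p_right_closer_now: float, p_perturb: float) -> bool:
    """Does an ass that heads for the bucket more likely to be closer go right?

    p_right_closer_now: the black box's probability that the right bucket
                        is currently closer.
    p_perturb:          probability that a mechanism moves the right bucket
                        further away before the ass arrives.
    """
    p_right_closer_at_arrival = p_right_closer_now * (1.0 - p_perturb)
    return p_right_closer_at_arrival > 0.5

# With no perturbation, both a 51% ass and a 100% ass go right:
print(goes_right(0.51, 0.00), goes_right(1.00, 0.00))  # True True

# With the 4% mechanism, their behavior comes apart:
print(goes_right(0.51, 0.04), goes_right(1.00, 0.04))  # False True

# Sweeping p_perturb brackets the internal probability, regardless of how
# certain the ass consciously feels.
```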
prase · 2 points · 13y
Thanks for the lengthy answer. Still, why is it impossible to calibrate people in general, looking at how often they get the answer right, and then use them as a device for measuring probabilities? If a person is right on approximately 80% of the issues about which he says he's "sure", then why not translate his next "sure" into an 80% probability? That doesn't seem arbitrary to me. There may be inconsistency between measurements using different people, but strictly speaking, thermometers and clocks also sometimes disagree.
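A small sketch of the calibration procedure described here; the track record, the labels, and the resulting hit rates below are purely hypothetical:

```python
# Illustrative sketch of calibrating a person's verbal confidence labels
# against their track record. Data and labels are hypothetical.
from collections import defaultdict

# (stated confidence label, was the answer actually correct?)
track_record = [
    ("sure", True), ("sure", True), ("sure", False), ("sure", True),
    ("sure", True), ("fairly confident", True), ("fairly confident", False),
    ("fairly confident", True), ("no idea", False), ("no idea", True),
]

hits = defaultdict(int)
totals = defaultdict(int)
for label, correct in track_record:
    totals[label] += 1
    hits[label] += correct  # True counts as 1, False as 0

calibration = {label: hits[label] / totals[label] for label in totals}
print(calibration)
# e.g. sure: 0.80, fairly confident: ~0.67, no idea: 0.50
# On this reading, the person's next "sure" is treated as p ~ 0.8.
```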
Vladimir_M · 0 points · 13y
I do discuss this exact point in the above lengthy comment, and I allow for this possibility. Here is the relevant part:

> Now clearly, the critical part is to ensure that the future judgments are based on the same parts of the person's brain and that the relevant features of these parts, as well as the problem being solved, remain unchanged. In practice, these requirements can be satisfied by people who have reached the peak of ability achievable by learning from experience in solving some problem that repeatedly occurs in nearly identical form.

Still, even in the best case, we're talking about a very limited number of questions and people here.
prase · 0 points · 13y
I know you have limited it to repeated judgments about essentially the same question. I was rather asking why, and I am still not sure whether I interpret it correctly. Is it that the judgments themselves are possibly produced by different parts of the brain, or that the person's self-evaluations of certainty are produced by different parts of the brain, or both? And if so, so what?

Imagine a test is done on a particular person. During five consecutive years he is asked a lot of questions (of all different types), and he has to give an answer and a subjective feeling of certainty. After that, we see that the answers which he labeled "almost certain" were right in 83%, 78%, 81%, 84% and 85% of cases in the five years. Let's even say that the experimenters were careful enough to divide the questions into different topics, and to establish that his "almost certain" answers about medicine were right 94% of the time on average, while his "almost certain" answers about politics were right 56% of the time on average. All other topics were near the overall average.

Do you 1) maintain that such stable results are very unlikely to happen, or 2) hold that even if most people can be calibrated in such a way, it still doesn't justify using them for measuring probabilities?
Vladimir_M · 0 points · 13y
prase:

We don't really know, but it could certainly be both, and it may well also be that the same parts of the brain are not equally reliable for all questions they are capable of processing. Therefore, while simple inductive reasoning tells us that consistent accuracy on the same problem can be extrapolated, there is no ground to generalize to other questions, since they may involve different parts of the brain, or the same part functioning in different modes that don't have the same accuracy. Unless, of course, we cover all such various parts and modes and obtain some sort of weighted average over them, which I suppose is the point of your thought experiment, of which more below.

If the set of questions remains representative -- in the sense of querying the same brain processes with the same frequency -- the results could turn out to be fairly stable. This could conceivably be achieved by large and wide-ranging sets of questions. (I wonder if someone has actually done such experiments?) However, the result could be replicated only if the same person is again asked a similarly large set of questions that is representative with regard to the frequencies with which it queries different brain processes. Relative to that reference class, it clearly makes sense to attach probabilities to answers, so, yes, here we would have another counterexample to my original claim, for another peculiar meaning of probabilities.

The trouble is that these probabilities would be useless for any purpose that doesn't involve another similar representative set of questions. In particular, sets of questions about some particular topic that is not representative would presumably not replicate them, and thus they would be a very bad guide for betting that is limited to some particular topic (as it nearly always is). Thus, this seems like an interesting theoretical exercise, but not a way to obtain practically useful numbers. (I should add that I never thought about this scenario before ...
prase · 0 points · 13y
If there are any experimental psychologists reading this, maybe they can organise the experiment. I am curious whether people can indeed be calibrated on general questions.
xv15 · 4 points · 13y
I tell you I believe X with 54% certainty. Who knows, that number could have been generated in a completely bogus way. But however I got here, this is where I am. There are bets about X that I will and won't take, and guess what: that's my cutoff probability right there. And by the way, now I have communicated to you where I am, in a way that does not further compound the error.

Meaningless is a very strong word. In the face of such uncertainty, it can feel natural to take shelter in the idea of "inherent vagueness"... but this is reality, we place our bets with real dollars and cents, and all the uncertainty in the world collapses to a number in the face of the expectation operator.
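A minimal sketch of the betting reading of a stated 54% (the stakes and prices are my own illustrative choices, not anything the commenter specifies): under simple expected value, 54% is exactly where bets on X flip from acceptable to unacceptable.

```python
# Illustrative sketch: a stated probability as a betting cutoff.
# Stakes and numbers are hypothetical.

def accepts_bet(p_belief: float, stake: float, payout_if_x: float) -> bool:
    """Accept a bet that costs `stake` and pays `payout_if_x` if X is true
    whenever its expected value is positive under belief p_belief."""
    expected_value = p_belief * payout_if_x - stake
    return expected_value > 0

p = 0.54  # "I believe X with 54% certainty"

# Bets paying $100 if X is true, offered at various prices:
for price in (40, 50, 53, 55, 60):
    print(price, accepts_bet(p, price, 100))
# 40 True, 50 True, 53 True, 55 False, 60 False:
# the switch between $53 and $55 is the 54% cutoff made operational.
```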
Vladimir_M · 2 points · 13y
So why stop there? If you can justify 54%, then why not go further and calculate a dozen or two more significant digits, and stand behind them all with unshaken resolve?
wnoise · 9 points · 13y
You can, of course. For most situations, the effort is not worth the trade-off. But making a distinction between 1%, 25%, 50%, 75%, and 99% often is.

You can (at least formally) put error bars on the quantities that go into a Bayesian calculation. The problem, of course, is that error bars are shorthand for a distribution of possible values, and it's not obvious what a distribution of probabilities means or should mean. Everything operational about probability functions is fully captured by their full set of expectation values, so this is no different from just immediately taking the mean, right? Well, no. The uncertainties are a higher-level model that not only makes predictions, but also calibrates how much those predictions are likely to move given new data. It seems to me that this is somewhat related to the problem of logical uncertainty.
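A brief sketch of this point about distributions over probabilities (the Beta parameters and data are illustrative assumptions): two beliefs can have the same mean probability yet be moved very differently by the same new evidence, and that difference is the extra information the "error bars" carry.

```python
# Illustrative sketch: two beliefs with the same mean probability (0.5)
# but different "error bars", modeled as Beta distributions.
# Parameters and data are hypothetical.

def beta_mean(a: float, b: float) -> float:
    return a / (a + b)

vague = (1.0, 1.0)     # Beta(1, 1): mean 0.5, very spread out
firm = (50.0, 50.0)    # Beta(50, 50): mean 0.5, tightly concentrated

print(beta_mean(*vague), beta_mean(*firm))  # 0.5 0.5 -- identical means

# Observe 8 successes in 10 new trials and update both:
successes, failures = 8, 2
vague_post = (vague[0] + successes, vague[1] + failures)
firm_post = (firm[0] + successes, firm[1] + failures)

print(round(beta_mean(*vague_post), 3))  # 0.75  -- the vague belief moves a lot
print(round(beta_mean(*firm_post), 3))   # 0.527 -- the firm belief barely moves
```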
xv15 · 7 points · 13y
Again, meaningless is a very strong word, and it does not make your case easy. You seem to be suggesting that NO number, however imprecise, has any place here, and so you do not get to refute me by saying that I have to embrace arbitrary precision. In any case, if you offer me some bets with more significant digits in the odds, my choices will reveal the cutoff to more significant digits. Wherever it may be, there will still be some bets I will and won't take, and the number reflects that, which means it carries very real meaning. Now, maybe I will hold the line at 54% exactly, not feeling any gain to thinking harder about the cutoff (as it gets harder AND less important to nail down further digits). Heck, maybe on some other issue I only care to go out to the nearest 10%. But so what? There are plenty of cases where I know my common sense belief probability to within 10%. That suggests such an estimate is not meaningless.
Vladimir_M · 2 points · 13y
xv15:

To be precise, I wrote "meaningless, except perhaps as a vague figure of speech." I agree that the claim would be too strong without that qualification, but I do believe that "vague figure of speech" is a fair summary of the meaningfulness that is to be found there. (Note also that the claim applies specifically to "common-sense conclusions and beliefs," not to things where there is a valid basis for employing mathematical models that yield numerical probabilities.)

You seem to be saying that since you perceive this number as meaningful, you will be willing to act on it, and this by itself renders it meaningful, since it serves as a guide for your actions. If we define "meaningful" to cover this case, then I agree with you, and this qualification should be added to my above statement. But the sense in which I used the term originally doesn't cover this case.
xv15 · 2 points · 13y
Fair. Let me be precise too. I read your original statement as saying that numbers will never add meaning beyond what a vague figure of speech would, i.e. if you say "I strongly believe this" you cannot make your position clearer by attaching a number. That I disagree with. To me it seems clear that:

i) "Common-sense conclusions and beliefs" are held with varying levels of precision.

ii) Often these beliefs are held with a level of precision that can best be described with a number. (Best = most succinct, least misinterpretable, etc. Indeed, it seems to me that sometimes "best" could be replaced with "only": you will never get people to understand 60% by saying "I reasonably strongly believe", and yet your belief may be demonstrably closer to 60 than to 50 or 70.)

I don't think your statement is defensible under a normal definition of "common-sense conclusions," but you may have internally defined it in such a way as to make your statement true, with a (I think) relatively narrow sense of "meaningfulness" also in mind. For instance, if you ignore the role of numbers in the transmission of belief from one party to the next, you are a big step closer to being correct.
Vladimir_M · 2 points · 13y
xv15:

You have a very good point here. For example, a dialog like this could result in a real exchange of useful information:

A: "I think this project will probably fail."
B: "So, you mean you're, like, 90% sure it will fail?"
A: "Um... not really, more like 80%."

I can imagine a genuine meeting of minds here, where B now has a very good idea of how confident A feels about his prediction. The numbers are still used as mere figures of speech, but "vague" is not a correct way to describe them, since the information has been transmitted in a more precise way than if A had just used verbal qualifiers. So, I agree that "vague" should probably be removed from my original claim.
HughRistik · 7 points · 13y
On point #2, I agree with you. On point #1, I had the same reaction as xv15. Your example conversation is exactly how I would defend the use of numerical probabilities in conversation.

I think you may have confused people with the phrase "vague figure of speech," which was itself vague. Vague relative to what? "No idea / kinda sure / pretty sure / very sure", the ways that people generally communicate about probability, are much worse. You can throw in other terms like "I suspect" and "absolutely certain" and "very very sure", but it's not even clear how these expressions of belief match up across speakers. In common speech, we really only have about 3-5 degrees of probability. That's just not enough gradations.

In contrast, when expressing a percentage probability, people tend to use only multiples of 10, certain multiples of 5, 0.01%, 1%, 2%, 98%, 99% and 99.99%. If people use figures like 87%, or any decimal places other than the ones previously mentioned, it's usually because they are deliberately being ridiculous. (And it's no coincidence that your example uses multiples of 10.) I agree with you that feelings of uncertainty are fuzzy, but they aren't so fuzzy that we can get by with merely 3-5 gradations in all sorts of conversations. On some subjects, our communication becomes more precise when we have 10-20 gradations. Yet there are diminishing returns on more degrees of communicable certainty (for reasons you correctly describe), so going to any higher resolution than 10-20 degrees isn't useful for anything except jokes.

Yes. Gaining the 10-20 gradations that numbers allow, as they are typically used, does make conversations more precise than tacking "very very" onto your statement of certainty. It's similar to the infamous 1-10 rating system for people's attractiveness. Despite various reasons that rating people with numbers is distasteful, this ranking system persists because, in my view, people find it useful for communicating subjec...
Vladimir_M · -4 points · 13y
I mostly agree with this assessment. However, the key point is that such uses of numbers should be seen as metaphorical. The literal meaning of a metaphor is typically nonsensical, but it works by somehow hacking the human understanding of language to successfully convey a point with greater precision than the most precise literal statement would allow, at least in as many words. (There are other functions of metaphors too, of course, but this one is relevant here.) And just like it is fallacious to understand a metaphor literally, it is similarly fallacious to interpret these numerical metaphors as useful for mathematical purposes. When it comes to subjective probabilities, however, I often see what looks like confusion on this point.
jimrandomh · 2 points · 13y
It is wrong to use a subjective probability that you got from someone else for mathematical purposes directly, for reasons I expand on in my comment here. But I don't think that makes them metaphorical, unless you're using a definition of metaphor that's very different from the one I am. And you can use in calculations a subjective probability which you generated yourself, or one that you have combined with your own subjective probability. Doing so just comes with the same caveats as using a probability from a study whose sample was too small, or which had some other bad but not entirely fatal flaw.
Vladimir_M · 0 points · 13y
I will write a reply to that earlier comment of yours a bit later today when I have more time. (I didn't forget about it; it's just that I usually answer lengthy comments that deserve a greater time investment later than those where I can fire off replies rapidly during short breaks.)

But in addition to the theme of that comment, I think you're missing my point about the possible metaphorical quality of numbers. Human verbal expressions have their literal information content, but one can often exploit the idiosyncrasies of the human language-interpretation circuits to effectively convey information altogether different from the literal meaning of one's words. This gives rise to various metaphors and other figures of speech, which humans use in their communication frequently and effectively. (The process is more complex than this simple picture, since frequently used metaphors can eventually come to be understood as literal expressions of their common metaphorical meaning, and this process is gradual. There are also other important considerations about metaphors, but this simple observation is enough to support my point.)

Now, I propose that certain practical uses of numbers in communication should be seen that way too. The literal meaning of a number is that something can ultimately be counted, measured, or calculated to arrive at that number. A metaphorical use of a number, however, doesn't convey any such meaning, but merely expects to elicit similar intuitive impressions, which would be difficult or even impossible to communicate precisely using ordinary words. And just as a verbal metaphor is nonsensical except for the non-literal intuitive point it conveys, and its literal meaning should be discarded, at least some practical uses of numbers in human conversations serve only to communicate intuitive points, and the actual values are otherwise nonsensical and should not be used for any other purposes -- and even if they perhaps are, their metaphorical value ...
xv15 · 0 points · 13y
Okay. I still suspect I disagree with whatever you mean by mere "figures of speech," but this rational truthseeker does not have infinite time or energy. In any case, thank you for a productive and civil exchange.
wedrifid · 4 points · 13y
Or, you could slide up your arbitrary and fallacious slippery slope and end up with Shultz.
Vladimir_M · 0 points · 13y
Even if you believe that my position is fallacious, I am surely not the one to be accused of arbitrariness here. Arbitrariness is exactly what I object to, in the sense of insisting on the validity of numbers that lack both a logically correct justification and the clear error bars that would follow from it. And I'm asking the above question in full seriousness: a Bayesian probability calculation will give you as many significant digits as you want, so if you believe it makes sense to extract a Bayesian probability with two significant digits from your common-sense reasoning, why not more than that?

In any case, I have explained my position at length, and it would be nice if you addressed the substance of what I wrote instead of trying to come up with witty one-liner jabs. For those who want the latter, there are other places on the web full of people whose talent for such things is considerably greater than yours.
wedrifid · 2 points · 13y
I specifically object to your implied argument in the grandparent. I will continue to reject comments that make that mistake regardless of how many times you insult me.
Vladimir_M · -1 point · 13y
Look, in this thread, you have clearly been making jabs for rhetorical effect, without any attempt to argue in a clear and constructive manner. I am calling you out on that, and if you perceive that as insulting, then so be it. Everything I wrote here has been perfectly honest and upfront, and written with the goal of eliciting rational counter-arguments from which I might perhaps change my opinion. I have neither the time nor the inclination for the sort of one-upmanship and showing off that you seem to be after, and even if I were, I would pursue it in some more suitable venue. (Where, among other things, one would indeed expect to see the sort of performance you're striving for done in a much more skilled and entertaining way.)
wedrifid · 0 points · 13y
Your map is not the territory. If you look a little closer, you may find that my points are directed at the topic, not at your ego. In particular, take a second glance at this comment: the very example of betting illustrates the core problem with your position.

The insult would be that you are telling me I'm bad at entertaining one-upmanship. I happen to believe I would be quite good at such performances were I of a mind, and in a context where it suited my goals (dealing with AMOGs, for example). When dealing with intelligent agents, if you notice that what they are doing does not seem to be effective at achieving their goals, it is time to notice your confusion. It is most likely that your model of their motives is inaccurate. Mind reading is hard.

Shultz does know nuthink. Slippery slopes do (arbitrarily) slide in both directions (to either Shultz or Omega, in this case). Most importantly, if you cannot assign numbers to confidence levels, you will lose money when you try to bet.
torekp · 3 points · 13y
Upvoted, because I think you're only probably right. And you not only stole my thunder, you made it more thunderous :(
[anonymous] · 2 points · 13y
Downvote if you agree with something, upvote if you disagree.

EDIT: I missed the word "only". I just read "I think you're probably right." My mistake.
magfrump · 3 points · 13y
Upvote for disagreements of overconfidence OR underconfidence.
groupuscule · 0 points · 13y
Same here. A "pretty sure" confidence level would probably have done it for me.
orthonormal · 2 points · 13y
Um, so when Nate Silver tells us he's calculated odds of 2 in 3 that Republicans will control the House after the election, this number should be discarded as noise because it's a common-sense belief that the Republicans will gain that many seats?
Vladimir_M · 0 points · 13y
Boy, did I hit a hornets' nest with this one!

No, of course I didn't mean anything like that. Here is how I see the situation. Silver has a model, which is ultimately a piece of mathematics telling us that some p = 0.667, and for reasons of common sense, Silver believes (assuming he's being upfront with all this) that this model approximates reality closely enough that p can be interpreted, with reasonable accuracy, as the probability of Republicans winning a House majority this November.

Now, when you ask someone which party is likely to win this election, that person's brain will activate some algorithm that produces an answer along with some rough level of confidence. Someone completely ignorant about politics might answer that he has no idea and cannot say anything with any certainty. Other people will predict different results with varying (informally expressed) confidence. Silver himself, or someone else who agrees with his model, might reply that the best answer is whatever the model says (i.e. Republicans win with p = 0.667), since it is completely superior to the opaque common-sense algorithms used by the brains of non-mathy political analysts. Others will have greater or lesser confidence in the accuracy of the model, and might take its results into account, with varying weight, alongside other common-sense considerations.

Ultimately, the status of this number depends on the relation between Silver's model and reality. If you believe that the model is a vast improvement over any informal common-sense considerations in predicting election results, just as Newton's theory is a vast improvement over any common-sense considerations in predicting the motions of planets, then we're n...
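For what it's worth, the option of taking the model's result into account, with varying weight, alongside common-sense considerations can be made concrete with a simple linear opinion pool. The pooling rule and every number below are my own illustrative assumptions, not anything Silver or the commenters propose.

```python
# Illustrative sketch (my assumption, not anyone's actual procedure):
# blending a model's probability with a common-sense estimate via a
# linear opinion pool, where the weight encodes trust in the model.

def pooled_probability(p_model: float, p_common_sense: float, model_weight: float) -> float:
    """Weighted average of the two probabilities; model_weight in [0, 1]."""
    return model_weight * p_model + (1.0 - model_weight) * p_common_sense

p_model = 0.667         # the model's output
p_common_sense = 0.80   # hypothetical gut estimate of a confident pundit

for w in (0.0, 0.5, 0.9, 1.0):
    print(f"trust in model = {w:.1f}: p = {pooled_probability(p_model, p_common_sense, w):.3f}")
# Ranges from 0.800 (ignore the model) down to 0.667 (defer to it entirely).
```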