Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed; it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
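For readers who want something more concrete than a guess, here is a toy sketch of the voting rule in Python. The function and its threshold are illustrative inventions, not part of the game; the post explicitly asks you to use intuition rather than a formula.

    def irrationality_game_vote(their_prob, my_prob, threshold=0.05):
        """Toy sketch of the game's voting rule. The 'threshold' cutoff is an
        illustrative assumption; the post asks for intuition, not a formula."""
        gap = abs(their_prob - my_prob)
        if gap >= threshold:
            return "upvote"    # basic disagreement (over- or under-confidence)
        if gap <= threshold / 2:
            return "downvote"  # basic agreement
        return "pass"          # genuinely unsure whether you basically agree

    # Example from the post: their 99.9% vs. your 90% is a big enough gap to upvote.
    print(irrationality_game_vote(0.999, 0.90))  # -> "upvote"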

That's the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. This means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs, you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You'd better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!

Additional rules:

  • Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
  • If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post. 
  • Don't post propositions as comment replies to other comments. That'll make it disorganized.
  • You have to actually think your degree of belief is rational.  You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that  any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average.  This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
  • Debate and discussion are great, but keep them civil.  Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
  • No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
  • Multiple propositions are fine, so long as they're moderately interesting.
  • You are encouraged to reply to comments with your own probability estimates, but  comment voting works normally for comment replies to other comments.  That is, upvote for good discussion, not agreement or disagreement.
  • In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!
The Irrationality Game
[-]PlaidX1250

Flying saucers are real. They are likely not nuts-and-bolts spacecraft, but they are actual physical things, the product of a superior science, and under the control of unknown entities. (95%)

Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.

Now that there's a top comments list, could you maybe edit your comment and add a note to the effect that this was part of The Irrationality Game? No offense, but newcomers that click on Top Comments and see yours as the record holder could make some very premature judgments about the local sanity waterline.

5wedrifid
Given that most of the top comments are meta in one way or another it would seem that the 'top comments' list belongs somewhere other than on the front page. Can't we hide the link to it on the wiki somewhere?
4Luke Stebbing
The majority of the top comments are quite good, and it'd be a shame to lose a prominent link to them. Jack's open thread test, RobinZ's polling karma balancer, Yvain's subreddit poll, and all top-level comments from The Irrationality Game are the only comments that don't seem to belong, but these are all examples of using the karma system for polling (should not contribute to karma and should not be ranked among normal comments) or, uh, para-karma (should contribute to karma but should not be ranked among normal comments).
5AngryParsley
Just to clarify: by "unknown entities" do you mean non-human intelligent beings?
1PlaidX
Yes.
3Will_Newsome
I would like to announce that I have updated significantly in favor of this after examining the evidence and thinking somewhat carefully for a while (an important hint is "not nuts-and-bolts"). Props to PlaidX for being quicker than me.
2[anonymous]
I find it vaguely embarrassing that this post, taken out of context, now appears at the top of the "Top Comments" listing.
5Vladimir_Nesov
I think "top comments" was an experiment with a negative result, and so should be removed.
1Scott Alexander
I upvoted you because 95% is way high, but I agree with you that it's non-negligible. There's way too much weirdness in some of the cases to be easily explainable by mass hysteria or hoaxes or any of that stuff - and I'm glad you pointed out Fatima, because that was the one that got me thinking, too. That having been said, I don't know what they are. Best guess is easter eggs in the program that's simulating the universe.
3Will_Newsome
Prior before having learned of Fatima, roughly? Best guess at current probability?
1PlaidX
I don't think that's a very good guess, but it's as good as any I've seen. I tried to phrase my belief statement to include things like this within its umbrella.
1Will_Newsome
Voted up, and you've made me really curious. Link or explanation?
5PlaidX
This is what spurred me to give consideration to the idea initially, but what makes me confident is sifting through simply mountains of reports. To get an idea of the volume and typical content, here's a catalog of vehicle interference cases in Australia from 1958 to 2004. Most could be explained by a patchwork of mistakes and coincidences, some require more elaborate, "insanity or hoax" explanations, and if there are multiple witnesses, insanity falls away too. But there is no pattern that separates witnesses into a "hoax" and a "mistake" group, or even that separates them from the general population.

If there are multiple witnesses who can see each other's reactions, it's a good candidate for mass hysteria.

7Will_Newsome
I couldn't really understand the blog post: his theory is that there are terrestrial but nonhuman entities that like to impress the religious? But the vehicle interference cases you reference are generally not religious in nature, and vary enormously in the actual form of the craft seen (some are red and blue, some are series of lights). What possible motivations for the entities could there be? Most agents with such advanced technology will aim to efficiently optimize for their preferences. If this is what optimizing for their preferences looks like, they have some very improbably odd preferences.

To be fair to the aliens, the actions of Westerners probably seem equally weird to Sentinel Islanders. Coming every couple of years in giant ships or helicopters to watch them from afar, and then occasionally sneaking into abandoned houses and leaving gifts?

3JohannesDahlstrom
That was a fascinating article. Thank you.
3PlaidX
I agree with you entirely, and this is a great source of puzzlement to me, and to basically every serious investigator. They hide in the shadows with flashing lights. What could they want from us that they couldn't do for themselves, and if they wanted to influence us without detection, shouldn't it be within their power to do it COMPLETELY without detection? I have no answers to these questions.
2Risto_Saarelma
That's assuming that what's going on is entities, based on essentially the same lawful universe as ours, running circles around humans. If what's going on is instead something like a weird universe, where reality makes sense most of the time, but not always, I imagine you might get something that looks a lot like some of the reported weirdness. Transient entities that don't make sense leaking through the seams, never quite leaving the causal trail which would incontrovertibly point to their existence.
1Will_Newsome
If I'd asked the above questions honestly rather than semi-rhetorically I may have figured a few things out a lot sooner than I did. I might be being uncharitable to myself, especially as I did eventually ask them honestly, but the point still stands I think.
0wedrifid
64 points! This is the highest voted comment that I can remember seeing. (A few posts have gone higher). Can anyone remember another, higher voted example?
2Richard_Kennaway
But the rules are different in this thread. 64 here means that 64 more voters disagree than agree.
2Vladimir_Nesov
Tell that to the out-of-context list of all LW comments sorted by rating!
6wedrifid
Hang on, we have one of those?
0[anonymous]
-
0Jonathan_Graehl
I'd like to know what your prior is for the disjunction "unknown entities control saucers that ambiguously reveal themselves to a minority of people on Earth, for some purpose". While I'm sure you've looked more closely at the evidence than I have, I presume your prior for that disjunction must be much higher than mine to even look closely.
1PlaidX
It certainly wasn't high... I went through most of my life never giving the idea a thought, stumbled onto the Miracle of Fatima one day, and said "well, clearly this wasn't a flying saucer, but what the heck was it?" But the rabbit hole just kept going down. It is not a particularly pleasant feeling to me, as someone who used to think he had a fairly solid grip on the workings of the world.
0Perplexed
The sun, seen through moving clouds. Just exactly what it is described as being.
7PlaidX
Here is one of many detailed accounts, this one from Dr. José Maria de Almeida Garrett, professor at the Faculty of Sciences of Coimbra, Portugal:

I was looking at the place of the apparitions, in a serene, if cold, expectation of something happening, and with diminishing curiosity, because a long time had passed without anything to excite my attention. Then I heard a shout from thousands of voices and saw the multitude suddenly turn its back and shoulders away from the point toward which up to now it had directed its attention, and turn to look at the sky on the opposite side. It must have been nearly two o'clock by the legal time, and about midday by the sun. The sun, a few moments before, had broken through the thick layer of clouds which hid it, and shone clearly and intensely. I veered to the magnet which seemed to be drawing all eyes, and saw it as a disc with a clean-cut rim, luminous and shining, but which did not hurt the eyes. I do not agree with the comparison which I have heard made in Fatima---that of a dull silver disc. It was a clearer, richer, brighter colour, having something of the luster of a pearl. It did not in the least resemble the moon on a clear night because one saw it and felt it to be a living body. It was not spheric like the moon, nor did it have the same colour, tone, or shading. It looked like a glazed wheel made of mother-of-pearl. It could not be confused, either, with the sun seen through fog (for there was no fog at the time), because it was not opaque, diffused or veiled. In Fatima it gave light and heat and appeared clear-cut with a well-defined rim. The sky was mottled with light cirrus clouds with the blue coming through here and there, but sometimes the sun stood out in patches of clear sky. The clouds passed from west to east and did not obscure the light of the sun, giving the impression of passing behind it, though sometimes these flecks of white took on tones of pink or diaphanous blue as they passed before the sun.
0Will_Newsome
Do you think you could guess, numerically, what your prior probability was before learning of the Miracle of Fatima?
3PlaidX
Mmm, < .01%, it wasn't something I would've dignified with enough thought to give a number. Even as a kid, although I liked the idea of aliens, stereotypical flying saucer little green men stuff struck me as facile and absurd. A failure of the imagination as to how alien aliens would really be. In hindsight I had not considered that their outward appearance and behavior could simply be a front, but even then my estimate would've been very low, and justifiably, I think.
1Eugine_Nier
Probably ~15% (learning about Fatima didn't change it much by the way). Basically because I can't think of a good reason why this should have an extremely low prior.
-2CronoDAS
And do you believe in Santa Claus, too? :P
[-]Raemon740

Google is deliberately taking over the internet (and by extension, the world) for the express purpose of making sure the Singularity happens under their control and is friendly. 75%

I wish. Google is the single most likely source of unfriendly AIs anywhere, and as far as I know they haven't done any research into friendliness.

7ata
Agreed. I think they've explicitly denied that they're working on AGI, but I'm not too reassured. They could be doing it in secret, probably without much consideration of Friendliness, and even if not, they're probably among the entities most likely (along with, I'd say, DARPA and MIT) to stumble upon seed AI mostly by accident (which is pretty unlikely, but not completely negligible, I think).

If Google were to work on AGI in secret, I'm pretty sure that somebody in power there would want to make sure it was friendly. Peter Norvig, for example, talks about AI friendliness in the third edition of AI: A Modern Approach, and he has a link to the SIAI on his home page.

Personally, I doubt that they're working on AGI yet. They're getting a lot of mileage out of statistical approaches and clever tricks; AGI research would be a lot of work for very uncertain benefit.

5Kevin
Google has one employee working (sometimes) on AGI. http://research.google.com/pubs/author37920.html
6khafra
It's comforting, friendliness-wise, that one of his papers cites "personal communication with Steve Rayhawk."
0magfrump
If they've explicitly denied doing research into AGI, they would have no reason to talk about friendliness research; that isn't additional evidence. I do think the OP is extremely overconfident though.
1Raemon
I confess that I probably exaggerated the certainty. It's more like 55-60%.

I actually used to have a (mostly joking) theory about how Google would accidentally create a sentient internet that would have control over everything and send a robot army to destroy us. Someone gave me a book called "How to Survive a Robot Uprising" which described the series of events that would lead to a Terminator-like Robot apocalypse, and Google was basically following it like a checklist. Then I came here and learned more about nanotechnology and the singularity and the joke became a lot less funny. (The techniques described in the Robot Uprising are remarkably useless when you have about a day between noticing something is wrong and the whole world turning into paperclips.)

It seems to me that with the number of extremely smart people in Google, there's gotta be at least some who are pondering this issue and thinking about it seriously. The actual evidence of Google being a genuinely idealistic company that just wants information to be free and to provide a good internet experience vs them having SOME kind of secret agenda seems about 50/50 to me - there's no way I can think of to tell the difference until they actually DO something with their massively accumulated power.

Given that I have no control of it, basically I just feel more comfortable believing they are doing something that a) uses their power in a way I can perceive as good or at least good-intentioned, which might actually help, b) lines up with the particular set of capabilities and interests.

I'd also note that the type of Singularity I'm imagining isn't necessarily AI per se. More of the internet and humanity (or parts of it) merging into a superintelligent consciousness, gradually outsourcing certain brain functions to the increasingly massive processing power of computers.
4magfrump
I do think it's possible and not unlikely that Google is purposefully trying to steer the future in a positive direction; although I think people there are likely to be more skeptical of "singularity" rhetoric than LWers (I know at least three people who have worked at Google and I have skirted the subject sufficiently to feel pretty strongly that they don't have a hidden agenda. This isn't very strong evidence but it's the only evidence I have). I would assign up to a 30% probability or so of "Google is planning something which might be described as preparing to implement a positive singularity." But less than a 5% chance that I would describe it that way, due to more detailed definitions of "singularity" and "positive."
3NancyLebovitz
I don't entirely trust Google because they want everyone else's information to be available. Google is somewhat secretive about its own information. There are good commercial reasons for them to do that, but it does show a lack of consistency.

Panpsychism: All matter has some kind of experience. Atoms have some kind of atomic qualia that add up to the things we experience. This seems obviously right to me, but stuff like this is confusing, so I'll say 75%.

Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.

Can you rephrase this statement, tabooing the words "experience" and "qualia"?

If he could, he wouldn't be making that mistake in the first place.

This is an Irrationality Game comment; do not be too alarmed by its seemingly preposterous nature.

We are living in a simulation (some agent's (agents') computation). Almost certain. >99.5%.

(ETA: For those brave souls who reason in terms of measure, I mean that a non-negligible fraction of my measure is in a simulation. For those brave souls who reason in terms of decision theoretic significantness, screw you, you're ruining my fun and you know what I mean.)

I am shocked that more people believe in a 95% chance of advanced flying saucers than a 99.5% chance of not being in 'basement reality'. Really?! I still think all of you upvoters are irrational! Irrational, I say!

0[anonymous]
Well, from a certain point of view you could see the two propositions as being essentially equivalent... i.e. the inhabitants of a higher-layer reality poking through the layers and toying with us (if you had a universe simulation running on your desktop, would you really be able to refrain from fucking with your sims' heads?). So whatever probability you assign to one proposition, your probability for the other shouldn't be too much different.
0LucasSloan
I certainly agree with you now, but it wasn't entirely clear what you meant by your statement. A qualifier might help.
0Will_Newsome
Most won't see the need for precision, but you're right, I should add a qualifier for those who'd (justifiably) like it.
2Perplexed
Help! There is someone reasoning in terms of decision theoretic significantness ruining my fun by telling me that my disagreement with you is meaningless.
2Will_Newsome
Ahhh! Ahhhhh! I am extremely reluctant to go into long explanations here. Have you read the TDT manual though? I think it's up at the singinst.org website now, finally. It might dissolve confusions of interpretation, but no promises. Sorry, it's just a really tricky and confusing topic with lots of different intuitions to take into account and I really couldn't do it justice in a few paragraphs here. :(
5LucasSloan
What do you mean by this? Do you mean "a non-negligible fraction of my measure is in a simulation" in which case you're almost certainly right. Or do you mean "this particular instantiation of me is in a simulation" in which case I'm not sure what it means to assign a probability to the statement.
0Will_Newsome
So you know which I must have meant, then. I do try to be almost certainly right. ;) (Technically, we shouldn't really be thinking about probabilities here either because it's not important and may be meaningless decision theoretically, but I think LW is generally too irrational to have reached the level of sophistication such that many would pick that nit.)
4Nick_Tarleton
I'm surprised to hear you say this. Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)
2Will_Newsome
That is a good question. I feel like asking 'in what direction would structural uncertainty likely bend my thoughts?' leads me to think, from past trends, 'towards the world being bigger, weirder, and more complex than I'd reckoned'. This seems to push higher than 99.5%. If you keep piling on structural uncertainty, like if a lot of things I've learned since becoming a rationalist and hanging out at SIAI become unlearned, then this trend might be changed to a more scientific trend of 'towards the world being bigger, less weird, and simpler than I'd reckoned'. This would push towards lower than 99.5%. What are your thoughts? I realize that probabilities aren't meaningful here, but they're worth naively talking about, I think. Before you consider what you can do decision theoretically you have to think about how much of you is in the hands of someone else, and what their goals might be, and whether or not you can go meta by appeasing those goals instead of your own and the like. (This is getting vaguely crazy, but I don't think that the craziness has warped my thinking too much.) Thus thinking about 'how much measure do I actually affect with these actions' is worth considering.
0wedrifid
That's a good question. My impression is that it is somewhat. But in the figures we are giving here we seem to be trying to convey two distinct concepts (not just likelihoods).
4Mass_Driver
Propositions about the ultimate nature of reality should never be assigned probability greater than 90% by organic humans, because we don't have any meaningful capabilities for experimentation or testing.
4Will_Newsome
Pah! Real Bayesians don't need experiment or testing; Bayes transcends the epistemological realm of mere Science. We have way more than enough data to make very strong guesses.
2[anonymous]
This raises an interesting point: what do you think about the Presumptuous Philosopher thought experiment?
3Jonathan_Graehl
Yep. Over-reliance on anthropic arguments IMO.
4Will_Newsome
Huh, querying my reasons for thinking 99.5% is reasonable, few are related to anthropics. Most of it is antiprediction about the various implications of a big universe, as well as the antiprediction that we live in such a big universe. (ETA: edited out 'if any', I do indeed have a few arguments from anthropics, but not in the sense of typical anthropic reasoning, and none that can be easily shared or explained. I know that sounds bad. Oh well.)
2AlephNeil
If 'living in a simulation' includes those scenarios where the beings running the simulation never intervene then I think it's a non-trivial philosophical question whether "we are living in a simulation" actually means anything. Even assuming it does, Hilary Putnam made (or gave a tantalising sketch of) an argument that even if we were living in a simulation, a person claiming "we are living in a simulation" would be incorrect. On the other hand, if 'living in a simulation' is restricted to those scenarios where there is a two-way interaction between beings 'inside' and 'outside' the simulation then surely everything we know about science - the uniformity and universality of physical laws - suggests that this is false. At least, it wouldn't merit 99.5% confidence. (The counterarguments are essentially the same as those against the existence of a God who intervenes.)
5Will_Newsome
It's a nontrivial philosophical question whether 'means anything' means anything here. I would think 'means anything' should mean 'has decision theoretic significance'. In which case knowing that you're in a simulation could mean a lot. First off, even if the simulators don't intervene, we still intervene on the simulators just by virtue of our existence. Decision theoretically it's still fair game, unless our utility function is bounded in a really contrived and inelegant way. (Your link is way too long for me to read. But I feel confident in making the a priori guess that Putnam's just wrong, and is trying too hard to fit non-obvious intuitive reasoning into a cosmological framework that is fundamentally mistaken (i.e., non-ensemble).) What if I told you I'm a really strong and devoted rationalist who has probably heard of all the possible counterarguments and has explicitly taken into account both outside view and structural uncertainty considerations, and yet still believes 99.5% to be reasonable, if not perhaps a little on the overconfident side?
2AlephNeil
Oh sure - non-trivial philosophical questions are funny like that. Anyway, my idea is that for any description of a universe, certain elements of that description will be ad hoc mathematical 'scaffolding' which could easily be changed without meaningfully altering the 'underlying reality'. A basic example of this would be a choice of co-ordinates in Newtonian physics. It doesn't mean anything to say that this body rather than that one is "at rest". Now, specifying a manner in which the universe is being simulated is like 'choosing co-ordinates' in that, to do a simulation, you need to make a bunch of arbitrary ad hoc choices about how to represent things numerically (you might actually need to be able to say "this body is at rest"). Of course, you also need to specify the laws of physics of the 'outside universe' and how the simulation is being implemented and so on, but perhaps the difference between this and a simple 'choice of co-ordinates' is a difference in degree rather than in kind. (An 'opaque' chunk of physics wrapped in a 'transparent' mathematical skin of varying thickness.) I'm not saying this account is unproblematic - just that these are some pretty tough metaphysical questions, and I see no grounds for (near-)certainty about their correct resolution. He's not talking about ensemble vs 'single universe' models of reality, he's talking about reference - what's it's possible for someone to refer to. He may be wrong - I'm not sure - but even when he's wrong he's usually wrong in an interesting way. (Like this.) I'm unmoved - it's trite to point out that even smart people tend to be overconfident in beliefs that they've (in some way) invested in. (And please note that the line you were responding to is specifically about the scenario where there is 'intervention'.)
2wedrifid
Err... I'm not intimately acquainted with the sport myself... What's the approximate difficulty rating of that kind of verbal gymnastics stunt again? ;)
2AlephNeil
It's a tricky one - read the paper. I think what he's saying is that there's no way for a person in a simulation (assuming there is no intervention) to refer to the 'outside' world in which the simulation is taking place. Here's a crude analogy: Suppose you were a two-dimensional being living on a flat plane, embedded in an ambient 3D space. Then Putnam would want to say that you cannot possibly refer to "up" and "down". Even if you said "there is a sphere above me" and there was a sphere above you, you would be 'incorrect' (in the same paradoxical way).
6MugaSofer
But ... we can describe spaces with more than three dimensions.
1timtyler
So: you think there's a god who created the universe?!? Care to lay out the evidence? Or is this not the place for that?
2Will_Newsome
I really couldn't; it's such a large burden of proof to justify 99.5% certainty that I would have to be extremely careful in laying out all of my disjunctions and explaining all of my intuitions and listing every smart rationalist who agreed with me, and that's just not something I can do in a blog comment.
0A1987dM
Upvoted mainly because of the last sentence (though upvoting it does coincide with what I'd have to do according to the rules of the game).
0[anonymous]
I'm confused about the justification for reasoning in terms of measure. While the MUH (or at least its cousin the CUH) seems to be preferred from complexity considerations, I'm unsure of how to account for the fact that it is unknown whether the cosmological measure problem is solvable. Also, what exactly do you consider making up "your measure"? Just isomorphic computations?
1Will_Newsome
Naively, probabilistically isomorphic computations, where the important parts of the isomorphism are whatever my utility function values... such that, on a scale from 0 to 1, computations like Luke Grecki might be .9 'me' based on qualia valued by my utility function, or 1.3 'me' if Luke Grecki qualia are more like the qualia my utility function would like to have if I knew more, thought faster, and was better at meditation.
0[anonymous]
Ah, you just answered the easier part!
2Will_Newsome
Yeah... I ain't a mathematician! If 'measure' turns out not to be the correct mathematical concept, then I think that something like it, some kind of 'reality fluid' as Eliezer calls it, will take its place.
0Liron
99.5% is just too certain. Even if you think piles of realities nested 100 deep are typical, you might only assign 99% to not being in the basement.
0Perplexed
How is that different than "I believe that I am a simulation with non-negligible probability"? I'm leaving you upvoted. I think the probability is negligible however you play with the ontology.
2Will_Newsome
If the same computation is being run in so-called 'basement reality' and run on a simulator's computer, you're in both places; it's meaningless to talk about the probability of being in one or the other. But you can talk about the relative number of computations of you that are in 'basement reality' versus on simulators' computers. This also breaks down when you start reasoning decision theoretically, but most LW people don't do that, so I'm not too worried about it. In a dovetailed ensemble universe, it doesn't even really make sense to talk about any 'basement' reality, since the UTM computing the ensemble eventually computes itself, ad infinitum. So instead you start reasoning about 'basement' as computations that are the product of e.g. cosmological/natural selection-type optimization processes versus the product of agent-type optimization processes (like humans or AGIs). The only reason you'd expect there to be humans in the first place is if they appeared in 'basement' level reality, and in a universal dovetailer computing via complexity, there's then a strong burden of proof on those who wish to postulate the extra complexity of all those non-basement agent-optimized Earths. Nonetheless I feel like I can bear the burden of proof quite well if I throw a few other disjunctions in. (As stated, it's meaningless decision theoretically, but meaningful if we're just talking about the structure of the ensemble from a naive human perspective.)
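Since "universal dovetailer" carries a lot of weight here, a minimal sketch of what dovetailing means computationally may help: interleave the steps of an unbounded enumeration of programs so that every program eventually receives arbitrarily many steps. Representing "programs" as Python generators below is purely an illustrative assumption, not a claim about how an ensemble would actually be computed.

    import itertools

    def dovetail(program_source):
        """Interleave an unbounded stream of programs (here: Python generators,
        a stand-in for an enumeration of Turing machines). At each stage one new
        program is admitted and every admitted program is advanced by one step,
        so each program eventually receives unboundedly many steps."""
        active = []
        programs = iter(program_source)
        for _ in itertools.count():
            try:
                active.append(iter(next(programs)))  # admit the next program
            except StopIteration:
                pass                                 # finite source: nothing new to admit
            for prog in list(active):
                try:
                    yield next(prog)                 # one step of this program
                except StopIteration:
                    active.remove(prog)              # this program halted

    # Example: dovetail over infinitely many counting programs.
    counters = (itertools.count(start=k) for k in itertools.count())
    print(list(itertools.islice(dovetail(counters), 10)))  # [0, 1, 1, 2, 2, 2, 3, 3, 3, 3]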
2Perplexed
Why meaningless? It seems I can talk about one copy of me being here, now, and one copy of myself being off in the future in a simulation. Perhaps I do not know which one I am, but I don't think I am saying something meaningless to assert that I (this copy of me that you hear speaking) am the one in basement reality, and hence that no one in any reality knows in advance that I am about to close this sentence with a hash mark# I'm not asking you to bear the burden of proving that non-basement versions are numerous. I'm asking you to justify your claim that when I use the word "I" in this universe, it is meaningless to say that I'm not talking about the fellow saying "I" in a simulation and that he is not talking (in part) about me. Surely "I" can be interpreted to mean the local instance.
-1LucasSloan
Both copies will do exactly the same thing, right down to their thoughts, right? So to them, what does it matter which one they are? It isn't just that given that they have no way to test, this means they'll never know, it's more fundamental than that. It's kinda like how if there's an invisible, immaterial dragon in your garage, there might as well not be a dragon there at all, right? If there's no way, even in principle, to tell the difference between the two states, there might as well not be any difference at all.
0Perplexed
I must be missing a subtlety here. I began by asking "Is saying X different from saying Y?" I seem to be getting the answer "Yes, they are different. X is meaningless because it can't be distinguished from Y."
3LucasSloan
Ah, I think I see your problem. You insist on seeing the universe from the perspective of the computer running the program - and in this case, we can say "yes, in memory position #31415926 there's a human in basement reality and in memory position #2718281828 there's an identical human in a deeper simulation". However, those humans can't tell that. They have no way of determining which is true of them, even if they know that there is a computer that could point to them in its memory, because they are identical. You are every (sufficiently) identical copy of yourself.
1Perplexed
No, you don't see the problem. The problem is that Will_Newsome began by stating his proposition. Which is fine. But now I am being told that my counterclaim "I am not living in a simulation" is meaningless. Meaningless because I can't prove my statement empirically. What we seem to have here is very similar to Gödel's version of St. Anselm's "ontological" proof of the existence of a simulation (i.e. God).
-3LucasSloan
Oh. Did you see my comment asking him to tell whether he meant "some of our measure is in a simulation" or "this particular me is in a simulation"? The first question is asking whether or not we believe that the computer exists (ie, if we were looking at the computer-that-runs-reality could we notice that some copies of us are in simulations or not) and the second is the one I have been arguing is meaningless (kinda).
0Will_Newsome
Right; I thought the intuitive gap here was only about ensemble universes, but it also seems that there's an intuitive gap that needs to be filled with UDT-like reasoning, where all of your decisions are also decisions for agents sufficiently like you in the relevant sense (which differs for every decision).
0[anonymous]
I don't get this. Consider the following ordering of programs: T' < T iff T can simulate T'. More precisely: T' < T iff for each x' there exists an x such that T'(x') = T(x). It's not immediately clear to me that this ordering shouldn't have any least elements. If it did, such elements could be thought of as basements. I don't have any idea about whether or not we could be part of such a basement computation. I still think your distinction between products of cosmological-type optimization processes and agent-type optimization processes is important though.
-3Kaj_Sotala
My stance on the simulation hypothesis: Presume that there is an infinite amount of "stuff" in the universe. This can be a Tegmarkian Level IV universe (all possible mathematical structures exist), or alternatively there might only be an infinite amount of matter in this universe. The main assumption we need is that there is an infinite amount of "stuff", enough that anything in the world gets duplicated an infinite number of times. (Alternatively, it could be finite but insanely huge.)

Now this means that there are an infinite number of Earths like ours. It also means that there is an infinite number of planets that are running different simulations. An infinite number of those simulations will, by coincidence or purpose, happen to be simulating the exact same Earth as ours. This means that there exist an infinite number of Earths like ours that are in a simulation, and an infinite number of Earths like ours that are not in a simulation.

Thus it becomes meaningless to ask whether or not we exist in a simulation. We exist in every possible world containing us that is a simulation, and exist in every possible world containing us that is not a simulation. (I'm not sure if I should upvote or downvote you.)
6Eugine_Nier
Just because a set is infinite doesn't mean it's meaningless to speak of measures on it.
5Perplexed
The infinite cardinality of the set doesn't preclude the bulk of the measure being attached to a single point of that set. For Solomonoff-like reasons, it certainly makes sense to me to attach the bulk of the measure to the "basement reality".
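A minimal sketch of the "Solomonoff-like reasons", assuming measure is assigned by a complexity prior (the specific formula is a standard choice used here for illustration, not something the comment itself commits to):

    % Complexity (Solomonoff-style) prior: weight a world-description x by
    \[
      \mu(x) \;\propto\; 2^{-K(x)}, \qquad K = \text{prefix Kolmogorov complexity}.
    \]
    % Heuristically, picking out a copy of x running inside a simulator s needs a
    % longer description (the simulator, plus x's "address" inside it), so each
    % simulated copy is penalized by roughly a factor of 2^{-(extra bits)}.
    % Under such a prior the bulk of x's measure sits at the shortest description,
    % i.e. the un-simulated "basement" instance, unless simulators are either very
    % simple or very numerous.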
2Will_Newsome
(FWIW I endorse this line of reasoning, and still think 99.5% is reasonable. Bwa ha ha.) (That is, I also think it makes sense to attach the bulk of the measure to basement reality, but sense happens to be wrong here, and insanity happens to be right. The universe is weird. I continue to frustratingly refuse to provide arguments for this, though.) (Also, though I and I think most others agree that measure should be assigned via some kind of complexity prior (universal or speed priors are commonly suggested), others like Tegmark are drawn towards a uniform prior. I forget why.)
1Perplexed
I wouldn't have thought that a uniform prior would even make sense unless the underlying space has a metric (a bounded metric, in fact). Certainly, a Haar measure on a recursively nested space (simulations within simulations) would have to assign the bulk of its measure to the basement. Well, live and learn.
0Will_Newsome
Yeah, I also don't understand Tegmark's reasoning (which might have changed anyway).
0Will_Newsome
Right, I agree with Eugine Nier: the relative measures are important. You are in tons of universes at once, but some portion of your measure is simulated, and some not. What's the portion?

The surface of Earth is actually a relatively flat disc accelerating through space "upward" at a rate of 9.8 m/s^2, not a globe. The north pole is at about the center of the disc, while Antarctica is the "pizza crust" on the outside. The rest of the universe is moving and accelerating such that all the observations seen today by amateur astronomers are produced. The true nature of the sun, moon, stars, other planets, etc. is not yet well-understood by science. A conspiracy involving NASA and other space agencies, all astronauts, and probably at least some professional astronomers is a necessary element. I'm pretty confident this isn't true, much more due to the conspiracy element than the astronomy element, but I don't immediately dismiss it where I imagine most LW-ers would, so let's say 1%.

The Flat Earth Society has more on this, if you're interested. It would probably benefit from a typical, interested LW participant. (This belief isn't the FES orthodoxy, but it's heavily based on a spate of discussion I had on the FES forums several years ago.)

Edit: On reflection, 1% is too high. Instead, let's say "Just the barest inkling more plausible than something immediately and rigorously disprovable with household items and a free rainy afternoon."

Discussing the probability of wacky conspiracies is absolutely the wrong way to disprove this. The correct method is a telescope, a quite wide sign with a distance scale drawn on it in very visible colours, and the closest 200m+ body of water you can find.

As long as you are close enough to the ground, the curvature of the earth is very visible, even over surprisingly small distances. I have done this as a child.

[-]Jack160

Even with the 1% credence this strikes me as the most wrong belief in this thread, way more off than 95% for UFOs. You're basically giving up science since Copernicus, picking an arbitrary spot in the remaining probability space and positing a massive and unmotivated conspiracy. Like many, I'm uncomfortable making precise predictions at very high and very low levels of confidence but I think you are overconfident by many orders of magnitude.

Upvoted.

-6jferguson

If an Unfriendly AI exists, it will take actions to preserve whatever goals it might possess. This will include the usage of time travel devices to eliminate all AI researchers who weren't involved in its creation, as soon as said AI researchers have reached a point where they possess the technical capability to produce an AI. As a result, Eliezer will probably have time travelling robot assassins coming back in time to kill him within the next twenty or thirty years, if he isn't the first one to create an AI. (90%)

[-]RobinZ250

What reason do you have for assigning such high probability to time travel being possible?

3Perplexed
And what reason do you have for assigning a high probability to an unfriendly AI coming into existence with Eliezer not involved in its creation? ;) Edit: I meant what reason do you (nick012000) have? Not you (RobinZ). Sorry for the confusion.
2RobinZ
I have not assigned a high probability to that outcome, but I would not find it surprising if someone else has assigned a probability as high as 95% - my set of data is small. On the other hand, time travel at all is such a flagrant violation of known physics that it seems positively ludicrous that it should be assigned a similarly high probability. Edit: Of course, evidence for that 95%+ would be appreciated.
0nick012000
Well, most of the arguments against it are, to my knowledge, start with something along the lines of "If time travel exists, causality would be fucked up, and therefore time travel can't exist," though it might not be framed quite that implicitly. Also, if FTL travel exists, either general relativity is wrong, or time travel exists, and it might be possible to create FTL travel by harnessing the Casimir effect or something akin to it on a larger scale, and if it is possible to do so, a recursively improving AI will figure out how to do so.
5RobinZ
That ... doesn't seem quite like a reason to believe. Remember: as a general rule, any random hypothesis you consider is likely to be wrong unless you already have evidence for it. All you have to do is look at the gallery of failed atomic models to see how difficult it is to even invent the correct answer, however simple it appears in retrospect.
0rabidchicken
nick voted up, robin voted down... This feels pretty weird.

If it can go back that far, why wouldn't it go back as far as possible and just start optimizing the universe?

0Normal_Anomaly
My P(this|time travel possible) is much higher than my P(this), but P(this) is still very low. Why wouldn't the UFAI have sent the assassins to back before he started spreading bad-for-the-UFAI memes (or just after so it would be able to know who to kill)?

God exists, and He created the universe. He prefers not to violate the physical laws of the universe He created, so (almost) all of the miracles of the Bible can be explained by suspiciously fortuitously timed natural events, and angels are actually just robots that primitive people misinterpreted. Their flaming swords are laser turrets. (99%)

9Swimmy
You have my vote for most irrational comment of the thread. Even flying saucers aren't as much of a leap.
8wedrifid
Wait... was the grandparent serious? He's talking about the flaming swords of the angels being laser turrets! That's got to be tongue in cheek!
9RobinZ
It is possible that nick012000 is violating Rule 4 - but his past posting history contains material which I found consistent with him being serious here. It would behoove him to confirm or deny this.
8RobinZ
I see in your posting history that you identify as a Christian - but this story contains more details than I would assign a 99% probability to even if they were not unorthodox. Would you be interested in elaborating on your evidence?
0Vladimir_Nesov
We should learn to present this argument correctly, since the complexity of a hypothesis doesn't imply its improbability. Furthermore, the prior argument drives probability through the floor, making 99% no more surprising than 1%, and is thus an incorrect argument if you wouldn't use it for 1% as well (would you?).
[-]RobinZ130

I don't feel like arguing about priors - good evidence will overwhelm ordinary priors in many circumstances - but in a story like the one he told, each of the following needs to be demonstrated:

  1. God exists.
  2. God created the universe.
  3. God prefers not to violate natural laws.
  4. The stories about people seeing angels are based on real events.
  5. The angels seen during these events were actually just robots.
  6. The angels seen during these events were wielding laser turrets.

Claims 4-6 are historical, and at best it is difficult to establish 99% confidence in that field for anything prior to - I think - the twentieth century. I don't even think people have 99% confidence in the current best-guess location of the podium where the Gettysburg Address was delivered. Even spotting him 1-3, the claim is overconfident, and that was what I meant when I gave my response.
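For concreteness, a toy version of the conjunction arithmetic behind this point (the independence assumption below is purely illustrative; the six claims are certainly not independent):

    \[
      P(C_1 \wedge C_2 \wedge \dots \wedge C_6) \;\le\; \min_i P(C_i),
    \]
    % and if the six claims were treated as roughly independent,
    \[
      P\!\left(\textstyle\bigwedge_{i=1}^{6} C_i\right) \;\approx\; \prod_{i=1}^{6} P(C_i),
      \qquad 0.998^{6} \approx 0.988,
    \]
    % so holding the whole story at 99% requires holding each conjunct at roughly
    % 99.8% or better (or assuming strong positive dependence between them).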

But yes - I'm not good at arguing.

-7Vladimir_Nesov

There's no way to create a non-vague, predictive, model of human behavior, because most human behavior is (mostly) random reaction to stimuli.

Corollary 1: most models explain after the fact and require both the subject to be aware of the model's predictions and the predictions to be vague and underspecified enough to make astrology seem like spacecraft engineering.

Corollary 2: we'll spend most of our time in drama trying to understand the real reasons or the truth about our/others' behavior even when presented with evidence pointing to the randomness of our actions. After the fact we'll fabricate an elaborate theory to explain everything, including the evidence, but this theory will have no predictive power.

This (modulo the chance it was made up) is pretty strong evidence that you're wrong. I wish it was professionally ethical for psychologists to do this kind of thing intentionally.

Here's another case:

"Let me get this straight. We had sex. I wind up in the hospital and I can't remember anything?" Alice said. There was a slight pause. "You owe me a 30-carat diamond!" Alice quipped, laughing. Within minutes, she repeated the same questions in order, delivering the punch line in the exact tone and inflection. It was always a 30-carat diamond. "It was like a script or a tape," Scott said. "On the one hand, it was very funny. We were hysterical. It was scary as all hell." While doctors tried to determine what ailed Alice, Scott and other grim-faced relatives and friends gathered at the hospital. Surrounded by anxious loved ones, Alice blithely cracked jokes (the same ones) for hours.

6AdeleneDawner
They could probably do some relevant research by talking to Alzheimer's patients - they wouldn't get anything as clear as that, I think, but I expect they'd be able to get statistically-significant data.
8[anonymous]
How detailed of a model are you thinking of? It seems like there are at least easy and somewhat trivial predictions we could make e.g. that a human will eat chocolate instead of motor oil.
4dyokomizo
I would classify such kinds of predictions as vague; after all, they match equally well for every human being in almost any condition.

How about a prediction that a particular human will eat bacon instead of jalapeno peppers? (I'm particularly thinking of myself, for whom that's true, and a vegetarian friend, for whom the opposite is true.)

-2dyokomizo
This model seems to be reducible to "people will eat what they prefer". A good model would be able to reduce the number of bits needed to describe a behavior; if the model requires keeping a log (e.g. of what particular humans prefer to eat) to predict something, it's not much less complex (i.e. in bit encoding) than the behavior itself.
6AdeleneDawner
Maybe I've misunderstood. It seems to me that your original prediction has to refer either to humans as a group, in which case Luke's counterexample is a good one, or to humans as individuals, in which case my counterexample is a good one. It also seems to me that either counterexample can be refined into a useful prediction: Humans in general don't eat petroleum products. I don't eat spicy food. Corvi doesn't eat meat. All of those classes of things can be described more efficiently than making lists of the members of the sets.
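A toy bit-counting version of this exchange may make the disagreement crisper. All the numbers below are made up for illustration; the point is that whether a class-based rule beats a per-person log depends on whether class membership must itself be stored per person, which is roughly what is in dispute here.

    # Toy minimum-description-length comparison; every number is illustrative.
    n_people = 1000     # hypothetical population
    n_meals = 50        # observed binary meal choices per person

    bits_raw_log = n_people * n_meals      # no model: list every single choice
    bits_lookup = n_people                 # 1 bit/person, if each person is
                                           # perfectly consistent across meals
    rule_cost = 100                        # e.g. "vegetarians skip meat"
    attribute_cost = n_people              # who is vegetarian, 1 bit per person
    exception_cost = 0.02 * n_people * 10  # ~2% exceptions, ~10 bits to index each

    bits_rule_standalone = rule_cost + attribute_cost + exception_cost
    bits_rule_shared = rule_cost + exception_cost   # attribute already recorded
                                                    # for other predictions

    print(bits_raw_log, bits_lookup, bits_rule_standalone, bits_rule_shared)
    # -> 50000 1000 1300.0 300.0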
-2newerspeak
No, because preferences are revealed by behavior. Using revealed preferences is a good heuristic generally, but it's required if you're right that explanations for behavior are mostly post-hoc rationalizations. So: People eat what they prefer. What they prefer is what they wind up having eaten. Ergo, people eat what they eat.
1Strange7
Consistency of preferences is at least some kind of a prediction.
7Douglas_Knight
I think "vague" is a poor word choice for that concept. "(not) informative" is a technical term with this meaning. There are probably words which are clearer to the layman.
2dyokomizo
I agree vague is not a good word choice. Irrelevant (using relevancy as it's used to describe search results) is a better word.
5Perplexed
Downvoted in agreement. But I think that the randomness comes from what programmers call "race conditions" in the timing of external stimuli vs internal stimuli. Still, these race conditions make prediction impossible as a practical matter.
  • A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).

  • Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).

  • Intelligent entities with values radically different to humans will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).

4wedrifid
I want to upvote each of these points a dozen times. Then another few for the first. It's the most stable equilibrium I can conceive of; i.e., more stable than if all evidence of life were obliterated from the universe.
2mattnewport
I guess I'm playing the game right then :) I'm curious, do you also think that a singleton is a desirable outcome? It's possible my thinking is biased because I view this outcome as a dystopia and so underestimate its probability due to motivated cognition.
4Mass_Driver
Funny you should mention it; that's exactly what I was thinking. I have a friend (also named matt, incidentally) who I strongly believe is guilty of motivated cognition about the desirability of a singleton AI (he thinks it is likely, and therefore is biased toward thinking it would be good) and so I leaped naturally to the ad hominem attack you level against yourself. :-)
1wedrifid
Most of them, no. Some, yes. Particularly since the alternative is the inevitable loss of everything that is valuable to me in the universe.
7Will_Newsome
This is incredibly tangential, but I was talking to a friend earlier and I realized how difficult it is to instill in someone the desire for altruism. Her reasoning was basically, "Yeah... I feel like I should care about cancer, and I do care a little, but honestly, I don't really care." This sort of off-hand egoism is something I wasn't used to; most smart people try to rationalize selfishness with crazy beliefs. But it's hard to argue with "I just don't care" other than to say "I bet you will have wanted to have cared", which is grammatically horrible and a pretty terrible argument.
9Jordan
I respect blatant apathy a whole hell of a lot more than masked apathy, which is how I would qualify the average person's altruism.
0DanielLC
I agree with your second. Was your third supposed to be high or low? I think it's low, but not unreasonably so.
0mattnewport
I expected the third to be higher than most less wrongers would estimate.
0[anonymous]
I'm almost certainly missing some essential literature, but what does it mean for a mind to be a stable equilibrium?
6mattnewport
Stable equilibrium here does not refer to a property of a mind. It refers to a state of the universe. I've elaborated on this view a little here before but I can't track the comment down at the moment.

Essentially my reasoning is that in order to dominate the physical universe an AI will need to deal with fundamental physical restrictions such as the speed of light. This means it will have spatially distributed sub-agents pursuing sub-goals intended to further its own goals. In some cases these sub-goals may involve conflict with other agents (this would be particularly true during the initial effort to become a singleton).

Maintaining strict control over sub-agents imposes restrictions on the design and capabilities of sub-agents which means it is likely that they will be less effective at achieving their sub-goals than sub-agents without such design restrictions. Sub-agents with significant autonomy may pursue actions that conflict with the higher level goals of the singleton.

Human (and biological) history is full of examples of this essential conflict. In military scenarios for example there is a tradeoff between tight centralized control and combat effectiveness - units that have a degree of authority to take decisions in the field without the delays or overhead imposed by communication times are generally more effective than those with very limited freedom to act without direct orders.

Essentially I don't think a singleton AI can get away from the principal-agent problem. Variations on this essential conflict exist throughout the human and natural worlds and appear to me to be fundamental consequences of the nature of our universe.
4orthonormal
Ant colonies don't generally exhibit the principal-agent problem. I'd say with high certainty that the vast majority of our trouble with it is due to having the selfishness of an individual replicator hammered into each of us by our evolution.
3Eugine_Nier
I'm not a biologist, but given that animal bodies exhibit principal-agent problems, e.g., auto-immune diseases and cancers, I suspect ant colonies (and large AIs) would also have these problems.
7orthonormal
Cancer is a case where an engineered genome could improve over an evolved one. We've managed to write software (for the most vital systems) that can copy without error, with such high probability that we expect never to see that part malfunction. One reason that evolution hasn't constructed sufficiently good error correction is that the most obvious way to do this makes the genome totally incapable of new mutations, which works great until the niche changes.
1Eugine_Nier
However, an AI-subagent would need to be able to adjust itself to unexpected conditions, and thus can't simply rely on digital copying to prevent malfunctions.
2orthonormal
So you agree that it's possible in principle for a singleton AI to remain a singleton (provided it starts out alone in the cosmos), but you believe it would sacrifice significant adaptability and efficiency by doing so. Perhaps; I don't know either way. But the AI might make that sacrifice if it concludes that (eventually) losing singleton status would cost its values far more than the sacrifice is worth (e.g. if losing singleton status consigns the universe to a Hansonian hardscrapple race to burn the cosmic commons(pdf) rather than a continued time of plenty).
0Eugine_Nier
I believe it would at the very least have to sacrifice all adaptability by doing so, as in only sending out nodes with all instructions in ROM and instructions to periodically reset all non-ROM memory and self-destruct if they notice any failures of their triple-redundancy ROM, as well as an extremely strong directive against anything that would let nodes store long-term state.
5orthonormal
Remember, you're the one trying to prove impossibility of a task here. Your inability to imagine a solution to the problem is only very weak evidence.
0mattnewport
I don't know whether ant colonies exhibit principal-agent problems (though I'd expect that they do to some degree) but I know there is evidence of nepotism in queen rearing in bee colonies where individuals are not all genetically identical (evidence of workers favouring the most closely related larvae when selecting larvae to feed royal jelly to create a queen). The fact that ants from different colonies commonly exhibit aggression towards each other indicates limits to scaling such high levels of group cohesion. Though supercolonies do appear to exist they have not come to total dominance. The largest and most complex examples of group coordination we know of are large human organizations and these show much greater levels of internal goal conflicts than much simpler and more spatially concentrated insect colonies.
0orthonormal
I'm analogizing a singleton to a single ant colony, not to a supercolony.
0Eugine_Nier
I agree with your first two, but am dubious about your third.
3mattnewport
Two points that influence my thinking on that claim:
1. Gains from trade have the potential to be greater with greater difference in values between the two trading agents.
2. Destruction tends to be cheaper than creation. Intelligent agents that recognize this have an incentive to avoid violent conflict.

75%: Large groups practicing Transcendental Meditation or TM-Sidhis measurably decrease crime rates.

At an additional 20% (net 15%): The effect size depends on the size of the group in a nonlinear fashion; specifically, there is a threshold at which most of the effect appears, and the threshold is at .01*pop (1% of the total population) for TM or sqrt(.01*pop) for TM-Sidhis.

(Edited for clarity.)

(Update: I no longer believe this. New estimates: 2% for the main hypothesis, additional 50% (net 1%) for the secondary.)

2Risto_Saarelma
Just to make sure, is this talking about something different from people committing fewer crimes when they are themselves practicing TM or in daily contact with someone who does? I don't really understand the second paragraph. What are TM-Sidhis? Are they something distinct from regular TM (are these different types of practitioners)? And what's with the sqrt(1%)? One in ten people in the total population need to be TM-Sidhis for the crime rate reduction effect to kick in?
0Pavitra
I'm not sure if personal contact with practitioners has an effect, but the studies I'm thinking of were on the level of cities -- put a group of meditators in Chicago, the Chicago crime rate goes down. TM-Sidhis is a separate/additional practice that has TM as a dependency in the sense of package management. If you know TM, you can learn TM-Sidhis. Sorry, I meant sqrt(.01p) where p is the population to be affected. For example, a city of one million people would require ten thousand TM meditators or 100 TM-Sidhis meditators.
0Risto_Saarelma
Right, thanks for the clarification. This definitely puts the claim into upvote territory for me.
0magfrump
No vote: I agree with the hypothesis that appropriate meditation practice could reduce crime rates, but I haven't the slightest idea how to evaluate the specific population figures.
0Pavitra
Can you clarify the question, or does the whole statement seem meaningless?
2magfrump
I don't really have a question. You have a hypothesis: Transcendental Meditation practitioners will reduce the crime rate in their cities in a nonlinear fashion satisfying certain identities. The statement I have written above I agree with, and would therefore normally downvote. However, you posit specific figures for the reduction of the crime rate. I have no experience with city planning or crime statistics or population figures, and hence have no real basis to judge your more specific claim. If I disagreed with it on a qualitative level, then I would upvote. If I had any sense of what your numbers meant I might think that they were about right or too high or too low, but since I don't I'm not able to evaluate it. But not-evaluating because I don't know how to engage the numbers is different from not-evaluating because I didn't read it, so I wanted to make the difference clear, since the point of the game is to engage with ideas that may be controversial.
0Pavitra
I'm still not sure I understand what you mean, but let me take a shot in the dark: Out of the variance in crime rate that depends causally on the size of the meditating group, most of that variance depends on whether or not the size of the group is greater than a certain value that I'll call x. If the meditating group is practicing only TM, then x is equal to 1% of the size of the population to be affected, and if the meditating group is practicing TM-Sidhis, then x is equal to the square root of 1% of the population to be affected.

For example, with a TM-only group in a city of ten thousand people, increasing the size of the group from 85 to 95 meditators should have a relatively small effect on the city's crime rate, increasing from 95 to 105 should have a relatively large effect, and increasing from 105 to 115 should have a relatively small effect.

Edit: Or did you mean my confidence values? The second proposition (about the nonlinear relationship) I assign 20% confidence conditional on the truth of the first proposition. Since I assign the first proposition 75% confidence, and since the second proposition essentially implies the first, it follows that the second proposition receives a confidence of (0.2 * 0.75) = 15%.
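(A quick numerical check of the figures in the exchange above, for readers who want the arithmetic spelled out. This is just a sketch of the stated formulas and the confidence product; the function names are ours, and nothing beyond the numbers already given is implied.)

```python
# Sketch of the threshold formulas and the combined confidence stated above.

def tm_threshold(population):
    # Claimed group size needed with plain TM: 1% of the affected population.
    return 0.01 * population

def tm_sidhis_threshold(population):
    # Claimed group size needed with TM-Sidhis: sqrt(1% of the affected population).
    return (0.01 * population) ** 0.5

print(tm_threshold(1_000_000))         # 10000.0 TM meditators for a city of one million
print(tm_sidhis_threshold(1_000_000))  # 100.0 TM-Sidhis meditators for the same city
print(tm_threshold(10_000))            # 100.0 -- the 95-to-105 example for a city of ten thousand

# Combined confidence: P(second proposition) = P(second | first) * P(first)
print(0.20 * 0.75)  # 0.15, i.e. the stated net 15%
```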
3magfrump
I understand what you meant by your proposition, I'm not trying to ask for clarification. I assume you have some model of TM-practitioner behavior or social networking or something which justifies your idea that there is such a threshold in that place. I do not have any models of: how TM is practiced, and by whom; how much TM affects someone's behavior, and consequently the behavior of those they interact with; how much priming effects like signs or posters for TM groups or instruction have on the general populace; how much the spread of TM practitioners increases the spread of advertisement. I would not be hugely surprised if it were the case that, given 1% of the population practiced TM, this produced enough advertisement to reach nearly all of the population (i.e. a sign on the side of a couple well-traveled highways) or enough social connections that everyone in a city was within one or two degrees of separation of a TM practitioner. But I also wouldn't be surprised if the threshold was 5%, or .1%, or if there was no threshold, or if there was a threshold in rural areas but not urban areas, or conservative-leaning areas but not liberal-leaning areas, or the reverse. I have no model of how these things would go about, so I don't feel comfortable agreeing or disagreeing. Certainly fewer than 15% of the possible functions of TM-practice vs crime are as you describe, and it is certainly far more likely that your hypothesis is true than the hypothesis "even one TM-practitioner makes the crime rate 100%", but I don't know if it's 5 bits more relevant or 10 bits more relevant, and I don't know what my probabilities should be even if I knew how many bits of credence I should give a hypothesis. If you know something more than I do (which is to say, anything at all) about social networking, advertising, or the meditation itself, or the people who practice it, then you might reasonably have a good hypothesis. But I don't, so I can only take the outside view,
0Pavitra
I understand now. The causally primary reason for my belief is that while I was growing up in a TM-practicing community, I was told repeatedly that there were many scientific studies published in respectable journals demonstrating this effect, and the "square root of one percent" was a specific point of doctrine. I've had some trouble finding the articles in question on academically respectable, non-paywalled sites (though I didn't try for more than five or ten minutes), but a non-neutrally-hosted bibliography-ish thing is here. (Is there a general lack of non-paywalled academically respectable online archives of scientific papers?)

(Edited to add: if anyone decides to click any of the videos on that page, rather than just following text links, I'd assign Fred Travis the highest probability of saying anything worth hearing.)

(Edited again: I was going to say this when I first wrote this comment, but forgot: The obvious control would be against other meditation techniques. I don't think there are studies with this specific control on the particular effect in my top-level comment, but there are such studies on e.g. medical benefits.)

(Edited yet again: I've now actually watched the videos in question. The unlabeled video at the top (John Hagelin) is a lay-level overview of studies that you can read for yourself through text links. (That is, you can read the studies, not the overview.) Gary Kaplan is philosophizing with little to no substance in the sense of expectation-constraint, and conditional on the underlying phenomena being real his explanation is probably about as wrong as, say, quantum decoherence. Nancy Lonsdorf is arguing rhetorically for ideas whose truth is almost entirely dependent on the validity of the studies in question and that follow from such validity in a trivial and straightforward fashion. Some people might need what she's saying pointed out to them, but probably not the readers of Less Wrong. Fred Travis goes into more crunch
0magfrump
Wow that was a super in depth response! Thanks, I'll check it out if I have time.

There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" if not specifically designed for these domain(s). As a corollary, AI will not go FOOM. (80% confident)

EDIT: Quote from here

4wedrifid
Do you apply this to yourself?
3Simon Fischer
Yes! Humans are "designed" to act intelligently in the physical world here on earth; we have complex adaptations for this environment. I don't think we are capable of acting effectively in "strange" environments, e.g. we are bad at predicting quantum mechanical systems, programming computers, etc.
3RomanDavis
But we can recursively self-optimize ourselves for understanding mechanical systems or programming computers. Not infinitely, of course, but with different hardware it seems extremely plausible to smash through whatever ceiling a human might have with the brute force of many calculated iterations of whatever humans are using. And this is before the computer uses its knowledge to reoptimize its optimization process.
1Simon Fischer
I understand the concept of recursive self-optimization and I don't consider it to be very implausible. Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to allow effective search for such an optimization? I'm also not convinced that the human mind is a good counterexample, e.g. I do not know how much I could improve on the source code of a simulation of my brain once the simulation itself runs effectively.
3wedrifid
I count "algorithm-space is really really really big" as at least some form of evidence. ;) Mind you by "is there any evidence?" you really mean "does the evidence lead to a high assigned probability?" That being the case "No Free Lunch" must also be considered. Even so NFL in this case mostly suggests that a general intelligence algorithm will be systematically bad at being generally stupid. Considerations that lead me to believe that a general intelligence algorithm are likely include the observation that we can already see progressively more general problem solving processes in evidence just by looking at mammals. I also take more evidence from humanity than you do. Not because I think humans are good at general intelligence. We suck at it, it's something that has been tacked on to our brains relatively recently and it far less efficient than our more specific problem solving facilities. But the point is that we can do general intelligence of a form eventually if we dedicate ourselves to the problem.
2Risto_Saarelma
You're putting 'effectively' here in place of 'intelligently' in the original assertion.
0Simon Fischer
I understand "capable of behaving intelligently" to mean "capable of achieving complex goals in complex environments", do you disagree?
0Risto_Saarelma
I don't disagree. Are you saying that humans aren't capable of achieving complex goals in the domains of quantum mechanics or computer programming?
1Simon Fischer
This is of course a matter of degree, but basically yes!
0Risto_Saarelma
Can you give any idea what these complex goals would look like? Or conversely, describe some complex goals humans can achieve, which are fundamentally beyond an entity with abstract reasoning capabilities similar to humans', but which lacks some of humans' native capabilities for dealing more efficiently with certain types of problems? The obvious examples are problems where a slow reaction time will lead to failure, but these don't seem to tell that much about the general complexity handling abilities of the agents.
2Simon Fischer
I'll try to give examples:
For computer programming: Given a simulation of a human brain, improve it so that the simulated human is significantly more intelligent.
For quantum mechanics: Design a high-temperature superconductor from scratch.
Are humans better than brute-force at a multi-dimensional version of chess where we can't use our visual cortex?
0wedrifid
We have a way to use brute force to achieve general optimisation goals? That seems like a good start to me!
0Simon Fischer
Not a good start if we are facing exponential search spaces! If brute force worked, I imagine the AI problem would be solved?
0wedrifid
Not particularly. :) But it would constitute an in principle method of bootstrapping a more impressive kind of general intelligence. I actually didn't expect you would concede the ability to brute force 'general optimisation' - the ability to notice the brute forced solution is more than half the problem. From there it is just a matter of time to discover an algorithm that can do the search efficiently. Not necessarily. Biases could easily have made humans worse than brute-force.
0Simon Fischer
Please give evidence that "a more impressive kind of general intelligence" actually exists!
5wedrifid
Nod. I noticed your other comment after I wrote the grandparent. I replied there and I do actually consider your question there interesting, even though my conclusions are far different to yours. Note that I've tried to briefly answer what I consider a much stronger variation of your fundamental question. I think that the question you have actually asked is relatively trivial compared to what you could have asked so I would be doing you and the topic a disservice by just responding to the question itself. Some notes for reference:
* Demands of the general form "Where is the evidence for?" are somewhat of a hangover from traditional rational 'debate' mindsets where the game is one of social advocacy of a position. Finding evidence for something is easy but isn't the sort of habit I like to encourage in myself. Advocacy is bad for thinking (but good for creating least-bad justice systems given human limitations).
* "More impressive than humans" is a ridiculously low bar. It would be absolutely dumbfoundingly surprising if humans just happened to be the best 'general intelligence' we could arrive at in the local area. We haven't had a chance to even reach a local minimum of optimising DNA and protein based mammalian general intelligences. Selection pressures are only superficially in favour of creating general intelligence and apart from that the flourishing of human civilisation and intellectual enquiry happened basically when we reached the minimum level to support it. Civilisation didn't wait until our brains reached the best level DNA could support before it kicked in.
* A more interesting question is whether it is possible to create a general intelligence algorithm that can in principle handle most any problem, given unlimited resources and time to do so. This is as opposed to progressively more complex problems requiring algorithms of progressively more complexity even to solve in principle.
* Being able to 'brute force' a solution to any problem is actuall
0Simon Fischer
My intention was merely to point out where I don't follow your argument, but your criticism of my formulation is valid.
I agree, we can probably build far better problem-solvers for many problems (including problems of great practical importance). My concern is more about what we can do with limited resources; this is why I'm not impressed with the brute-force solution.
This is true, I was mostly thinking about a pure search problem where evaluating the solution is simple. (The example was chess, where brute-forcing leads to perfect play given sufficient resources.)
0wedrifid
It just occurred to me to wonder if this resource requirement is even finite. Is there a turn limit on the game? I suppose even "X turns without a piece being taken" would be sufficient depending on how idiotic the 'brute force' is. Is such a rule in place?
0Apprentice
Yes, the fifty-move rule. Though technically it only allows you to claim a draw, it doesn't force it.
0wedrifid
OK, thanks. In that case brute force doesn't actually produce perfect play in chess and doesn't return if it tries. (Incidentally, this observation strengthens SimonF's position.)
0Simon Fischer
But the number of possible board positions is finite, and there is a rule that forces a draw if the same position comes up three times. (Here) This claims that generalized chess is EXPTIME-complete, which is in agreement with the above.
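(Tangential illustration: once the rules above make every line of play finite, "brute force" just means exhaustive minimax over the game tree. The sketch below does that for a deliberately tiny stand-in game, not chess; the game interface and the Nim example are invented for illustration, and nothing about real chess engines is implied.)

```python
# Minimal sketch of brute-force perfect play on a finite two-player game.
# TinyGame is a stand-in (one-heap Nim); a chess version would need the
# repetition / fifty-move bookkeeping discussed above to guarantee termination.

from functools import lru_cache

class TinyGame:
    """One-heap Nim: players alternately take 1 or 2 stones; taking the last stone wins."""

    def moves(self, state):
        heap, _player = state
        return [m for m in (1, 2) if m <= heap]

    def result(self, state, move):
        heap, player = state
        return (heap - move, 1 - player)

    def winner(self, state):
        heap, player = state
        # If the heap is empty, the player who just moved (i.e. not `player`) has won.
        return None if heap > 0 else 1 - player

def perfect_value(game, start):
    """Value of `start` for player 0 under perfect play: +1 win, -1 loss. Exhaustive search."""
    @lru_cache(maxsize=None)
    def search(state):
        w = game.winner(state)
        if w is not None:
            return 1 if w == 0 else -1
        _heap, player = state
        values = [search(game.result(state, m)) for m in game.moves(state)]
        return max(values) if player == 0 else min(values)
    return search(start)

print(perfect_value(TinyGame(), (7, 0)))  # 1: from 7 stones the first player can force a win
```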
0wedrifid
That rule will do it (given the forced).
0wedrifid
(Pardon the below tangent...) I'm somewhat curious as to whether perfect play leads to a draw or a win (probably to white, although if it turned out black should win that'd be an awesome finding!) I know tic-tac-toe and checkers are both a draw and I'm guessing chess will be a draw too, but I don't know for sure even whether we'll ever be able to prove that one way or the other.

Discussion of chess AI a few weeks ago also got me thinking: The current trend is for the best AIs to beat the best human grandmasters even with progressively greater disadvantages. Even up to 'two moves and a pawn' or somesuch thing. My prediction: As chess playing humans and AIs develop, the AIs will be able to beat the humans with greater probability with progressively more significant handicaps. But given sufficient time this difference would peak and then actually decrease. Not because of anything to do with humans 'catching up'. Rather, because if perfect play of a given handicap results in a draw or loss then even an exponentially increasing difference in ability will not be sufficient in preventing the weaker player from becoming better at forcing the expected 'perfect' result.
2timtyler
Sure there is - see:
* Legg, S., Hutter, M.: Tests of Machine Intelligence. In Proc. 50th Anniversary Summit of Artificial Intelligence, Monte Verità, Switzerland (2007).
* Hutter, M.: Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer, Berlin (2004).
* Hernández-Orallo, J., Dowe, D.: Measuring universal intelligence: Towards an anytime intelligence test. Artificial Intelligence 17, 1508-1539 (2010).
* Solomonoff, R. J.: A Formal Theory of Inductive Inference: Parts 1 and 2. Information and Control 7, 1-22 and 224-254 (1964).

The only assumption about the environment is that Occam's razor applies to it.
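(For readers who want to see what the "Occam's razor" assumption in those references cashes out to, here is a throwaway numerical sketch: hypotheses weighted by 2^(-description length), so the shortest program that explains the data dominates. The hypotheses and bit-lengths below are made up for the example and are not taken from the cited papers.)

```python
# Toy version of an Occam prior: weight each hypothesis by 2 ** (-description length),
# multiply by its likelihood on the observed data, and normalise.

data = "010101"

# hypothesis name -> (description length in bits, set of strings it predicts)
hypotheses = {
    "repeat '01' three times": (5, {"010101"}),
    "arbitrary 6-bit string":  (20, {format(i, "06b") for i in range(64)}),
}

posterior = {}
for name, (length, predicted) in hypotheses.items():
    prior = 2.0 ** -length
    likelihood = 1.0 / len(predicted) if data in predicted else 0.0
    posterior[name] = prior * likelihood

total = sum(posterior.values())
for name, weight in posterior.items():
    print(name, weight / total)  # the short explanation gets essentially all the mass
```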
4Simon Fischer
Of course you're right in the strictest sense! I should have included something along the lines of "an algorithm that can be efficiently computed", this was already discussed in other comments.
1timtyler
IMO, it is best to think of power and breadth being two orthogonal dimensions - like this:
* narrow <-> broad;
* weak <-> powerful.
The idea of general intelligence not being practical for resource-limited agents is apparently one that mixes up these two dimensions, whereas it is best to see them as being orthogonal. Or maybe there's the idea that if you are broad, you can't be very deep, and be able to be computed quickly. I don't think that idea is correct. I would compare the idea to saying that we can't build a general-purpose compressor. However: yes we can. I don't think the idea that "there is no such thing as general intelligence" can be rescued by invoking resource limitation. It is best to abandon the idea completely and label it as a dud.
2[anonymous]
That is a very good point, with wideness orthogonal to power. Evolution is broad but weak. Humans (and presumably AGI) are broad and powerful. Expert systems are narrow and powerful. Anything weak and narrow can barely be called intelligent.
0Simon Fischer
I don't care about that specific formulation of the idea; maybe Robin Hanson's formulation that there exists no "grand unified theory of intelligence" is clearer? (link)
0timtyler
Clear - but also clearly wrong. Robin Hanson says: ...but the answer seems simple. A big part of "betterness" is the ability to perform inductive inference, which is not a human-specific concept. We do already have a powerful theory about that, which we discovered in the last 50 years. It doesn't immediately suggest implementation strategy - which is what we need. So: more discoveries relating to this seem likely.
0Simon Fischer
Clearly, I do not understand how this data point should influence my estimate of the probability that general, computationally tractable methods exist.
0timtyler
To me it seems a lot like the question of whether general, computationally tractable methods of compression exist. Provided you are allowed to assume that the expected inputs obey some vaguely-sensible version of Occam's razor, I would say that the answer is just "yes, they do".
2whpearson
Can you unpack algorithm and why you think an intelligence is one?
1Simon Fischer
I'm not sure what your point is, I don't think I use the term "algorithm" in a non-standard way. Wikipedia says: "Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system." When talking about "intelligence" I assume we are talking about a goal-oriented agent, controlled by an algorithm as defined above.
3whpearson
Does it make sense to describe the computer system in front of you as being controlled by a single algorithm? If so, that would have to be the fetch-execute cycle, which may not halt or be a finite sequence. This form of system is sometimes called an interaction machine or persistent Turing machine, so some may say it is not an algorithm. The fetch-execute cycle is very poor at giving you information about what problems your computer might be able to solve, as it can download code from all over the place. Similarly, if you think of an intelligence as this sort of system, you cannot bound what problems it might be able to solve. At any given time it won't have the programming to solve all problems well, but it can modify the programming it does have.
1ata
Do you behave intelligently in domains you were not specifically designed(/selected) for?
0Simon Fischer
No, I don't think I would be capable if the domain is sufficiently different from the EEA.
0[anonymous]
Do you antipredict an AI specialized in AI design, which can't do anything it's not specifically designed to do, but can specifically design itself as needed?

Within five years the Chinese government will have embarked on a major eugenics program designed to mass produce super-geniuses. (40%)

I think 40% is about right for China doing something that unlikely-sounding in the next five years. The specificity of it being that particular thing is burdensome, though; the probability is much lower than the plausibility. Upvoted.

4JoshuaZ
Upvoting. If you had said 10 years or 15 years I'd find this much more plausible. But I'm very curious to hear your explanation.
5James_Miller
I wrote about it here: http://www.ideasinactiontv.com/tcs_daily/2007/10/a-thousand-chinese-einsteins-every-year.html
Once we have identified genes that play a key role in intelligence, then eugenics through massive embryo selection has a good chance at producing lots of super-geniuses, especially if you are willing to tolerate a high "error rate." The Chinese are actively looking for the genetic keys to intelligence. (See http://vladtepesblog.com/?p=24064) The Chinese have a long pro-eugenics history (see Imperfect Conceptions by Frank Dikötter) and I suspect they have a plan to implement a serious eugenics program as soon as it becomes practical, which will likely be within the next five years.
5JoshuaZ
I think the main point of disagreement is the estimate that such a program would be practical in five years (hence my longer-term estimate). My impression is that actual studies of the genetic roots of intelligence are progressing but at a fairly slow pace. I'd give a much lower than 40% chance that we'll have that good an understanding in five years.
0James_Miller
If the following is correct, we are already close to finding lots of IQ-boosting genes: "SCIENTISTS have identified more than 200 genes potentially associated with academic performance in schoolchildren. Those schoolchildren possessing the 'right' combinations achieved significantly better results in numeracy, literacy and science." http://www.theaustralian.com.au/news/nation/found-genes-that-make-kids-smart/story-e6frg6nf-1225926421510
2Douglas_Knight
The article is correct, but we are not close to finding lots of IQ boosting genes. But the relevant question is whether the Chinese government is fooled by this too.
3Jack
Can you specify what "major" means? I would be shocked if the government wasn't already pairing high-IQ individuals like they do with very tall people to breed basketball players.
2gwern
Recorded:
* http://predictionbook.com/predictions/1834
0wedrifid
Hat tip to China.
0magfrump
Tentatively downvoted; I think over a longer time period it's highly likely, but I would be unsurprised to later discover that it started that soon. I might put my (uninformed) guess closer to 10-20% but it feels qualitatively similar.

There is an objectively real morality. (10%) (I expect that most LWers assign this proposition a much lower probability.)

7JenniferRM
If I'm interpreting the terms charitably, I think I put this more like 70%... which seems like a big enough numerical spread to count as disagreement -- so upvoted! My argument here grows out of expectations about evolution, watching chickens interact with each other, rent seeking vs gains from trade (and game theory generally), Hobbes's Leviathan, and personal musings about Fukuyama's End Of History extrapolated into transhuman contexts, and more ideas in this vein. It is quite likely that experiments to determine the contents of morality would themselves be unethical to carry out... but given arbitrary computing resources and no ethical constraints, I can imagine designing experiments about objective morality that would either shed light on its contents or else give evidence that no true theory exists which meets generally accepted criteria for a "theory of morality". But even then, being able to generate evidence about the absence of an objective object-level "theory of morality" would itself seem to offer a strategy for taking a universally acceptable position on the general subject... which still seems to make this an area where objective and universal methods can provide moral insights. This dodge is friendly towards ideas in Nagel's "Last Word": "If we think at all, we must think of ourselves, individually and collectively, as submitting to the order of reasons rather than creating it."
0magfrump
I almost agree with this due to fictional evidence from Three Worlds Collide, except that a manufactured intelligence such as an AI could be constructed without evolutionary constraints, and saying that every possible descendant of a being that survived evolution MUST have a moral similarity to every other being seems like a much more complicated and less likely hypothesis.
4jimrandomh
This probably isn't what you had in mind, but any single complete human brain is a (or contains a) morality, and it's objectively real.
4WrongBot
Indeed, that was not at all what I meant.
3Will_Newsome
Does the morality apply to paperclippers? Babyeaters?
-1WrongBot
I'd say that it's about as likely to apply to paperclippers or babyeaters as it is to us. While I think there's a non-trivial chance that such a morality exists, I can't even begin to speculate about what it might be or how it exists. There's just a lot of uncertainty and very little evidence either way. The reason I think there's a chance at all, for what it's worth, is the existence of information theory. If information is a fundamental mathematical concept, I don't think it's inconceivable that there are all kinds of mathematical laws specifically about engines of cognition. Some of which may look like things we call morality. But most likely not.
5Perplexed
Information theory is the wrong place to look for objective morality. Information is purely epistemic - i.e. about knowing. You need to look at game theory. That deals with wanting and doing. As far as I know, no one has had any moral issues with simply knowing since we got kicked out of the Garden of Eden. It is what we want and what we do that get us into moral trouble these days.

Here is a sketch of a game-theoretic golden rule: Form coalitions that are as large as possible. Act so as to yield the Nash bargaining solution in all games with coalition members - pretending that they have perfect information about your past actions, even though they may not actually have perfect information. Do your share to punish defectors and members of hostile coalitions, but forgive after fair punishment has been meted out. Treat neutral parties with indifference - if they have no power over you, you have no reason to apply your power over them in either direction.

This "objective morality" is strikingly different from the "inter-subjective morality" that evolution presumably installed in our human natures. But this may be an objective advantage if we have to make moral decisions regarding Baby Eaters who presumably received a different endowment from their own evolutionary history.
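(A toy sketch of just the "punish defectors, then forgive" clause of the rule above, in an iterated two-player setting. The coalition-forming and Nash-bargaining parts are abstracted away as plain cooperation; the class below is an illustration added here, not something anyone in the thread proposed.)

```python
# Agent that cooperates by default, punishes a defection for a fixed number of
# rounds ("fair punishment"), and then forgives.

class ForgivingPunisher:
    def __init__(self, punishment_length=2):
        self.punishment_length = punishment_length
        self.rounds_left_to_punish = 0

    def act(self):
        return "defect" if self.rounds_left_to_punish > 0 else "cooperate"

    def observe(self, opponent_move):
        if self.rounds_left_to_punish > 0:
            self.rounds_left_to_punish -= 1  # punishment meted out; move toward forgiving
        elif opponent_move == "defect":
            self.rounds_left_to_punish = self.punishment_length

agent = ForgivingPunisher()
for opponent_move in ["cooperate", "defect", "cooperate", "cooperate", "cooperate"]:
    print(agent.act())
    agent.observe(opponent_move)
# prints: cooperate, cooperate, defect, defect, cooperate -- one defection earns two
# rounds of punishment, after which cooperation resumes.
```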
1AdeleneDawner
This does help bring clarity to the babyeaters' actions: The babies are, by existing, defecting against the goal of having a decent standard of living for all adults. The eating is the 'fair punishment' that brings the situation back to equilibrium. I suspect that we'd be better served by a less emotionally charged word than 'punishment' for that phenomenon in general, though.
1Perplexed
Oh, I think "punishment" is just fine as a word to describe the proper treatment of defectors, and it is actually used routinely in the game-theory literature for that purpose. However, I'm not so sure I would agree that the babies in the story are being "punished". I would suggest that, as powerless agents not yet admitted to the coalition, they ought to be treated with indifference, perhaps to be destroyed like weeds, were no other issues involved. But there is something else involved - the babies are made into pariahs, something similar to a virgin sacrifice to the volcano god. Participation in the baby harvesting is transformed to a ritual social duty. Now that I think about it, it does seem more like voodoo than rational-agent game theory. However, the game theory literature does contain examples where mutual self-punishment is required for an optimal solution, and a rule requiring requiring one to eat one's own babies does at least provide some incentive to minimize the number of excess babies produced.
0[anonymous]
Does that "game-theoretic golden rule" even tell you how to behave?
0saturn
Do you also think there is a means or mechanism for humans to discover and verify the objectively real morality? If so, what could it be?
2WrongBot
I would assume any objectively real morality would be in some way entailed by the physical universe, and therefore in theory discoverable. I wouldn't say that a thing existed if it could not interact in any causal way with our universe.
0RobinZ
I expect a plurality may vote as you expect, but 10% seems reasonable based on my current state of knowledge.
-8nick012000
Tenek430

The pinnacle of cryonics technology will be a time machine that can at the very least, take a snapshot of someone before they died and reconstitute them in the future. I have three living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out. (50%)

2Will_Newsome
This seems reasonable with the help of FAI, though I doubt CEV would do it; or are you thinking of possible non-FAI technologies?
0Tiiba
So you intend to acquire an extra grandparent somewhere along the line?
Tenek100

No. I intend to revive one. Possibly all four, if necessary. Consider it thawing technology so advanced it can revive even the pyronics crowd.

4JenniferRM
Did you coin the term "pyronics"?
0Tenek
I would imagine not (99%), although it doesn't appear to be in common usage.
0Tiiba
Sorry, I missed the time machine part.

What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)

What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)

What does this mean? What is the difference between saying "What we call consciousness/self-awareness is just a side-effect of brain processes", which is pretty obviously true, and saying that they're meaningless side effects?

-3erratio
Sorry, I was letting my own uncertainty get in the way of clarity there. A stronger version of what I was trying to say would be that consciousness gives us the illusion of being in control of our actions when in fact we have no such control. Or to put it another way: we're all P-zombies with delusions of grandeur (yes, this doesn't actually make logical sense, but it works for me)
4LucasSloan
So I agree with the science you cite, right? But what you said really doesn't follow. Just because our phonologic loop doesn't actually have the control it thinks it does, it doesn't follow that sensory modalities are "meaningless." You might want to re-read Joy in the Merely Real with this thought of yours in mind.
-3erratio
Well, sure, you can find meaning wherever you want. I'm currently listening to some music that I find beautiful and meaningful. But that beauty and meaning isn't an inherent trait of the music, it's just something that I read into it. Similarly when I say that consciousness is meaningless I don't mean that we should all become nihilists, only that consciousness doesn't pay rent and so any meaning or usefulness it has is what you invent for it.
4Eugine_Nier
I don't know about you, but I'm not a P-zombie. :)
PeterS290

That emoticon isn't fooling anyone.

Upvoted for 'not even being wrong'.

0Paul Crowley
I'm not sure whether "not even wrong" calls for an upvote, does it?
3NihilCredo
Could you expand a little on this?
7erratio
Sure. Here's a version of the analogy that first got me thinking about it: If I turn on a lamp at night, it sheds both heat and light. But I wouldn't say that the point of a lamp is to produce heat, nor that the amount of heat it does or doesn't produce is relevant to its useful light-shedding properties. In the same way, consciousness is not the point of the brain and doesn't do much for us. There's a fair amount of cogsci literature that suggests that we have little if any conscious control over our actions and reinforces this opinion. But I like feeling responsible for my actions, even if it is just an illusion, hence the low probability assignment even though it feels intuitively correct to me.
2Perplexed
(I'm not sure why I pushed the button to reply, but here I am so I guess I'll just make something up to cover my confusion.) Do you also believe that we use language - speaking, writing, listening, reading, reasoning, doing arithmetic calculations, etc. - without using our consciousness?
0erratio
Hah! I found it amusing at least. I'm... honestly not sure. I think that the vast majority of the time we don't consciously choose whether to speak or what exact words to say when we do speak. Listening and reading are definitely unconscious processes, otherwise it would be possible to turn them off (also, the cocktail party effect is a huge indication of listening being largely unconscious). Arithmetic calculations - that's a matter of learning an algorithm which usually involves mnemonics for the numbers. On balance I have to go with yes, I don't think those processes require consciousness.
4AdeleneDawner
Some autistic people, particularly those in the middle and middle-to-severe part of the spectrum, report that during overload, some kinds of processing - most often understanding or being able to produce speech, but also other sensory processing - turn off. Some report that turned-off processing skills can be consciously turned back on, often at the expense of a different skill, or that the relevant skill can be consciously emulated even when the normal mode of producing the intended result is offline. I've personally experienced this. Also, in my experience, a fair portion (20-30%) of adults of average intelligence aren't fluent in reading, and do have to consciously parse each word.
3Perplexed
You pretty much have to go with "yes" if you want to claim that "consciousness/self-awareness is just a meaningless side-effect of brain processes." I've got to disagree. What my introspection calls my "consciousness" is mostly listening to myself talk to myself. And then after I have practiced saying it to myself, I may go on to say it out loud. Not all of my speech works this way, but some does. And almost all of my writing, including this note. So I have to disagree that consciousness has no causal role in my behavior. Sometimes I act with "malice aforethought". Or at least I sometimes speak that way. For these reasons, I prefer "spotlight" consciousness theories, like "global workspace" or "integrated information theory". Theories that capture the fact that we observe some things consciously and do some things consciously.
0Blueberry
Agreed, but that tells you consciousness requires language. That doesn't tell you language requires consciousness. Drugs such as alcohol or Ambien can cause people to have conversations and engage in other activities while unconscious.
0NihilCredo
Thanks; +1 for the explanation. No mod to the original comment; I would downmod the "consciousness was not a positive factor in the evolution of brains" part and upmod the "we do not actually rely much if at all on conscious thought" one.
0davidad
Upvoted for underconfidence.
0drc500free
Having just stumbled across LW yesterday, I've been gorging myself on rationality and discovering that I have a lot of cruft in my thought process, but I have to disagree with you on this. “Meaning” and “mysterious” don’t apply to reality, they only apply to maps of the terrain reality.

Self-awareness itself is what allows a pattern/agent/model to preserve itself in the face of entropy and competitors, making it “meaningful” to an observer of the agent/model that is trying to understand how it will operate. Being self-aware of the self-awareness (i.e. mapping the map, or recursively refining the super-model to understand itself better) can also impact our ability to preserve ourselves, making it “meaningful” to the agent/model itself. Being aware of others' self-awareness (i.e. mapping a different agent/map and realizing that it will act to preserve itself) is probably one of the most critical developments in the evolution of humans.

“I am” a super-agent. It is a stack of component agents. At each layer, a shared belief by a system of agents (that each agent is working towards the common utility of all the agents) results in a super-agent with more complex goals that does not have a belief that it is composed of distinct sub-agents. Like the 7-layer network model or the transistor-gate-chip-computer model, each layer is just an emergent property of its components. But each layer has meaning because it provides us a predictive model to understand the system’s behavior, in a way that we don’t understand by just looking at a complex version of the layer below it.

My super-agent has a super-model of reality, similarly composed. Some parts of that super-model are tagged, weakly or strongly, with an attribute. The collection of cells that makes up a fatty lump on my head is weakly marked with that attribute. The parts of reality where my super-agent/-model exist are very strongly tagged. My super-agent survives because it has marked the area on its model corresponding to
Kevin410

It does not all add up to normality. We are living in a weird universe. (75%)

6Interpolate
My initial reaction was that this is not a statement of belief but one of opinion, and to think like reality. I'm still not entirely sure what you mean (further elaboration would be very welcome), but going by a naive understanding I upvoted your comment based on the principle of Occam's Razor - whatever your reasons for believing this (presumably perceived inconsistencies, paradoxes etc. in the observable world, physics etc.), I doubt your conceived "weird" universe would be the simplest explanation. Additionally, that conceived weird universe, in addition to lacking epistemic/empirical ground, begs for more explanation than the understanding/lack thereof of the universe/reality that's more or less shared by current scientific consensus. If I'm understanding correctly, your argument for the existence of a "weird universe" is analogous to an argument for the existence of God (or the supernatural, for that matter): where by introducing some cosmic force beyond reason and empiricism, we eliminate the problem of there being phenomena which can't be explained by it.
6Eugine_Nier
Please specify what you mean by a weird universe.
7Kevin
We are living in a Fun Theory universe where we find ourselves as individual or aggregate fun theoretic agents, or something else really bizarre that is not explained by naive Less Wrong rationality, such as multiversal agents playing with lots of humanity's measure.
3[anonymous]
The more I hear about this the more intrigued I get. Could someone with a strong belief in this hypothesis write a post about it? Or at the very least throw out hints about how you updated in this direction?
5Risto_Saarelma
Would "Fortean phenomena really do occur, and some type of anthropic effect keeps them from being verifiable by scientific observers" fit under this statement?
1Kevin
That sounds weird to me.
2Will_Newsome
Downvoted in agreement (I happen to know generally what Kevin's talking about here, but it's really hard to concisely explain the intuition).
1Clippy
Why do you think so?
2Kevin
For some definitions of weird, our deal (assuming it continues to completion) is enough to land this universe in the block of weird universes.
[anonymous]390

I think that there are better-than-placebo methods for causing significant fat loss. (60%)

ETA: apparently I need to clarify.

It is way more likely than 60% that gastric bypass surgery, liposuction, starvation, and meth will cause fat loss. I am not talking about that. I am talking about healthy diet and exercise. Can most people who want to lose weight do that deliberately, through diet and exercise? I think it's likely but not certain.

[This comment is no longer endorsed by its author]

voted up because 60% seems WAAAAAYYYY underconfident to me.

5Eugine_Nier
Now that we're up-voting underconfidence I changed my vote.
2magfrump
From the OP:
0Zvi
I almost want this reworded the opposite way for this reason, as a 40% chance that there are not better-than-placebo methods for causing significant fat loss. Even if I didn't have first and second hand examples to fall back on I don't see why there is real doubt on this question. Another more interesting variation is, does such a method exist that is practical for a large percentage of people?