If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

152 comments

I've had a thought about a possible replacement for 'hyperbolic discounting' of future gains: What if, instead of using a simple time-series, the discount used a metric based on how similar your future self is to your present-self? As your future self develops different interests and goals, your present goals would tend to be less fulfilled the further your future self changed; and thus the less invested you would be in helping that future iteration achieve its goals.

Given a minimal level of identification for 'completely different people', then this could even be expanded to ems who can make copies of themselves, and edit those copies, to provide a more coherent set of values about which future selves to value more than others.
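As a toy illustration of the proposal (this is my own invented sketch, not a formalization anyone has published as far as I know), the discount on a future reward could simply be the fraction of present goals the future self retains. The goal sets and the overlap measure below are made up purely for illustration:

```python
# Minimal sketch of similarity-weighted discounting. The trait sets and
# the overlap measure are illustrative assumptions, not an established model.

def similarity(traits_now, traits_future):
    """Fraction of present goals/values shared with the future self (0..1)."""
    shared = sum(1 for t in traits_now if t in traits_future)
    return shared / len(traits_now) if traits_now else 0.0

def discounted_value(reward, traits_now, traits_future):
    """Weight a future reward by how much the future self overlaps the present one."""
    return reward * similarity(traits_now, traits_future)

me_now = {"write novel", "stay healthy", "learn math"}
me_in_10y = {"stay healthy", "learn math", "run company"}

# Two of three present goals persist, so $100 to the future self is
# weighted at roughly two-thirds of its face value.
print(discounted_value(100, me_now, me_in_10y))
```

Under this picture, time drops out as the fundamental variable: a future self ten years away who shares all your goals would be discounted less than one two years away who shares none.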

(I'm going to guess that Robin Hanson has already come up with this idea, and either worked out all its details or thoroughly debunked it, but I haven't come across any references to that. I wonder if I should start reading that draft /before/ I finish my current long-term project...)

Consistent with why children care so little about their future.

DataPacRat's comment together with your observation strike me as the most interesting thing I've seen in an open thread in a while. I'm not convinced that the idea is in any sense correct or even a good idea but the originality is striking. It is easy to come up with obviously wrong ideas that are original, but coming up with an original (I think) idea that is that plausible is more impressive, and your observation makes it more striking.
Ben Pace:
This echoes my thoughts.

Shane Frederick had the idea that hyperbolic discounting might be because people identify less with their future self. He actually wrote his dissertation on this topic, using Parfit's theory of personal identity (based on psychological continuity & connectedness). He did a few empirical studies to test it, but I think the results weren't all that consistent with his predictions and he moved on to other research topics.

Thank you /very/ much for that link. The first two sections do a much better job explaining the general background and existing thought around my thought than I'd be able to write on my own. I am, however, less confident that the study described in the third section does a very good job of disproving the correlation between amount of selfhood and future discounting. Among other reasons, the paper itself posits that most people likely subscribe to the "simple" theory of identity instead of the "complex" one under discussion. As a third thought, reading the paper has suggested a new variation of my original thought. Perhaps the correlation exists, but I have causation backwards: future discounting could be, in fact, an expression of how much people consider their future selves to be dissimilar to their present selves. At present, I'm not sure what it would take to figure out which version of this idea comes closer to being true, and that's even assuming that the correlation exists in the first place; but it seems worth further consideration, at least.
An idea for an experiment:

Sub-part one: Ask participants various questions to determine the minimum value of x for them to agree to the former option in, "Would you rather we give $100+x to a perfect stranger, or $100 to you?". (Initial prediction: values will vary widely, from particularly generous people with an x of -$99.99, to particularly selfish people with an x of infinity.)

Sub-part two: Ask participants various questions to determine the minimum value of y for them to agree to the former option in, "Would you rather we give $100+y to you in 5 years, or $100 to you now?". (Initial prediction: x and y will be closely correlated; the more a person is willing to give money to perfect strangers, the more they'll be willing to give money to their future selves.)

Possible variation: Change 'perfect stranger' in sub-part one to people with varying levels of closeness to the participant: distant acquaintance, close friend, family member.

Possible variation: Change '5 years' in sub-part two to different time-scales.

If both variations are included: Then, possibly, the data may converge into a simple shape. Possible complications for that shape may arise from how likely the participant feels they are to still be alive in n years, and from how strongly they trust the experimenters to actually distribute the money.

Additional complications: I am completely unaffiliated with any university or other educational institution, have never performed a psychological experiment, and have no budget to perform any such experiment.
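If such an experiment were actually run, the predicted x and y correlation could be checked with something like the following. The premium values here are fabricated purely to show the analysis; nothing about real participants is implied:

```python
# Toy analysis for the proposed experiment: does generosity toward
# strangers (x) correlate with patience toward one's future self (y)?
# The data below is invented for illustration only.
import statistics

x_premiums = [5, 20, 50, 80, 120]  # extra $ needed to prefer giving to a stranger
y_premiums = [2, 10, 30, 45, 70]   # extra $ needed to prefer waiting 5 years

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))

r = pearson(x_premiums, y_premiums)
print(f"r = {r:.2f}")  # the hypothesis predicts r close to +1
```

A real study would of course need incentive-compatible payouts and a much larger sample, but the prediction itself reduces to a single correlation test.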
Interesting to think that empathy for others might be the same mechanism by which we make long term decisions. Closely related to this whole discussion is Self-empathy as a source of "willpower".
How sure are you that the outcome would be as predicted? My Y is about 10, but my X is undefined because I'd rather they didn't get it.
That smells like an actual insight to me, so please do work it up into a full theory if no-one comes up with an example of how it's been done already.

There seem to be two broad categories of discussion topics on LessWrong: topics that are directly and obviously rationality-related (which seems to me to be an ever-shrinking category), and topics that have come to be incidentally associated with LessWrong to the extent that its founders / first or highest-status members chose to use this website to promote them -- artificial intelligence and MIRI's mission along with it, effective altruism, transhumanism, cryonics, utilitarianism -- especially in the form of implausible but difficult dilemmas in utilitarian ethics or game theory, start-up culture and libertarianism, polyamory, ideas originating from Overcoming Bias which, apparently, "is not about" overcoming bias, NRx (a minor if disturbing concern)... I could even say California itself, as a great place to live in.

As a person interested in rationality and little else that this website has to offer, I would like for there to be a way to filter out cognitive improvement discussions from these topics. Because unrelated and affiliated memes are given more importance here than related and unaffiliated memes, I have since begun to migrate to other websites for my daily dose ...

With most of the main contributors having left and no new ones emerging (except for an occasional post by Swimmer963 and So8res discussing MIRI research), this forum appears, unfortunately, to have jumped the shark. It is still an OK forum to hang out on, but don't expect great things. Unless you are the one producing them.
I don't think that's necessarily the right conclusion. In general, the quality and quantity of posting here seems to wax and wane over time with no strong patterns to it.
I'm confused why you categorize SSC as appropriate for debiasing but not LW; doesn't SSC have as much of a mix of non-rationality material as LW? Is it a mix you like better? Do you just enjoy SSC for other reasons?
Because 1) Scott posts about politics and that's one of the muddiest areas in debate -- and with the moratorium on politics around here, one could use some good insights on controversial issues somewhere else. 2) LessWrong is a collection of people, but Scott is one guy -- and as far as I've seen he's one of the most level-headed and reasonable people in recent history, two traits which I consider indispensable to rationality. And he posts a lot about what constitutes reasonableness and how important it is. LessWrong does exhibit this as well, naturally, but there, as opposed to here, there aren't tendencies in the opposite direction that dilute the collective wisdom of the place, so to speak. (Then again, I don't read comments.) 3) I just happen to really like his writing style, jokes, way of thinking, just about everything. No. Never said that. It's just that other sites get updated with material of interest to me more often, whereas the best stuff around here is already old for me.
In theory, the main or promoted posts should be more focused on rationality-for-its-own-sake topics, while Discussion (and especially many of the more open threads therein, the literal Open Threads most of all) is going to contain a lot of memes of interest to the rationalist community without actually being about rationalism per se. On the other hand, the rate of release of rationality material here isn't terribly high, and some of it does get mixed in with affiliated-but-unrelated topics.
The thing is, Main is like that as well. I went back some three pages on Main to check, and there were a few rationality-related articles, some periodic posts (the survey, rationality quotes), and a whole lot more posts relating to organizations of interest for the people on LessWrong who form its central real-life social circle, including reports on recent activity and calls for donations or for paid participation in events. Besides, effective altruist organizations have recently been included in the Rationality Blogs list in the sidebar. (And there was this comment of Eliezer's on a post which, if I remember correctly, called for help in some matter. He said he's not going to devote time -- five minutes, an hour, I don't remember the interval he gave -- to someone who had donated less than $5000 to charity. To get some people out of their American bubble, as a comparison, that's more than my current yearly income... likely much more. Needless to say, I found it rather unpalatable.) And there's the higher bar for posting in Main... Unless you write something at least obviously good enough to break even in terms of karma, you get, technically, a punishment of a few tens of negative karma points for having dared to post there. (I think at least that that's how the karma multiplier works.) And people are going to respond more positively to common in-group-y topics. So if anything, non-affiliated topics are more likely to be found in Discussion.
Eh. Difference between theory and practice, I guess. I too wish there was more actual rationality stuff coming out; the archive is big, but it's hard to engage with people on those topics now and there's always more to cover. I don't mind the side topics so much as you seem to, but I would like to see more of the core topic. As for the charity thing, that's EY's right if he so chooses to exercise it, but if income where you live is so low that $5000 is more than your annual income, or even if it's just temporarily more than that because you're a student or something (I made about that much per year on summer jobs my first two years of university), then I really doubt he would hold you to that if you were to approach him. On the other hand, EY isn't anywhere near a top contributor to LW at this point in time; I barely see him comment anywhere on the site anymore. That's probably part of the reason for the dearth of good rationality posts, but it also means that his opinions will have less impact on the site as a whole, at least for a while.
This is something that I've noticed and been concerned with. I think this is worthy of a top level discussion post. I think part of the problem is that rationalism is harder than weird and interesting ideas like transhumanism: anyone can dream about the future and fiddle with the implications, but it takes significant study and thought to produce new and worthwhile interventions for how to think better. My feeling is that the Main is for rationality stuff and the Discussion is for whatever the members of this community find interesting, but since we don't have strong leaders who are doing the work and producing novel content on rationality, the Main rarely has a new post, so I at least gravitate to the Discussion.
Also, keep in mind that many of these secondary ideas sprang from rationalist origins: cryonics is presented as an "obvious" rational choice when you don't let your biases get in the way: you have an expressed desire not to die, and this is the only available option to not die. Polyamory similarly came about as the result of looking at relationships "with fresh eyes." These secondary topics gain prominence because they are (debatably) examples of rationality applied to specific problems. They are the object level; "Rationality" is the meta level. But, like I said, it's a lot easier to think at the object level, because that can be visualized, so most people do.
From the point of view of someone who doesn't buy into them, I think it's only incidental that those specific positions are advocated as a logical consequence of more rational thinking and not others. Had the founders not been American programmers, the "natural and obvious" consequences of their rationalism would have looked highly different. My point being that these practices are not at all more rational than the alternatives and very likely less so. But yeah, if these ideas gain rationalist adherents, then obviously some of the advocacy for them is going to take a rationalist-friendly form, with rationalist lingo and emphasized connections to rationalism.
Just curious, are there any positions which you you regard as "a logical consequence of more rational thinking"?
Yes -- atheism. And by extension disbelief in the supernatural. It's the first consequence of acquiring better thinking practices. However, it is not as if atheism in itself forms a good secondary basis for discussion in a rationalist community, since most of the activity would necessarily take the form of "ha ha, look how stupid these people are!". I would know; been there, done that. But it gets very old very quickly, and besides isn't of much use except to novice apostates who need social validation of their new identities. From that point of view I regard atheism as a solved problem and therefore uninteresting. Nothing else seems to spring to mind, though -- or at least no positive rather than negative positions on ideological questions. "Don't be a fanatic", "don't buy snake oil", "don't join cults", "check the prevailing scientific paradigms before denying things left and right [evolution, moon landing, the Holocaust, global warming etc.]"... critical thinking 101. All other beliefs and practices that go hand in hand with rationalism seem mostly explainable by membership in this particular cluster of Silicon Valley culture.
Clarke's Third Law: "Any sufficiently advanced technology is indistinguishable from magic".

This is probably like walking into a crack den and asking the patrons how they deal with impulse control, but...

How do you tame your reading lists? Last year I bought more than twice as many books as I read, so I've put a moratorium on buying new books for the first six months of 2015 while I deplete the pile. Do any of you have some sort of rational scheme, incentive structure or social mechanism that mediates your reading or assists in selecting what to read next?

I've managed to partly transmute my "I want to buy that now" impulse into sending a sample to my kindle. Then if I never get past the first few pages I've not actually spent any money, and if I reach the end of the sample and still want to continue I know I'm likely to keep going.

Do you mostly buy physical or digital books? My digital unread list is hundreds of titles long, and it doesn't bother me much. It pales in comparison to my "read later" bookmark list. My physical unread pile already occupies half a wall of my room, and just for being there it gives me aesthetic pleasure and bragging privileges. I used to think I should be worried, but I decided to embrace the pile. I guess it still hasn't reached the point where I should really be worried.
A mixture of the two, but there's not really any psychological distinction between them. Once I resolve to read something, it has the same amount of "weight" regardless of the medium.
I wish I had your problem; mine is to find books I want to read. I very often re-read ones that are already on my shelves, for lack of anything new. However, this suggests a possible approach to your issue: When buying a book, check whether you are actually and genuinely excited to read it, so that you will open it the minute it arrives from Amazon. If not - put it in a "maybe later" pile. If it's more a case of "sure, sounds good" or, even worse, "I want to signal having read that", then give it a miss. If you need to read stuff for work or for Serious Social Purposes like a book club, then treat it like work - set aside a certain time of day or week, and during that time, read.
I'm not particularly excited to read, say, an intermediate textbook on medical statistics. In spite of this, I'm confident that the world will make more sense after I read it, and I'd like that outcome. This describes my attitude to a significant proportion of the books I intend to read. This and other interactions have caused me to re-evaluate just how ascetic my reading habits are.
This was more of a side effect of deciding to pare down on my possessions than an intervention specifically aimed at buying fewer books, but I rarely buy books anymore just because I want to read them. I get books on LibGen or at the university library. In the rare event in which a book turns out to be a really valuable reference I may then buy it.
* I read really, really fast. Like, I've finished some of the largest books in print (Atlas Shrugged and A Memory of Light come to mind here) in a single day before.
* The longer my reading list is, the less likely I am to add to it, or at least the less likely I am to take my additions seriously (when I have two items it's "I should read that!" When I have 200, it's "Eh, sounds interesting, might get to it someday").
* My to-read list is about 10 Chrome bookmarks (none short), 20 Amazon wishlist items, plus most of the width of a bookshelf that I've bought and not read, so I'm not sure how well I actually do despite the above.
How did you learn to read so fast?
I read a lot. (I wish I could give you some actionable advice here, but there's nothing I can point to. I suspect it may be innate?) I think I'm semi-skimming, subconsciously. I've noticed myself missing descriptions of characters before when something relevant comes up later on. That said, I'm still quite fast with reading things like internet essays, where missing words does hurt your comprehension badly, so I don't think that's all of it.
Atlas Shrugged in a day? Boy, do I want to see you take Infinite Jest.
I have to want to read the book for it to work. Looking up Infinite Jest, I'd probably prefer swallowing barbed wire to reading it. (Atlas was back during my hardline-libertarian phase, so it was a lot more appealing. It also helped that the day in question was the day of the 2003 blackout, so there was nothing to do but read).
My library system lets you request books from anywhere in the county from your home computer. I put the book on hold, and then I get an e-mail when it arrives. Also, when a book isn't in the library system, I'll often buy an ebook edition or wait for it to enter the system.
Yup! Also a put it on hold person. I always wind up reading library books first because they have to be returned, and it makes it easier for me to carve out time that I want to carve out for reading, because the library book has a deadline.
For physical I hold myself to a "half read" rule. That still means a growing collection, mind. I prune aggressively but only from things I've already read, which unfortunately means I have a rump of books that I suspect are mediocre and therefore don't particularly want to read, but don't feel justified in getting rid of, whereas I get rid of books that are probably better than that if I've read them and don't think they're good enough to keep. I don't have a solution but thankfully FiftyTwo's approach works now. Hopefully I can slowly shrink my physical pile to the point where it fits on my bookshelves.

Some people think that the universe is fine-tuned for life perhaps because there exists a huge number of universes with different laws of physics and only under a tiny set of these laws can sentient life exist. What if our universe is also fine-tuned for the Fermi paradox? Perhaps if you look at the set of laws of physics under which sentient life can exist, in a tiny subset of this set you will get a Fermi paradox because, say, some quirk in the laws of physics makes interstellar travel very hard or creates a trap that destroys all civilizations before they become spacefaring. If the natural course of events for sentient life in non-Fermi-tuned universes is for spacefaring civilizations to expand at nearly the speed of light as soon as they can, consuming all the resources in their path, then most civilizations at our stage of development might exist in Fermi-tuned universes.

Well, we can say for sure that in our Universe interstellar travel is not hard. It's extremely easy, once you take humans out of the picture. With current technology we have the means to push spacecraft to 60 km/s. This isn't hypothetical tech; it's stuff that's sitting in the shed. At such velocities, craft could traverse the Milky Way galaxy 20 times over during the (current) lifetime of the galaxy (estimated at around 13 billion years). The galaxy is big, but it's not that big, not compared to the time scales involved here. Unfortunately this only makes it much more likely that the second possibility is true: the evolution of AIs that spread outward and colonize the galaxy must be extremely unlikely.
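The crossing-time claim above is easy to check with back-of-the-envelope figures (a galactic diameter of roughly 100,000 light-years is assumed; all constants below are approximate):

```python
# Rough check: how many times could a 60 km/s craft cross the Milky Way
# (~100,000 light-years across) in ~13 billion years?

LY_KM = 9.461e12             # kilometres per light-year
galaxy_km = 100_000 * LY_KM  # approximate galactic diameter in km
speed_km_s = 60              # achievable with existing propulsion
year_s = 3.156e7             # seconds per year

crossing_years = galaxy_km / speed_km_s / year_s
crossings = 13e9 / crossing_years

print(f"one crossing: {crossing_years:.1e} years; "
      f"crossings in 13 Gyr: {crossings:.0f}")
```

One crossing comes out to about half a billion years, so "20 times over" in the galaxy's lifetime is, if anything, a slight underestimate at these figures.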
Can the 60 km/s spacecraft in question slow down again? (I honestly don't know.)
In that case the vast majority of individuals (considered across all universes) would be members of those large spacefaring civilizations, no? In which case, why aren't we?
Possibly not, if universes fine-tuned for life but not the Fermi paradox are dominated by paperclip maximizers, or the post-singularity lifeforms in these universes turn themselves into something we wouldn't consider "individuals" while also preventing new civilizations from arising.
It only takes a few universes where that doesn't happen to mess with those numbers. Or to put it another way, fine-tuning for the existence of individuals seems like a smaller amount of fine-tuning than fine-tuning for the Fermi paradox.
In universes not fine-tuned for the Fermi paradox, the more fine-tuned for life the universe is, the sooner some civilization will arise that expands at the maximum possible speed devouring all the resources in its expansion path, which limits the number of civilizations like ours that can arise in any universe not fine-tuned for the Fermi paradox. Part of being fine-tuned for life might, therefore, be being fine-tuned for the Fermi paradox. (But you are raising excellent counterarguments to an issue I greatly care about, so thanks!)
Yes, you can apply the Anthropic principle to the Fermi paradox, if you make some assumptions, but even then the case is nowhere near as clear-cut as applying it to the fine-tuning of the universe.

I grew up thinking that the Big Bang was the beginning of it all. In 2013 and 2014 a good number of observations have thrown some of our basic assumptions about the theory into question. There were anomalies observed in the CMB, previously ignored, now confirmed by Planck:

Another is an asymmetry in the average temperatures on opposite hemispheres of the sky. This runs counter to the prediction made by the standard model that the Universe should be broadly similar in any direction we look.

Furthermore, a cold spot extends over a patch of sky that is much larger than expected.

The asymmetry and the cold spot had already been hinted at with Planck’s predecessor, NASA’s WMAP mission, but were largely ignored because of lingering doubts about their cosmic origin.

“The fact that Planck has made such a significant detection of these anomalies erases any doubts about their reality; it can no longer be said that they are artefacts of the measurements. They are real and we have to look for a credible explanation,” says Paolo Natoli of the University of Ferrara, Italy.

... One way to explain the anomalies is to propose that the Universe is in fact not the same in all directions on a larger scale.

I think you should post this as its own thread in Discussion.
If that sounds good, please, it'd be great if you could do it. I don't have the status. The link for the dwarf galaxy article is wrong; it should be: http://www.natureworldnews.com/articles/7528/20140611/galaxy-formation-theories-undermined-dwarf-galaxies.htm Thanks.
Did somebody karma bomb your account? It looks like an awful lot of your posts have a single down-vote on them.
I assign a low probability to that. The user in question has just made a number of comments which are of low quality.
I've posted it here.

Should we have some sort of re-run for the various repositories we have? I mean, there is the Repository repository and it's great for looking things up if you know such a thing exists, but (i) not everyone knows this exists and more importantly, (ii) while these repositories are great for looking things up, I feel that not much content gets added to the repositories. For example, the last top-level comment to the boring advice repository was created in March 2014.

Since there are 12 repositories linked in the meta repository as of today, I suggest we spend each month of 2015 re-running one of them.

I'm not certain which form these re-runs should take, since IMO, all content should be in one place and I'd like to avoid the trivial inconvenience for visitors clicking on the re-run post and then having to click one more time.

Should there be some sort of re-run of the 12 repositories during 2015, one per month? [pollid:808]

Which form should the re-run have, conditional on there being one? [pollid:809]

Why not a monthly post that links to all the repositories, as a chance to add suggestions to them and remind people that they're there? This could in fact be combined with a focus on a specific repository.
mako yass:
By "advice in the comments", you mean new entries to the repositories, right? So you're suggesting that we fragment the repository through a number of separate comment sections, labeled by year, and that is a really awful way of organizing a global repository of timeless articles. If you're worried about incumbents taking disproportionate precedence in the list (as more salient posts tend to get more attention; more votes; more salience), IIRC, reddits have a comment ordering that's designed to promote posts on merit rather than seniority. If that isn't sufficient to address incumbent bias then we should probably be talking about building a better one.
I meant, "in the comments of the new article". I'm sorry if that wasn't clear. The goal was to get some discussion and new advice going, and that's difficult if you just link to the old repository, which means one more click on the way, one trivial inconvenience more. I had thought about copying all the advice (or the good pieces only) over to the old repository once this one is obsolete, i.e. once the rerun repository for March is posted, and I might do this then, if I find the time.

Crazy hypothesis:

If Omega runs a simulation of intelligent agents, it is presumable that Omega is interested in finding out with sufficient accuracy what those agents would do if they were in the real situation. But once we assign a nonzero chance that we're being simulated, and incorporate that possibility into our decision theories, we've corrupted the experiment because we're metagaming: we're no longer behaving as if we were in the real situation. Once we suspect we're being simulated, we're no longer useful as a simulation, which might entail that every simulated civilization that develops simulation theories runs the risk of having its simulation shut down.


I suppose the best thing to do is to tell you to shut up now, right?

This (your hypothesis) appears wrong, however. Assuming the simulation is accurate, the fact that we can think about the simulation hypothesis means that whatever is being simulated would also think about it. If there's an accuracy deficiency, it's no more likely to manifest itself around the simulation-hypothesis than any other difference in accuracy.

Although that depends on how we come by the hypothesis. If we come by it the way our world did, which is philosophers and other people making arguments without any evidence, then there's no special reason for us to diverge from the simulated, but if we had evidence (like the kind proposed in http://arxiv.org/abs/1210.1847 or similar proposals) then we would have a reason to believe that we weren't an exact simulation. In that case, we'd also have evidence of the simulation and not been shut down, so we'd know that your theory is wrong. OTOH, if you're correct we shouldn't try to test the simulation hypothesis experimentally.

PSA: Thinking a thought that might cause you to have never existed, might cause you to have never existed. You might think that you are thinking that thought, but that's just how the logically impossible hypothetical of thinking it feels like from the inside. Think twice before you hypothetically think it. (P.S. Noticing that you are certain to be right to worry about it seems to be an example of such a thought, for our world. Like correctly believing anything else that's false in a suitable sense. As far as I know.)
How would you act differently even if we assume that your whole life merely exists inside a simulation? You still have to live the life you've been given - it's not like you can break out of the simulation and go take your real life back. Your actions in the simulation still have their usual effect on the life in the simulation. The only case where it matters is if the simulator wants you to behave certain ways and will reward you accordingly (either real-you or by moving you to a nicer simulation), but that's just a different way to talk about religion.
Imagine that you learn tomorrow that we're in a simulation, because scientists did a test and found a bug in the program. Perhaps you would act differently? Maybe email all your friends about it, head over to lesswrong to discuss it, whatever. These things wouldn't happen in the original. The main distinction is the way you'd learn about the simulation, like I said in my response.
Please define the difference between "bug in the simulation" and "previously unknown law of physics". That said, I do agree in principle. However, simulation theories are sufficiently obvious (at least to creatures that dream/build computers/etc.) that they can't count as corruption - it'd be weirder for a simulated civilization to not have them.
There have been plausible tests given that would seem to produce Bayesian evidence of simulation. To give an analogy, if tomorrow you heard a loud voice coming from Mount Sinai reciting the 10 commandments, more of your probability would go to the theory "The Bible is more-or-less true and God's coming back to prove it" than "there's a law of physics that makes sounds like this one happen at random times". The same way, there are observations that are strictly more likely to occur if we're in a simulation than if not. There are some proposed in http://arxiv.org/abs/1210.1847 , and other places as well.
This is not true in general. This is true for some particular kinds of simulations (e.g. your link says "we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization"), but not all of them.
Let's rephrase: our expectations are different conditioning on simulation than on ~simulation. The probability distribution of observations over possible simulation types is different from the probability distribution of observations over possible physics laws. If you disagree, then you need to hold that exactly the right kinds of simulations (with opposite effects) have exactly the right kind of probability to cancel out the effects of "particular kinds of simulations". That seems a very strong claim which needs defending. Otherwise, there do exist possible observations which would be Bayesian evidence for simulation.
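The odds form of Bayes' rule makes this concrete. A toy sketch (all the numbers here are hypothetical, purely for illustration):

```python
import math

def posterior(prior, likelihood_ratio):
    """Posterior P(H|E) from prior P(H) and likelihood ratio P(E|H)/P(E|~H),
    via the odds form of Bayes' rule."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical numbers: prior P(simulation) = 0.1, and an announced result
# that is 20x as likely under simulation as under ~simulation.
print(round(posterior(0.1, 20), 3))  # -> 0.69

# A likelihood ratio of exactly 1 leaves the prior unchanged - which is
# what the "effects cancel out exactly" position would require:
print(posterior(0.5, 1))  # -> 0.5
```

Any observation whose likelihood ratio differs from 1, in either direction, shifts the posterior; that is all "Bayesian evidence" requires.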
I don't think mine are. That is a content-free statement. You have no idea about either of the distributions, about what "possible simulation types" there might be, or what "possible physics laws" might be. Well, barring things which actually break the simulation (e.g. an alien teenager appearing in the sky and saying that his parents are making him shut off this sim, so goodbye all y'all), can you give me an example?
Any of the things proposed in papers with the same aims as the one I linked above. The reason I'm not giving specifics is because I don't know enough of the technical points made to discuss them properly. I wouldn't be the one making the observations, physicists would, so my observation is "physicists announce a test which shows that we are likely to be living in a simulation" and it gets vetted by people with technical knowledge, replicated with better p-values, all the recent Nobel Physics prize winners look over it and confirm, etc. (Note: I'm explicitly outlawing something which uses philosophy/anthropics/"thinking about physics". Only actual experiments. Although I'd expect only good ones to get past the bar I set, anyway, so that may not be needed.) I couldn't judge myself whether the results mean anything, so I'd rely on consensus of physicists. Using that observation: are you really telling me that your P(physicists announce finding evidence of simulation| simulation) == P(physicists announce finding evidence of simulation| ~simulation)?
Ugh, so all you have is an argument from authority? A few centuries ago the scientists had a consensus that God exists. And? No, I'm telling you that "evidence of simulation" is an expression which doesn't mean anything to me. To go back to Alsadius' point, how are you going to distinguish between "this is a feature of the simulation" and "this is how the physical world works"?
I gave my observation, which is basically deferring to physicists. "evidence of simulation" may not mean anything to you, but surely "physicists announce finding evidence of simulation" means something to you? Could you give an example of something that could happen where you wouldn't be sure whether it counted as "physicists announce finding evidence of simulation"? Right now, as I'm not trained in physics, I'd defer to the consensus of experts. I expect someone who wrote those kinds of papers would have a better answer for you. Or is your problem of defining "evidence of simulation" something you'd complain about even if real experts used that in a paper?
Yes, it means "somebody wanted publicity" (don't think it would get as far as grants). Yes, of course. I do not subscribe to the esoteric-knowledge-available-only-to-high-priests view of science.
Which is why I laid out a bunch of additional steps needed above: You seem to be taking parts of my argument out of context. Me neither, but I'm trying to use a hypothetical paper as a proxy because I'm not well versed enough to talk about specifics. On some level you have to accept arguments from authority. (Or do you either reject quantum mechanics or have seen evidence yourself?) Imagine that simulation was as well established in physics as quantum mechanics is now. I find it very hard to say that that occurrence is completely orthogonal to the truth of simulation.
The problem is that you offer nothing but an argument from authority. Well, of course I have. The computer I use to type these words relies on QM to work, the dual wave-particle nature of light is quite apparent in digital photography, NMR machines in hospitals do work, etc. In any case, let me express my position clearly. I do not believe it possible to prove we're NOT living in a simulation. The question of whether it's possible to prove we ARE living in a simulation is more complex. Part of the complexity involves the meaning of "simulation" in this context. For example, if we assume that there is an omnipotent Creator of the universe, can we call this universe "a simulation"? It might be possible to test whether we are in a specific kind of simulation (see the paper you linked to), but I don't think it's possible to test whether we are in some, unspecified, unknown simulation.
My position is that it is possible for us to get both Bayesian evidence for and against simulation. I was not talking at all about "proof" in the sense you seem to use it. If it's possible to get evidence for a "specific kind of simulation", then lacking that evidence is weak evidence against simulation. If we test many different possible simulation hypotheses and don't find anything, that's slightly stronger evidence. It's inconsistent to say that we can't prove ~simulation but can prove simulation. I'm curious if you understand QM well enough to say that computers wouldn't work without it. Is there no possible design for computers in classical physics that we would recognize as computer? Couldn't QM be false and all these things work differently, and you'd have no way of knowing? Whatever you say, I doubt there are no areas in your life where you just rely on authority without understanding the subject. If not physics, then medicine, or something else.
Of course there is -- from Babbage to the mechanical calculators of the mid-XX century. But I didn't mean computers in general -- I meant the specific computer that I'm typing these words on, the computer that relies on semiconductor microchips.
How do you know your computer relies on semiconductor microchips? Could you explain to me why semiconductor microchips require QM to work?
I looked :-) See e.g. this.
Although I can't think of any way that I personally would behave differently based on a belief that I exist in a simulation, Nick Bostrom suggests a pretty interesting reason why an AI might, in chapter 9 of Superintelligence (in Box 8). Specifically, an AI that assigns a non-zero probability to the belief that it might exist in a simulated universe might choose not to "escape from the box" out of a concern that whoever is running the simulation might shut down the simulation if an AI within the simulation escapes from the box or otherwise exhibits undesirable behavior. He suggests that the threat of a possibly non-existent simulator could be effectively exploited to keep an AI "inside of the box".
Unless there's a flow of information from outside the simulation to inside of it, you have zero evidence of what would cause the simulators to shut down the machine. Trying to guess is futile.
Bostrom suggested that a simulation containing an AI that is expanding throughout (and beyond) the galaxy and utilizing resources at a galactic level would be more expensive from a computational standpoint than a simulation that did not contain such an AI. Presumably this would be the case because a simulator would take computational shortcuts and simulate regions of the universe that are not being observed at a much coarser granularity than those parts that are being observed. So, the AI might reason that the simulation in which it lives would grow too expensive computationally for the simulator to continue to run. And, since having the simulation shut down would presumably interfere with the AI achieving its goals, the AI would seek to avoid that possibility.
Observed by what? For this to make sense there'd need to be no life anywhere in the universe but here that could be relevant to the simulation.
Actually, all it requires is that the universe is somewhat sparsely populated - there is no requirement that there must be no life anywhere but here. Furthermore, for all we know, maybe there is no life in the universe anywhere but here.
There's no reason to limit simulation to one level, nor to privilege "real" as any special thing. All reality is emergent from a set of (highly complex, or maybe not) rules. This is true of n=0 ("reality", or "the natural simulation"), as well as every n+1 (where a level N entity simulates something). It's turtles all the way up. Put another way, the simulation parent entities wonder if they're being simulated, so it's exactly proper for the simulation target entities to wonder, for exactly the same reasons. I suspect that in every universe, thinking processes that can consider simulation will consider that they might be simulated. I don't know if they'll reach the conclusion that it doesn't matter - finding the boundaries of the simulation is exactly identical to finding the boundaries of a "natural" universe, and we're gonna try to do so.
However, see my point about how the method of learning about the simulation matters for a imperfect-fidelity simulation.
mako yass
Any being that does not at some point consider the possibility that it is inside a simulation, is not worth simulating.

What are the marginal effects of donating with an employer gift match? The one I have has a per-employee cap and no overall cap, but presumably the utilization rate negatively influences the cap. How much credit should I be giving myself for the gifts I cause my employer to give?

If the notion of 'credit' is too poorly defined, suppose I were deciding between job A which has a gift match and job B which has a higher salary, such that (my personal gift if I take job A) < (my total gift if I take job B) < (my total gift including match if I take job A)... (read more)

If your employer responds to the number of employees who are giving by modifying the cap, then whether or not you give will not change how much money your employer gives in the long term. However, even if that is true, you are still choosing where your employer donates your portion of the gift match. Therefore, if you believe that most of the money given to charity is being used at only moderate effectiveness, but that you could choose a good GiveWell charity to donate to and achieve much greater results, then the impact of the employee match is still significant.
Given that the most effective charities are at least an order of magnitude more effective, probably more than that, than average charities, any decrease you cause to other people's matches is probably insignificant compared to the match you get.

Is there any Egan or Vinge fanfic except EY's crossover Finale of the...?

Don't know about Egan or Vinge specifically, but fanfic of literary SF not targeted at the YA market is very rare. I'd speculate this is partly due to demographics and partly due to the fact that a lot of trad SF's appeal lies in conceptual stuff that's generally more or less fully explored in its work of origin.

I can code, as in I can do pretty much any calculation I want and have little problem on school assignments. However, I don't know how to get from here to making applications that don't look like they've been drawn in MS Paint. Does anyone know a good resource on getting from "I can write code that'll run on command line" to "I can make nice-looking stuff my grandmother could use"?

Buy a good textbook on visual design principles. I don't have a recommendation in this area, so you'll have to do some homework to find the right one. Start looking at the work of professional designers in the area you're interested in. I use a blogroll for this, but you can pick your own path. The design section of my RSS currently consists of abduzeedo, design milk, and grain edit. For mock-up tools, I like Inkscape a lot. It's free and mock-ups are mostly about the text, shape, and pen tools anyways. In the area of raster graphics, I haven't seen any good alternatives to Photoshop; not that I've been looking. You can also look into some user experience stuff too, but that strikes me as overrated. After that, which programming tools you pick up will depend on your needs.
I've used the Bootstrap framework to make web apps that don't look horribly ugly. Learning all the things you'd need to make apps that use it (so a bit of JS, CSS, HTML, etc. as sixes_and_seven says) would probably be a good start. (It would probably be easier than trying to make good-looking CSS from scratch, which is more of a pain.)
Bootstrap is particularly good if you're a design doofus and have minimal knowledge of web standards, accessibility, fluid layouts, etc. I'm sure ancestor commenters know this, but it's worth mentioning that design is a distinct discipline which doesn't come for free when you learn to code.
This might be a disappointing answer, but HTML, CSS and JavaScript are extremely valuable skills if you want to throw together accessible GUI applications.
You can try learning how to create mobile apps, seems like a very useful skill. For example, Android programming: https://developer.android.com/training/index.html
Depending on how much you want to invest in aesthetics vs. simply producing a user-friendly GUI, Visual Studio takes almost all of the tricky work out of producing basic GUIs (whether you're working in Visual Basic or C++) and is an easy go-to solution especially since it's now free for individuals, even for commercial use (still requires Windows though; I don't have a lot of experience writing GUI apps for other desktop OSes). The results will likely look somewhere between utilitarian and just ugly until/unless you learn some UI design aesthetics and expend the effort to apply them, but even there tools such as Blend exist to help out (especially on mobile, but some of that stuff can be applied to PC software too).

Runaway Rationalism and how to escape it by Nydwracu

One of the better rationalist short posts of last year, unfortunately mostly read by non-rationalists so far. Many important concepts tightly packed and neatly explained if the reader thinks closely about them.

A favored quote.

It is noteworthy that the scientific method, the most successful method for discovering reality, only arose once, a few hundred years ago, in an environment where the goddess of war and wisdom demanded it. It is also noteworthy that the goddess of war is the goddess of wisdom: wit

... (read more)

At one point there was a significant amount of discussion regarding Modafinil - this seems to have died down in the past year or so. I'm curious whether any significant updating has occurred since then (based either on research or experiences.)

The comic book Magnus Robot Fighter #10, published this month, mentions Roko's Basilisk by name and has an A.I. villain who named himself after Roko's Basilisk. The Basilisk is described as "the proposition that an all-powerful A.I. may retroactively punish those humans who did not actively help its creation... thus inspiring roboticists, subconsciously or unconsciously, to invent that A.I. as a matter of self-preservation". Which is not quite correct because of the "subconsciously", and doesn't mention simulation (although Magnus grew up in a simulation), but otherwise is roughly the right idea.

I'm not sure where an appropriate place to ask this is. Tell me if there's a place where this goes.

I'm coming to the Bay Area for the CFAR workshop from the 16th to the 19th. I have a way to get back home, but I think I might want to stay a few extra days in San Francisco. That screws up my travel arrangements, so I'm seeing if there's a workaround. Are there any aspiring rationalists (or rationalist sympathizers) in northern California who might want to drive with me down to Phoenix (AZ) between the 23rd and the 25th, more or less for the hell of it? I'm u... (read more)


Is there a better way to search LW other than Google?


What's the typical (or atypical) delivery time on Mealsquares orders?

They arrive in about 4-6 days after ordering for me.

Below, gjm was being a self-acknowledged pedant and I didn't like it at first and I pedanted right back at him and then I realized I enjoyed it and that pedantry is a terminal human value and that I wouldn't have it any other way and that I didn't really care that he was being a pedant anymore and that it was actually a weird accidental celebration of our humanity and that I probably won't care about future pedantry as long as it isn't harmful. This is an auspicious day.

I think a good principle for critical people - that is, people who put a lot of mental effort into criticism - to practice is that of even-handedness. This is the flip-side of steelmanning, and probably more natural to most. Instead of trying to see the good in ideas or people or systems that frankly don't have much good in them, seek to criticize the alternatives that you haven't put under your critical gaze.

Quotes like [the slight misquote] "Democracy is the worst form of government except for all the others that have been tried from time to time"... (read more)

Somebody has thought of selling plush basilisks already.

I don't think that the basilisk you linked to is the specific basilisk of LW notoriety; basilisks were creatures of legend all the way back to Pliny the Elder's time; they were mentioned in Pliny's Natural History, written around 79 AD.

Who knew acausality could reach that far back?

In case anyone is wondering what needle felting is.... I didn't have a visualization, but when I saw that, I knew the basilisk can't possibly be that cute. I've settled on shifting shapes in unpleasant colors which sometimes coalesce into something that vaguely resembles Tchernobog from Night on Bald Mountain.
As pointed out: Not the same basilisk. I wonder how people visualize Roko's variant.

Klein bottle uroboros.

So, a snake that eats its own tail from the inside?

Can anyone recommend a good smartphone/tablet app for habit-building?


I'm vegetarian and currently ordering some dietary supplements to help, erm, supplement any possible deficits in my diet. For now, I'm getting B12, iron, and creatine. Two questions:

  • Are there any important ones that I've missed? (Other things I've heard mentioned but of whose importance and effectiveness I'm not sure: zinc, taurine, carnitine, carnosine.)
  • Of the ones I've mentioned, how much should I be taking? In particular, all the information I could find on creatine was for bodybuilders trying to develop muscle mass. I did manage to find that the ave
... (read more)

Is there a way to subscribe to the RSS feed for Less Wrong with MS Outlook? When I use the link on the sidebar, Outlook says the file name or directory is invalid.

I know a very intelligent, philosophically sophisticated (those are probably part of the problem) creationist. A full blown, earth-is-6000-years-old creationist.

If he is willing to read some books with me, which ones should I suggest? Something that lays out the evidence in a way that the layman can understand, and conveys the sheer weight of this evidence.

This isn't a direct response to your request as it's not a book suggestion, but ... I've had plenty of conversations with very thoughtful, intelligent creationists. Virtually all of my friends and family are creationists in some sense or another. So far I've never discussed it with anyone (at least anyone who's thoughtful and intelligent) who has disagreed with the following argument:

1) The world clearly looks old. For example:

* Light from the stars needs to travel much more than several thousand years before it reaches us.
* There are many more than several thousand layers of annual ice in Antarctica.
* The Colorado River is wearing away at the bottom of the Grand Canyon as we speak. The rest of the Grand Canyon is exactly the sort of thing we'd expect if we extrapolate backwards a few million years.
* In general, there are innumerable geological features all over the world that are exactly what we'd expect if we extrapolate backwards millions or billions of years based on processes that are happening right now. (Any good geology textbook should demonstrate this pretty clearly. As I tell people: "Read a basic geology textbook, go to a national park and read the signs explaining the local geology, and then come back and tell me it doesn't look old.")
* [You can also mention some of the principles of geological layering - e.g., that we consistently find the same types of fossils in the same types of layers. But I've found that this sort of thing is a little too complicated to explain quickly.]

2) One could perhaps respond with something like, "maybe God used some alternative unknown form of physics in the six days of creation" or, "maybe Noah's Flood caused geology to go haywire in unknown ways". However, the key point is that to say that it doesn't even look old is simply false. Saying that the Flood or some alternative physics caused it means that e.g. the bottom 30-40 feet of the Grand Canyon (the part that's been eroded in the past 5000-6000 years) was

I suspect that I'm gonna keep sharing quotes as I read Superintelligence over the next few weeks, in large part because Professor Bostrom has a better sense of humor than I thought he would when I saw him on YouTube.

I've known for a long time that intelligences with faster cognitive processes would experience a sort of time dilation, but I've never seen it described in such an evocative and amusing way:

To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000×. If your fleshly friend should happen

... (read more)
If you drop a teacup from a height of 2m then s = 1/2 at^2 says t ~= 0.6 seconds which at 10000x becomes 6000 seconds or 1h40m. If the figure for Nick Bostrom is "several hours" then he must be, I dunno, attending tea parties on stilts or something. (Of course this is mere pedantry. But it seems like the sort of thing it should have been easy to get right.)
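A sketch of the arithmetic above (using the 2 m drop and 10,000x figures from the comment):

```python
import math

g = 9.8            # m/s^2, gravitational acceleration
height = 2.0       # metres the teacup falls
speedup = 10_000   # subjective speedup factor

# s = (1/2) g t^2  =>  t = sqrt(2 s / g)
t = math.sqrt(2 * height / g)
subjective_minutes = t * speedup / 60
print(round(t, 2), round(subjective_minutes))  # -> 0.64 106
```

About an hour and three quarters of subjective time, so "several hours" does indeed need the stilts.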
You reminded me of something he wrote in the acknowledgements: Citations are one of the most important aspects of any non-fiction book, and even in this regard he acknowledges that he could not be exhaustive. To confirm the physical calculations implicit in a descriptive passage would almost certainly have been suboptimal. All to say, I am happy that Professor Bostrom is the one writing the Superintelligences of the world, and not the pedants.
He could have made the descriptive passage correct simply by not pretending to be so quantitative. "Suppose your mind ran tens of thousands of times faster than normal."
Fussing over extraneous yet harmless quantitativeness would have been similarly suboptimal. The pseudo-quantitativeness also has a positive effect upon the tone of the passage, for everyone but the one in a million who notice its extremely technical inaccuracy. This is not math, but writing.

Is there a Chrome extension or something that will adjust displayed prices of online merchants to take into account rewards benefits? For example, if my credit card has 1% cashback, the extension could reduce the displayed price to be 1% cheaper.

So I signed up for a password manager, and even got a complex password. But how do I remember the password? It's a random combination of upper and lower case letters plus numbers. I suppose I could use spaced repetition software to memorize it, but wouldn't that be insecure?

I learned a few interesting memory tricks from the movie Memento. One thing you can try is to tattoo important information on yourself, so that you don't forget it. I can think of a few security caveats for sensitive information though:

* It's probably better if you choose a location that's not easily visible (e.g. chest, part of your arm that's covered by a shirt), though you should probably choose a location that's still somewhat accessible (i.e. not your lower back)
* If you absolutely have to use a more visible location, like your forehead, make sure you get the sensitive information tattoo'd BACKWARDS, so that only you can read it (and only when you're looking in a mirror)

On a more serious note, I find it much easier to remember random alphanumeric characters "kinesthetically" (i.e. by developing muscle memory for the act of actually typing the password), as suggested by polymathwannabe. The only downside to this approach is that it's extremely difficult for me to enter such a password on a cell phone.
I endorse the serious note - I have a key layout I use for throwaway passwords based on taking an initial character from the website name, which is quick and easy to type on keyboards (but admittedly hard on iPhone). Eg I went back to confused.com (insurance comparison site) recently after a year and got in with a couple of guesses. Emphasise throwaway passwords though - I use XKCD method for anything that gives control over other stuff (Gmail especially) but it takes some cognitive load off the unimportant stuff while still protecting against password leaks.
Despite my other comment, there are cases when we simply can't choose. My university gave me an alphanumeric sequence that I am able to remember because I'm a trained typist. So I didn't memorize the letters and numbers; I memorized the finger movements.
Just write it down. Eventually, you'll memorize it. It will be faster if you challenge yourself each time: see how many characters you can type before having to look. It's important to keep in mind threat models. The biggest threat is that someone attacks one website you use and uses that password to take control of your account on another website. The password manager solves this problem. (It also gives you strong passwords, which is overkill.) People physically close to you who might steal the piece of paper with the password aren't much of a threat, and even if they were, they probably wouldn't figure out the meaning of it. But you can destroy it after memorization.
* Write the password down on paper and keep that paper somewhere safe.
* Practice typing it in. Practice writing it down. Practice singing it in your head.
* Set things up so you have to enter it periodically.
I use a passphrase, which has higher entropy than a short password and is easier to remember at the same time. Take a dictionary of 50k words and choose a sequence of 6 words at random. (Use software for this; opening a printed dictionary "at random" won't produce really random results). This provides log2(50000^6) = 94 bits of entropy. This is a similar amount to choosing 15 characters from an 80-character set (lowercase and uppercase letters, numbers, and 18 other characters) which would produce log2(80^15) = 95 bits. It's much easier to remember 6 random words than 15 random characters. You can generate some passphrases here to estimate how difficult they might be to remember. (Of course you wouldn't generate your real passphrase using an online tool :-)
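A minimal sketch of the scheme described above (the word list is a stand-in; a real one would have ~50,000 entries, and Python's `secrets` module, rather than `random`, is the right tool for security-sensitive choices):

```python
import math
import secrets

def passphrase(wordlist, n_words=6):
    """Pick n_words uniformly at random and report the entropy in bits."""
    words = [secrets.choice(wordlist) for _ in range(n_words)]
    entropy_bits = n_words * math.log2(len(wordlist))
    return " ".join(words), entropy_bits

# Tiny illustrative list; with a 50,000-word dictionary the same call
# would give 6 * log2(50000) ~= 94 bits.
demo_words = ["correct", "horse", "battery", "staple", "orbit", "velvet"]
phrase, bits = passphrase(demo_words)
print(phrase, round(bits, 1))
```

With the six-word demo list this yields only about 15.5 bits, which is the point: the entropy comes entirely from the size of the dictionary and the number of words drawn, not from the words themselves.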
If you often need to generate XKCD-compliant passwords on Linux machines, you may find this command line handy: egrep -x '[a-z]{3,9}' /usr/share/dict/words | shuf -n4 | xargs (It will work on a Mac if you install coreutils and change shuf to gshuf.)
On my Ubuntu install, /usr/share/dict/words is symlinked to /usr/share/dict/american-english, which has about 100k words. log2(100000^6)=100, which surprised me by being not that much bigger than log2(50000^6) = 94. Bad math intuition on my part.
How is a computer more random than flipping pages?
The word "set" in my dictionary has a definition spanning an entire page. Most other pages have between 20 and 50 words on them. This implies that the word "set" will be chosen about 1 in 1000 times, giving only 10 bits of entropy, whereas choosing completely at random, each word would have about a 1 in 50,000 chance of being chosen, giving about 15 bits of entropy. In practice, picking 5 random pages of a 1000 page dictionary, then picking your favorite word on each page would still give 50 bits of entropy, which beats the correcthorsebatterystaple standard, and probably a more memorable passphrase.
Take a 100-page book, get 100 random numbers from it, then do an analysis of the numbers. First of all, how do you decide right page/left? Likely by generating randomness in your head, which may not be so good. The first few pages and the last few are unlikely. Probably other things too. For one, words with longer definitions are more likely, depending on the exact method. I don't think using a computer is a very secure solution once you're going to that level anyway. Try using dice.
It's well known in the security industry / compsci that humans are very bad at generating, and recognizing, random numbers. I can't recall if there's a name for this bias; there's the clustering illusion but that's about recognizing random numbers, not trying to generate them. This paper tries to analyze why this is hard for humans to do.
You'll get used to it. All my passwords are long (~20) strings of random alphanumeric characters. Initially, when I started using this system, I had doubts that I would be able to memorize them all, but after a while it got easy. If you're really in need of some outside help, write it somewhere in rot13; since it's random, nobody can guess through the pattern of the letters that the rot13 version is not the actual password; a random string of letters and its rot13'd version are much the same for all practical purposes. And if you want some extra security and you're not worried about getting tangled in all your weird personalized decoding rules, write it backwards; write every number as ten minus that number; make all capitals lowercase letters and vice versa; add known short strings of characters at the beginning and/or at the end, etc. But I really don't recommend going down that route.
Keep it written down and read it consciously every time you need to enter it(ideally, often). Whenever you have it memorized, destroy the physical copy.
For my non-phrase passwords, I make myself enter the password at least once per day, and I recite it in my head frequently.
Alphanumeric passwords are overrated.

That comic makes a good argument against the kinds of alphanumeric passwords most people naively come up with to match password policies, but the randomized ones that a password manager will give you are far stronger. Assuming 6 bits of entropy per character (equivalent to a choice of 64 characters) and a good source of randomness, a random 8-character password is stronger than "correct horse battery staple" (48 bits of entropy vs. ~44), and 10 characters (for 60 bits of entropy) blows it out of the water.

Of course, since you typically won't be able to remember eight base64 characters for each of the fifty sites you need a password for, that makes the security of the entire system depend on that of the password manager or wherever else you're storing your passwords. A mix of systems might work best in practice, and I'd recommend using two-factor authentication where it's offered on anything you really need secured.
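The comparison above, spelled out (a sketch; the ~44-bit figure assumes the xkcd-style 2048-word list at 11 bits per word):

```python
import math

def entropy_bits(alphabet_size, length):
    """Entropy of a uniformly random string: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

random_8  = entropy_bits(64, 8)     # 8 random base64-style characters
random_10 = entropy_bits(64, 10)
xkcd      = entropy_bits(2048, 4)   # "correct horse battery staple"

print(random_8, random_10, xkcd)  # -> 48.0 60.0 44.0
```

Note this arithmetic only holds for characters chosen truly at random, as by a password manager; human-chosen passwords have far less entropy per character.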

That comic got me to change all my passwords. I now have a stack of virtual movieposters in my head using that principle. Nothing written down anywhere, not forgotten one yet, far more secure. Works fantastically well for any password function where you are permitted long passwords. I start swearing at places that impose limits, now.
What really annoys me is places that won't let you use those passwords because they're too long and they don't have any numbers in them.
The problem is "correct horse battery staple"-style passwords are easy to remember, but annoying to type. Memorizing a random eight-character password is hard, but typing one is easy.
[This comment is no longer endorsed by its author]Reply
Might I suggest you create a new discussion thread when you write up long posts like this one?
OK. Should I delete the old comment?