We've had these for a year; I'm sure we all know what to do by now.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Open Thread: March 2010

A fascinating article about rationality or the lack thereof as it applied to curing scurvy, and how hard trying to be less wrong can be: http://idlewords.com/2010/03/scott_and_scurvy.htm

4Morendil
Wonderful article, thanks. I'm fond of reminders of this type that scientific advances are very seldom as discrete, as irreversible, as incontrovertible as the myths of science often make them out to be. When you look at the detailed stories of scientific progress you see false starts, blind alleys, half-baked theories that happen by luck to predict phenomena and mostly sound ones that unfortunately fail on key bits of evidence, and a lot of hard work going into sorting it all out (not to mention, often enough, a good dose of luck). The manglish view, if nothing else, strikes me as a good vitamin for people wanting an antidote to the scurvy of overconfidence. ETA: The article made for a great dinnertime story for my kids. Only one of the three, the oldest (13yo), was familiar with the term "scurvy" - and with the cure as well; both from One Piece. Manga 1 - school 0.
2Tyrrell_McAllister
Very interesting. And sobering.
Cyan240

Call for examples

When I posted my case study of an abuse of frequentist statistics, cupholder wrote:

Still, the main post feels to me like a sales pitch for Bayes brand chainsaws that's trying to scare me off Neyman-Pearson chainsaws by pointing out how often people using Neyman-Pearson chainsaws accidentally cut off a limb with them.

So this is a call for examples of abuse of Bayesian statistics; examples by working scientists preferred. Let’s learn how to avoid these mistakes.

4khafra
Some googling around yielded a pdf about a controversial use of Bayes in court. The controversy seems to center around using one probability distribution on both sides of the equation. Lesser complaints include mixing in a frequentist test without a good reason.
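For readers who want to see the structure being referred to, here is the generic form of Bayes' rule for a hypothesis of guilt; this is only a sketch of the usual courtroom formulation and is not taken from the linked pdf:

$$P(\text{guilty} \mid \text{evidence}) = \frac{P(\text{evidence} \mid \text{guilty})\,P(\text{guilty})}{P(\text{evidence} \mid \text{guilty})\,P(\text{guilty}) + P(\text{evidence} \mid \text{not guilty})\,P(\text{not guilty})}$$

The prior and the likelihoods are meant to be assessed from separate sources of information; if the same distribution is reused to fill in terms on both sides, the update becomes circular, which appears to be the kind of error khafra is summarizing.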
0Cyan
That's a great find!

How do you introduce your friends to LessWrong?

Sometimes I'll start a new relationship or friendship, and as this person becomes close to me I'll want to talk about things like rationality and transhumanism and the Singularity. This hasn't ever gone badly, as these subjects are interesting to smart people. But I think I could introduce these ideas more effectively, with a better structure, to maximize the chance that those close to me might be as interested in these topics as I am (e.g. to the point of reading or participating in OB/LW, or donating to SIAI, or attending/founding rationalist groups). It might help to present the futurist ideas in increasing order of outrageousness as described in Yudkowsky1999's future shock levels. Has anyone else had experience with introducing new people to these strange ideas, who has any thoughts or tips on that?

Edit: for futurist topics, I've sometimes begun (in new relationships) by reading and discussing science fiction short stories, particularly those relating to alien minds or the Singularity.

For rationalist topics, I have no real plan. One girl really appreciated a discussion of the effect of social status on the persuasiveness of argume... (read more)

7RobinZ
I think of LessWrong from a really, really pragmatic viewpoint: it's like software patches for your brain to eliminate costly bugs. There was a really good illustration in the Allais mini-sequence - that is a literal example of people throwing away their money because they refused to consider how their brain might let them down. Edit: Related to The Lens That Sees Its Flaws.
4XiXiDu
It shows you that there is really more to most things than meets the eye, but more often than not much less than you think. It shows you that even smart people can be completely wrong but that most people are not even wrong. It tells you to be careful in what you emit and to be skeptical of what you receive. It doesn't tell you what is right, it teaches you how to think and to become less wrong. And to do so is in your own self interest because it helps you to attain your goals, it helps you to achieve what you want. Thus what you want is to read and participate on LessWrong.
4[anonymous]
I am probably a miserable talker, as usually after I introduce rationality/singularity-related topics, people tend to strengthen their former opinions even further. I could well use a "good argumentation for rationality, for dummies" article. No, reading through all the sequences does not help. (Understanding would?) Often enough it seems that I achieve better results by trying not to touch any "religious" topic too early; religious meaning that the argument for not having that opinion requires an understanding of reductionism and epistemology worthy of a third-year philosophy student (btw, acceptance is also required). This may seem to take enormous amounts of time to get people onto this train, but, well, the average IQ is 100, and getting rationality seems to be even less widespread than intelligence, so it may actually be more useful to hint in the right direction on specific topics than to catch it all. And how does this actually help your own intentions? It seems non-trivial to me to find a utility function under which taking the time to improve the rationality quotient of a few philosophy/arts students or electricians or whatever is actually a net win in terms of what one can improve. Or is everybody here just hanging out with (gonna-be) scientists?
4michaelkeenan
I'm not sure this is what you're doing, but I'm careful not to bring up LessWrong in an actual argument. I don't want arguments for rationality to be enemy soldiers. Instead, I bring rationalist topics up as an interesting thing I read recently, or as an influence on why I did a certain thing a certain way, or hold a particular view (in a non-argument context). That can lead to a full-fledged pitch for LessWrong, and it's there that I falter; I'm not sure I'm pitching with optimal effectiveness. I don't have a good grasp on what topics are most interesting/accessible to normal (albeit smart) people. If rationalists were so common that I could just filter people I get close to by whether they're rationalists, I probably would. But I live in Taiwan, and I'm probably the only LessWrong reader in the country. If I want to talk to someone in person about rationality, I have to convert someone first. I like to talk about these topics, since they're frequently on my mind, and because certain conclusions and approaches are huge wins (especially cryonics and reductionism).
2nazgulnarsil
The main hurdle in my experience is getting people over biases that cause them to think that the future is going to look mostly like the present. If you can get people over this then they do a lot of the remaining work for you.

The following stuff isn't new, but I still find it fascinating:

Reverse-engineering the Seagull

The Mouse and the Rectangle

3AdeleneDawner
Neat!
2nazgulnarsil
What's depressing is the vast disconnect between how well marketers understand superstimulus and how poorly everyone else does. Also this: http://www.theonion.com/content/video/new_live_poll_allows_pundits_to

TL;DR: Help me go less crazy and I'll give you $100 after six months.

I'm a long-time lurker and signed up to ask this. I have a whole lot of mental issues, the worst being lack of mental energy (similar to laziness, procrastination, etc., but turned up to eleven and almost not influenced by will). Because of it, I can't pick myself up and do things I need to (like calling a shrink); I'm not sure why I can do certain things and not others. If this goes on, I won't be able to go out and buy food, let alone get a job. Or sign up for cryonics or donate to SIAI.

I've tried every trick I could bootstrap; the only one that helped was "count backwards then start", for things I can do but have trouble getting started on. I offer $100 to anyone who suggests a trick that significantly improves my life for at least six months. By "significant improvement" I mean being able to do things like going to the bank (if I can't, I won't be able to give you the money anyway), and having ways to keep myself stable or better (most likely, by seeing a therapist).

One-time tricks to do one important thing are also welcome, but I'd offer less.

6CronoDAS
After reading this thread, I can only offer one piece of advice: You need to see a medical doctor, and fast. Your problems are clearly more serious than anything we can deal with here. If you have to, call 911 and have them carry you off in an ambulance.
6pjeby
This is just a guess, and I'm not interested in your money, but I think that you probably have a health problem. I'd suggest you check out the book "The Mood Cure" by Julia Ross, which has some very good information on supplementation. Offhand, you sound like the author's profile for low-in-catecholamines, and might benefit very quickly from fairly low doses of certain amino acids such as L-tyrosine. I strongly recommend reading the book, though, as there are quite a few caveats regarding self-supplementation like this. Using too high a dose can be as problematic as too low, and times of day are important too. Consistent management is important, too. When you're low on something, taking what you need can make you feel euphoric, but when you have the right dose, you won't notice anything by taking some. (Instead, you'll notice if you go off it for a few days, and find mood/energy going back to pre-supplementation levels.) Anyway... don't know if it'll work for you, but I do suggest you try it. (And the same recommendation goes for anyone else who's experiencing a chronic mood or energy issue that's not specific to a particular task/subject/environment.)
3MixedNuts
Buying a (specific) book isn't possible right now, but may help later; thanks. I took the questionnaire on her website and apparently everything is wrong with me, which makes me doubt her tests' discriminating power.
5Cyan
It's a marketing tool, not a test.
2pjeby
FWIW, I don't have "everything" wrong with me; I had only two, and my wife scores on two, with only one the same between the two of us.
6anonymous259
I'll come out of the shadows (well not really, I'm too ashamed to post this under my normal LW username) and announce that I am, or anyway have been, in more or less the same situation as MixedNuts. Maybe not as severe (there are some important things I can do, at the moment, and I have in the past been much worse than I am now -- I would actually appear externally to be keeping up with my life at this exact moment, though that may come crashing down before too long), but generally speaking almost everything MixedNuts says rings true to me. I don't live with anyone or have any nearby family, so that adds some extra difficulty. Right now, as I said, this is actually a relatively good moment, I've got some interesting projects to work on that are currently helping me get out of bed. But I know myself too well to assume that this will last. Plus, I'm way behind on all kinds of other things I'm supposed to be doing (or already have done). I'm not offering any money, but I'd be interested to see if anyone is interested in conversing with me about this (whether here or by PM). Otherwise, my reason for posting this comment was to add some evidence that this may be a common problem (even afflicting people you wouldn't necessarily guess suffered from it).

I've got a weaker form of this, but I manage. The number one thing that seems to work is a tight feedback loop (as in daily) between action and reward, preferably reward by other people. That's how I was able to do OBLW. Right now I'm trying to get up to a reasonable speed on the book, and seem to be slowly ramping up.

6AdeleneDawner
I have limited mental resources myself, and am sometimes busy, but I'm generally willing to (and find it enjoyable to) talk to people about this kind of thing via IM. I'm fairly easily findable on Skype (put a dot between my first and last names; text only, please), AIM (same name as here), GChat (same name at gmail dot com), and MSN (same name at hotmail dot com). The google email is the one I pay attention to, but I'm not so great at responding to email unless it has obvious questions in it for me to answer. It's also noteworthy that my sleep schedule is quite random - it is worth checking to see if I'm awake at 5am if you want to, but also don't assume that just because it's daytime I'll be awake.
4ata
Hope this doesn't turn into a free-therapy bandwagon, but I have a lot of the same issues as MixedNuts and anonymous259, so if anyone has any tips or other insights they'd like to share with me, that would be delightful. My main problem seems to be that, if I don't find something thrilling or fascinating, and it requires much mental or physical effort, I don't do it, even if I know I need to do it, even if I really want to do it. Immediate rewards and punishments help very little (sometimes they actually make things worse, if the task requires a lot of thought or creativity). There are sometimes exceptions when the boring+mentally/physically-demanding task is to help someone, but that's only when the person is actually relying on me for something, not just imposing an artificial expectation, and it usually only works if it's someone I know and care about (except myself). A related problem is that I rarely find anything thrilling or fascinating (enough to make me actually do it, at least) for very long. In my room I have stacks of books that I've only read a few chapters into; on my computer I have probably hundreds of unfinished (or barely started) programs and essays and designs, and countless others that only exist in my mind; on my academic transcripts are many 'W's and 'F's, not because the classes were difficult (a more self-controlled me would have breezed through them), but because I stopped being interested halfway through. So even when something starts out intrinsically motivating for me, the momentum usually doesn't last. Like anon259, I can't offer any money — this sort of problem really gets in the way of wanting/finding/keeping a job — but drop me a PM if gratitude motivates you. :)
3RobinZ
To some extent, the purpose of LessWrong is to fix problems with ourselves, and the distinction between errors in reasoning and errors in action is subtle enough that I would hesitate to declare this on- or off-topic. It should be mentioned, however, that the population of LessWrongers-asking-for-advice is unlikely to be representative of the population of LessWrongers, and even less so of the population of agents-LessWrongers-care-about. This is likely to make generalizations drawn from observations here narrower in scope than we might like.
2Alicorn
Same deal as the other two - PM me IM contact info, we can chat :)
2Alicorn
PM me with your IM contact info and I'll try to help you too. Look, I'll do it for free too!
5Jordan
For what it's worth: A few years back I was suffering from some pretty severe health problems. The major manifestations were cognitive and mood related. Often when I was saying a sentence I would become overwhelmed halfway through and would have to consciously force myself to finish what I was saying. Long story short, I started treating my diet like a controlled experiment and, after a few years of trial and error, have come out feeling better than I can ever remember. If you're going to try self experimentation the three things I recommend most highly to ease the analysis process are:
* Don't eat things with ingredients in them, instead eat ingredients
* Limit each meal to less than 5 different ingredients
* Try and have the same handful of ingredients for every meal for at least a week at a time.
1wedrifid
I'm curious. What foods (if you don't mind me asking) did you find had such a powerful effect?
2Jordan
I expanded upon it here. What has helped me the most, by far, is cutting out soy, dairy, and all processed foods (there are some processed foods I feel fine eating, but the analysis to figure out which ones proved too costly for the small benefit of being able to occasionally eat unhealthy foods).
5hugh
Also, don't offer money. External motivators are disincentives. By offering $100, you are attaching a specific worth to the request, and undermining our own intrinsic motivations to help. Since allowing a reward to disincentivize a behavior is irrational, I'm curious how much effect it has on the LessWrong crowd; regardless, I would be surprised if anyone here tried to collect, so I don't see the point.
2Alicorn
My understanding is that the mechanism by which this works lets you sidestep it pretty neatly by also doing basically similar things for free. That way you can credibly tell yourself that you would do it for free, and being paid is unrelated.
2hugh
To the contrary. If you pay volunteers, they stop enjoying their work. Other similar studies have been done that show that paying people who already enjoy something will sometimes make them stop the activity altogether, or to at least stop doing it without an external incentive. Edit: AdeleneDawner and thomblake agree with the parent. This may be a counterargument, or just an answer to my earlier question, namely "Are LessWrongers better able to control this irrational impulse?"
1Liron
So can a person ever love their day job? It seems that moneymaking/entrepreneurship should be the only reflectively stable passion.
1hugh
Obviously, many people do love their day job. However, your question is apt, and I have no answer to it - even with regard to myself. I have often struggled with doing the exact same things at work and for myself, and enjoying one but not the other. I think in my case, it is more an issue of pressure and expectations. However, when trying to answer the question of what I should do with my life, it makes things difficult!
1Alicorn
I didn't download the .pdf, but it looks like this was probably conducted by paying volunteers for all of their volunteer work. If someone got paid for half of their hours volunteering, or had two positions doing very similar work and then one of them started paying, I'd expect this effect to diminish.
3hugh
The study concerns how many hours per week were spent volunteering; some was paid, some was not, though presumably a single organization would either pay or not pay volunteers, rather than both. Paid volunteers worked less per week overall. The study I referenced was not the one I intended to reference, but I have not found the one I most specifically remember. Citing studies is one of the things I most desperately want an eidetic memory for.
0AdeleneDawner
On reflection, it seems to me to be the latter - my cognitive model of money is unusual in general, but this particular reaction seems to be a result of an intentional tweak that I made to reduce my chance of being bribe-able. (Not that I've had a problem with being bribed, but that broad kind of situation registers as 'having my values co-opted', which I'm not at all willing to take risks with.)
1thomblake
That seems to work. If I were teaching part-time simply because I needed the money, I wouldn't do it. But I decided that I'd teach this class for free, so I also have no problem doing it for very little money.
1AdeleneDawner
Agreed - I do basically similar things for free, and am reasonably confident that my reaction would be "*shrug* ok" if I were to work with MixedNuts and xe wanted to pay me. (I do intend to offer help here; I'm still trying to determine what the most useful offer would be.)
5hugh
MixedNuts, I'm in a similar position, though perhaps less severely, and more intermittently. I've been diagnosed with bipolar, though I've had difficulty taking my meds. At this point in my life, I'm being supported almost entirely by a network of family, friends, and associates that is working hard to help me be a real person and getting very little in return.

I have one book that has helped me tremendously, "The Depression Cure", by Dr. Ilardi. He claims that depression-spectrum disorders are primarily caused by lifestyle, and that almost everyone can benefit from simple changes. As with any book - especially a self-help book - it ought to be read skeptically, and it doesn't introduce any ideas that can't be found in modern psychological research. Rather, it aggregates what in Ilardi's opinion are the most important: exercise works more effectively than SSRIs, etc. If you really want a copy, and you really can't get one yourself, I will send you one if you can send me your address. It helped me that much.

Which is not to say that I am problem free. Still, a 40% reduction in problem behavior, after 6 months, with increasing rather than decreasing results, is a huge deal for me.

Rather, I want to give you your "one trick". It is the easiest rather than the most effective; but it has an immediate effect, which helped me implement the others. Morning sunlight. I don't know where you live; I live in a place where I can comfortably sit outside in the morning even this time of year. Get up as soon as you can after waking, and wake as early in the day as you would ideally like to. Walk around, sit, or lie down in the brightest area outside for half an hour. You can go read studies on why this works, or that debate its efficacy, but for me it helps.

I realize that your post didn't say anything about depression; just lack of willpower. For me, they were tightly intertwined, and they might not be for you. Please try it anyway.
4MixedNuts
Thanks. I'll try the morning light thing; from experience it seems to help somewhat, but I can't keep it going for long. If nothing else works, I'll ask you for the book. I'm skeptical since they tend to recommend unbootstrappable things such as exercise, but it could help.
4hugh
There is one boot process that works well, which is to contract an overseer. For me, it was my father. I felt embarrassed to be a grown adult asking for his father's oversight, but it helped when I was at my worst. Now, I have him, my roommate, two ex-girlfriends, and my advisor who are all concerned about me and check up with me on a regular basis. I can be honest with them, and if I've stopped taking care of myself, they'll call or even come over to drag me out of bed, feed me, and/or take me for a run. I have periodically been an immense burden on the people who love me. However, I eventually came to the realization that being miserable, useless, and isolated was harder and more unpleasant for them than being let in on what was wrong with me and being asked to help. I've been a net negative to this world, but for some reason people still care for me, and as long as they do, my best course of action seems to be to let them try to help me. I suspect you have a set of people who would likewise prefer to help you than to watch you suffer. Feeling less helpless was nearly as good for them as for me. I have a debt to them that I am continuing to increase, because I'm still not healthy or self-sufficient. I don't know if I can ever repay it, but
1MixedNuts
Yes, I've considered that. There are people who can and do help, but not to the extent I'd need. I believe they help me as much as they can while still having a life that isn't me. I shouldn't ask for more, should I? If you have tips for getting more efficient help out of them, suggestions of people who'd help though I don't expect them to, or ways to get help from other people (professional caretakers?), by all means please shoot.
4hugh
You indicated that you had trouble maintaining the behavior of getting daily morning light. Ask someone who 1) likes talking to you, 2) is generally up at that hour, and 3) is free to talk on the phone, to call you most mornings. They can set an alarm on their phone and have a 2 minute chat with you each day. In my experience if I can pick up the phone (which admittedly can be difficult), the conversation is enough of a distraction and a motivation to get outside, and then inertia is enough to keep me out there. The reason I chose my father is that he is an early riser, self-employed, and he would like to talk to me more than he gets to. You might not have someone like that in your life, but if you do, it is minimally intrusive to them, and may be a big help to you.
3MixedNuts
This sounds like a great idea. I have a strong impulse to answer phones, so if I put the phone far enough from my bed that I have to get up to answer it, I'd get past the biggest obstacle. There are two minor problems: none of the people I know have free time early in the morning, but two minutes is manageable; and when outside, I'm not sure what to do, so there's a risk I'd get anxious and default to going home. I'll try it, thanks.
1jimmy
If you're going to go to the trouble of talking to someone every morning, you might as well see their face: http://www.blog.sethroberts.net/2009/10/15/more-about-faces-and-mood-2/ Seth found that his mood the next day was significantly improved if he saw enough faces the previous morning. There was a LessWronger that posted somewhere that this trick helped him a lot, but I can't remember who or where right now.
3MixedNuts
I see quite a lot of faces in the morning already. Maybe not early enough? Though I'm pretty skeptical; it looks like it'd work best for extroverted neurotypicals, and I'm neither. I added it to the list of tricks, but I'll try others first.
4Alicorn
I'm willing to try to help you but I think I'd be substantially more effective in real time. If you would like to IM, send me your contact info in a private message.
3Kevin
Do you take fish oil supplements or equivalent? Can't hurt to try; fish oil is recommended for ADHD and very well may repair some of the brain damage that causes mental illness. http://news.ycombinator.com/item?id=1093866
0komponisto
Use with caution, however.
2wedrifid
I don't understand the link. It doesn't mention fish oil but does suggest that she changed her medication (for depression and anorexia) and then experienced suicidal ideation, which she later acted upon. Medications causing suicidal ideation is not unheard of, but I haven't heard of Omega-3 having any such effect. Some googling gives me more information. It seems that her psychiatrist was transitioning her from one antidepressant to another, and adding fish oil supplements. There are also suggestions that her depression was bipolar. Going off an antidepressant is known to provoke manic episodes in bipolar patients and even those vulnerable to bipolar who had never had an episode. Going on to an antidepressant (and in particular SSRIs, for both 'on' and 'off') can also provoke mania. A manic episode while suffering withdrawal symptoms and the symptoms of a preexisting anxiety-based disorder is a recipe for suicide. As for Omega-3... the prior for her being responsible is low and she just happened to be on the scene when people were looking for something to blame!
0komponisto
Ah, sorry, I should have checked. (I guess it seemed an important enough detail that I just assumed it would be mentioned.) Here (18:20 in the video) is an explicit mention of the fish oil, by her mother; apparently she was taking 12 tablets daily. The way I had interpreted it, which prompted my caution above, was as a case of replacing antidepressants with fish oil, which seems unwise. Looking at it again now reveals there was in fact a plan to continue with antidepressants. It's unclear, however, how far along she was with this plan. In any case, you're right that fish oil may not necessarily have been to blame as the trigger for suicide; but at the very least, it certainly didn't work here, and to the extent that it may have replaced the regular antidepressant treatment...that would seem a rather dubious decision.
3Psy-Kosh
I have had similar problems, and sometimes still struggle with them, but there is something that has sometimes helped me: If there's something you need to do, try to do something with it, however little, as soon after you get up as possible. The example I'm going to use is studying, but you can generalize from it. Pretty much as soon as you get up, BEFORE checking email or anything like that, study (or whatever it is you need to do) a bit. And keep doing it until you feel your mental energy "running out"... but then, any time later in the day that you feel a smidgen of motivation, don't let go of it: run immediately to continue doing it. But starting the day by doing some, however little, seemed to help. I think with me the psychology was sort of "this is the sort of day when I'm working on this", so once I start on it, it's as if I'm "allowed" to periodically keep doing stuff with it during the day. Anyways, as I said, this has sometimes helped me, so...
0MixedNuts
Hmm, this may be why there's such a gap between good and bad days. It only applies to things you can do little by little and whenever you want, which is pretty limited but still useful. Thanks.
3wedrifid
Order modafinil online. Take it, using 'count backwards then swallow the pill' if necessary. Then, use the temporary boost in mental energy to call a shrink. I have found this useful at times.
2knb
Modafinil is a prescription drug, so he would have to see a doctor first, right?
9wedrifid
Yes, full compliance with laws and schedules, even ones that are trivial to ignore, is something I publicly advocate.
2knb
Ok, I didn't know that scoring illegal prescription drugs online was so easy. Isn't it risky? I know people have been busted for this in the USA, though it may be easier in France.
8wedrifid
I will not go into detail on what I understand to be the pragmatic considerations here, since the LessWrong morality encourages a more conservative approach to choosing what to do. The life-extensionists over at imminst.org tend to be experienced in acquiring whatever they happen to need to meet their health and cognitive enhancement goals. They tend to give fairly unbiased reports on the best way to go about getting what you need, accounting for legal risks, product quality risks, price and convenience. I do note that when I want something that is restricted I usually just go tell a doctor that "I have run out" and get them to print me 'another' prescription.
0[anonymous]
I'm curious why you say this. I don't get the impression that more than a tiny number of people here would have moral or even ethical qualms about ordering drugs online, though I would non-confidently expect us to overestimate the risk on average.
4Kevin
In the USA it's no problem to order unscheduled prescription drugs over the internet. Schedule IV drugs can be imported, but customs occasionally seizes them with no penalty for the importer. No company that takes credit cards will ship Schedule II or Schedule III drugs to the USA; at least not one that will be in business for more than a month or two. I believe it's all easier in Europe but I don't know for sure. PM for more info.
2sketerpot
And for completeness, I should note that Modafinil is a Schedule IV drug in the US.
3gwern
Also, downloading music & movies is usually a copyright violation, frequently both civil & criminal.
1MixedNuts
Thanks, but it gets worse. I can't order anything online, because I need to see my bank about checks or debit cards first. I can imagine asking a friend to do it for me, though it's terrifying; I could probably do it on a good day. Also, I doubt the thing modafinil boosts is the same thing I lack, but it could help, if only through placebo effect.
2wedrifid
Terrifying? That's troubling. A shrink can definitely help you! It may boost everything just enough to get you over the line. Good luck getting something done. I hope something works for you. Do whatever it takes.
0HumanFlesh
Adrafinil is similar to modafinil, only it's much cheaper because its patent has expired.
3MrHen
What do you do when you aren't doing anything? EDIT: More questions as you answer these questions. Too many questions at once is too much effort. I am taking you dead seriously so please don't be offended if I severely underestimate your ability.
3MixedNuts
I keep doing something that doesn't require much effort, out of inertia; typically, reading, browsing the web, listening to the radio, washing a dish. Or I just sit or lie there letting my mind wander and periodically trying to get myself to start doing something. If I'm trying to do something that requires thinking (typically homework) when my brain stops working, I keep doing it but I can't make much progress.
3MrHen
Possible solutions:
* Increase the amount of effort it takes to do the low-effort things you are trying to avoid. For instance, it isn't terribly hard to set your internet on a timer so it automatically shuts off from 1 - 3pm. While it isn't terribly hard to turn it back on, if you can scrounge up the effort to turn it back on you may be able to put that effort into something else.
* Decrease the amount of effort it takes to do the high-effort things you are trying to accomplish. Paying bills, for instance, can be done online and streamlined. Family and friends can help tremendously in this area.
* Increase the amount of effort it takes to avoid doing the things you are trying to accomplish. If you want to make it to an important meeting, try to get a friend to pick you up and drive you all the way over there.
These are somewhat complicated and broad categories and I don't know how much they would help.
3MixedNuts
I've tried all that (they're on LW already).
* That wouldn't work. I do these things by default, because I can't do the things I want. I don't even have a problem with standard akrasia anymore, because I immediately act on any impulse I have to do something, given how rare they are. Also, I can expend willpower to stop doing something, whereas "I need to do this but I can't" seems impervious to it, at least in the amounts I have.
* There are plenty of things to be done here, but they're too hard to bootstrap. The easy ones helped somewhat.
* That helped me most. In the grey area between things I can do and things I can't (currently, cleaning, homework, most phone calls), pressure helps. But no amount of ass-kicking has made me do the things I've been trying to do for a while.
2AdeleneDawner
What classes of things are on the 'can't do' list?
3MixedNuts
The worst are semi-routine activities; the kind of things you need to do sometimes but not frequently enough to mesh with the daily routine. Going to the bank, making most appointments, looking for an apartment, buying clothes (don't ask me why food is okay but clothes aren't). That list is expanding. Other factors that hurt are:
* needing to do it in one sitting, with no way of doing a small part at a time
* needing to go out
* social situations
* new situations
* being watched while I do it (I can't cook because I share the kitchen with other students, but I could if I didn't)
* having to do it quickly once I start
Most of these cause me fear, which makes it harder to do things, rather than making it harder directly.

This matches my experience very closely. One observation I'd like to add is that one of my strongest triggers for procrastination spirals is having a task repeatedly brought to my attention in a context where it's impossible to follow through on it - ie, reminders to do things from well-intentioned friends, delivered at inappropriate times. For example, if someone reminds me to get some car maintenance done, the fact that I obviously can't go do it right then means it gets mentally tagged as a wrong course of action, and then later when I really ought to do it the tag is still there.

3MixedNuts
Definitely. So that's why I can't do the stuff I should have done a while ago! Thanks for the insight. What works for you?
6jimrandomh
I ended up just explaining the issue to the person who was generating most of the reminders. It wasn't an easy conversation to have (it can sound like being ungrateful and passing blame) but it was definitely necessary. Sending a link to this thread and then bringing it up later seems like it'd mitigate that problem, so that's probably the way to go. Note that it's very important to draw a distinction between things you haven't done because you've forgotten, for which reminders can actually be helpful, and things you aren't doing because of lack of motivation, for which reminders are harmful. If you're reading this because a chronic procrastinator sent you a link, then please take this one piece of advice: The very worst thing you can do is remind them every time you speak. If you do that, you will not only reduce the chance that they'll actually do it, you'll also poison your relationship with them by getting yourself mentally classified as a nag.
3MixedNuts
I can't do that, but thanks anyway. A good deal of the reminders happen in a (semi-)professional context where the top priority is pretending to be normal (yes, my priorities are screwed up). Most others come from a person who doesn't react to "this thing you do is causing me physical pain", so forget it.
3Alicorn
Why do you interact with this person?
3MixedNuts
They're family. I planned to be as independent from the family ASAP, but couldn't due to my worsening problems.
2jimrandomh
In that case, you'll have to mindhack yourself to change the way you react to reminders like this. This isn't necessarily easy, but if you pull it off it's a one-time act with results that stick with you.
3AdeleneDawner
That's a good change to make, and there's also a complementary third option: a specific variant of 'making a mental note' that seems to work very well, at least for me.
1) Determine a point in your regular or planned schedule where you could divert from your regular schedule to do the thing that you need to do. This doesn't have to be the optimal point of departure, just a workable one; you should naturally learn how to spot better points of departure as time goes on, but it's more important to have a point of departure than it is to have a perfect one. It is, however, important that the point of departure is a task during which you will be thinking, rather than being on autopilot. I like to use doorway passages as my points of departure (for example, 'when I get home from running the errands I'm going to do tomorrow, and go to open my front door') because they tend to be natural transition times, but there are many other options. (Other favorites are 'next time I see a certain person' and 'when I finish (or start) a certain task'.)
2) Envision what you would perceive as you entered that situation, using whatever visualization method most closely matches your normal way of paying attention to the world. I tend to use my senses of sight and touch most, so I might visualize what I'd see as I walked up to my front door, or the feel of holding my keys as I got ready to open it.
3) Envision yourself suddenly and strongly remembering your task in the situation you envisioned in step two. It may also work, if you aren't able to envision your thoughts like that, to visualize yourself taking the first few task-specific steps - for example, if the task is to write an email, you'd want to visualize not just turning on your computer or starting up your email program, but entering the recipient's name into the to: field and writing the greeting.
If this works for you like it works for me, it should cause the appropriate thought (or task, if you used that variant of step 3)
0MixedNuts
I'm doing this wrong. How do you prevent tasks from nagging you at other times?
0AdeleneDawner
The technique should work even if you find yourself thinking about the task at other times; it just might not work as well, because of the effect that jimrandomh mentioned about reminders reducing your inclination to do something. A variation of the workaround I mentioned for dealing with others works to mitigate the effect of self-reminders, though - don't just tell yourself 'not right now', tell yourself 'not right now, but at [time/event]'. I can't say much about how to disable involuntary self-reminders altogether, unfortunately. I don't experience them, and if I ever did, it was long enough ago that I've forgotten both that I did and how I stopped. I have, however, read in several different places that using a reliable reminder system (whether one like I'm suggesting, or something more formal like a written or typed list, or whatever) tends to make them eventually stop happening without any particular effort, as the relevant brain-bits learn that the reliable system is in fact reliable, which seems quite plausible to me.
3AdeleneDawner
That sounds like a cognitive-load issue at least as much as it sounds like inertia, to me. (Except the being-watched part, that is. I have that quirk too, and I still haven't figured out what that's about.) There are things that can be done about that, but most of them are minor tweaks that would need to be personalized for you. I suspect I might have some useful things to say about the fear, too. I'll PM you my contact info.
3MixedNuts
What do you mean by "cognitive load"? I read the Wikipedia article on cognitive load theory, but I don't see the connection. For me, the being-watched part is about embarrassment. I often need to stop and examine a situation and explicitly model it, when most people would just go ahead naturally. Awkward looks cause anxiety.
5AdeleneDawner
The concept I'm talking about is broader than the concept that Wikipedia talks about; it's the general idea that brains only have so many resources to go around, that some brains have fewer resources than others or find certain tasks more costly than others, and that it takes a while for those resources to regenerate. Something like this idea has come up a few times here, mostly regarding willpower specifically (and we've found studies supporting it in that case), but my experience is that it's much more generally applicable than that. And, if your brain regenerates that resource particularly slowly, and if you haven't been thinking in terms of conserving that limited resource (or set of resources, depending on how exactly you're modeling it), it's fairly easy to set yourself up with a lifestyle that uses the resource faster than it can regenerate, which has pretty much the effect you described. (I've experienced it, too, and it's not an uncommon situation to hear about in the autistic community.)
5MixedNuts
Yes! It does feel like running out of a scarce resource most people have in heaps. I don't know exactly how that resource is generated and how to tell how much I have left before I run out, though.
5AdeleneDawner
Fortunately, the latter at least seems to be a learnable skill for most people. :)
1Unnamed
There is evidence linking people's limited resources for thought and willpower to their blood glucose, which is another good reason to see a doctor to find out if there's something physiological underlying some of your problems.
1NancyLebovitz
Does thinking about having less of that resource than other people tend to consume it?
0MixedNuts
That's a good question. There is a correlation between running out of it and thinking about it, but it's pretty obvious that most of the causation happens the other way around. Talking about it here doesn't seem to hurt, so probably not.
2Kutta
I have a couple of questions, MixedNuts:
* Have you ever been to a therapist?
* What kind of history do you have regarding any kinds of medical conditions?
* What kind of diagnostic information do you currently have? (blood profile, expert assessment, hair analysis, etc.)
* What kind of drugs have you been taking, if any?
* What does your diet look like?
2MixedNuts
* I have, for a few months, about a year and a half ago. It was slightly effective. I stopped when I moved and couldn't get myself to call again.
* Nothing that looks like it should matter.
* Not much. I had a routine blood test some years ago. Everything was normal, though they probably only measured a few things.
* No prescription drugs.
* When I'm on campus I eat mostly vegetables, fresh or canned, and some canned fish or meat, and generic cafeteria food (balanced diet plus a heap of French fries); nothing that requires a lot of effort. At my parents', I eat, um, traditional wholesome food. I eat a lot between meals for comfort, mostly apples. I think my diet is fine in quality but terrible in quantity; I eat way too much and skip meals at random.
4CronoDAS
Given your symptoms, the best advice I can give you is to see a medical doctor of some kind, probably a psychiatrist, and describe your problems. It has to be someone who can order medical tests and write prescriptions. You might very well have a thyroid problem - they cause all kinds of problems with energy and such - and you need someone who can diagnose them. I don't know how to get you to a doctor's office, but I guess you could ask someone else to take you?
0blogospheroid
How much fresh citrus fruit is there in your diet? One of the things that helped me with near-depression symptoms when I was in another country was consumption of fresh fruit. Apples and pears helped me, but you're already having apples. Hmm. Try some fresh orange/lemon/sweet lime/grapefruit juices. Might help.
0MixedNuts
Quite a lot, but possibly too sporadically. I'll try it, thanks.
1MrHen
Okay. Nothing I have will help you. My problems are generally OCD based procrastination loops or modifying bad habits and rituals. Solutions to these assume impulses to do things. I have nothing that would provide you with impulses to do. All of my interpretations of "I can't do X" assume what I mean when I tell myself I can't do X. Sorry. If I were actually there I could probably come up with something but I highly doubt I would be able to "see" you well enough through text to be able to find a relevant answer.
2Unnamed
The number one piece of advice that I can give is see a doctor. Not a psychologist or psychiatrist - just a medical doctor. Tell them your main symptoms (low energy, difficulty focusing, panic attacks) and have them run some tests. Those types of problems can have physical, medical causes (including conditions involving the thyroid or blood sugar - hyperthyroidism & hypoglycemia). If a medical problem is a big part of what's happening, you need to get it taken care of. If you're having trouble getting yourself to the doctor, then you need to find a way to do it. Can you ask someone for help? Would a family member help you set up a doctor's appointment and help get you there? A friend? You might even be able to find someone on Less Wrong who lives near you and could help. My second and third suggestions would be to find a friend or family member who can give you more support and help (talking about your issues, driving you to appointments, etc.) and to start seeing a therapist again (and find a good one - someone who uses cognitive-behavioral therapy).
1MixedNuts
This is technically a good idea. What counts as "my main symptoms", though? The ones that make life most difficult? The ones that occur most often? The most visible ones to others? To me?
1Unnamed
You'll want to give the doctor a sense of what's going on with you (just like you've done here), and then to help them find any medical issues that may be causing your problems. So give an overall description of the problem and how serious it is (sort of like in your initial post - your lack of energy, inability to do things, and lots of related problems) - including some examples or specifics (like these) can help make that clearer. And be sure to describe anything that seems like it could be physiological (the three that stuck out to me were lack of energy, difficulty focusing, and anxiety / panic attacks - you might be able to think of some others). The doctor will have questions which will help guide the conversation, and you can always ask whether they want more details about something. Do you think that figuring out what to say to the doctor could be a barrier for you? If so, let me know - I could say more about it.
1knb
I recommend a counseling psychologist rather than a psychiatrist. Or, if you can manage it, do both. I used to be just like this, I actually put off applying for college until I missed the deadlines for my favorite schools, just because I couldn't get myself started. Something changed for me over the last couple years, though, and I'm now really thriving. One big thing that helps in the short term is stimulants: ephedrine and caffeine are OTC in most countries. Make sure you learn how to cycle them, if you do decide to use them. Things seem to get easier over time.
1MixedNuts
Why? (The psychiatrist is the one who's a psychologist but can also give you meds, right?) Caffeine seems to work at least a little, but makes me anxious; it's almost always worth it. Thanks. Ephedrine is illegal in France. ETA: Actually, scratch that. I tried drinking coffee and soda when I wasn't unusually relaxed, and the anxiety is too extreme to make me more productive.
3Alicorn
A psychiatrist is someone who went to medical school and specialized in the brain. A psychologist is someone who has a PhD in psychology. Putting "clinical" before either means they treat patients; "experimental" means what it sounds like. There's some crosstraining, but not as much as one might imagine. ("Therapist" and "counselor" imply no specific degree.)
2knb
Some common misconceptions: Counseling Psychology is a very specific degree program within psychology. A psychologist can have a PhD, a PsyD, (doctor of psychology degree), or in some fields, even a masters. Psychiatrists also don't specialize in "the brain" (that's neurology), they specialize in treating psychiatric disorders using the medical model.
2CronoDAS
See the psychiatrist first. Your problems may have a more physiological cause, such as a problem with your thyroid, and a medical doctor is more likely to be able to diagnose them.
2knb
(Note: I'm a psychology grad student, my undergrad work was in neuroscience and psychology.) Psychiatrists (in America at least) are usually too busy to do much psychotherapy. When they do, get ready to pay big time. It just isn't worth their extremely valuable time and in any case, it isn't their specialty. You don't want to see a clinical psychologist because they treat people with diagnosable psych. disorders. You may have melancholic depression, but it sounds like you just have extreme akrasia issues. If you go to a psychiatrist first, they'll likely just try to give you worthless SSRIs.
2orthonormal
Psychologists are for that reason often cheaper. In fact, a counseling psychologist in a training clinic can be downright affordable, and most of the benefits of therapy seem to be independent of the therapist anyway. Also, it would be worth checking for data on the effectiveness of a psychiatric drug before spending on it; many may be ineffective or not worth the side effects.
4MixedNuts
Is Crazy meds as good as it looks?
2wedrifid
Absolutely. Just reading it made my day! Hilarious. (And the info isn't bad either. )
1wedrifid
And if you live in Australia it can sometimes be free!
-3[anonymous]
(Suggest seeing a psychiatrist first then a psychologist. Therapy works far better once your brain is functioning. Usually just go to a doctor and they will refer you as appropriate.)
1whpearson
Do you want a companion of some sort? If so, a mind hack that might work is imagining what a hypothetical companion might find attractive in a person. Then try and become that person. Do this by using your hypothetical companion as a filter on what you are doing. Don't beat yourself up about not doing what the hypothetical companion would find attractive, that isn't attractive! Your hypothetical companion does not have to be neurotypical but should be someone you would want to be around. We should be good at following on from these kinds of motivations as we have a long history of trying to get mates by adjusting behaviour.
1MixedNuts
I've sort of considered that, though not framed that way. It might be useful later, but not at my current level. Thanks.
1Mitchell_Porter
Maybe you need to go more crazy, not less. Accept that you are in an existential desert and your soul is dying. But there are other places over the horizon, where you may or may not be better off. So either you die where you are, or you pick a direction, crawl, and see if you end up somewhere better.
1MixedNuts
I've considered that. There are changes in circumstances that would effect positive changes in my mental state, like hopping on the first train to a faraway town or just stopping pretending I'm normal in public. I'd be much happier, until I ran out of money.
1Mitchell_Porter
Why would you run out of money if you stopped pretending you're normal?
1MixedNuts
I couldn't go to school or get a job. If I stay in school, I have a career ahead of me if I can pursue it.
3Mitchell_Porter
What is this abnormality you have which, if you displayed it, would make it impossible to go to school or get a job?
0MixedNuts
Not one big abnormality. Inability to work for long stretches of time (you can get good at faking). Trouble focusing at random-ish times (even easier to fake). Inability to do certain things out of routine (now I pretend I'll do it later). Extreme anxiety at things like paperwork. Panic attacks (I can delay them until I'm alone, but the cost is high). Sometimes after a panic attack my legs refuse to work, so I just sit there; I could crawl, but I don't in public. Stimming (I choose consciously to do it, but the effects of not doing it when it's needed are bad; I do it as discreetly as possible while still effective).
2CronoDAS
Panic attacks are a very treatable illness. See a medical doctor and tell him or her all about this.
0Kevin
Not wanting to go to school or get a job?
1MixedNuts
Nice try. I do, very much; I want a job so I can get money so I can do things (such as, you know, saving the world). I don't particularly like schooling but it helps get jobs, and has less variance than being an autodidact.
1Jack
I imagine a specific authority in my life or from my past (okay, this is usually my mother) getting really angry and yelling at me to get my ass up and get to work. If you have any memories of being yelled at by an authority figure, use those to help build the image.
1MixedNuts
I promise to give this an honest try, but I expect it to result in panic more than anything.
1h-H
Try this: http://www.antiprocrastinator.com/ Also, contact someone who is proficient in helping people - e.g., here we have Alicorn - or try some googling.
1MixedNuts
I'm desperate enough to ask on LW. Of course I've Googled everything I could think of. The link is decent, combining two good tricks and a valuable insight, but all three have been on LW before so I knew them. Pointing out Alicorn in particular may be useful, but isn't it sort of forcing her to offer help? She already did, though, which makes this point moot.
0h-H
I more or less meant: direct a question to her and see what happens, rather than impose and keep bugging her - which I had a feeling you wouldn't do in either case.
1Alicorn
I'm flattered, but while I enjoy helping people, I'm not sure how I've projected being proficient at it such that you'd notice - can you explain whence this charming compliment?
1h-H
Why, of course! I've been lurking for a few years now, so I remember when you began posting on self-help etc. Now that I think more about it, though, I might've had pjeby in mind as well; you two sort of 'merged' when I wrote that above comment, heh. But really, proficient is just a word choice. I guess it is flattery, and I did mean to signal you, but that's how I usually write. Apologies if that overburdened you in any way. ETA: oh, and I'd meant to write 'more proficient', not just 'proficient'.
0markrkrebs
I suggest you pay me $50 for each week you don't get and hold a job. Else, avoid paying me by getting one, and save yourself 6mo x 4wk/mo x $50 - $100 = $1100! Wooo! What a deal for us both, eh?
3MixedNuts
That's an amusing idea, but disincentives don't work well, and paying money is too Far a disincentive to work (now, if you followed me around and punched me, that might do the trick). This reminds me of the joke about a beggar who asks Rothschild for money. Rothschild thinks and says "A janitor is retiring next week, you can have their job and I'll double the pay.", and the beggar replies "Don't bother, I have a cousin who can do it for the original wage, just give me the difference!"

Has anyone had any success applying rationalist principles to Major Life Decisions? I am facing one of those now, and am finding it impossible to apply rationalist ideas (maybe I'm just doing something wrong).

One problem is that I just don't have enough "evidence" to make meaningful probability estimates. Another is that I'm only weakly aware of my own utility function.

Weirdly, the most convincing argument I've contemplated so far is basically a "what would X do?" style analysis, where X is a fictional character.

It feels to me that rationalist principles are most useful in avoiding failure modes. But they're much less useful in coming up with new things you should do (as opposed to specifying things you shouldn't do).

8orthonormal
I'd start by asking whether the unknowns of the problem are primarily social and psychological, or whether they include things that the human intuition doesn't handle well (like large numbers).

If it's the former, then good news! This is basically the sort of problem your frontal cortex is optimized to solve. In fact, you probably unconsciously know what the best choice is already, and you might be feeling conflicted so as to preserve your conscious image of yourself (since you'll probably have to trade off conscious values in such a choice, which we're never happy to do). In such a case, you can speed up the process substantially by finding some way of "letting the choice be made for you" and thus absolving yourself of much of the responsibility. I actually like to flip a coin when I've thought for a while and am feeling conflicted. If I like the way it lands, then I do that. If I don't like the way it lands, well, I have my answer then, and in that case I can just disobey the coin!

(I've realized that one element of the historical success of divination, astrology, and all other vague soothsaying is that the seeker can interpret a vague omen as telling them what they wanted to hear, thus giving divine sanction to it, and removing any human responsibility. By thus revealing one's wants and giving one permission to seek them, these superstitions may have actually helped people make better decisions throughout history! That doesn't mean it needs the superstitious bits in order to work, though.)

If it's the latter case, though, you probably need good specific advice from a rational friend. Actually, that practically never hurts.
7Dagon
A few principles that can help in such cases (major decision, very little direct data):

* Outside view. You're probably more similar to other people than you like to think. What has worked for them?
* Far vs Near mode: beware of generalizations when visualizing distant (more than a few weeks!) results of a choice. Consider what daily activities will be like.
* Avoiding oversimplified modeling: With the exceptions of procreation and suicide, there are almost no life decisions that are permanent and unchangeable.
* Shut up and multiply, even for yourself: Many times it turns out that minor-but-frequent issues dominate your happiness. Weight your pros/cons for future choices based on this, not just on how important something "should" be.
7Eliezer Yudkowsky
...I don't suppose you can tell us what? I expect that if you could, you would have said, but thought I'd ask. It's difficult to work with this little. I could toss around advice like "A lot of Major Life Decisions consist of deciding which of two high standards you should hold yourself to", but it's just a shot in the dark at this point.
5MrHen
I am not that far in the sequences, but these are posts I would expect to come into play during Major Life Decisions. These are ordered by my perceived relevance and accompanied by a cool quote. (The quotes are not replacements for the whole article, however. If the connection isn't obvious, feel free to skim the article again.) Hope that helps.
4Morendil
Based on those two lucid observations, I'd say you're doing well so far.

There are some principles I used to weigh major life decisions. I'm not sure they are "rationalist" principles; I don't much care. They've turned out well for me. Here's one of them: "having one option is called a trap; having two options is a dilemma; three or more is truly a choice".

Think about the terms of your decision and generate as many different options as you can. Not necessarily a list of final choices, but rather a list of candidate choices, or even of choice-components. If you could wave a magic wand and have whatever you wanted, what would be at the top of your list? (This is a mind-trick to improve awareness of your desires, or "utility function" if you want to use that term.) What options, irrespective of their downsides, give you those results?

Given a more complete list you can use the good old Benjamin Franklin method of listing pros and cons of each choice. Often this first step of option generation turns out sufficient to get you unstuck anyway.
4[anonymous]
Having two options is a dilemma, having three options is a trilemma, having four options is a tetralemma, having five options is a pentalemma... :)
3Cyan
A few more than five is an oligolemma; many more is a polylemma.
1knb
Many more is called perfect competition. :3
3RobinZ
Just remembered: I managed not to be stupid on one or two occasions by asking whether, not why.
3Jordan
I just came out of a tough Major Life Situation myself. The rationality 'tools' I used were mostly directed at forcing myself to be honest with myself, confronting the facts, not privileging certain decisions over others, recognizing when I was becoming emotional (and more importantly recognizing when my emotions were affecting my judgement), tracking my preferred choice over time and noticing correlations with my mood and pertinent events. Overall, less like decision theory and more like a science: trying to cut away confounding factors to discover my true desire. Of course, sometimes knowing your desires isn't sufficient to take action, but I find that for many personal choices it is (or at least is enough to reduce the decision theory component to something much more manageable).
2RobinZ
The dissolving the question mindset has actually served me pretty well as a TA - just bearing in mind the principle that you should determine what led to this particular confused bottom line is useful in correcting it afterwards.
0[anonymous]
Well, what are "major" life decisions? Working in the area of Friendly AGI instead of, say, just string theory? Quitting smoking? Or things like having a child or not?

As one may guess from those questions, I did not have any more success by coercing the Bayesian monster than I would have had by just doing the things which already seemed well supported by major pop-science newspaper articles. What I do know is that although it is difficult to get information on what to do next in my special situation, it seems much easier to get information on things many people already do. I just try to make an educated guess and say that nearly everybody does many things which many people do. And often enough one can find things which one does but which should not be done. It may sound silly, but I include things like not smoking, not talking to your friends when you're depressed (writing personal notes works better, as friends seem to reinforce the bad mood), and not trying to work as a researcher (y'know, 80% of people think they are above average...).

What you describe as "X, the fictional character" seems like setting up an in-brain story to think about difficult topics which require analytical thinking, helping to concentrate on one topic by actively blocking random interference of visual/auditory ideas. This is not a "convincing argument" (maybe it's just my English skills, but "convincing argument ... what would X do" just does not parse into something meaningful for me) but just a technique, similar to concentrating on breathing or muscle tonus or your thoughts or some real or imaginary candle or smell when executing the meditation of your preference.

Pigeons can solve Monty Hall (MHD)?

A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy.

Behind a paywall

[-]toto200

Behind a paywall

But freely available from one of the authors' website.

Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
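
For anyone who wants to see the switch/stay asymmetry for themselves, here is a minimal simulation sketch (mine, not from the paper; it assumes the standard three-door setup where the host always opens a non-winning, non-chosen door):

```python
import random

def play(switch: bool) -> bool:
    """One round of the Monty Hall game; returns True if the player wins."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~1/3
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~2/3
```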

How much information is preserved by plastination? Is it a reasonable alternative to cryonics?

4Jack
Afaict pretty much the same amount as cryonics. And it is cheaper and more amenable to laser scanning. This is helpful. The post has an interesting explanation of why all the attention is on cryo. Edit: Further googling suggests there might be some unsolved implementation issues.
1Paul Crowley
See the last question in this list

This was in my drafts folder, but due to the lackluster performance of my latest few posts I decided it doesn't deserve to be a top-level post. As such, I am making it a comment here. It also does not answer the question being asked, so it probably wouldn't have made the cut even if my last few posts had been voted to +20 and promoted... but whatever. :P


Perceived Change

Once, I was dealing a game of poker for some friends. After dealing some but not all of the cards I cut the deck and continued dealing. This irritated them a great deal because I altered the ord... (read more)

[-]RobinZ140

To venture a guess: their true objection was probably "you didn't follow the rules for dealing cards". And, to be fair to your friends, those rules were designed to defend honest players against card sharps, which makes violations Bayesian grounds to suspect you of cheating.

8MrHen
No, this wasn't their true objection. I have a near flawless reputation for being honest and the arguments that ensued had nothing to do with stacking the deck. If I were a dispassionate third party dealing the game they would have objected just as strongly. I initially had a second example as such: It seems as though some personal attachment is created with the specific random object. Once that object is "taken," there is an associated sense of loss.
6prase
Your reputation doesn't matter. Once the rules are changed, you are on a slippery slope of changing rules. The game slowly ceases to be poker.

When I am playing chess, I demand that white moves first. When I find myself as black, knowing that the opponent had white the last game and it is now my turn to make the first move, I would rather change places or rotate the chessboard than play the first move with black, although it would not change my chances of winning. (I don't remember the standard openings, so I wouldn't be confused by the change of colors. And even if I were, this would be the same for the opponent.)

Rules are rules in order to be respected. They are often somewhat arbitrary, but you shouldn't change any arbitrary rule during the game without prior consent of the others, even if it provably has no effect on the winning odds. I think this is a fairly useful heuristic. Usually, when a player tries to change the rules, he has some reason, and usually the reason is to increase his own chances of winning. Even if your opponent doesn't see any profit which you can get from changing the rules, he may suppose that there is one. Maybe you remember somehow that there are better or worse cards in the middle of the pack. Or you are trying to test their attention. Or you want to make more important changes of rules later, and wanted to have a precedent for doing that. These possibilities are quite realistic in gambling, and therefore it is considered bad manners to change the rules in any way during the game.
3MrHen
I don't know how to respond to this. I feel like I have addressed all of these points elsewhere in the comments. A summary:

* The poker game is an example. There are more examples involving things with less obvious rules.
* My reputation matters in the sense that they know I wasn't trying to cheat. As such, when pestered for an answer they are not secretly thinking, "Cheater." This should imply that they are avoiding the cheater-heuristic or are unaware that they are using the cheater-heuristic.
* I confronted my friends and asked for a reasonable answer. Heuristics were not offered. No one complained about broken rules or cheating. They complained that they were not going to get their card.

It seems to be a problem with ownership. If this sense of ownership is based on a heuristic meant to detect cheaters or suspicious situations... okay, I can buy that. But why would someone who knows all of the probabilities involved refuse to admit that cutting the deck doesn't matter? Pride?

One more thing of note: they argued against the abstract scenario. This scenario assumed no cheating and no funny business. They still thought it mattered. Personally, I think this is a larger issue than catching cheaters. People seemed somewhat attached to the anti-cheating heuristic. Would it be worth me typing up an addendum addressing that point in full?
7Nick_Tarleton
The System 1 suspicion-detector would be less effective if System 2 could override it, since System 2 can be manipulated. (Another possibility may be loss aversion, making any change unattractive that guarantees a different outcome without changing the expected value. (I see hugh already mentioned this.) A third, seemingly less likely, possibility is intuitive 'belief' in the agency of the cards, which is somehow being undesirably thwarted by changing the ritual.)
0MrHen
Why can I override mine? What makes me different from my friends? The answer isn't knowledge of math or probabilities.
2Nick_Tarleton
I really don't know. Unusual mental architecture, like high reflectivity or 'stronger' deliberative relative to non-deliberative motivation? Low paranoia? High trust in logical argument?
1prase
Depends, of course, on what exactly you would say and how unpleasant the writing is for you. I would say that they implement the rule-changing heuristic, which is not automatically thought of as an instance of the cheater-heuristic, even if it evolved from it. Changing the rules makes people feel unsafe; people who do it without good reason are considered dangerous, but not automatically cheaters. EDIT: And also, from your description it seems that you have deliberately broken a rule without giving any reason for that. It is suspicious.
1MrHen
This behavior is repeated in scenarios where the rules are not being changed or there aren't "rules" in the sense of a game and its rules. These examples are significantly fuzzier which is why I chose the poker example. The lottery ticket example is the first that comes to mind. Why wouldn't the complaint then take the form of, "You broke the rules! Stop it!"?
[-]prase120

Why wouldn't the complaint then take the form of, "You broke the rules! Stop it!"?

Because people aren't good at telling their actual reason for disagreement. I suspect that they are aware that the particular rule is arbitrary and doesn't influence the game, and almost everybody agrees that blindly following the rules is not a good idea. So "you broke the rules" doesn't sound like a good justification. "You have influenced the outcome", on the other hand, does sound like a good justification, even if it is irrelevant.

The lottery ticket example is a valid argument, which is easily explained by attachment to random objects and which can't be explained by the rule-changing heuristic. However, rule-fixing sentiments certainly exist and I am not sure which plays the stronger role in the poker scenario. My intuition was that the poker scenario was more akin to, say, playing tennis in non-white clothes in the old times when it was demanded, or missing the obligatory bow before the match in judo.

Now, I am not sure which of these effects is more important in the poker scenario, and moreover I don't see by which experiment we could discriminate between the explanations.

2RobinZ
This is the best synopsis of the "true rejection" article I have ever seen.
1MrHen
That works for me. I am not convinced that the rule-changing heuristic was the cause but I think you have defended your position adequately.
-1Sniffnoy
But this isn't a rule of the game - it's an implementation issue. The game is the same so long as cards are randomly selected without replacement from a deck of the appropriate sort.
3Nick_Tarleton
(The first Google hit for "texas hold'em rules" in fact mentions burning cards.) That the game has the same structure either way is recognized only at a more abstract mental level than the level that the negative reaction comes from; in most people, I suspect the abstract level isn't 'strong enough' here to override the more concrete/non-inferential/sphexish level.
1prase
The ideal decision algorithm used in the game remains the same, but people don't look at it this way. It is a rule, since it is how they have learned the game.
4RobinZ
I'm not sure our guesses (I presume you have not tested the lottery ticket swap experimentally) are actually in conflict. My thesis was not "they think you're cheating", but simply, straightforwardly, "they object to any alteration of the dealing rules", and they might do so for the wrong reason - even though, in their defense, valid reasons exist. Your thesis, being narrow, is definitely of interest, though. I'm trying to think of cases where my thesis, interpreted naturally, would imply the opposite state of objection to yours. Poor shuffling (rule-stickler objects, my-cardist doesn't) might work, but a lot of people don't attend closely to whether cards are well-shuffled, stickler or not. (Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)
5MrHen
EDIT: Wow, this turned into a ramble. I didn't have time to proof it, so I apologize if it doesn't make sense.

Okay, yeah, that makes sense. My instinct is pointing me in the other direction, namely because I have the (self-perceived) benefit of knowing which friends of mine were objecting. Of note, no one openly accused me of cheating or anything like that. If I had accidentally dropped the deck on the floor or knocked it over, the complaints would remain. The specific complaint, which I specifically asked for, is that their card was put into the middle of the deck. (By the way, I do not think that claiming arrival at a valid complaint via the wrong reason is offering much defense for my friends.)

Any pseudo-random event where (a) people can predict the undisclosed particular random object and (b) someone can voluntarily preempt that prediction and change the result tends to receive the same behavior. I have not tested it in the sense that I sought to eliminate any form of weird contamination. But I have lots of anecdotal evidence. One such, very true, story:

Granted, there are a handful of obvious holes in this particular story. The list includes:

* My grandfather could have merely used it as an excuse to jab his son-in-law in the ribs (very likely)
* My grandfather was lying (not likely)
* The bingo organizers knew that rhinos were chosen more often than turtles (not likely)
* My grandfather wasn't very good at probability (likely, considering he was playing bingo)
* Etc.

More stories like this have taught me to never muck with pseudo-random variables whose outcomes affect things people care about, even if the math behind the mucking doesn't change anything. People who had a lottery ticket and traded it for a different, equal chance will get extremely depressed because they actually "had a shot at winning." These people could completely understand the probabilities involved, but somehow this doesn't help them avoid the "what if" depression that tells them they s
3RobinZ
Have no fear - your comment is clear. I'll give you that one, with a caveat: if an algorithm consistently outputs correct data rather than incorrect, it's a heuristic, not a bias. They lose points either way for failing to provide valid support for their complaint. Yes, those anecdotes constitute the sort of data I requested - your hypothesis now outranks mine in my sorting. When I read your initial comment, I felt that you had proposed an overly complicated explanation based on the amount of evidence you presented for it. I felt so based on the fact that I could immediately arrive at a simpler (and more plausible by my prior) explanation which your evidence did not refute. It is impressive, although not necessary, when you can anticipate my plausible hypothesis and present falsifying evidence; it is sufficient, as you have done, to test both hypotheses fairly against additional data when additional hypotheses appear.
3MrHen
Ah, okay. That makes more sense. I am still experimenting with the amount of predictive counter-arguing to use. In the past I have attempted to do so by adding examples that would address the potential objections. This hasn't been terribly successful. I have also directly addressed the points and people still brought them up... so I am pondering how to fix the problem. But, anyway. The topic at hand still interests me. I assume there is a term for this that matches the behavior. I could come up with some fancy technical definition (perceived present ownership of a potential future ownership) but it seems dumb to make up a term when there is one lurking around somewhere. And the idea of labeling it an ownership problem didn't really occur to me until my conversation with you... so maybe I am answering my own question slowly?
8thomblake
Something like "ownership" seems right, as well as the loss aversion issue. Somehow, this seemingly-irrational behavior seems perfectly natural to me (and I'm familiar with similar complaints about the order of cards coming out). If you look at it from the standpoint of causality and counterfactuals, I think it will snap into place...

Suppose that Tim was waiting for the king of hearts to complete his royal flush, and was about to be dealt that card. Then, you cut the deck, putting the king of hearts in the middle of the deck. Therefore, you caused him to not get the king of hearts; if your cutting of the deck were surgically removed, he would have had a straight flush. Presumably, your rejoinder would be that this scenario is just as likely as the one where he would not have gotten the king of hearts but your cutting of the deck gave it to him. But note that in this situation the other players have just as much reason to complain that you caused Tim to win! Of course, any of them is as likely to have been benefited or hurt by this cut, assuming a uniform distribution of cards, and shuffling is not more or less "random" than shuffling plus cutting.

A digression: But hopefully at this point, you'll realize the difference between the frequentist and Bayesian instincts in this situation. The frequentist would charitably assume that the shuffle guarantees a uniform distribution, so that the cards each have the same probability of appearing on any particular draw. The Bayesian will symmetrically note that shuffling makes everyone involved assign the same probability to each card appearing on any particular draw, due to their ignorance of which ones are more likely. But this only works because everyone involved grants that shuffling has this property. You could imagine someone who paid attention to the shuffle and knew exactly which card was going to come up, and then was duly annoyed when you unexpectedly cut the deck. Given that such a person is possible in princip
3MrHen
Yep. This really is a digression which is why I hadn't brought up another interesting example with the same group of friends: We didn't do any tests on the subject because we really just wanted the annoying kid to stop dealing weird. But, now that I think about it, it should be relatively easy to test... Also related, I have learned a few magic tricks in my time. I understand that shuffling is a tricksy business. Plenty of more amusing stories are lurking about. This one is marginally related: This example is a counterpoint to the original. Here is someone claiming that it doesn't matter when the math says it most certainly does. The aforementioned cheater-heuristic would have prevented this player from doing something Bad. I honestly have no idea if he was just lying to us or was completely clueless but I couldn't help but be extremely suspicious when he ended up winning first place later that night.
6thomblake
On a tangent, my friends and I always pick the initial draw of cards using no particular method when playing Munchkin, to emphasize that we aren't supposed to be taking this very seriously. I favor snatching a card off the deck just as someone else was reaching for it.
1hugh
When you deal Texas Hold'em, do you "burn" cards in the traditional way? Neither I nor most of my friends think that those cards are special, but it's part of the rules of the game. Altering them, even without [suspicion of] malicious intent, breaks a ritual associated with the game. While in this instance the ritual doesn't protect the integrity of the game, rituals can be very important in getting into and enjoying activities. Humans are badly wired, and Less Wrong readers work hard to control our irrationalities. One arena in which I see less need for that is when our superstitious and pattern-seeking behaviors let us enjoy things more. I have a ritual for making coffee. I enjoy coffee without it, but I can reach a near-euphoric state with it. Faulty wiring, but I see no harm in taking advantage of it.
1MrHen
We didn't until the people on TV did it. The ritual was only important in the sense that this is how they were predicting which card they were going to get. Their point was based entirely on the fact that the card they were going to get is not the card they ended up getting. As a reminder to the ongoing conversation, we had arguments about the topic. They didn't say, "Do it because you are supposed to do it!" They said, "Don't change the card I am supposed to get!" Sure, but this isn't one of those cases. In this case, they are complaining for no good reason. Well, I guess I haven't found a good reason for their reaction. The consensus in the replies here seems to be that their reaction was wrong. I am not trying to say you shouldn't enjoy your coffee rituals.
[-]hugh100

RobinZ ventured a guess that their true objection was not their stated objection; I stated it poorly, but I was offering the same hypothesis with a different true objection--that you were disrupting the flow of the game.

I'm not entirely sure if this makes sense, partially because there is no reason to disguise unhappiness with an unusual order of game play. From what you've said, your friends worked to convince you that their objection was really about which cards were being dealt, and in this instance I think we can believe them. My fallacy was probably one of projection, in that I would have objected in the same instance, but for different reasons. I was also trying to defend their point of view as much as possible, so I was trying to find a rational explanation for it.

I suspect that the real problem is related to the certainty effect. In this case, though no probabilities were altered, there was a new "what-if" introduced into the situation. Now, if they lose (or rather, when all but one of you lose) they will likely retrace the situation and think that if you hadn't cut the deck, they could have won. Which is true, of course, but irrelevant, since it also could have ... (read more)

2MrHen
I agree with your comment and this part especially: Very true. I see a lot of behavior that matches this. This would be an excellent source of the complaint if it happened after they lost. My friends complained before they even picked up their cards.
-1gwern
That's what they say, I take it.
6orthonormal
To modify RobinZ's hypothesis: Rather than focusing on any Bayesian evidence for cheating, let's think like evolution for a second: how do you want your organism to react when someone else's voluntary action changes who receives a prize? Do you want the organism to react, on a gut level, as if the action could have just as easily swung the balance in their favor as against them? Or do you want them to cry foul if they're in a social position to do so? Your friends' response could come directly out of that adaptation, whatever rationalizations they make for it afterwards. I'd expect to see the same reaction in experiments with chimps.
3MrHen
I want my organism to be able to tell the difference between a cheater and someone making irrelevant changes to a deck of cards. I assume this was a rhetorical question. Evolution is great but I want more than that. I want to know why. I want to know why my friends feel that way but I didn't when the roles were reversed. The answer is not "because I knew more math." Have I just evolved differently? I want to know what other areas are affected by this. I want to know how to predict whatever caused this reaction in my friends before it happens in me. "Evolution" doesn't help me do that. I cannot think like evolution. As much as, "You could have been cheating" is a great response -- and "They are conditioned to respond to this situation as if you were cheating" is a better response -- these friends know the probabilities are the same and know I wasn't cheating. And they still react this way because... why? I suppose this comment is a bit snippier than it needs to be. I don't understand how your answer is an answer. I also don't know much about evolution. If I learned more about evolution would I be less confused?
1[anonymous]
It might be because people perceive a loss more severely than a gain. There might be an evolutionary explanation for that. Because of that, they would perceive the "loss" of the card which they already thought would be theirs more severely than the card they "gained" after the cut. You, on the other hand, might already be trained to think about it differently.
1JamesPfeiffer
Based on my friends, the care/don't care dichotomy cuts orthogonally to the math/no math dichotomy. Most people, whether good or bad at math, can understand that the chances are the same. It's some other independent aspect of your brain that determines whether it intensely matters to you to do things "the right way" or if you can accept the symmetry of the situation. I hereby nominate some OCD-like explanation. I'd be interested in seeing whether OCD correlated with your friends' behavior. As a data point, I am not OCD and don't care if you cut the deck.
2MrHen
I am more likely to be considered OCD than any of my friends in the example. I don't care if you cut the deck.
3rwallace
It's a side effect. Yes, they were being irrational in this case. But the heuristics they were using are there for good reason. Suppose they had money coming to them and you swooped in and took it away before it could reach them; they would be rational to object, right? That's why those heuristics are there. In practice the trigger conditions for these things are not specified with unlimited precision, and pure but interruptible random number generators are not common in real life, so the trigger conditions harmlessly spill over to this case. But the upshot is that they were irrational as a side effect of usually rational heuristics.
4MrHen
So, when I pester them for a rational reason, why do they keep giving an answer that is irrational for this situation? I can understand your answer if the scenario was more like: "Hey! Don't do that!" "But it doesn't matter. See?" "Oh. Well, okay. But don't do it anyway because..." And then they mention your heuristic. They didn't do anything like this. They explicitly understood that nothing was changing in the probabilities and they explicitly understood that I was not cheating. And they were completely willing to defend their reaction in arguments. In their mind, their position was completely rational. I could not convince them that it was rational with math. Something else was the problem. "Heuristics" is nifty, but I am not completely satisfied with that answer. Why would they have kept defending it when it was demonstrably wrong? I suppose it is possible that they were completely unaware that they were using whatever heuristic they were using. Would that explain the behavior? Perhaps this is why they could not explain their position to me at the time of the arguments? How would you describe this heuristic in a few sentences?
6AdeleneDawner
I suspect it starts with something like "in the context of a game or other competition, if my opponent does something unexpected, and I don't understand why, it's probably bad news for me", with an emotional response of suspicion. Then when your explanation is about why shuffling the cards is neutral rather than being about why you did something unexpected, it triggers an "if someone I'm suspicious of tries to convince me with logic rather than just assuring me that they're harmless, they're probably trying to get away with something" heuristic. Also, most people seem to make the assumption, in cases like that, that they aren't going to be able to figure out what you're up to on the fly, so even flawless logic is unlikely to be accepted - the heuristic is "there must be a catch somewhere, even if I don't see it".
5orthonormal
Because human beings often first have a reaction based on an evolved, unconscious heuristic, and only later form a conscious rationalization about it, which can end up looking irrational if you ask the right questions (e.g. the standard reactions to the incest thought experiment there). So, yes, they were probably unaware of the heuristic they were actually using.

I'd suppose that the heuristic is along the lines of the following: Say there's an agreed-upon fair procedure for deciding who gets something, and then someone changes that procedure, and someone other than you ends up benefiting. Then it's unfair, and what's yours has probably been taken.

Given that rigorous probability theory didn't emerge until the later stages of human civilization, there's not much room for an additional heuristic saying "unless it doesn't change the odds" to have evolved; indeed, all of the agreed-upon random ways of selecting things (that I've ever heard of) work by obvious symmetry of chances rather than by abstract equality of odds†, and most of the times someone intentionally changed the process, they were probably in fact hoping to cheat the odds.

† Thought experiment: we have to decide a binary disagreement by chance, and instead of flipping a coin or playing Rock-Paper-Scissors, I suggest we do the following: First, you roll a 6-sided die, and if it's a 1 or 2 you win. Otherwise, I roll a 12-sided die, and if it's 1 through 9 I win, and if it's 10 through 12 you win. Now compute the odds (50-50, unless I made a dumb mistake), and then actually try it (in real life) with non-negligible stakes. I predict that you'll feel slightly more uneasy about the experience than you would be flipping a coin.
5MrHen
Everything else you've said makes sense, but I think the heuristic here is way off. Firstly, they object before the results have been produced, so the benefit is unknown. Second, the assumption of an agreed-upon procedure is only really valid in the poker example. Other examples don't have such an agreement and seem to display the same behavior. Finally, the change to the procedure could be made by a disinterested party with no possible personal gain to be had. I suspect that the reaction would stay the same.

So, whatever heuristic may be at fault here, it doesn't seem to be the one you are focusing on. The fact that my friends didn't say, "You're cheating" or "You broke the rules" is more evidence against this being the heuristic. I am open to the idea of a heuristic being behind this. I am also open to the idea that my friends may not be aware of the heuristic or its implications. But I don't see how anything is pointing toward the heuristic you have suggested.

Hmm... 1/3 of the time I win outright... 2/3 of the time we enter a second roll where I win 1/4 of the time. Is that... 1/3 + 2/3 * 1/4 = 1/3 + 2/12 = 4/12 + 2/12 = 6/12 = 1/2. Seems right to me. And I don't expect to feel uneasy about such an experience at all, since the odds are the same. If someone offered me a scenario and I didn't have the math prepared I would work out the math and decide if it is fair. If I do the contest and you start winning every single time I might start getting nervous. But I would do the same thing regardless of the dice/coin combos we were using. I would actually feel safer using the dice, because I have found that I can strongly influence flipping a fair quarter in my favor without much effort.
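
For what it's worth, a quick brute-force enumeration of the two-dice procedure (a sketch added here, not part of the original exchange) confirms the 50-50 split:

```python
from fractions import Fraction

p_you = Fraction(0)
for d6 in range(1, 7):
    if d6 <= 2:                     # d6 shows 1 or 2: you win outright
        p_you += Fraction(1, 6)
    else:                           # otherwise the d12 decides it
        for d12 in range(10, 13):   # 10, 11, 12: you win the second roll
            p_you += Fraction(1, 6) * Fraction(1, 12)

print(p_you)                        # 1/2
```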
2JGWeissman
An important element of it being fair for you to cut the deck in the middle of dealing, which your friends may not trust, is that you do so in ignorance of who it will help and who it will hinder. By cutting the deck, you have explicitly made and acted on a choice (it is far less obvious when you choose not to cut the deck, the default expected action), and this causes your friends to worry that the choice may have been optimized for interests other than their own.
0MrHen
I don't think this is relevant. I responded in more detail to RobinZ's comment.
1Jordan
As you note, regular poker and poker with an extra cut mid-deal are completely isomorphic. In a professional game you would obviously care, because the formality of the shuffle and deal are part of a tradition to instill trust that the deck isn't rigged. For a casual game, where it is assumed no one is cheating, then, unless you're a stickler for tradition, who cares? Your friends are wrong. We have two different pointers pointing to the same thing, and they are complaining because the pointers aren't the same, even though all that matters is what those pointers point to. It would be like complaining if you tried to change the name of Poker to Wallaboo mid-deal.
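
To make the "isomorphic" claim concrete, here is a rough Monte Carlo sketch (my own illustration; tracking the fifth card dealt is an arbitrary choice). Under a fair shuffle, the card a given seat receives has the same uniform distribution whether or not the deck is cut mid-deal:

```python
import random

def fifth_card(cut_mid_deal: bool) -> int:
    """Shuffle a 52-card deck, deal 5 cards, optionally cutting after the 3rd;
    return the 5th card dealt (the one the complaining player would receive)."""
    deck = list(range(52))
    random.shuffle(deck)
    dealt = []
    for i in range(5):
        if cut_mid_deal and i == 3:
            k = random.randrange(1, len(deck))
            deck = deck[k:] + deck[:k]          # cut the remaining deck
        dealt.append(deck.pop(0))
    return dealt[-1]

trials = 200_000
for cut in (False, True):
    hits = sum(fifth_card(cut) == 0 for _ in range(trials))   # track one fixed card
    print("cut" if cut else "no cut", hits / trials)           # both ~ 1/52 ≈ 0.019
```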
4Violet
There are rules for the game that are perceived as fair. If one participant goes changing the rules in the middle of the game, this 1) makes rule-changing acceptable in the game, and 2) forces other players to analyze the current (and future) changes to the game to ensure they are fair. Cutting the deck probably doesn't affect the probability distribution (unless you shuffled the deck in a "funny" way). Allowing it makes a case for allowing the next changes in the rules too. Thus you can end up analyzing a new game rather than having fun playing poker.
1MrHen
Sure, but the "wrong" in this case couldn't be shown to my friends. They perfectly understood probability. The problem wasn't in the math. So where were they wrong? Another way of saying this:

* The territory said one thing.
* Their map said another thing.
* Their map understood probability.
* Where did their map go wrong?

The answer has nothing to do with me cheating and has nothing to do with misunderstanding probability. There is some other problem here and I don't know what it is.
-2cousin_it
An argument isomorphic to yours can be used to demonstrate that spousal cheating is okay as long as there are no consequences and the spouse doesn't know. Maybe your concept of "valid objection" is overly narrow?
3MrHen
Rearranging the cards in a deck has no statistical consequence. Cheating on your spouse significantly alters the odds of certain things happening. If you add the restriction that there are no consequences, there wouldn't really be much point in doing it, because it's not like you get sex as a result. That would be a consequence.

The idea that something immoral shouldn't be immoral if no one catches you and nothing bad happens as a result is an open problem as far as I know. Most people don't like such an idea, but I hear the debate surface from time to time. (Usually from people trying to convince themselves that whatever they just did wasn't wrong.)

In addition, cutting a deck of cards does have an obvious effect. There is no statistical consequence, but obviously you are not going to get the card you were originally going to be dealt.

I'm thinking of writing up a post clearly explaining update-less decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to lesswrong. Is there demand?

0Will_Newsome
If and only if you can explain UDT in text at least as clearly as you explained it to me in person; I don't think that would take a very long post.
1Alicorn
Maybe he should explain it again in person and someone should transcribe?

How important are 'the latest news'?

These days many people are following an enormous amount of news sources. I myself notice how skimming through my Google Reader items is increasingly time-consuming.

What is your take on it?

  • Is it important to read up on the latest news each day?
  • If so, what are your sources, please share them.
  • What kind of news are important?

I wonder if there is really more to it than just curiosity and leisure. Are there news sources (blogs, the latest research, 'lesswrong'-2.0 etc.), besides lesswrong.com, that every rationalist s... (read more)

[-]Rain100

I searched for a good news filter that would inform me about the world in ways that I found to be useful and beneficial, and came up with nothing.

In any source that contained news items I categorized as useful, those items made up less than 5% of the information presented, and thus were drowned out and took too much time and effort, on a daily basis, to find. Thus, I mostly ignore news, except what I get indirectly through following particular communities like LessWrong or Slashdot.

However, I perform this exercise on a regular basis (perhaps once a year), clearing out feeds that have become too junk-filled, searching out new feeds, and re-evaluating feeds I did not accept last time, to refine my information access.

I find that this habit of perpetual long-term change (significant reorganization, from first principles of the involved topic or action) is highly beneficial in many aspects of my life.

ETA: My feed reader contains the following:

... (read more)
3Morendil
Good question, which I'm finding surprisingly hard to answer. (i.e. I've spent more time composing this comment than is perhaps reasonable, struggling through several false starts). Here are some strategies/behaviours I use: expand and winnow; scorched earth; independent confirmation; obsession.

* "expand and winnow": after finding an information source I really like (using the term "source" loosely, a blog, a forum, a site, etc.) I will often explore the surrounding "area", subscribe to related blogs or sources recommended by that source. In a second phase I will sort through which of these are worth following and which I should drop to reduce overload
* "scorched earth": when I feel like I've learned enough about a topic, or that I'm truly overloaded, I will simply drop (almost) every subscription I have related to that topic, maybe keeping a major source to just monitor (skim titles and very occasionally read an item)
* "independent confirmation": I do like to make sure I have a diversified set of sources of information, and see if there are any items (books, articles, movies) which come at me from more than one direction, especially if they are not "massively popular" items, e.g. I'd discard a recommendation to see Avatar, but I decided to dive into Jaynes when it was recommended on LW and my dad turned out to have liked it enough to have a hard copy of the PDF
* "obsession": there typically is one thing I'm obsessed with (often the target of an expand and winnow operation); e.g. at various points in my life I've been obsessed with Agora Nomic, XML, Java VM implementation, Agile, personal development, Go, and currently whatever LW is about. An "obsessed" topic can be but isn't necessarily a professional interest, but it's what dominates my other curiosity and tends to color my other interests. For instance while obsessed with Go I pursued the topic both for its own sake and as a source of metaphors for understanding, say, project management or software dev
0h-H
Yeah, news is usually a time/attention sink; I go to my bookmarked blogs etc. whenever I feel like procrastinating. 15-20 minutes of looking at the main news sites/blogs should be enough to tell you what the biggest developments are, but really, I read them for entertainment value as much as for anything else. As a side note, antiwar is a good site for world news.
[-]FrF80

"Why Self-Educated Learners Often Come Up Short" http://www.scotthyoung.com/blog/2010/02/24/self-education-failings/

Quotation: "I have a theory that the most successful people in life aren’t the busiest people or the most relaxed people. They are the ones who have the greatest ability to commit to something nobody else forces them to do."

5SoullessAutomaton
Interesting article, but the title is slightly misleading. What he actually seems to be complaining about is people who mistake picking up a superficial overview of a topic for actually learning a subject, but I rather doubt they'd learn any more in school than by themselves. Learning is what you make of it; getting a decent education is hard work, whether you're sitting in a lecture hall with other students, or digging through books alone in your free time.
4hugh
I partially agree with this. Somewhere along the way, I learned how to learn. I still haven't really learned how to finish. I think these two features would have been dramatically enhanced had I not gone to school.

I think a potential problem with self-educated learners (I know two adults who were unschooled) is that they get much better at fulfilling their own needs and tend to suffer when it comes to long-term projects that have value for others. The unschooled adults I know are both brilliant and creative, and ascribe those traits to their unconventional upbringing. But both of them work as freelance handymen. They like helping others, and would help other people more if they did something else, but short-term projects are all they can manage. They are polymaths who read textbooks and research papers, and one has even developed a machine learning technique that I've urged him to publish. However, when they get bored, they stop. The chance that writing up his results and releasing them would further research is not enough to get him past that obstacle of boredom.

I have long thought that school, as currently practiced, is an abomination. I have yet to come up with a solution that I'm convinced solves its fundamental problems. For a while, I thought that unschooling was the solution, but these two acquaintances changed my mind. What is your opinion on the right way to teach and learn?
0gwillen
As an interesting anecdote, I was schooled in a completely traditional fashion, and yet I never really learned to finish either. I did learn to learn, but I did it through a combination of schooling and self-teaching. But all the self-teaching was in addition to a completely standard course of American schooling, up through a Bachelor's degree in computer science.
0hugh
That's pretty much where I am; traditional school, up through college and grad school. I think my poor habits would have been intensified, however, if I had been unschooled.

It turns out that Eliezer might not have been as wrong as he thought he was about passing on calorie restriction.

6gwern
Well, there's still intermittent fasting. IF would get around and would also work well with the musings about variability and duration: (Our ancestors most certainly did have to survive frequent daily shortfalls. Feast or famine.)
1Eliezer Yudkowsky
Where do you get that I thought I was wrong about CR? I'd like to lose weight but I had been aware for a while that the state of evidence on caloric restriction doing the purported job of extending lifespan in mammals was bad.
2AdeleneDawner
...huh. The last thing I remember hearing from you about it was that it looked promising, but that the cognitive side effects made it impractical, so you'd settled on just taking the risk (which would, with that set of beliefs and values, be right in some ways, and wrong in others, and more right than wrong). But, for some reason the search bar doesn't turn up any relevant conversations for "calorie restriction Eliezer" or "caloric restriction Eliezer", so I couldn't actually check my memory. Sorry about that.
-3timtyler
That's a dopey article. My counsel is not to get your diet advice from there.
2AdeleneDawner
"Dopey"?
1wedrifid
Suggest a better one?
1timtyler
http://www.crsociety.org/ is the best web resource relating to dietary energy restriction that I am aware of.
7AdeleneDawner
I'm not seeing anything at all on that site regarding scientific evidence that CR works, except links to news articles (meh) and uncited assertions that studies have been done that came to that conclusion - the latter of which, in light of the issues raised in the article I linked to, I want to know more about before I try to decide whether they're useful or not. Overall, both the site and the wiki seem to be much more focused on how to do CR than on making any kind of case that CR is a good idea; I don't think we're asking the same question, if you consider that site to give good answers.
1timtyler
That site is the biggest and most comprehensive resource on the topic available on the internet, AFAIK. Looking at what you say you are looking for, I don't think we're asking the same question either. The diet is not "a good idea" - e.g. see: http://cr.timtyler.org/disadvantages/ Rather, it is a tool - and whether or not it is for you depends on what your aims in life are.
1AdeleneDawner
Sorry, I thought the meaning of "a good idea" would be clear in context. I meant "likely to increase a user's chance of having a longer lifespan than they would otherwise". If that's the best resource there is, taking CR at all seriously sounds like privileging the hypothesis to me.
0wedrifid
It may be wrong but I don't think the flaw is that of privileging the hypothesis. If CR actually does work in, say, rats then thinking it may work in humans is at least a worthwhile hypothesis. The essay you found suggests that the evidence for the hypothesis is looking kinda shaky.
2AdeleneDawner
Noteworthy: CR is not a particular interest of mine, and I haven't researched it. If there are good, solid studies of CR in rats, why doesn't that site seem to have, or link to, information about them? If that's the site for CR, and given that it has a publicly editable (yes, I checked) wiki, I'd expect that someone would have added that information, and it's not there: I searched for both "study" and "studies" in the wiki; nothing about rat studies - or any other animal studies, except a mention of monkey studies - showed up. A google site search does turn up this, though.
0timtyler
Don't bother with the site's wiki. They have a reference to a mouse study on the front page of the site: Weindruch R, et al. (1986). "The retardation of aging in mice by dietary restriction: longevity, cancer, immunity and lifetime energy intake." Journal of Nutrition, April, 116(4), pages 641-54. For the evidence from the rat studies, perhaps start with this review article: Overview of caloric restriction and ageing. http://www.crsociety.org/archive/read.php?2,172427,172427
0timtyler
I think most in the field agree on that. e.g.: ""I'm positive that caloric restriction will work in humans to extend median life span," Fontana says." * http://pubs.acs.org/cen/science/87/8731sci2.html A summary from the site wiki: "The evidence that bears on the question of the applicability of CR to humans then, is at present indirect. There is nonetheless a great deal of such indirect evidence, enough that we can say with an extremely high degree of confidence that CR will work in humans." * http://en.wiki.calorierestriction.org/index.php/Will_CR_Work_in_Humans%3F
0wedrifid
Off the top of your head do you know what CR has been shown to work on thus far?
0Douglas_Knight
One of TT's links says CR works in "mice, hamsters, dogs, fish, invertebrate animals, and yeast."
[-][anonymous]70

Pick some reasonable priors and use them to answer the following question.

On week 1, Grandma calls on Thursday to say she is coming over, and then comes over on Friday. On week 2, Grandma once again calls on Thursday to say she is coming over, and then comes over on Friday. On week 3, Grandma does not call on Thursday to say she is coming over. What is the probability that she will come over on Friday?

ETA: This is a problem, not a puzzle. Disclose your reasoning, and your chosen priors, and don't use ROT13.

4Sniffnoy
In the calls, does she specify when she is coming over? I.e., does she say she'll be coming over on Thursday, Friday, or just sometime in the near future, or does she leave it for you to infer?
1[anonymous]
The information I gave is the information you have. Don't make me make the problem more complicated.

ETA: Let me expand on this before people start getting on my case. Rationality is about coming to the best conclusion you can given the information you have. If the information available to you is limited, you just have to deal with it. Besides, sometimes, having less information makes the problem easier.

Suppose I give you the following physics problem: I throw a ball from a height of 4 feet; its maximum height is 10 feet. How long does it take from the time I throw it for it to hit the ground? This problem is pretty easy. Now, suppose I also tell you that the ball is a sphere, and I tell you its mass and radius, and the viscosity of the air. This means that I'm expecting you to take air resistance into account, and suddenly the problem becomes a lot harder.

If you really want a problem where you have all the information, here: Every time period, input A (of type Boolean) is revealed, and then input B (also of type Boolean) is revealed. There are no other inputs. In time period 0, input A is revealed to be TRUE, and then input B is revealed to be TRUE. In time period 1, input A is revealed to be TRUE, and then input B is revealed to be TRUE. In time period 2, input A is revealed to be FALSE. What is the probability that input B will be revealed to be TRUE?
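
As an aside, here is a quick sketch of the "easy" version of that physics problem, assuming no air resistance and g ≈ 32.17 ft/s² (the value of g is my assumption; the problem only fixes the two heights):

```python
import math

g = 32.17                 # ft/s^2, assumed standard gravity
h0, h_max = 4.0, 10.0     # launch height and apex, in feet

v0 = math.sqrt(2 * g * (h_max - h0))   # launch speed needed to rise 6 ft
t_up = v0 / g                          # time from launch to apex
t_down = math.sqrt(2 * h_max / g)      # free fall from the apex to the ground
print(t_up + t_down)                   # ~1.4 seconds
```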
7Douglas_Knight
Having less information makes easier the problem of satisfying the teacher. It does not make easier the problem of determining when the ball hits the ground. Incidentally, I got the impression somehow that there are venues where physics teachers scold students for using too much information. ETA (months later): I do think it's a good exercise, I just think this is not why.
0[anonymous]
Here, though, the problem actually is simpler the less information you have. As an extreme example, if you know nothing, the probability is always 1/2 (or whatever your prior is).
-1RobinZ
I can say immediately that it is less than 50% - to be more rigorous would take a minute. Edit: Wait - no, I can't. If the variables are related, then that conclusion would appear, but it's not necessary that they be.
3orthonormal
Let

* A_N = "Grandma calls on Thursday of week N",
* B_N = "Grandma comes on Friday of week N".

A toy version of my prior could be reasonably close to the following: P(A_N) = p, P(A_N, B_N) = pq, P(~A_N, B_N) = (1-p)r, where

* the distribution of p is uniform on [0,1],
* the distribution of q is concentrated near 1 (distribution proportional to f(x) = x on [0,1], let's say),
* the distribution of r is concentrated near 0 (distribution proportional to f(x) = 1-x on [0,1], let's say).

Thus, the joint probability distribution of (p,q,r) is given by 4q(1-r) once we normalize.

Now, how does the evidence affect this? The likelihood ratio for (A_1,B_1,A_2,B_2) is proportional to (pq)^2, so after multiplying and renormalizing, we get a joint probability distribution of 24p^2q^3(1-r). Thus P(~A_3|A_1,B_1,A_2,B_2) = 1/4 and P(~A_3,B_3|A_1,B_1,A_2,B_2) = 1/12, so I wind up with a 1 in 3 chance that Grandma will come on Friday, if I've done all my math correctly.

Of course, this is all just a toy model, as I shouldn't assume things like "different weeks are independent", but to first order, this looks like the right behavior.
2orthonormal
I should have realized this sooner: P(B3|~A3) is just the updated value of r, which isn't affected at all by (A1,B1,A2,B2). So of course the answer according to this model should be 1/3, as it's the expected value of r in the prior distribution. Still, it was a good exercise to actually work out a Bayesian update on a continuous prior. I suggest everyone try it for themselves at least once!
3RobinZ
I fail to see how this question has a perceptibly rational answer - too much depends on the prior.
4[anonymous]
Presumably, once you've picked your priors, the rest follows. And presumably, once you've come up with an answer, you'll disclose your reasoning, and your chosen priors.
2ata
Does she come over unannounced on any days other than Friday?
0[anonymous]
I don't know.
1Richard_Kennaway
Using the information that she is my grandmother, I speculate on the reason why she did not call on Thursday. Perhaps it is because she does not intend to come on Friday: P(Friday) is lowered. Perhaps it is because she does intend to come but judges the regularity of the event to make calling in advance unnecessary unless she had decided not to come: P(Friday) is raised. Grandmothers tend to be old and consequently may be forgetful: perhaps she intends to come but has forgotten to call: P(Friday) is raised. Grandmothers tend to be old, and consequently may be frail: perhaps she has been taken unwell; perhaps she is even now lying on the floor of her home, having taken a fall, and no-one is there to help: P(Friday) is lowered, and perhaps I should phone her.

My answer to the problem is therefore: I phone her to see how she is and ask if she is coming tomorrow. I know -- this is not an answer within the terms of the question. However, it is my answer.

The more abstract version you later posted is a different problem. We have two observations of A and B occurring together, and that is all. Unlike the case of Grandma's visits, we have no information about any causal connection between A and B. (The sequence of revealing A before B does not affect anything.) What is then the best estimate of P(B|~A)? We have no information about the relation between A and B, so I am guessing that a reasonable prior for that relation is that A and B are independent. Therefore A can be ignored and the Laplace rule of succession applied to the two observations of B, giving 3/4.

ETA: I originally had a far more verbose analysis of the second problem based on modelling it as an urn problem, which I then deleted. But the urn problem may be useful for the intuition anyway. You have an urn full of balls, each of which is either rough or smooth (A or ~A), and either black or white (B or ~B). You pick two balls which turn out to be both rough and black. You pick a third and feel that it is smooth: what is the probability that it is also black?
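For concreteness, the rule-of-succession step being used here is just the standard formula, with s successes observed in n trials and a uniform prior on the unknown frequency of B (A ignored, as above):

$$P(B_{\text{next}} \mid s, n) = \frac{s+1}{n+2} = \frac{2+1}{2+2} = \frac{3}{4}.$$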
3wnoise
Directly using the Laplace rule of succession on the sample space A ⊗ B gives weights proportional to:

(A, B): 3
(A, ~B): 1
(~A, B): 1
(~A, ~B): 1

Conditioning on ~A, P(B|~A) = 1/2. Assuming independence does make a significant difference on this little data.
3orthonormal
On the contrary, on two points. First, "A and B are independent" is not a reasonable prior, because it assigns probability 0 to them being dependent in some way— or, to put it another way, if that were your prior and you observed 100 cases and A and B agreed each time (sometimes true, sometimes false), you'd still assume they were independent. What you should have said, I think, is that a reasonable prior would have "A and B independent" as one of the most probable options for their relation, as it is one of the simplest. But it should also give some substantial weight to simple dependencies like "A and B identical" and "A and B opposite". Second, the sense in which we have no prior information about relations between A and B is not a sense that justifies ignoring A. We had no prior information before we observed them agreeing twice, which raises the probability of "A and B identical" while somewhat lowering that of "A and B independent".
0[anonymous]
It's true that the prior should not be "A and B are independent". But shouldn't symmetries of how they may be dependent give essentially the same result as assuming independence? Similar to how any symmetric prior for how a coin is biased gives the same results for a prediction of the probability of heads -- 1/2. I don't think independence is a good way to analyze things when the probabilities are near zero or one. Independence is just P[A] P[B] = P[AB]. If P[A] or P[B] are near zero or one, this is automatically "nearly true". Put another way, two observations of (A, B) give essentially no information about dependence by themselves. This is encoded into ratios between the four possibilities.
-2Richard_Kennaway
This raises a question of the meaningfulness of second-order Bayesian reasoning. Suppose I had a prior for the probability of some event C of, say, 0.469. Could one object to that, on the grounds that I have assigned a probability of zero to the probability of C being some other value? A prior of independence of A and B seems to me of a like nature to an assignment of a probability to C.

On the second point, seeing A and B together twice, or twenty times, tells me nothing about their independence. Almost everyone has two eyes and two legs, and therefore almost everyone has both two eyes and two legs, but it does not follow from those observations alone that possession of two eyes either is, or is not, independent of having two legs. For example, it is well-known (in some possible world) that the rare grey-green greasy Limpopo bore worm invariably attacks either the eyes, or the legs, but never both in the same patient, and thus observing someone walking on healthy legs conveys a tiny positive amount of probability that they have no eyes; while (in another possible world) the venom of the giant rattlesnake of Sumatra rapidly causes both the eyes and the legs of anyone it bites to fall off, with the opposite effect on the relationship between the two misfortunes. I can predict that someone has both two eyes and two legs from the fact that they are a human being. The extra information about their legs that I gain from examining their eyes could go either way.

But that is just an intuitive ramble. What is needed here is a calculation, akin to the Laplace rule of succession, for observations in a 2x2 contingency table. Starting from an ignorance prior that the probabilities of A&B, A&~B, B&~A, and ~A&~B are each 1/4, and observing a, b, c, and d examples of each, what is the appropriate posterior? Then fill in the values 2, 0, 0, 0.

ETA: On reading the comments, I realise that the above is almost all wrong.
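One standard way to answer that last question is to put a flat Dirichlet(1,1,1,1) prior on the four cells (the natural 2x2 generalization of the uniform prior behind the Laplace rule); the formulas below follow from ordinary conjugacy, not from anything specific to this thread. With counts (a, b, c, d) for (A&B, A&~B, B&~A, ~A&~B) and n = a+b+c+d:

$$\text{posterior} = \mathrm{Dirichlet}(a+1,\ b+1,\ c+1,\ d+1), \qquad P(\text{cell}_i \mid \text{data}) = \frac{\text{count}_i + 1}{n+4}, \qquad P(B \mid \neg A,\ \text{data}) = \frac{c+1}{c+d+2}.$$

Filling in (2, 0, 0, 0) gives cell probabilities (1/2, 1/6, 1/6, 1/6) and P(B|~A) = 1/2, matching wnoise's calculation above.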
7jimrandomh
In order to have a probability distribution rather than just a probability, you need to ask a question that isn't boolean, ie one with more than two possible answers. If you ask "Will this coin come up heads on the next flip?", you get a probability, because there are only two possible answers. If you ask "How many times will this coin come up heads out of the next hundred flips?", then you get back a probability for each number from 0 to 100 - that is, a probability distribution. And if you ask "what kind of coin do I have in my pocket?", then you get a function that takes any possible description (from "copper" to "slightly worn 1980 American quarter") and returns a probability of matching that description.
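A toy illustration of the difference in Python (the fair-coin assumption is just for the example):

```python
from math import comb

# "Will the next flip come up heads?" -> a single probability.
p_heads = 0.5

# "How many of the next 100 flips come up heads?" -> a probability for each answer 0..100.
dist = {k: comb(100, k) / 2**100 for k in range(101)}

print(p_heads)             # 0.5
print(dist[50])            # ~0.0796, the single most likely count
print(sum(dist.values()))  # 1.0, up to floating-point error
```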
4orthonormal
Depends on how you're doing this; if you have a continuous prior for the probability of C, with an expected value of 0.469, then no— and future evidence will continue to modify your probability distribution. If your prior for the probability of C consists of a delta mass at 0.469, then yes, your model perhaps should be criticized, as one might criticize Rosenkrantz for continuing to assume his coin is fair after 30 consecutive heads. A Bayesian reasoner actually would have a hierarchy of uncertainty about every aspect of ver model, but the simplicity weighting would give them all low probabilities unless they started correctly predicting some strong pattern. Independence has a specific meaning in probability theory, and it's a very delicate state of affairs. Many statisticians (and others) get themselves in trouble by assuming independence (because it's easier to calculate) for variables that are actually correlated. And depending on your reference class (things with human DNA? animals? macroscopic objects?), having 2 eyes is extremely well correlated with having 2 legs.
4FAWS
Even without any math, it already tells you that they are not mutually exclusive. See wnoise's reply to the grandparent post for the Laplace rule equivalent.
3[anonymous]
I really like your urn formulation.
1Peter_de_Blanc
OK, I'll use the same model I use for text. The zeroth-order model is maxentropy, and the kth-order model is a k-gram model with a pseudocount of 2 (the alphabet size) allocated to the (k-1)th-order model. In this case, since there's never before been a Thursday in which she did not call, we default to the 1st-order model, which says the probability is 3/4 that she will come on Friday.
3[anonymous]
I beg your pardon?
0Douglas_Knight
Is this a standard model? Does it have a name? A reference? I see that the level 1 model is Laplace's rule of succession. Is there some clean statement about the level k model? Is this a Bayesian update?

You seem to be treating the string as being labeled by alternating Thursdays and Fridays, which have letters drawn from different alphabets. The model easily extends to this, but it was probably worth saying, particularly since the two alphabets happen to have the same size.

I find it odd that almost everyone treated weeks as discrete events. In this problem, days seem like the much more natural unit to me. ata probably agrees with me, but he didn't reach a conclusion. With weeks, we have very few observations, so a lot depends on our model, like whether we use alphabets of size 2 for Thursday and Friday (Peter), or whether we use alphabets of size 4 for the whole week (wnoise).

I'm going to allow calls and visits on each day and use an alphabet of size 4 for each day. I think it would be better to use a Peter-ish system of separating morning visits from evening calls, but with data indexed by days, we have a lot of data, so I don't think this matters so much. I'll run my weeks Sun-Sat. Weeks 1 and 2 are complete and week 3 is partial. Treating days as independent and having 4 outcomes: ([no]visit)x([no]call). I interpret the unspecified days as having no call and no visit. Using Laplace's rule of succession, we have a 4/23 chance of a visit, which sounds pretty reasonable to me.

But if we use Peter's hierarchical model, I think our chance of a visit is 4/23*4/17*4/14*4/11*4/8*4/5 = 1/500. That is, since we've never seen a visit after a no-call/no-visit day, the only way to get a visit is from level 1 of the model, so we multiply the chance of falling through from level 2 to level 1, from level 3 to 2, etc. The chance of falling through from level n+1 to level n is 4/(4+c), where c is the number of times we've seen an n+1-gram that continues the last n days. So for n
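A quick arithmetic check of those two figures, taking the day counts exactly as described above (19 observed days: visits on 2 of them, calls on 2 others, nothing on the remaining 15). The add-one smoothing over four outcomes is my reading of how the 4/23 was obtained, and the fall-through product comes out a bit below the quoted 1/500.

```python
from fractions import Fraction as F

# Four outcomes per day: (visit, call), (visit, no call), (no visit, call), (no visit, no call).
counts = [0, 2, 2, 15]          # 19 observed days in total
n = sum(counts)

# Rule of succession with one pseudocount per outcome; a "visit" is either of the first two cells.
p_visit = F(counts[0] + 1, n + 4) + F(counts[1] + 1, n + 4)
print(p_visit)                  # 4/23

# Fall-through product for the hierarchical model.
product = F(4, 23) * F(4, 17) * F(4, 14) * F(4, 11) * F(4, 8) * F(4, 5)
print(product, float(product))  # 256/150535, about 0.0017 (roughly 1 in 590)
```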
[-][anonymous]60

Today I was listening in on a couple of acquaintances talking about theology. As most theological discussions do, it consisted mainly of cached Deep Wisdom. At one point — can't recall the exact context — one of them said: "…but no mortal man wants to live forever."

I said: "I do!"

He paused a moment and then said: "Hmm. Yeah, so do I."

I think that's the fastest I've ever talked someone out of wise-sounding cached pro-death beliefs.

New on arXiv:

David H. Wolpert, Gregory Benford. (2010). What does Newcomb's paradox teach us?

In Newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose, a prediction algorithm deduces your choice, and fills the two boxes based on that deduction. Newcomb's paradox is that game theory appears to provide two conflicting recommendations for what choice you should make in this scenario. We analyze Newcomb's paradox using a recent extension of game theory

... (read more)
2xamdam
In a completely perverse coincidence, Benford's law, attributed to an apparently unrelated Frank Benford, was apparently first observed by an unrelated Simon Newcomb: http://en.wikipedia.org/wiki/Benford%27s_law
0SilasBarta
Okay, now that I've read section 2 of the paper (where it gives the two decompositions), it doesn't seem so insightful. Here's my summary of the Wolpert/Benford argument: "There are two Bayes nets to represent the problem: Fearful, where your decision y causally influences Omega's decision g, and Realist, where Omega's decision causally influences yours. "Fearful: P(y,g) = P(g|y) * P(y), you set P(y). Bayes net: Y -> G. One-boxing is preferable. "Realist: P(y,g) = P(y|g) * P(g), you set P(y|g). Bayes net: G -> Y. Two-boxing is preferable." My response: these choices neglect the option presented by AnnaSalamon and Eliezer_Yudkowsky previously: that Omega's act and your act are causally influenced by a common timeless node, which is a more faithful representation of the problem statement.
0SilasBarta
Self-serving FYI: In this comment I summarized Eliezer_Yudkowsky's list of the ways that Newcomb's problem, as stated, constrains a Bayes net. For the non-link-clickers:

* Must have nodes corresponding to logical uncertainty (Self-explanatory)
* Omega's decision on box B correlates to our decision of which boxes to take (Box decision and Omega decision are d-connected)
* Omega's act lies in the past. (ETA: Since nothing is simultaneous with Omega's act, then knowledge of Omega's act screens off the influence of everything before it; on the Bayes net, Omega's act blocks all paths from the past to future events; only paths originating from future or timeless events can bypass it.)
* Omega's act is not directly influencing us (No causal arrow directly from Omega to us/our choice.)
* We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output. (Seem to be saying the same thing: arrow from computation directly to logical output.)
* Our computation is the only direct ancestor of our logical output. (Only arrow pointing to our logical output comes from our computation.)

Warning: Your reality is out of date

tl;dr:

There are established facts that don't change perceptibly (the boiling point of water), and there are facts that change constantly (outside temperature, time of day)

In between these two intuitive categories, however, a third class of facts could be defined: facts that do change measurably, or even drastically, over human lifespans, but still so slowly that people, after first learning about them, have a tendency to dump them into the "no-change" category unless they're actively paying attention to the f... (read more)

0RobinZ
I notice the figure for cell phone connectivity is three years old. :P

Which very-low-effort activities are most worthwhile? By low effort, I mean about as hard as solitaire, facebook, blogs, TV, most fantasy novels, etc.

2Kevin
I think I have a good one for people in the USA. This is a job that allows you to work from home on your computer rating the quality of search engine results. It pays $15/hour and because their productivity metrics aren't perfect, you can work for 30 seconds and then take two minutes off with about as much variance as you want. Instead of taking time off directly to do different work, you could also slow yourself down by continuously watching TV or downloaded videos. They are also hiring for some workers in similar areas that are capable of doing somewhat more complicated tasks, presumably for higher salaries. Some sound interesting. http://www.lionbridge.com/lionbridge/en-us/company/work-with-us/careers.htm Yes, out of all "work from home" internet jobs, this is the only one that is not a scam. Lionbridge is a real company and their shares recently continued to increase after a strong earnings report. http://online.wsj.com/article/BT-CO-20100210-716444.html?mod=rss_Hot_Stocks First, you send them your resume, and they basically approve every US high school graduate that can create a resume for the next step. Then you have to take a test in doing the job. They provide plenty of training material and the job isn't all that hard, a few hours of rapid skimming is probably enough to pass the test for most people. Almost 100% of people would be able to pass the test after 10 hours of studying.
1nazgulnarsil
throwing/giving away stuff you don't use. reading instead of watching tv or browsing website for the umpteenth time. eating more fruit and less processed sugar. exercising 10-15 minutes a day. writing down your ideas. intro to econ of some sort. spending 30 minutes a day on a long term project. meditation.

Should we have a sidebar section "Friends of LessWrong" to link to sites with some overlap in goals/audience?

I would include TakeOnIt in such a list. Any other examples?

[-][anonymous]60

When I was young, I happened upon a book called "The New Way Things Work," by David Macaulay. It described hundreds of household objects, along with descriptions and illustrations of how they work. (Well, a nuclear power plant, and the atoms within it, aren't household objects. But I digress.) It was really interesting!

I remember seeing someone here mention that they had read a similar book as a kid, and it helped them immensely in seeing the world from a reductionist viewpoint. I was wondering if anyone else had anything to say on the matter.

5MrHen
I loved that book. I still have moments when I pull some random picture from that book out of my memory to describe how an object works. EDIT: Apparently the book is on Google.
2[anonymous]
Today there's How Stuff Works.
1Nick_Tarleton
I also loved that book. It probably helped teach me reductionism, but it's hard to tell given my generally terrible memory for my childhood. (FWIW, my best guess for my biggest reductionist influence would be learning assembly language and other low-level CS details.)
1Jack
I think we had this in the house, but I don't remember it very well, except some of the parts about pulleys and levers. This book would be a nice starting point for that rebuilding civilization manual idea from a while back.
1Morendil
My favorite Macaulay is "Motel of the Mysteries". I read it as a kid and it definitely had an influence. ;)
0Nisan
I have fond childhood memories of many hours tracing the circuit diagram of the adding circuit : ) God, I was so nerdy. I wanted to know how a computer worked and that book helped me avoid a mysterious answer to a mysterious question. Learning, in detail, how a specific logic circuit works really drove home how much I had yet to learn about the rest of the workings of a computer.
0h-H
I was going to get that for my younger brother when I next see him :)

I have two basic questions that I am confused about. This is probably a good place to ask them.

  1. What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.

  2. Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probabili

... (read more)
9MrHen
This is somewhat similar to the question I asked in Reacting to Inadequate Data. It was hit with a -3 rating though... so apparently it wasn't too useful. The consensus of the comments was that the correct answer is .5. Also of note is Bead Jar Guesses and its sequel.
7JGWeissman
If you truly have no clue, .5 yes and .5 no. Ah, but here you have some clues, which you should update on, and knowing how is much trickier. One clue is that the unknown game of Doldun could possibly have more than 2 teams competing, of which only 1 could win, and this should shift the probabilities in favor of "No". How much? Well, that depends on your probability distribution for an unknown game to have n competing teams. Of course, there may be other clues that should shift the probability towards "yes".
9Alicorn
But the game of Doldun could also have the possibility of cooperative wins. Or it could be unwinnable. Or Strigli might not be playing. Or Strigli might be the only team playing - it's the team against the environment! Or Doldun could be called on account of a rain of frogs. Or Strigli's left running foobar could break a chitinous armor plate and be replaced by a member of team Baz, which means that Baz gets half credit for a Strigli win.
2orthonormal
All of which means that you shouldn't be too confident in your probability distribution in such a foreign situation, but you still have to come up with a probability if it's relevant at all for action. Bad priors can hurt, but refusal to treat your uncertainty in a Bayes-like fashion hurts more (with high probability).
2Alicorn
Yes, but in this situation you have so little information that .5 doesn't seem remotely cautious enough. You might as well ask the members of Strigli as they land on Earth what their probability is that the Red Sox will win at a spelling bee next year - does it look obvious that they shouldn't say 50% in that case? .5 isn't the right prior - some eensy prior that any given possibly-made-up alien thing will happen, adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.
4orthonormal
Unless there's some reason that they'd suspect it's more likely for us to ask them a trick question whose answer is "No" than one whose answer is "Yes" (although it is probably easier to create trick questions whose answer is "No", and the Striglian could take that into account), 50% isn't a bad probability to assign if asked a completely foreign Yes-No question. Basically, I think that this and the other problems of this nature discussed on LW are instances of the same phenomenon: when the space of possibilities (for alien culture, Omega's decision algorithm, etc.) grows so large and so convoluted as to be utterly intractable for us, our posterior probabilities should be basically our ignorance priors all over again.
9Alicorn
It seems to me that even if you know that there is a Doldun game, played by exactly two teams, of which one is Strigli, which game exactly one team will entirely win, 50% is as high as you should go. If you don't have that much precise information, then 50% is an extremely generous upper bound for how likely you should consider a Strigli win. The space of all meaningful false propositions is hugely larger than the space of all meaningful true propositions. For every proposition that is true, you can also contradict it directly, and then present a long list of indirectly contradictory statements. For example: it is true that I am sitting on a blue couch. It is false that I am not on a blue couch - and also false that I am on a red couch, false that I am trapped in carbonite, false that I am beneath the Great Barrier Reef, false that I'm in the Sea of Tranquility, false that I'm equidistant between the Sun and the star Polaris, false that... Basically, most statements you can make about my location are false, and therefore the correct answer to most yes-or-no questions you could ask about my location is "no". Basically, your prior should be that everything is almost certainly false!
3cousin_it
The odds of a random sentence being true are low, but the odds of the alien choosing to give you a true sentence are higher.
0thomblake
A random alien?
0bogdanb
No, just a random alien that (1) I encountered and (2) asked me a question. The two conditions above restrict enormously the general class of “possible” random aliens. Every condition that restricts possibilities brings information, though I can't see a way of properly encoding this information as a prior about the answer to said question. [ETA:] Note that I don't necessarily accept cousin_it's assertion, I just state my interpretation of it.
0orthonormal
Well, let's say I ask you whether all "fnynznaqre"s are "nzcuvovna"s. Prior to using rot13 on this question (and hypothesizing that we hadn't had this particular conversation beforehand), would your prior really be as low as your previous comment implies? (Of course, it should probably still be under 50% for the reference class we're discussing, but not nearly that far under.)
1Alicorn
Given that you chose this question to ask, and that I know you are a human, then screening off this conversation I find myself hovering at around 25% that all "fnynznaqre"s are "nzcuvovna"s. We're talking about aliens. Come on, now that it's occurred to you, wouldn't you ask an E.T. if it thinks the Red Sox have a shot at the spelling bee?
0orthonormal
Yes, but I might as easily choose a question whose answer was "Yes" if I thought that a trick question might be too predictable of a strategy. 1/4 seems reasonable to me, given human psychology. If you expand the reference class to all alien species, though, I can't see why the likelihood of "Yes" should go down— that would generally require more information, not less, about what sort of questions the other is liable to ask.
2Alicorn
Okay, if you have some reason to believe that the question was chosen to have a specific answer, instead of being chosen directly from questionspace, then you can revise up. I didn't see a reason to think this was going on when the aliens were asking the question, though.
0orthonormal
Hmm. As you point out, questionspace is biased towards "No" when represented in human formalisms (if weighting by length, it's biased by nearly the length of the "not" symbol), and it would seem weird if it weren't so in an alien representation. Perhaps that's a reason to revise down and not up when taking information off the table. But it doesn't seem like it should be more than (say) a decibel's worth of evidence for "No". ETA: I think we each just acknowledged that the other has a point. On the Internet, no less!
2Alicorn
Isn't it awesome when that happens? :D
-1vinayak
I think one important thing to keep in mind when assigning prior probabilities to yes/no questions is that the probabilities you assign should at least satisfy the axioms of probability. For example, you should definitely not end up assigning equal probabilities to the following three events -

1. Strigli wins the game.
2. It rains immediately after the match is over.
3. Strigli wins the game AND it rains immediately after the match is over.

I am not sure if your scheme ensures that this does not happen.

Also, to me, Bayesianism sounds like an iterative way of forming consistent beliefs, where in each step you gather some evidence and update your probability estimates for the truth or falsity of various hypotheses accordingly. But I don't understand how exactly to start. Or in other words, consider the very first iteration of this whole process, where you do not have any evidence whatsoever. What probabilities do you assign to the truth or falsity of different hypotheses? One way I can imagine is to assign all of them a probability inversely proportional to their Kolmogorov complexities. The good thing about Kolmogorov complexity is that it satisfies the axioms of probability. But I have only seen it defined for strings and such. I don't know how to define Kolmogorov complexity of complicated things like hypotheses. Also, even if there is a way to define it, I can't completely convince myself that it gives a correct prior probability.
1bogdanb
I just wanted to note that it is actually possible to do that, provided that the questions are asked in order (not simultaneously). That is, I might logically think that the answer to (1) and (2) is true with 50% probability after I'm asked each question. Then, when I'm asked (3), I might logically deduce that (3) is true with 50% probability — however, this only means that after I'm asked (3), the very fact that I was asked (3) caused me to raise my confidence that (1) and (2) are true. It's a fine point that seems easy to miss. On a somewhat related point, I've looked at the entire discussion and it seems to me the original question is ill-posed, in the sense that the question, with high probability, doesn't mean what the asker thinks it means. Take For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli. The question is intended to prevent you from having any prior information about its subject. However, what it means is just that before you are asked the question, you don't have any information about it. (And I'm not even very sure about that.) But once you are asked the question, you received a huge amount of information: The very fact that you received that question is extremely improbable (in the class of “what could have happened instead”). Also note that it is vanishingly more improbable than, say, being asked by somebody on the street, say, if you think his son will get an A today. “Something extremely improbable happens” means “you just received information”; the more improbable it was the more information you received (though I think there are some logs in that relationship). So, the fact you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli brings a lot of information: space travel is possible within one's lifetime, aliens exist, aliens have that travel technology,
0orthonormal
Definitely agree on the first point (although, to be careful, the probabilities I assign to the three events could be epsilons apart if I were convinced of a bidirectional implication between 1 and 2). On the second part: Yep, you need to start with some prior probabilities, and if you don't have any already, the ignorance prior of 2^{-n} for each hypothesis that can be written (in some fixed binary language) as a program of length n is the way to go. (This is basically what you described, and carrying forward from that point is called Solomonoff induction.) In practice, it's not possible to estimate hypothesis complexity with much precision, but it doesn't take all that much precision to judge in cases like Thor vs. Maxwell's Equations; and anyway, as long as your priors aren't too ridiculously off, actually updating on evidence will correct them soon enough for most practical purposes. ETA: Good to keep in mind: When (Not) To Use Probabilities
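For reference, the ignorance prior being described is the usual Solomonoff-style weighting (textbook form, nothing specific to this thread):

$$P(H) \propto 2^{-\ell(H)},$$

where $\ell(H)$ is the length in bits of a program for H in the fixed language; as long as the programs form a prefix-free set, the weights sum to at most 1 by Kraft's inequality, so they can be normalized into a prior.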
0JGWeissman
But it is true that you are not on a red couch. Negation is a one-to-one map between true and false propositions.
3Alicorn
Since you can understand the alien's question except for the nouns, presumably you'd be able to tell if there was a "not" in there?
3JGWeissman
Yes, you have made a convincing argument, I think, that given that a proposition does not involve negation, as in the alien's question, that it is more likely to be false than true. (At least, if you have a prior for being presented with questions that penalize complexity. The sizes of the spaces of true and false propositions, however, are the same countable infinity.) (Sometimes I see claims in isolation, and so miss that a slightly modified claim is more correct and still supports the same larger claim.) ETA: We should also note the absence of any disjunctions. It is also true that Alicorn is sitting on a blue couch or a red couch. (Well, maybe not, some time has passed since she reported sitting on a blue couch. But that's not the point.) This effect may be screened off if, for example, you have a prior that the aliens first choose whether the answer should be yes or no, and then choose a question to match the answer.
1gwern
That the aliens chose to translate their word as the English 'game' says, I think, a lot.
4Alicorn
"Game" is one of the most notorious words in the language for the virtual impossibility of providing a unified definition absent counterexamples.
1Richard_Kennaway
"A game is a voluntary attempt to overcome unnecessary obstacles."
3JohannesDahlstrom
This is, perhaps, a necessary condition but not a sufficient one. It is true of almost all hobbies, but I wouldn't classify hobbies such as computer programming or learning to play the piano as games.
2Richard_Kennaway
I wouldn't class most hobbies as attempts to overcome unnecessary obstacles either -- certainly not playing a musical instrument, where the difficulties are all necessary ones. I might count bird-watching, of the sort where the twitcher's goal is to get as many "ticks" (sightings of different species) as possible, as falling within the definition, but for that very reason I'd regard it as being a game. One could argue that compulsory games at school are a counterexample to the "voluntary" part. On the other hand, Láadan has a word "rashida": "a non-game, a cruel "playing" that is a game only for the dominant "player" with the power to force others to participate [ra=non- + shida=game]". In the light of that concept, perhaps these are not really games for the children forced to participate. But whatever nits one can pick in Bernard Suits' definition, I still think it makes a pretty good counter to Wittgenstein's claims about the concept.
0JohannesDahlstrom
Oh, right. Reading "unnecessary" as "artificial", the definition is indeed as good as they come. My first interpretation was somewhat different and, in retrospect, not very coherent.
-1gwern
A family resemblance is still a resemblance.
0radical_negative_one
Could you include a source for this quote, please?
-3gwern
Googling it would've told you that it's from Wittgenstein's Philosophical Investigations.
1JGWeissman
Simply Googling it would not have signaled any disappointment radical_negative_one may have had that you did not include a citation (preferably with a relevant link) as is normal when making a quote like that.
-2gwern
/me bats the social signal into JGWeissman's court Omitting the citation, which wasn't really needed, sends the message that I don't wish to stand on Wittgenstein's authority but think the sentiment stands on its own.
2wedrifid
Then use your own words. Wittgenstein's are barely readable.
0[anonymous]
My words are barely readable? Did you mean Wittgenstein's words?
0[anonymous]
Pardon me, I meant Wittgenstein.
1RobinZ
If it doesn't stand on its own, you shouldn't quote it at all - the purpose of the citation is to allow interested parties to investigate the original source, not to help you convince.
1JGWeissman
Voted up, but I would say the purpose is to do both, to help convince and help further investigation, and more, such as to give credit to the source. Citations benefit the reader, the quoter, and the source. I definitely agree that willingness to forgo your own benefit as the quoter does not justify ignoring the benefits to the others involved.
0RobinZ
You're right, of course.
-5gwern
1SoullessAutomaton
Hm. For actual aliens I don't think even that's justified, without either knowing more about their psychology, or having some sort of equally problematic prior regarding the psychology of aliens.
6Alicorn
I was conditioning on the probability that the question is in fact meaningful to the aliens (more like "Will the Red Sox win the spelling bee?" than like "Does the present king of France's beard undertake differential diagnosis of the psychiatric maladies of silk orchids with the help of a burrowing hybrid car?"). If you assume they're just stringing words together, then there's not obviously a proposition you can even assign probability to.
2SoullessAutomaton
Hey, maybe they're Zen aliens who always greet strangers by asking meaningless questions. More sensibly, it seems to me roughly equally plausible that they might ask a meaningful question because the correct answer is negative, which would imply adjusting the prior downward; and unknown alien psychology makes me doubtful of making a sensible guess based on context.
1orthonormal
For #2, I don't see how you could ever be completely sure the other was rationalist or Bayesian, short of getting their source code; they could always have one irrational belief hiding somewhere far from all the questions you can think up. In practice, though, I think I could easily decide within 10 questions whether a given (honest) answerer is in the "aspiring rationalist" cluster and/or the "Bayesian" cluster, and get the vast majority of cases right. People cluster themselves pretty well on many questions.
1Jack
For two, can I just have an extended preface that describes a population, an infection rate for some disease, and a test with false positive and false negative rates, and see if the person gives me the right answer?
0[anonymous]
For number 1 you should weight "no" more highly. For the answer to be "yes" Strigli must be a team, a Doldun team, and it must win. Sure, maybe all teams win, but it is possible that all teams could lose, they could tie, or the game might be cancelled, so a "no" is significantly more likely to be right. 50% seems wrong to me.
0Kaj_Sotala
1: If you have no information to support either alternative more than the other, you should assign them both equal credence. So, fifty-fifty. Note that yes-no questions are the easiest possible case, as you have exactly two options. Things get much trickier once it's not obvious what things should be classified as the alternatives that should be considered equally plausible. Though I would say that in this situation, the most rational approach would be to tell the Sillpruk, "I'm sorry, I'm not from around here. Before I answer, does this planet have a custom of killing people who give the wrong answer to this question, or is there anything else I should be aware of before replying?"

2: This depends a lot on how we define a rationalist and a Bayesian. A question like "is the Bible literally true" could reveal a lot of irrational people, but I'm not certain of the number of questions that'd need to be asked before we could know for sure that they were irrational. (Well, since 1 and 0 aren't probabilities, the strict answer to this question is "it can't be done", but I'm assuming you mean "before we know with such certainty that in practice we can say it's for sure".)
2vinayak
Yes, I should be more specific about 2. So let's say the following are the first three questions you ask and their answers -

Q1. Do you think A is true? A. Yes.
Q2. Do you think A=>B is true? A. Yes.
Q3. Do you think B is true? A. No.

At this point, will you conclude that the person you are talking to is not rational? Or will you first want to ask him the following question?

Q4. Do you believe in Modus Ponens? Or, in other words: Q4. Do you think that if A and A=>B are both true, then B should also be true?

If you think you should ask this question before deciding whether the person is rational or not, then why stop here? You should continue and ask him the following question as well.

Q5. Do you think that if you believe in Modus Ponens, and if you also think that A and A=>B are true, then you should also believe that B is true as well?

And I can go on and on... So the point is, if you think asking all these questions is necessary to decide whether the person is rational or not, then in effect any given person can have any arbitrary set of beliefs and he can still claim to be rational, by adding a few extra beliefs to his belief system that say the n-th level of Modus Ponens is wrong, for some suitably chosen n.
1prase
I think that belief in modus ponens is a part of the definition of "rational", at least practically. So Q1 is enough. However, there are not many tortoises among the general public, so this type of question probably isn't very helpful.

LHC to shut down for a year to address safety concerns: http://news.bbc.co.uk/2/hi/science/nature/8556621.stm

6Kevin
Apparently this is shoddy journalism. http://news.ycombinator.com/item?id=1180487
0Jack
So do we count this as additional evidence that some anthropic selection is in effect even though it is causally connected to the earlier breakdown?
2Richard_Kennaway
I like this quote from the director: "With a machine like the LHC, you only build one and you only build it once."

I've just finished reading Predictably Irrational by Dan Ariely.

I think most LWers would enjoy it. If you've read the sequences, you probably won't learn that many new things (though I did learn a few), but it's a good way to refresh your memory (and it probably helps memorization to see those biases approached from a different angle).

It's a bit light compared to going straight to the studies, but it's also a quick read.

Good to give as a gift to friends.

2Hook
I'm waiting for the revised edition to come out in May.
6Hook
Looking at that Amazon link, has anyone considered automatically inserting an SIAI affiliate tag into Amazon links? It appeared to work quite well for StackOverflow.
0MichaelGR
Is there a description of the changes somewhere?
0Hook
I didn't see any, but it is close to 100 pages longer.
0MichaelGR
Original hardcover was 244 pages long, so 100 pages is a significant addition. Probably worth waiting for.

Game theorists discuss one-shot Prisoner's dilemma, why people who don't know Game Theory suggest the irrational strategy of cooperating, and how to make them intuitively see that defection is the right move.

1RobinZ
Interesting. Has this experiment actually been run, and does it change the percentages in the responses relative to the textbook version?
0Vladimir_Nesov
That would be scientific approach to Dark Arts.
0RobinZ
The linked post seemed to run far ahead of the presented evidence - and this is a kind of situation in which the scientific method is known to be quite powerful.
0Vladimir_Nesov
Sure. The dark arts don't stain the power of the scientific approach, though they probably defy its purpose.

Is there a way to view an all time top page for Less Wrong? I mean a page with all of the LW articles in descending order by points, or something similar.

2FAWS
The link named "top" in the top bar, below the banner? Starting with the 10 all time highest ranked articles and continuing with the 10 next highest when you click "next", and so on? Or do I misunderstand you and you mean something else?
1Kevin
Thanks, I was missing the drop down button on that page.
[-]h-H50

while not so proficient in math, I do scour arxiv on occasion, and am rewarded with gems like this, enjoy :)

"Lessons from failures to achieve what was possible in the twentieth century physics" by Vesselin Petkov http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.4218v1.pdf

3wnoise
I generally prefer links to papers on the arxiv go the abstract, as so: http://arxiv.org/abs/1001.4218 This lets us read the abstract, and easily get to other versions of the same paper (including the latest, if some time goes by between your posting and my reading), and get to other works by the same author. EDIT: overall, reasonable points, but some things "pinging" my crank-detectors. I suppose I'll have to track down reference 10 and the 4/3 claim for electro-magnetic mass.
3Mitchell_Porter
I disagree. I think it's a paper which looks backwards in an unconstructive way. The author is hoping for conceptual breakthroughs as good as relativity and quantum theory, but which don't require engagement with the technical complexities of string theory or the Standard Model. Those two constructions respectively define the true theoretical and empirical frontier, but instead the author wants to ignore all that, linger at about a 1930s conceptual level, and look for another way. ETA: As an example of not understanding contemporary developments, see his final section, where he says I don't know what significance this question has for the author, but so far as I know, the hydrogen atom has no dipole moment in its ground state because the wavefunction is spherically symmetric. This will still be true in string theory. The hydrogen atom exists on a scale where the strings can be approximated by point particles. I suspect the author is thinking that because strings are extended objects they have dipole moments; but it's not of a magnitude to be relevant at the atomic scale.
3wnoise
Of course he looks backwards. You can't analyze why any discovery didn't happen sooner, even though all the pieces were there, unless you look backwards. I thought the case study of SR was quite illuminating, though it goes directly counter to his attack on string theory. After getting the Lorentz transform, it took a surprisingly long time for anyone to treat the transformed quantities as equivalent -- that is, to take the math seriously. And for string theory, he says they take the math too seriously. Of course, the Lorentz transform was more clearly grounded in observed physical phenomena. I completely agree he doesn't understand contemporary developments, and that was some of what I referred to as "pinging my crank-detectors", along with the loose analogy between 4-d bending in "world tubes" and that in 3-d rods. I don't necessarily see that as a huge problem if he's not pretending to be able to offer us the next big revolution on a silver platter.
2Cyan
Wikipedia points to the original text of a 1905 article by Poincaré. How's your French?
2wnoise
Thanks. It's decent, actually, but there's still some barrier. Increasing that barrier is changes to physics notation since then (no vectors!). Fortunately my university library appears to have a copy of an older edition of Rohrlich's Classical Charged Particles, which may help piece things together.
2Cyan
Petkov wrote: It's worth noting that Feynman's statements are actually correct. According to Wikipedia, the problem is solved by postulating a non-electromagnetic attractive force holding the charged particle together, which subtracts 1/3 of the 4/3 factor, leaving unity. Petkov doesn't explicitly say that Feynman is wrong, but his phrasing might leave that impression.
2arundelo
Neat find! I haven't read all of it yet, but I found this striking: This reminds me of Mach's Principle: Anti-Epiphenomenal Physics:

I have a problem with the wording of "logical rudeness". Even after having seen it many times, I reflexively parse it to mean being rude by being logical-- almost the opposite of the actual meaning.

I don't know whether I'm the only person who has this problem, but I think it's worth checking.

"Anti-logical rudeness" strikes me as a good bit better.

2RobinZ
It's not anti-logical, it's rude logic. The point of Suber's paper is that at no point does the logically rude debater reason incorrectly from their premises, and yet we consider what they have done to be a violation of a code of etiquette.
2NancyLebovitz
When I was considering a better name for the problem, I couldn't find a word for the process of seeking truth, which is what's actually being derailed by logical rudeness. Unless I've missed something, the problem with logical rudeness isn't that there's no logical flaw in it. The fact that I've got 4 karma points suggests (but doesn't prove) that I'm not the only person who has a problem with the term "logical rudeness". I should have been clearer that "anti-logical rudeness" was just an attempt at an improvement, rather than a strong proposal for that particular change.
0RobinZ
I think you're complaining about the problem of people not updating on their evidence by using anti-epistemological techniques such as logical rudeness. I still don't see the need for changing the name, but I'll defer to the opinion of the crowd if need be.
0h-H
seconded, it's too benign for what it actually intends to convey.

Thermodynamics post on my blog. Not directly related to rationality, but you might find it interesting if you liked Engines of Cognition.

Summary: molar entropy is normally expressed as Joules per Kelvin per mole, but can also be expressed, more intuitively, as bits per molecule, which shows the relationship between a molecule's properties and how much information it contains. (Contains references to two books on the topic.)
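The unit conversion behind that is short enough to show here; the water figure below is the standard tabulated value of roughly 70 J K⁻¹ mol⁻¹ at 298 K.

$$S_{\text{bits/molecule}} = \frac{S_{\text{molar}}}{N_A k_B \ln 2} = \frac{S_{\text{molar}}}{R \ln 2} \approx \frac{70\ \mathrm{J\,K^{-1}\,mol^{-1}}}{(8.314\ \mathrm{J\,K^{-1}\,mol^{-1}})(0.693)} \approx 12\ \text{bits per molecule for liquid water.}$$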

I'm considering doing a post about "the lighthouse problem" from Data Analysis: a Bayesian Tutorial, by D. S. Sivia. This is example 3 in chapter 2, pp. 31-36. It boils down to finding the center and width of a Cauchy distribution (physicists may call it Lorentzian), given a set of samples.

I can present a reasonable Bayesian handling of it -- this is nearly mechanical, but I'd really like to see a competent Frequentist attack on it first, to get a good comparison going, untainted by seeing the Bayesian approach. Does anyone have suggestions for ways to structure the post?
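For anyone who wants to play along, here is a minimal sketch of the "nearly mechanical" Bayesian part in Python: a brute-force grid posterior over the Cauchy center and width with a flat prior. The data are simulated rather than taken from Sivia, and the grid ranges are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated flash positions along the shore: Cauchy with center 1.0 and width 2.0.
true_center, true_width = 1.0, 2.0
x = true_center + true_width * np.tan(np.pi * (rng.random(200) - 0.5))

# Grid over (center, width) with a flat prior.
centers = np.linspace(-5, 5, 401)
widths = np.linspace(0.1, 6, 240)
C, W = np.meshgrid(centers, widths, indexing="ij")

# Cauchy log-likelihood: sum_k log[ width / (pi * (width^2 + (x_k - center)^2)) ].
logL = np.zeros_like(C)
for xk in x:
    logL += np.log(W) - np.log(np.pi) - np.log(W**2 + (xk - C) ** 2)

post = np.exp(logL - logL.max())
post /= post.sum()

i, j = np.unravel_index(post.argmax(), post.shape)
print("MAP estimate:", centers[i], widths[j])  # should land near (1.0, 2.0)
```

From there, marginals and credible intervals come straight from summing post along each axis.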

0hugh
I don't have the book you're referring to. Are you essentially going to walk through a solution for this [pdf], or at least to talk about point #10? This is a Bayesian problem; the Frequentist answer is the same, just more convoluted because they have to say things like "in 95% of similar situations, the estimates of a and b are within d of the real position of the lighthouse". Alternately, a Frequentist, while always ignorant when starting a problem, never begins wrong. In this case, if the chosen prior was very unsuitable, the Frequentist more quickly converges to a correct answer.
0wnoise
Yes, that was the plan. I thought Frequentists would not be willing to cede such, but insist that any problem has a perfectly good Frequentist solution. I want to see not just the Frequentist solution, but the derivation of the solution.