If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

How can we spread rationality and good decision making to people who aren't inclined to it?

I recently chatted up a friendly doorman with whom I normally exchange brief pleasantries. He told me that he was from a particularly rough part of town and that he works three jobs to support his family. He also told me not to worry, because he has a new business that makes a lot of money, although he had to borrow gas money to get to work that day. He said that he was part of a "travel club". I immediately felt bad because I had a gut feeling he was talking about some multi-level marketing scheme. I asked him if it was, and he confirmed it was, but disagreed that it was a "scheme". He told me that he is trying to recruit his family, and that the business model encourages recruiting among family and friends. After 45 minutes of him selling it to me, I left him with a warning to be cautious, because these things can be fly-by-night operations, most people need to lose for a very few to win, and he can win only if he is among the very best promoters/sellers. I said that on purpose to gauge whether he is a true believer or a wolf in sheep's clothing, but hi... (read more)

The problem with these kinds of schemes is that they make people think they've found a clever way to make money, and they happily signal it. It's not just about the money (which they're usually not getting anyway). Convincing them that they're stupid instead of clever will be extremely difficult. You can say they're not stupid but irrational, yet most people won't know the difference, so good luck explaining that to them. The weirdest brand of rampant irrationality is working your ass off to buy a lot of expensive stuff you really don't need and then wondering why you're poor. I haven't had success in convincing these people that they don't need their stuff to be happy. Even some of my med school friends, whom you'd expect to be intelligent enough to notice the problem, buy diamond-coated gold watches and BMWs, live in expensive houses, and then complain about how much they have to work. Perhaps the complaining is a facade, though, and they actually know what they're doing. Some people need to signal that they're richer than they actually are.
Or to signal that they enjoy work less than they do.
It's weird how this didn't even cross my mind. I think being a workaholic in these circles is more often admired than not.
The following advice is anecdotal and is a very clear example of "other-optimizing", so don't take it with a grain of salt; take it with at least a tablespoon. I've found that engaging people about their rationality habits frequently needs to be done in a manner significantly more confrontational than what is considered polite conversation. Being told that how you think is flawed at a fundamental level is very difficult to deal with, and people will be inclined not to deal with it. So you need to talk to people about the real-world consequences of their biases, and very specifically describe how acting in a less biased manner will improve their life and the lives of those around them. Anecdotally, I've found this to be true in convincing people to donate money to the AMF. My friends will happily agree that they should do so, but unless prodded repeatedly and pointedly they will not actually take the next step of donating. I accept that my friends are not a good sample to generalize from (my social circle tends to include those who are already slightly more rational than the average bear to begin with). So if you want to convince someone to be more rational, bug them about it. Once a week for two months. Specificity is key here: talk about real-life examples where their biases are causing problems. The more concrete the better, since it allows them to have a clear picture of what improvement will look like.
Let me make just a small change... I've found that engaging people about their belief in Jesus is frequently something that needs to be done in a manner which is significantly more confrontational than what is considered polite conversation. Being told that how you live is flawed at a fundamental level is very difficult to deal with, and people will be inclined to not deal with it. ... So if you want to convince someone to love Jesus, bug them about it. Do you have any reason to believe that people will react to the first better than to the second?
While there are many people who are annoyed by Christian Evangelicals, I feel that it is difficult to argue against their effectiveness. They exist because they are willing to talk to people again and again about their beliefs until those people convert. Do you have any reason to believe that Christian Evangelicals are ineffective at persuading people? Keep in mind that a 5% conversion rate is doing a pretty damn good job when it comes to changing people's minds.
Yes. Their mind share in the US is not increasing.
False, according to both the source you cited and http://www.gallup.com/poll/16519/us-evangelicals-how-many-walk-walk.aspx
False, really? So looking at the data in these two links you think you see a statistically significant trend? Don't forget that your (second) link is concerned with proxies for being an Evangelical...
The margin of sampling error is +-3%, while the difference between the 1980 percentage and the 2005 percentage is 5%. I do think that a trend which has a p-value less than .05 is statistically significant.
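For what it's worth, that claim can be checked with a standard two-proportion z-test. The sketch below assumes a hypothetical n = 1000 respondents per survey wave (roughly what a +/-3% margin of error implies); the actual Gallup sample sizes aren't given here.

```python
from math import erf, sqrt

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between two independent proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

def two_sided_p(z):
    """Two-sided p-value for a standard-normal z, via the error function."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 47% -> 52%, with an assumed n = 1000 per wave:
z = two_proportion_z(0.47, 1000, 0.52, 1000)
print(f"z = {z:.2f}, p = {two_sided_p(z):.3f}")
```

Under that assumed sample size the 5-point change does come out significant at the .05 level, though note that "the two confidence intervals overlap" and "the difference is significant" are not the same test.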
I am not sure which data you are looking at. My link shows the percentage of people who self-identify as Evangelicals. The data starts in 1991 and ends in 2005. The first values (1991-1993) are 41%, 42%, 46%, 44%, 43%, and the last values (2004-2005) are 42%, 39%, 42%, 47%, 40%. I see no trend. Your link shows the percentage of people who answer three proxy questions. The data starts in 1976 and ends in 2005. Over that time period one question goes up (47% to 52%), one goes down (38% to 32%) and the third goes up as well (35% to 48%). Do note that the survey says "When looking at the percentage of Americans who say yes to all three of these questions, slightly more than one in five (22%) American adults could be considered evangelical" and that's about *half* of the number of people who self-identify as such. Given all this, I see no evidence that the mind share of the Evangelicals in the US is increasing.
The proxy I am specifically looking at for evangelical Christianity is people who claim to have spread the "good news" about Jesus to someone. In other words, asking people whether they themselves have evangelized (the data on this is the fairly clear 47% to 52% upward trend). To me, it makes a lot of sense to call someone an Evangelical Christian if they have in fact evangelized for Christianity. And if we disagree on that definition, then there is really nothing more I can say.
The Pope would be surprised to hear that, I think. All Christians of all denominations are supposed to spread the Good Word. Christianity is an actively proselytizing religion and has always been one. The Roman Catholic Church, in particular, has been quite active on that front. As have been Mormons, Adventists, Jehovah's Witnesses, etc. etc.
Then let me re-specify what I should have stated originally: Christians who evangelize for Christianity are effective at persuading others to join the cause. I am concerned with how bugging people about a cause (i.e., evangelizing for it) will affect the number of people in that cause. The numbers shown suggest that if we consider evangelizing Christians to be a group, then they are growing, which supports my hypothesis.
If it works regardless of what it is you're telling people to do, that makes it dark arts.
Oh, I'm well aware that this technique could be used to spread irrational and harmful memes. But if you're trying to persuade someone to rationality using techniques of argument which presume rationality, it's unlikely that you'll succeed. So you may have to get your rationalist hands dirty. Your call on what's the better outcome: successfully convincing someone to be more rational (but having their agency violated through irrational persuasion) or leaving that person in the dark. It's a nontrivial moral dilemma which should only be considered once rational persuasion has failed.
It's not clear to me that donating to AMF is a reliable sign of their increased rationality. How do you know you're not simply guilt tripping them?
Apologies, I should have been clearer in using donations to the AMF as an analogy to persuading people to be more rational and not a direct way to persuade people to be more rational. I don't claim that these people are more rational simply because they donate to the AMF. If we are really trying to persuade people, however, guilt tripping should be considered as an option. Logical arguments will only change the behavior of a very small segment of society while even self-professed rationalists can be persuaded with good emotional appeals.

Would there be interest in me writing a post, or a series of posts, summarizing Richard Feldman's Epistemology textbook? Feldman's textbook is widely used in philosophy classes, and contains some surprisingly reasonable views (given what you may have heard about mainstream philosophy).

I'm partly considering it because it might be a useful way to counteract some common myths about what all philosophers supposedly know about evidence, the problem of induction, and so on. But I seem to have given away my copy, and a replacement would be $40 for a volume that's under 200 pages. So I want to gauge interest first.

I would read it. I'm interested in there being more careful checking of LW-ideas against relevant mainstreams.
Another valuable service, if you (ChrisHallquist) decide to write the proposed article, is to provide a glossary translating between LW idiom and conventional terminology.
Honestly that might be difficult, the mapping would be far from perfect. That said, I might be able to do something. Any terminology in particular you care about? Would it be better to focus on LW terms --> conventional terminology, or vice versa, or both?

Question about EA and CFAR. I think I've heard some people express sentiments that CFAR might be a good place for EAs to donate, due to the whole "raising the sanity waterline" thing.

On its face, this seems silly to me. From the outside view, CFAR just looks like a small self-help organization, though probably better than most such organizations, and it seems unlikely that it'll affect any significant portion of the population.

I think CFAR is great; I went to minicamp, and I think it probably improved my life, although I suspect I'm not as enthusiastic about it as most people who went. But if I were to give CFAR any money, it would be because it helps me and people I know, not because I think it's actually likely to have a large impact on the world.

Are there people around here who believe CFAR is actually likely to have a large impact on the world? Could you explain your reasoning why?

Ben Pace
CFAR is working to discover systematic training methods for increasing rationality in humans. If they discover said methods, and make them publicly available, that could massively increase the sanity waterline on a global scale. This will require much work, but I think that it's really important work.
The difference between CFAR and most self-help organisations is that CFAR is committed to publishing research about its interventions. Published research into effective change work is important.
Scott Garrabrant
Also this: http://rationality.org/2013/09/27/surprise-as-a-cue-to-probability-experiment-1/
Eliezer said so on Facebook.
I think the self-help aspects are intended as a first step, preliminary to ideally getting CFAR concepts into schools and colleges. CFAR is still very new and small, and hasn't done a lot in that direction apart from reaching out to a few smart kids, but I believe that is the goal.
Luke has suggested that part of what CFAR does is the movement-building work that MIRI used to do. I'm not quite sure how to interpret this suggestion, but maybe the idea is that CFAR is set up in such a way that spreading "worrying about x-risk" memes ends up being an important side-effect of what they do. This is something I will probably start a thread on next time I have $ I want to donate to charity.
What does the size of the organization matter? Roughly speaking, the value of sending a person to CFAR is the same regardless of whether a hundred people go or a million. If you are paying for a scholarship today, the benefit is largely about the effect on that person, regardless of future students. What is the alternative charity? If you spend to save a life, that's just one person, too. Here are two reasons why scale could matter. One is room for funding. If you think CFAR will never get big, then it will never consume that much money, so it wouldn't have room for a lot of funding. But the important question is whether it has room for your funding, and eventual size doesn't tell us much about that. Another reason is gains from scale. The value of sending the millionth person may be the same as the value of the hundredth, but the cost may be much smaller, because curriculum development is amortized across students. If the next bit of funding is going to pay for curriculum development, two people could agree about the value of the curriculum for the average student but still disagree about the total value, because they disagree about how many students it will reach.

Longitudinal study of men and happiness

“At a time when many people around the world are living into their tenth decade, the longest longitudinal study of human development ever undertaken offers some welcome news for the new old age: our lives continue to evolve in our later years, and often become more fulfilling than before. Begun in 1938, the Grant Study of Adult Development charted the physical and emotional health of over 200 men, starting with their undergraduate days. The now-classic ‘Adaptation to Life’ reported on the men’s lives up to age 55 and helped us understand adult maturation. Now George Vaillant follows the men into their nineties, documenting for the first time what it is like to flourish far beyond conventional retirement. Reporting on all aspects of male life, including relationships, politics and religion, coping strategies, and alcohol use (its abuse being by far the greatest disruptor of health and happiness for the study’s subjects), ‘Triumphs of Experience’ shares a number of surprising findings. For example, the people who do well in old age did not necessarily do so well in midlife, and vice versa. While the study confirms that recovery from a lo

... (read more)
Sample bias warning: people who went to college in the 1930s constitute a highly atypical subset of humanity.
I don't see why this would be more biased than people who went to college in the 1990s (other than the fact that the latter make up a larger proportion of the current population). Edit: I misunderstood your comment. I thought you made a point about the 1930s in general, rather than going to college in the 1930s. I now agree.
That does change things... Post-1930 saw an incredible expansion of college going, democratizing to a large fraction of the population. The enrolled population is going to change since it was very far from a random sample in the first place.

Last night I found myself thinking, "Well, suppose there's no Singularity coming any time soon. The FAI project will still have gotten a bunch of nerds working together on a project aimed at the benefit of all humanity — including formalizing a lot of ethics — who might otherwise have been working on weapons, wireheading, or something else awful. That's gotta be a good thing, right?"

Then I realized this sounds like rationalization.

Which got me to thinking about what my concerns are about this stuff.

My biggest AI risk worries right now are more immediate than paperclip optimizers. They're wealth optimizers, profit optimizers; probably extrapolations of current HFT systems. The goal of such a system isn't even to make its owners happy — just to make them rich — and it certainly doesn't care about anyone else. It may not even have beliefs about humans, just about flows of capital and information.

Even assuming that such systems believe that crashing the economy would be bad for their owners, I expect that for the vast majority of living and potential humans, world dominance by such systems would constitute a Bad Ending.

It does not seem to me that it would require self-modifying emergent AI to bring about such a Bad Ending, nor any exotic technologies such as computronium; just the continuation of current trends.

Current HFT systems have little to do with AI. They are basically statistical models of a very narrow slice of reality (specifically the dynamics of the market microstructure) that can forecast these dynamics to some extent.
I contend that those that exist are already a problem.
How? Because they took some money off other speculators? Because some of them went bankrupt?
Most likely because there have been some alarming failures of automated traders, such as the 2010 "Flash Crash" or the April flash crash caused by a Twitter hoax. From a layman's perspective, it seems like all the regular problems of speculation with the added benefit of trades taking place faster than any human regulator could react. So far there hasn't been any serious damage, but it's not clear to me whether that's a point in the traders' favor or just blind luck. Of course, this isn't a Friendliness issue so much as a competence one, and I'm fairly sure there isn't much of an existential risk involved in these programs undergoing an intelligence explosion. So it might not be what the other posters here were thinking of.
Speculators are good for a market - they smooth out price fluctuations and give fundamentals traders better prices. And when they screw up the effect is usually to give money to other people, as with the flash crash. So I don't see the problem.
You'll have fun reading Accelerando. The solar system gets gentrified by what are essentially HFTs on steroids, driving up rents on the prime real estate that is closest to the sun and thus most energy-dense.
Accepting for the sake of comity that the endpoint of those trends is indeed an Ending, are there historical events that you would similarly class as an Ending, or would this Ending be in a class by itself?
One could argue that China's inward-turning, burn-the-boats collapse around 1500 was a result of similar wealth concentration? Though I don't know the history in any detail.
I'd compare it to some of the hypothetical sociopolitical risk scenarios in Bostrom's "Existential Risks". Bostrom specifically mentions a "misguided world government" (driven by "a fundamentalist religious or ecological movement") and a "repressive totalitarian global regime" (driven by "mistaken religious or ethical convictions"), but doesn't mention scenarios driven by business or financial forces.
I'm sorry... this appears to be my evening for just not being able to communicate questions clearly. What I meant by "historical events" is events that actually have occurred in our real history, as distinct from counterfactuals.
Oh. Well, no.

An interesting paper: http://www.econ.ucsb.edu/papers/wp01-12.pdf

tl;dr -- Abstract (emphasis mine)

"We document a lower bound for the control premium: agents’ willingness to pay to control their own payoff. Participants choose between an asset that will pay only if they later answer a particular quiz question correctly and one that pays only if their partner answers a different question correctly. However, they first estimate the likelihood that each asset will pay off. Participants are 20% more likely to choose to control their payoff than a group of payoff-maximizers with accurate beliefs. While some of this deviation is explained by overconfidence, 34% of it can only be explained by the control premium. The average participant expresses a control premium equivalent to 8% to 15% of the expected asset-earnings. Our results show that even agents with accurate beliefs may incur costs to avoid delegating and suggest that to correctly infer beliefs from choices, one should account for the control premium."

How to make it easier to receive constructive criticism?

Typically finding out about the flaws in something that we did feels bad because we realize that our work was worse than we thought, so receiving the criticism feels like ending up in a worse state than we were in before. One way to avoid this feeling would be to reflect on the fact that the work was already flawed before we found out about it, so the criticism was a net improvement, allowing us to fix the flaws and create a better work.

But thinking about this once we've already received the criticism rarely helps that much, at least in my experience. It's better to consciously remind yourself, before receiving the criticism, that your work is always going to have room for improvement, and that it is certain to have plenty of flaws you're ignorant of. That way, your starting mental state will be "damn, this has all of these flaws that I'm ignorant about", and ending up in the post-criticism state, where some of the flaws have been pointed out, will feel like a net improvement.

Another approach would be to take the criticism as evidence of the fact that you're working in a field where success is actually worth bein... (read more)

I think most of the difficulty with receiving criticism is in knowing with certainty that the intention behind it is constructive. If I'm sure I actually made a serious and relevant mistake, it's much easier to receive criticism.

Would it be fair to rephrase your question as "How can we make receiving constructive criticism feel good?" If so, then I endorse the first technique you mentioned. (My mantra for this is "bad news is good news," which reminds me that now I can do something about the problem.) I intend to try the second technique. I have a third tactic, which is to use my brain's virtue ethics module. I've convinced myself that good people appreciate receiving constructive criticism, so when it happens, I have an opportunity to demonstrate what a good and serious person I am. (This probably wouldn't work if I didn't surround myself with people who also think this is virtuous and who do, in fact, award me social points for being open to critique.) Admonymous has some good advice on giving and receiving criticism. Also, use Admonymous. Mine is here.
I have a strong intuition that making it feel good, or even just less bad, might take away some of its usefulness and make it less memorable. Actually, if it felt good instead of just less bad, wouldn't that incentivize you to make more mistakes? There are individual differences in sensitivity to criticism, so your advice should be mainly aimed at people who are oversensitive in this regard.
If I feel bad about a piece of criticism, I automatically become defensive and incapable of learning from it (until I can distance myself from the bad feeling and thus become less defensive). I doubt making mistakes on purpose would realistically be a problem, at least for me. Even if it did feel good, having done a great work and knowing that I'd done my best would still be even better.
I have this problem too, but the timespan is pretty short. I think receiving criticism in person has an even bigger problem: the critic senses I get hurt and tones it down too much. When directly asking for criticism I'm tempted to declare, "I will look butthurt at first, but keep going; later I'll be thankful for learning so much more." The best teachers I've had gave criticism regardless of my feelings. There's an important difference between making intentional mistakes and becoming careless. By incentivization of mistakes I meant the latter.
Ah, that does sound more plausible. If I'm in an environment where I can trust others to catch my mistakes, and I don't feel bad about those mistakes being pointed out, then I could definitely see myself getting sloppier and relying on others to catch the mistakes instead of looking for them myself. In fact, I'm pretty sure that I have done that on a few occasions... On the other hand, this might also make for a useful cure for perfectionism. It's not obvious that trying to catch every mistake yourself would be the optimal division of labor, assuming that you really are in an environment where you can trust others to correct some of the mistakes. Of course, it could be a problem if you develop lazy habits and carry them over to an environment without that external assistance.
I agree that we could probably rely more on others to catch our mistakes in certain contexts where equal expertise can be assumed. The problem is, if you're writing an article or a book for example, you're usually the expert compared to your readership, so you can't really expect others to reliably correct your mistakes, and some of your mistakes get cluelessly adopted.
My usual version of this is "I don't like receiving criticism, and I don't promise to take it well, though I promise to make my best efforts to do so and I usually succeed. That said, still less do I like having earned criticism withheld from me, so my preference is to receive criticism where I've earned it. If you remind me of this, I will do my best to be grateful."
Well, one way to subvert this would be to also arrange to get praise for my successes, and make the praise-for-success noticeably more rewarding than the criticism-for-failure. But if for some reason that's not possible, then sure. Are you deliberately implying a normative statement about how sensitive a person ought to be to criticism here, or is it accidental?
True. Note that failing is massively easier than succeeding. You don't really have to plan for it. Perhaps the problem doesn't arise if you feel worse for making the mistake than you feel good about receiving criticism for it. However, I strongly suspect we mostly feel bad about our mistakes precisely because of the social context. I'm pretty sure I wouldn't want to feel good about my mistakes. The normativity of such a statement depends on the values of the person in question. If those values are a known factor, I do believe there is an optimal range of sensitivity one should try to gauge.
Ah. So "people who are oversensitive," here, means people who are more sensitive to criticism than is optimal according to their own values? Fair enough... thanks for clarifying that.
Exactly. Admittedly there are a lot of people I would like to have different sensitivities to criticism than they have or even want to have, like psychopaths for example. Even that of course doesn't imply any universal normativity.
I don't think that the fact that you receive criticism means anything. A smart person can find criticism for anything. The goal isn't to create work that isn't criticised, but work that achieves a purpose. Maybe work that sells. Maybe work that influences people. Not work that isn't criticised. If you get criticism, ask yourself whether that criticism is relevant to the goals you want to achieve.

Some IRC discussion reminded me that LWers might enjoy a SF short story I wrote some time ago: "Men of Iron".

Orthodox statistics beware: Bayesian radicals spotted:

A group of international Bayesians was arrested today in the Rotterdam harbor. According to Dutch customs, they were attempting to smuggle over 1.5 million priors into the country, hidden between electronic equipment. The arrest represents the largest capture of priors in history.

“This is our biggest catch yet. Uniform priors, Gaussian priors, Dirichlet priors, even informative priors, it’s all here,” says customs officer Benjamin Roosken, responsible for the arrest. (…)

Sources suggest that the shipm

... (read more)

This seems like a big deal:


Basically, dude illustrates equivalence between p-values and Bayes factors and concludes that 17-25% of studies with a p-value acceptance threshold of 0.05 will be wrong. This implies that the lack of reproducibility in science isn't necessarily due to egregious misconduct, etc., but rather insufficiently strict statistical standards.

So is this new/interesting, or do I just naively think so because it's not my field?

Not a big deal. The estimate you're impressed by can be done from power and prior odds, as in Ioannidis's famous paper, and it's similar to Leek's estimates from p-value distributions. And the recommendations baffle me: tighten alpha?! P-value hacking is part of how we got here in the first place!
Is there a lower hanging fruit you have in mind?
I don't know any easy solutions to the low replication rate of many areas right now. It seems to be fundamentally a systematic problem of incentives. Even the easiest and most basic remedies like clinical trial registries are not being enforced, so it's hopeless to expect reforms like making all studies well-powered. I do think that increasing alpha is unlikely to fix the problems and is likely to backfire by making things worse and rewarding cheaters & punishing honest researchers: the smaller the p-value required, the more you reward people who can run hundreds of analyses to get a p-value under the threshold and the more you punish honest researchers who did one analysis and stuck with it.
That's not what the dude concludes. To quote the article itself (emphasis mine), "Although it is difficult to assess the proportion of all tested null hypotheses that are actually true, if one assumes that this proportion is approximately one-half, then these results suggest that between 17% and 25% of marginally significant scientific findings are false."
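For readers who want the arithmetic behind figures like these: the standard Ioannidis-style calculation derives the expected false-discovery proportion from alpha, power, and the prior fraction of true nulls. This sketch is illustrative only; the power values are my own assumptions, and the article's 17-25% figure additionally conditions on findings being marginally significant via a Bayes-factor bound.

```python
def false_discovery_proportion(alpha, power, prior_true_null):
    """Expected fraction of p < alpha 'discoveries' that are false positives."""
    false_pos = alpha * prior_true_null       # true nulls that reach significance
    true_pos = power * (1 - prior_true_null)  # real effects that reach significance
    return false_pos / (false_pos + true_pos)

# Half of tested nulls true (the article's assumption), with assumed power:
print(false_discovery_proportion(0.05, power=0.8, prior_true_null=0.5))  # ~0.06
print(false_discovery_proportion(0.05, power=0.2, prior_true_null=0.5))  # ~0.20
```

With well-powered studies the naive rate is only about 6%; it climbs into the article's 17-25% range once studies are underpowered, or once you restrict attention to results just barely under the threshold.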

I'm planning to do a series of posts in which I systematically read the Sequences and comment on them. Has anyone done this before?

sequences rerun
Lukeprog did something like this on his blog.

Trying to find a link I saw about CFAR publishing some preliminary research on rationality techniques, including a finding that a technique they expected to work didn't actually work. Does anyone know what I'm talking about? My Google-fu is failing me, to the point that I'm wondering if I imagined it.

Surprise as a cue to probability: Experiment #1?
Ah, thanks, that was it.

There is an atheist argument, "Religious people are only religious because they want to control other people or are controlled by them. Religion is a system of authoritarian control."

There is a religious argument, "Atheists are only atheists because they want to rebel against God. Atheism is an act of rebellion."

Are these extensionally equivalent?

Are there other common arguments from opposed viewpoints that pair up like this?

The "control" argument predicts more specific things than the "rebellion" argument, and so is a more useful hypothesis. But then again, it's not the whole story at all (desire for community, actual belief, glaring cognitive biases), and once you start inserting caveats the testability goes way down. So I'd say neither argument is worth making.
Actually a rebellion argument also predicts something. It would predict that atheists also rebel against other social norms.
Because the competing hypothesis ("atheists are willing to state a true thing even when most of society disagrees") also predicts some degree of general rebelliousness, I think the prediction is more about pointless and self-destructive behaviors. And if atheists are just allowed to be tricked by the devil, then I don't know how that pans out into other behaviors.
I don't think it's accurate to describe them as an opposite pair; rather, they both share the same premise (people consider control important/motivating) and derive different conclusions. You could generate an arbitrarily large number of other predictions from that premise, e.g. Greens support green policies because they want to control people, Blues support blue policies because they don't like the idea of being controlled.
I think a lot of disagreement about religion isn't really about the metaphysical claims, but rather about whether pastors/priests should have more or less influence on people compared to teachers, scientists, writers ... so the people who tend to agree with the pastors' values worry about those values getting lost, and so fret about atheism and rebellion. Seen like that, the disagreement is about authoritarian control vs. rebellion, and the "does god exist" thing is just tribal flag-waving.

A scenario which occurred to me and I found strange at first glance: Consider a fair coin, and two people -- Alice who is 99.9% sure the coin is fair and who can update on evidence like a fine Bayesian, and Bob who says he's perfectly sure the coin is biased to show heads and does not update on the evidence at all.

Nonetheless the perfectly correct Alice (who effectively needs to choose randomly and might as well always say 'heads') and the perfectly incorrect Bob (who always says 'heads' because he's always certain that'll be the correct answer) have the same... (read more)

Doesn't seem very strange to me. For any (realistic) situation, there are any number of irrelevant false beliefs that you could have while still managing to predict the result correctly. Or even relevant false beliefs that nonetheless produced the right prediction: e.g. a tribe that believed in spirits might believe that sexual intercourse attracted a disembodied spirit into a woman's body and caused it to grow a new body for itself, which would be false but still lead to the correct prediction of (intercourse -> pregnancy).

The case of a fair coin seems particularly bad for Alice, being as it were maximally entropic.
The difference between them becomes apparent once they start betting on other things, like the number of tails in a series of 10 coinflips. The question is: what is special about betting on heads vs. tails of a fair coin that doesn't allow Alice to do any better than Bob?
A fair coin is maximally entropic. There is no skill that will let you do anything with sheer chaos.
I think it is better to say that the bet on offer is fair. It is not a property of just the coin, but also of the bet. We do not notice that there is a choice of bet because it is even odds (which corresponds to max ent), but for any weighted coin there is a corresponding fair bet. Fair bets do have lots of special properties, but we would have the same situation if a correct choice of tails paid 1 and a correct choice of heads paid 2: Alice and Bob would both always bet H. (Except in the 1/1000 chance that we start with 10 Ts and Alice updates wrongly; but the asymptotics are the same.)
I think you're assuming that Alice has to pick H or T randomly and then ask the third party if it's correct. But she doesn't have to do that. She can just ask the third party whether it's H, each time. Over time it will be confirmed that the coin is fair.
Yes, but my point was that her knowledge that the coin is fair doesn't help her improve her guesswork on the next toss over Bob, and someone judging her on the basis of her toss-by-toss successes wouldn't be able to ascertain that she has more accurate beliefs than Bob...
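The symmetry being discussed can be checked with a quick simulation (a sketch, assuming both players simply call each toss; the flip count and seed are arbitrary):

```python
import random

random.seed(0)
N = 100_000
flips = [random.choice("HT") for _ in range(N)]  # a fair coin

# Bob is certain the coin is biased toward heads, so he always calls heads.
bob_rate = sum(f == "H" for f in flips) / N

# Alice knows the coin is fair; no strategy beats chance on a fair coin,
# so she guesses randomly (always calling heads would do equally well).
alice_rate = sum(f == random.choice("HT") for f in flips) / N

print(bob_rate, alice_rate)  # both come out close to 0.5
```

Scoring toss-by-toss success alone, the two are indistinguishable; only a bet with different odds, or a compound bet (like the number of tails in ten flips), separates them.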

Even less important than it was last week, but if anyone wants to come out and tell me why they went on a mass-downvoting spree on my comments again, please feel free to do so.

What's your probability estimate on that happening?
A whole 5-10%.

How did you get this number? Is it lower or higher than Laplace's rule of succession would suggest? Have you ever seen such a comment work?

I have once seen such a comment produce an admission, but I don't think it was very productive. In fact, I think the two people disagreed on what happened.

added: maybe you are distinguishing your explicitly asking the person from everyone else's complaining to the general public, with your method a priori better, but untested. From my observation of it working once out of fifty times, Laplace tells me 4%, compatible with your higher 5-10%.
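The arithmetic behind that 4% figure is Laplace's rule of succession: after s successes in n trials, estimate the probability of success as (s + 1) / (n + 2). With the one-success-in-fifty observation from the comment above:

```python
def rule_of_succession(successes, trials):
    """Laplace's rule of succession: estimate p as (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# One such comment observed to work out of roughly fifty:
print(rule_of_succession(1, 50))  # 2/52 ≈ 0.0385, i.e. about 4%
```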

I also PMed some people which I thought would increase the chance of them coming out by maybe 1-2%.

Not sure what to label or call this thinking error I had recently, but it seems as if it might come up more often, although I cannot come up with another example right now.

I know someone who signed up for a marathon and I was to be part of the coordination of getting her to the official transportation area. After signing up online she was able to choose from a set of departure times for the bus taking runners to the start line. Her wave is set to start at 10:15 a.m. She wanted to select the latest possible departure to avoid idle time standing around tryin... (read more)

Well, you applied a heuristic (assuming that everyone else had the same experience you did), and the heuristic turned out to be wrong in this case. One of the following must be true:

* This heuristic isn't a good heuristic to follow in general.
* This heuristic is good to follow in general, but there's something that should have indicated it would be wrong here.
* This heuristic is good to follow in general, and there isn't something that should have indicated it would be wrong here.

I'd start by figuring out which one it is.
Why wouldn't they want to cram everyone into their last slot? It's the same number of transported people per time, only the starting and stopping of the transportations changes, and everyone has to wait 2 hours in the cold or be relatively rewarded for being late.
Presumably because there is a maximum number of people that can be transported per given slot. 10k+ people per wave could not possibly fit into three departure times. And as the slots filled up, your choices would be further restricted to available but undesirable earlier departures.

I'm starting practice drills for stenographic typing. The software (Plover) and the theory/typing drills (I'm using http://qwertysteno.com/Home/) are available for free, and the hardware is cheap (and I already have it).

What I'm really curious about, though, is the value I can get out of roughly doubling my typing speed from 80 WPM to 160. There's the time saved, but that's offset by the time spent learning steno. Really the big benefit is time-shifting the work of typing out English words, from "in the middle of having a thought" to the stenot... (read more)


Personally, I estimate the value of learning to type faster at approximately zero, because I can type faster (about 70 WPM) than I can decide what I want to type. How much time do you spend wishing you were able to type faster, because your fingers aren't keeping up with your brain?

It's less a question of average composition (deciding what to write) speed, and more a question of how much I'm keeping in memory. With a slower typing speed, I have to keep more in memory about how I want to finish the thoughts I'm having, and have more difficulty and frustration involved in the process. In other words, composition isn't a marathon, but a series of sprints. Each sprint is a race to get the thoughts you have out of short-term memory and into storage. You'd probably find your composition speed increase with your typing speed, as you can focus on the next thing to write rather than remembering what you have decided to write. And I just thought of another way to estimate it - think of the difference in your willingness and ability to write things on a phone or tablet (40 wpm) versus a keyboard (80 wpm) and extrapolate.
You can already talk at that speed or faster. Why not invest in a speech recognition program? Even if you think speech recognition isn't up to par yet, it will be in a few years. You could at least test if you get any benefit from increasing your speed by speech recognition, before you invest time in learning stenotyping. I'm a doctor, so I dictate a lot. The main advantage is quickly recording information I already have. I don't think there's much speed gain when you're recording and coming up with stuff at the same time.
I'm an extremely visual thinker, and have a strong preference for communicating by typing. Very, very visual - to the point where I notice myself having difficulty expressing myself verbally. I do much better without the pressure to keep a verbal continuity going, and allowing myself to backtrack and edit as I go without mucking with how the communication turns out in the end.
I'm extremely visual too. Learning to dictate effectively was a weird experience and was significantly slower at first than typing (80 WPM). A five-minute dictation could take half an hour the first few times. It took a few dozen dictations before I got the gist of it. I bet it was still easier to learn than stenotyping. These days I roughly visualize the text in my head while I'm dictating. Corrections can't be as easily made on the fly because the text is produced afterwards by a human and not in real time by a computer. If you're using a dictation program, you can quickly edit the text on the fly and combine typing and dictation, so the problems you're imagining might be more surmountable than you think. Of course, there are other downsides to dictation like nonprivacy and straining your voice, but being able to move freely is a nice upside. Would you like to be able to express yourself better verbally? You could see this as a chance to learn.
I use a text expander, a little program called PhraseExpress (basic version is free for non-commercial use). It lets you type a few characters and expands them into a long word or phrase, or corrects typos (like Word's autocorrect, except everywhere - and it can import Word's autocorrect list). It's also very handy for typing special characters. Depending on what you're typing, it could save you a lot of time.
Aside: What did you do to reach 80 wpm?
Start with (mostly) correct typing habits, was encouraged to start touch-typing while in elementary school, and used a computer often to do things (video games, forums, etc). I didn't have to put much deliberate work into trying to learn how to type faster - it was more a byproduct of being on the computer all the time.
I got to 80wpm in a weekend by switching to dvorak in software but not hardware, forcing myself to touch type correctly.
What was your speed with qwerty? One weekend to learn a new layout sounds insanely fast. How did you train?
I was 60wpm on qwerty; I'd taken a couple of classes several years before, but I hadn't done any practice drills or anything since, just normal typing. I didn't do any specific training; I just typed a lot (it could easily have been nanowrimo or similar, I don't remember), alt-tabbing back and forth with an onscreen layout diagram when I needed to. I agree that it sounds insanely fast, but that's how I remember it going.

While I had heard of AutoHotkey a long time ago, I only just started using it, and it's extremely useful.

One example would be opening Wikipedia with the clipboard content as the search string. It just takes 3 lines to assign that task to 'Windows Key + W'. I can't grasp why they didn't recommend that we students in computer science get proficient with it. It's useful for automating common tasks.
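The three-line script described above might look something like this in AutoHotkey (a sketch, using the exact URL as an assumption; `#w` binds Win+W, `Run` opens the URL in the default browser, and `%Clipboard%` is AutoHotkey's built-in clipboard variable):

```autohotkey
; Win+W: search Wikipedia for the current clipboard contents
#w::
Run, https://en.wikipedia.org/wiki/Special:Search?search=%Clipboard%
return
```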

It's much easier to get results when learning programming by automating tasks with AutoHotkey than by learning it through simple Python programs that serve ... (read more)

It's great that there are already so many useful scripts written by other people that you could use.

I have a constant feeling that I had a great idea or an important thought just now or just a few minutes ago. I know I have recurring thoughts - not of the bad kind, mind you - that I deem quite useful but I am never sure that this feeling of forgetting is with respect to those recurring ideas or something new. Does this kind of thing ring a bell?

I occasionally get the feeling that I had an important thought just now/a few minutes ago. More often than not, I can remember it by thinking about it for a few seconds.
Well, that is the usual remedy: retrace my steps or think about something else for a few seconds. But in the cases I described, I can't for the life of me figure it out again.
I used to get it while drowsy and slipping in and out of sleep. I attributed it to maybe falling into REM for a few seconds and then getting pulled out.

So the latest patches from Microsoft on Tuesday crash my internet browsers. I'm sure something like this happens to someone every time; this is a reminder to make sure you have adequate space for system restore points. I didn't, for some reason.

I've announced a meetup but got the day and year wrong (it should be December 14, 2013). Can someone tell me how to fix it, please? I can't figure it out.

[insert obvious joke about meetup topic]

On the page of your announcement there should be a link "Edit meetup". It will let you edit anything you need.
Thank you. Problem solved.

PSA: Sign up for Medfusion (or your region's equivalent) if your doctor offers it.

Yesterday I asked my doctor's nurse a question electronically. I had a symptom and I was unsure if it required a visit to the practice. The nurse responded the next day saying the symptom was benign and would go away. This saved me a copayment and a trip outside.

Which Medfusion? Google finds several organizations by that name, and all seem like implausible referents to me.

Is there a LW consensus on the merits of Bitcoin? Namely, is it the optimal place to invest money, especially in regards to mining equipment?

I think the value is liable to increase fairly dramatically over time, and that buying/mining Bitcoins will prove incredibly profitable, but I'd like the input of this community before I decide whether or not to put money forth for this venture.


My general impression about mining is that right now it's a horrible idea to get involved in it as the necessary investment/expertise keeps increasing and there seems to be a problem where there's a big pipeline of already-paid-for ASICs which cannot justify their purchase cost but where the least lossy strategy is to run them and recoup as much of the loss as possible (which pushes up the difficulty massively and makes additional capital investments awful ideas). If one wants exposure, buying bitcoins seems like the best approach right now.

Here are some previous discussions of Bitcoin on LW. There doesn't seem to be a clear consensus. Personally, I find this argument to be a compelling reason for optimism. I put a toy amount of money into bitcoin several months ago, and I am quite pleased with that experiment, and I'm considering putting some more money in. Incidentally, does anyone know if there is a good prediction market site for bitcoins? I know of a few, but I've heard bad things about them.
As far as I know, there is not. Betsofbitcoin is completely screwed up, and Predictious is low-volume and has a limited number of contracts.
That's disappointing. If the main problem with Predictious is low-volume, it might be worth using anyway, but the limited contracts really puts a damper on its utility.
If you're interested in simply making some bitcoin, Predictious might be a good idea because the low volume implies mispriced contracts. (Similarly, I think if one carefully studies Betsofbitcoin in detail, it may be possible to make steady profits off it: the rules are so bizarre that there must be inefficiencies.) Another advantage of Predictious is that it's operated by Pixode, which seems to be a reasonably legitimate company (more than one can say for most things in the Bitcoin space).
For bitcoin being an efficient online currency, the transaction fees make it impractical. Ripple provides a much better way of doing micropayments. If one wanted to build a router that provides paid access to anyone who comes along, Ripple is a better technology. The same goes for renting a VPN on demand and similar tasks. It's much easier to imagine that some third-world country shifts from using prepaid mobile cards for distant currency transfers to using Ripple than that they shift to using bitcoins. Ripple allows an entity in the country to play bank and issue currency. That means that a village in Africa where everyone has a smartphone could just decide that the village government issues currency and demands that taxes get paid in that currency. The village can issue enough currency that the whole economy of the village runs in that currency. On the other hand, a village in Africa can't simply switch to bitcoin, because they would have to buy bitcoins expensively and they don't have the money to do so. Ripple also has the advantage of payments clearing much faster than bitcoin payments. Ripple allows you to make payments in dollars or in euros if you want to, without the risk of fluctuating exchange rates that you have with bitcoin. Maybe another project will even improve on Ripple, but I think Ripple is superior to bitcoin for most purposes, so bitcoin won't stand a chance over the long run. Ripple also has the advantage of a business model that can pay for developers, so I would expect it to get more development hours than bitcoin.
I have not seen a significant LW consensus. My view is that it is unreasonable to expect one can time the market effectively, and so one should invest in Bitcoin based on the long term prospects, which are either $0 per coin or hundreds of thousands / a million per coin, in which case the probability of hitting the upper end is the primary factor of interest. Unfortunately, my estimate of the probability that it'll take off is roughly linear in the price, which means I don't consider price shifts very informative, which means it's always a hard decision to be in or out.