I have an idea that might be good enough for my first top-level post once it's developed a bit further, but first I'd like to ask whether anyone knows of previous posts discussing something similar. So I'll post a rough outline here as a request for comments.
It's about a potential source of severe and hard-to-detect biases on all sorts of topics where the following conditions apply:
It's a matter of practical interest to most people, so nearly everyone holds strong opinions, and it's basically impossible to avoid forming one yourself.
The available hard scientific evidence doesn't say much about the subject, so one must instead make do with sparse, incomplete, disorganized, and non-obvious pieces of rational evidence. This of course means that even small and subtle biases can wreak havoc.
Factual and normative issues are heavily entangled in this topic. By this I mean that people care deeply about the normative issues involved, and view the related factual issues through the heavily biasing lens of whether they lead to consequentialist arguments for or against their favored normative beliefs. (Of c
It seems to me a common bias, and one worth exploring.
Have you thought about giving a tip of the hat to the opposite effect? Some people view the past as a golden age where things were pure and good. It makes for a similar, though not exactly mirror-image, source of bias. I think the belief that things are generally progressing for the better is a little more common than the belief that the world is generally going to hell in a handbasket, but not that much more common.
Actually, now you've nudged my mind in the right direction! Let's consider an example even more remote in time, and even more outlandish by modern standards than slavery or absolute monarchy: medieval trials by ordeal.
The modern consensus belief is that this was just awful superstition in action, and our modern courts of law are obviously a vast improvement. That's certainly what I had thought until I read a recent paper titled "Ordeals" by one Peter T. Leeson, who argues that these ordeals were in fact a highly accurate way of separating the guilty from the innocent, given the prevailing beliefs and customs of the time. I highly recommend reading the paper, or at least the introduction, as an entertaining de-biasing experience. [Update: there is also an informal exposition of the idea by the author, for those who are interested but don't feel like going through the math of the original paper.]
I can't say with absolute confidence if Leeson's arguments are correct or not, but they sound highly plausible to me, and certainly can't be dismissed outright. However, if he is correct, then two interesting propositions are within the realm of the poss...
I was planning to introduce the topic through a parable of a fictional world carefully crafted not to be directly analogous to any real-world hot-button issues. The parable would be about a hypothetical world where the following facts hold:
A particular fruit X, growing abundantly in the wild, is nutritious, but causes chronic poisoning in the long run, with all sorts of bad health consequences. This effect is, however, difficult to disentangle statistically (much like smoking).
Eating X has traditionally been subject to a severe Old Testament-style religious prohibition with unknown historical origins (the official reason of course was that God had personally decreed it). Impoverished folks who nevertheless picked and ate X out of hunger were often given draconian punishments.
At the same time, there has been a traditional belief that if you eat X, you'll incur not just sin, but eventually also get sick. Now, note that the latter part happens to be true, though given the evidence available at the time, a skeptic couldn't tell if it's true or just a superstition that came as a side-effect of the religious taboo. You'd see that poor folks who eat it do get sick more often, but
Do you have a citation for that?
As far as I understand it, when giving antibiotics to a specific patient, doctors often follow your advice: they give antibiotics in overwhelming force to eradicate the bacteria completely. For example, they'll often give several different antibiotics so that bacteria which develop resistance to one are killed off by the others before they can spread. Side effects and cost limit how many antibiotics you give to one patient, but in principle people aren't deliberately scrimping on antibiotics in an individual context.
The "give as few antibiotics as possible" rule mostly applies to giving them to as few patients as possible. If there's a patient who seems likely to get better on their own without drugs, then giving the patient antibiotics just gives the bacteria a chance to become resistant to antibiotics, and then you start getting a bunch of patients infected with multiple-drug-resistant bacteria.
The idea of eradicating entire species of bacteria is mostly a pipe dream. Unlike strains of virus that have been successfully eradicated, like smallpox, most pathogenic bacteria have huge bio-reservoirs in water or air or soil or animals or on the skin of healthy humans. So the best we can hope to do is eradicate them in individual patients.
I'm doing an MSc in Computer Forensics and have stumbled into doing a large project using Bayesian reasoning to guess what a chunk of data is (machine code, ASCII text, C code, HTML, etc.). This has caused me to think again about what problems you encounter when trying to actually apply Bayesian reasoning to large problems.
I'll probably cover this in my write-up; are people interested in it? The math won't be anything special, but a concrete problem might illustrate the difficulties better than abstract reasoning.
It could also serve as a precursor to some vaguely AI-ish topics I am interested in. More insect and simple-creature stuff than full human level, though.
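For anyone curious what the core of such a classifier might look like, here is a minimal naive-Bayes sketch over raw byte frequencies. This is purely illustrative; the labels and toy training data are made up for the example, not taken from the actual project:

```python
import math
from collections import Counter

def train(samples):
    """samples: dict mapping label -> list of byte strings.
    Returns, per label, a log-probability for each byte value,
    using add-one (Laplace) smoothing so unseen bytes don't zero out."""
    models = {}
    for label, blobs in samples.items():
        counts = Counter()
        for blob in blobs:
            counts.update(blob)  # iterating bytes yields ints 0-255
        total = sum(counts.values()) + 256  # +256 for smoothing
        models[label] = {b: math.log((counts.get(b, 0) + 1) / total)
                         for b in range(256)}
    return models

def classify(models, data):
    """Pick the label maximizing the summed per-byte log-likelihood
    (a uniform prior over labels is assumed)."""
    return max(models, key=lambda label:
               sum(models[label][b] for b in data))

# toy training corpora: English-like text vs. uniform binary
training = {
    "ascii_text": [b"the quick brown fox jumps over the lazy dog " * 4],
    "binary":     [bytes(range(256)) * 4],
}
models = train(training)
print(classify(models, b"hello world, plain english text"))  # -> ascii_text
```

A real detector would use byte n-grams rather than single bytes, and priors estimated from actual disk images, but the shape of the Bayesian update is the same.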
Any given goal that I have tends to require an enormous amount of "administrative support" in the form of homeostasis, chores, transportation, and relationship maintenance. I estimate that the ratio may be as high as 7:1 in favor of what my conscious mind experiences as administrative bullshit, even for relatively simple tasks.
For example, suppose I want to go kayaking with friends. My desire to go kayaking is not strong enough to override my desire for food, water, or comfortable clothing, so I will usually make sure to acquire and pack enough of these things to keep me in good supply while I'm out and about. I might be out of snack bars, so I bike to the store to get more. Some of the clothing I want is probably dirty, so I have to clean it. I have to drive to the nearest river; this means I have to book a Zipcar and walk to the Zipcar first. If I didn't rent, I'd have to spend some time on car maintenance. When I get to the river, I have to rent a kayak; again, if I didn't rent, I'd have to spend some time loading and unloading and cleaning the kayak. After I wait in line and rent the kayak, I have to ride upstream in a bus to get to the drop-off point.
Of cours...
General question on UDT/TDT, now that they've come up again: I know Eliezer said that UDT fixes some of the problems with TDT; I know he's also said that TDT also handles logical uncertainty whereas UDT doesn't. I'm aware Eliezer has not published the details of TDT, but did he and Wei Dai ever synthesize these into something that extends both of them? Or try to, and fail? Or what?
Since I'm going to be a dad soon, I started a blog on parenting from a rationalist perspective, where I jot down notes on interesting info when I find it.
I'd like to focus on "practical advice backed by deep theories". I'm open to suggestions on resources, recommended articles, etc. Some of the topics could probably make good discussions on LessWrong!
ETA: This scheme is done. All three donations have been made and matched by me.
I want to give $180 to the Singularity Institute, but I'm looking for three people to match my donation by giving at least $60 each. If this scheme works, the Singularity Institute will get $360.
If you want to become one of the three matchers, I would be very grateful, and here's how I think we should do it:
1. Donate using this link. Reply to this thread saying how much you are donating. Feel free to give more than $60 if you can spare it, but that won't affect how much I give.
2. In your donation's "Public Comment" field, include both a link to your reply to this thread and a note asking for a Singularity Institute employee to kindly follow that link and post a response saying that you donated. ETA: Step 2 didn't work for me, so I don't expect it to work for you. For now, I'll just believe you if you say you've donated. If you would be convinced to donate by seeing evidence that I'm not lying, let me know and I'll get you some.
3. I will do the same. (Or if you're the first matching donor, then I already have -- see directly below.)
To show that I'm serious, I'm donating my first $60...
So I'm trying to find myself some cryo insurance. I went to a State Farm guy today and he mentioned that they'd want a saliva sample. That's fine; I asked for a list of all the things they'll do with it. He didn't have one on hand and sent me home promising to e-mail me the list.
Apparently the underwriting company will not provide this information except for the explicitly incomplete list I got from the insurance guy in the first place (HIV, liver and kidney function, drugs, alcohol, tobacco, and "no genetic or DNA testing").
Is it just me or is it outrageous that I can't get this information? Can anyone tell me an agency that will give me this kind of thing when I ask?
If they were explicit about exactly what tests they planned to do they would open themselves up to gaming. Better to be non-specific and reserve the freedom to adapt. For similar reasons bodies trying to prevent and detect doping in sports will generally not want to publicize exactly what tests they perform.
Has LessWrong been undergoing a surge in popularity over the last two months? What does everyone make of this:
http://siteanalytics.compete.com/overcomingbias.com+lesswrong.com/
Possibly a variation on the attribution bias: Wildly underestimating how hard it is for other people to change.
While I believe that both attribution bias and my unnamed bias are extremely common, they contradict each other.
Attribution bias includes believing that people have stable character traits as shown by their actions. This "people should be what I want-- immediately!" bias assumes that those character traits will go away, leading to improved behavior, after a single rebuke or possibly as the result of inspiration.
The combination of attribu...
Gawande on checklists and medicine
Checklists are literally life-savers in ICUs -- there are just too many crucial things that need to be done, and too many interruptions, to avoid serious mistakes without offloading some of the work of memory onto an external system.
However, checklists are low status.
...Something like this is going on in medicine. We have the means to make some of the most complex and dangerous work we do—in surgery, emergency care, and I.C.U. medicine—more effective than we ever thought possible. But the prospect pushes against the traditional culture of m
Morendil:
That analysis would be inconsistent with my understanding of how checklists have been adopted in, say, civilian aviation: extensive analysis of the rare disaster leading to the creation of new procedures.
One relevant difference is that the medical profession is at liberty to self-regulate more than probably any other, which is itself an artifact of their status. Observe how e.g. truckers are rigorously regulated because it's perceived as dangerous if they drive tired and sleep-deprived, but patients are routinely treated by medical residents working under the regime of 100+ hour weeks and 36-hour shifts.
Even the recent initiatives for regulatory limits on the residents' work hours are presented as a measure that the medical profession has gracefully decided to undertake in its wisdom and benevolence -- not by any means as an external government imposition to eradicate harmful misbehavior, which is the way politicians normally talk about regulation. (Just remember how they speak when regulation of e.g. oil or finance industries is in order.)
...Why (other than the OB-inherited obsession of the LW readership with "status") does this hypothesis seem favored at t
From an article about the athletes' brains:
Unsurprisingly, most of the article is about elite athletes' brains being more efficient in using their skills and better at making predictions about play, but then....
...n February 2009 Krakauer and Pablo Celnik of Johns Hopkins offered a glimpse of what those interventions might look like. The scientists had volunteers move a cursor horizontally across a screen by pinching a device called a force transducer between thumb and index finger. The harder each subject squeezed, the faster the cursor moved. Each play
Craig Venter et al. have succeeded in creating the first functional synthetic bacterial genome.
http://www.sciencemag.org/cgi/content/full/328/5981/958 http://www.sciencemag.org/cgi/content/abstract/science.1190719 http://arstechnica.com/science/news/2010/05/first-functional-synthetic-bacterial-genome-announced.ars http://www.jcvi.org/cms/research/projects/first-self-replicating-synthetic-bacterial-cell/overview/
I wrote up a post yesterday, but I found I was unable to post it, except as a draft, since I lack the necessary karma. I thought it might be an interesting thing to discuss, however, since lots of folks here have deeper knowledge than I do about markets and game theory.
I've been working recently for an auction house that deals in things like fine art, etc. I've noticed, by observing many auctions, that certain behaviors are pretty reliable, and I wonder if the system isn't "game-able" to produce more desirable outcomes for the different parties ...
The Open Thread from the beginning of the month has more than 500 comments – new Open Thread comments may be made here.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.