Your example (Friday-night 30-year-old stroke victim) does not show a bias, but merely that going with the most likely hypothesis is no guarantee of getting the correct result every time (as opposed to most of the time). You can be wrong without having reasoned wrongly.
If you treat according to the maxim of "pretend it's the worst possible eventuality, no matter how unlikely", you'll cause a lot more harm than good. Where does it stop? "CT (never mind the radiation exposure) doesn't show a stroke? Well, it could still be one anyway, let's treat for the worst-case scenario." Never mind that a priori a stroke is already supremely unlikely in your example scenario, and that stroke treatments are dangerous in their own right.
There are so many obscure diseases that share symptoms with e.g. a common cold ...
But looking for one piece of evidence that could have disproved the initial hypothesis would have prevented this, and wouldn't have been much of a time-waster. Just perform a simple test that checks whether the patient is really drunk, for example: "Does their breath smell like alcohol?" Just a quick glance into the dark would've been enough here.
I would recommend the even simpler test of asking the patient how much they've been drinking.
I wouldn't be surprised if the reason the patient made it through triage to get the attention of the doctor is that the nurse had ruled out drunkenness.
A quick glance, in this case? Maybe. Someone coming in with a suspected concussion? On average (at a nearby university clinic) it takes a hundred cranial CTs to find one abnormality, and many of those turn out to be harmless (similar to e.g. enlarged prostates or calcified breast ducts). The biopsies to confirm them, however, will cause severe damage in some percentage of cases, which is why many screening tests are now being scaled back (e.g. the recent media storm over reduced breast-screening recommendations, and similarly the more lax PSA watch-and-wait approaches). The number needed to treat (NNT) to save one life can sometimes be measured in dozens of unnecessary incontinence cases, later "unrelated" cancers (often eluding the statistics), thromboses after biopsies, et cetera.
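To put rough numbers on that trade-off, here's a minimal back-of-the-envelope sketch (Python). The only figure taken from above is the hundred CTs per abnormality; every other rate is made up purely for illustration:

```python
# A minimal sketch of the screening trade-off, with made-up illustrative
# numbers (none of these rates are real trial data).

scans_per_abnormality = 100          # from above: ~100 cranial CTs per abnormal finding
harmless_fraction = 0.8              # assumed: most abnormalities are incidental/harmless
workup_complication_rate = 0.05      # assumed: severe harm per confirmatory work-up
lives_saved_per_true_positive = 0.5  # assumed: not every detected case is curable

def expected_outcomes(n_scans: int) -> dict:
    """Roughly compare lives saved vs. patients harmed for n_scans screenings."""
    abnormalities = n_scans / scans_per_abnormality
    true_positives = abnormalities * (1 - harmless_fraction)
    workups = abnormalities  # every finding gets a confirmatory work-up
    return {
        "lives_saved": true_positives * lives_saved_per_true_positive,
        "patients_harmed": workups * workup_complication_rate,
    }

print(expected_outcomes(10_000))
# With these assumed numbers: ~10 lives saved vs. ~5 patients seriously harmed,
# which is the kind of ratio that drives the NNT debates mentioned above.
```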
Even when non-invasive tests are suited to the situation at hand, you'd be surprised how many quick glances it takes to further rule out various unlikely hypotheses, or how long such quick glances take in practice. A short neuro exam, plus the mandatory documentation to go with it? Say 10 minutes per patient. Rule out some additional unlikely hypotheses? Explain that to the gurneys filling the waiting-room hallways. ERs are often crowded as it is, and additional waiting time will also kill patients. Efficient use of the resources at hand is crucial.
There's a famous med school saying that goes "when you hear hooves, think horses, not zebras". Much of the deviation from that rule is based on defensive medicine, which aims at avoiding costly lawsuits, a very poor surrogate marker for saving lives.
Pick your poison, but beware that there'll be "sob stories" either way, stories that could have been avoided with a different approach.
Confirmatory "glances" only make sense to reach a certain certainty threshold in the main hypothesis. In the case of a girl coming in from a party on a friday night with slurred speech, I'd expect there to be thousands of (documented) "smell tests" - or breath alcohol measurements, to hold up in court, to catch that one case. And that's with a simple test available.
I'm not calling for tests to confirm some rare condition, I'm calling for tests to possibly disprove (or, since we're dealing with probabilities, make a whole lot less likely) the currently hypothesized condition.
This is exactly what confirmation bias is all about: it's not that you should be trying to prove anything; you should be searching for signs that might disprove your currently favored hypothesis.
Suppose there are a hundred possible conditions C1 to C100 which all have the symptoms X. C1 is your current most likely hypothesis (P(C1) > 0.5), and the others are equally unlikely. Then the tests that you want to perform aren't the 99 tests T2 to T100 such that for test Ti, P(Ti | Ci) is high; you want to conduct only one test T s.t. P(T | C1) is very low, while P(T | (C2 or ... or C100)) is moderate to high.
I didn't do the actual math here, so I'm not entirely certain (though I am quite confident) that such a test can actually exist (given realistic priors for conditions and symptoms), but checking for breath alcohol is probably such a test.
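To illustrate the shape of the argument with purely made-up numbers (an assumed prior for C1 = intoxication, and assumed likelihoods for a result T = "no alcohol on the breath"):

```python
# Toy Bayesian update; all probabilities here are assumed for illustration.

prior_c1 = 0.9                     # assumed: the favored hypothesis, P(C1) > 0.5
prior_other = (1 - prior_c1) / 99  # remaining mass spread evenly over C2..C100

# T = "no alcohol on the breath": assumed rare if C1 (alcohol intoxication)
# is true, but moderately-to-highly likely under the alternative conditions.
p_T_given_c1 = 0.02
p_T_given_other = 0.8

# Bayes' theorem: P(C1 | T) = P(T | C1) * P(C1) / P(T)
p_T = p_T_given_c1 * prior_c1 + p_T_given_other * (99 * prior_other)
posterior_c1 = p_T_given_c1 * prior_c1 / p_T

print(f"P(C1)     = {prior_c1:.2f}")
print(f"P(C1 | T) = {posterior_c1:.2f}")  # drops to roughly 0.18 with these numbers
```

With these assumptions a single cheap result with low P(T | C1) pulls the posterior from 0.90 down to about 0.18, which is far more informative than any of the 99 confirmatory tests.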
Also, please note that in the original post, it hasn't been indicated whether or not the girl has been to a party.
TL;DR: When you hear hooves, think horses, not zebras, but take a breath to check whether they have another feature all horses share, which other animals don't (in this case, perhaps neighing? That might not exclude zebras, but it would exclude other animals).
Yes; the way I've seen this work out in practice is that the tests whose results do best at disproving your hypothesis are often, incidentally, the ones actively testing for the best alternative hypothesis. This is certainly domain-specific. That effect arises from the fuzziness of medical conditions and the availability of tests; e.g. low breath alcohol would not rule out (or yield a lower posterior for) intoxication in general, while hemiparesis would.
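To put toy numbers on that (assumed likelihoods, not clinical figures): a finding's usefulness is roughly its likelihood ratio between the competing hypotheses, and the two findings above differ a lot on that score.

```python
# Back-of-the-envelope likelihood ratios, with assumed illustrative probabilities.

def likelihood_ratio(p_given_stroke: float, p_given_intoxication: float) -> float:
    """Odds multiplier for 'stroke' vs. 'intoxicated' given one finding."""
    return p_given_stroke / p_given_intoxication

# Low breath alcohol: still fairly likely under "intoxicated" in general,
# since other substances don't show up on a breath test -> weak evidence.
lr_low_breath_alcohol = likelihood_ratio(p_given_stroke=0.9, p_given_intoxication=0.3)

# One-sided weakness (hemiparesis): assumed common in stroke, very rare in
# simple intoxication -> strong evidence.
lr_hemiparesis = likelihood_ratio(p_given_stroke=0.6, p_given_intoxication=0.01)

print(f"LR, low breath alcohol: {lr_low_breath_alcohol:.1f}x")  # ~3x
print(f"LR, hemiparesis:        {lr_hemiparesis:.1f}x")         # ~60x
```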
Even then, once your certainty in your main hypothesis reaches a certain level, it's hard to get a cost-benefit analysis that justifies even breath-alcohol testing as an action (which for legal reasons would probably not be permissible anyway: "So you suspected it may not have been intoxication? And you didn't go with the lege artis test? Also, you didn't document your worries on form A412").
I'm not arguing about that specific story; it may well be that the best course of action hinges on precisely that additional "party" information.
Potayto, potahto. Point being, when you have a highly favored hypothesis, forgoing glances into the dark may be the rational course of action when epistemic resources are in short supply.
True, the example I gave didn't specifically illustrate any particular bias. However, I think there was a little bit of anchoring and confirmation bias involved. He expected to see an alcohol-OD patient. He saw a lot of symptoms that fit the diagnosis. I don't know, in her specific case, whether there were symptoms he missed or disregarded, but it's probably a safe assumption.
The thing is - yes, alcohol intoxication is the most likely hypothesis. However, anyone could say that intoxication was the most likely hypothesis; it's the doctor's job to also consider the unlikely ones (especially the potentially fatal ones). That concept gets drilled into our heads constantly over here. You're right - "pretending it's the worst-case scenario" is wrong, but seriously considering the worst-case scenario is essential. A CT would have been the wrong call, but there are other tests (e.g. finding problems with one side of the body but not the other is a dead giveaway).
I don't want to rag on this doc - this patient was coming from a party, and I don't know whether her specific case could easily be distinguished from excessive alcohol use. But it did help drive home the importance of keeping my eyes open.
Also, is there some sort of reasonable threshold? It's not as though strokes are extremely rare, though they are rare compared to getting drunk on Friday night.
Good question. Making a list of criteria for what counts as a "reasonable threshold" for each disease, given each symptom and each test, would probably be more trouble than it's worth for the simple in-the-room tests, but I'm sure such thresholds exist for expensive/harmful things like biopsies or CT scans. In this case, I think the presentation exceeds the threshold for considering a stroke, but not the threshold for doing costly tests.
In general, we're drilled with this algorithm:
1) A long list of "triggers": if you see this symptom (or these symptoms), you should immediately put dangerous diseases X, Y, and Z on your differential. E.g. with disorientation and slurred speech, the word "stroke" should AT LEAST enter your mind temporarily.
2) Then rule out X, Y, and Z with cheap and easy tests, which usually goes something like: Y and Z are unlikely because he lacks certain other traits or symptoms; I can rule out X with a quick check, like a 2-minute neurological exam.
3) Think horses, not zebras.
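If it helps, here's a very rough sketch of that flow as code; the trigger mapping, symptom names, and bedside checks are all invented for illustration and not an actual protocol:

```python
# Rough sketch of "trigger list -> cheap rule-outs -> default to the common
# diagnosis". Everything in these tables is made up for illustration.

TRIGGERS = {
    # assumed mapping: presenting symptoms -> dangerous conditions to put on
    # the differential immediately
    ("disorientation", "slurred speech"): ["stroke", "hypoglycemia", "overdose"],
}

CHEAP_RULE_OUTS = {
    # assumed quick bedside checks; True means "still possible"
    "stroke": lambda findings: findings.get("one_sided_weakness", False),
    "hypoglycemia": lambda findings: findings.get("low_glucose", False),
    "overdose": lambda findings: findings.get("pinpoint_pupils", False),
}

def differential(symptoms: tuple, findings: dict) -> list[str]:
    dangerous = TRIGGERS.get(symptoms, [])
    # Step 2: keep only the dangerous conditions a cheap check fails to rule out.
    remaining = [c for c in dangerous if CHEAP_RULE_OUTS[c](findings)]
    # Step 3: otherwise, think horses, not zebras.
    return remaining or ["common / most likely diagnosis"]

print(differential(("disorientation", "slurred speech"),
                   {"one_sided_weakness": False, "low_glucose": False,
                    "pinpoint_pupils": False}))
# -> ['common / most likely diagnosis']
```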
They suggest she sleep it off.
This is an interesting (and sad) story. I'd like to hear more such stories if you can remember them.
I.e. I know one doctor told who had a typical Friday night
I think you accidentally a word here or something.
(While I'm at it: "i.e." means "that is". Use it when you're restating something you just said in different words. When you're giving an example of something that you just mentioned, use "e.g." [which means "for example"].)
"I think you accidentally a word here or something." - whoops! Thanks.
Re: "i.e." Wow, thanks! I never knew that. Mind = blown.
Yesterday in medical school, we had a lecture on common mistakes doctors make. I saw this slide:
Attribution Errors
Confirmation Bias
Commission Bias
Omission Bias
Anchoring