> People trying to predict the effects of automation/AI capabilities should consider that employees often perform valuable services which aren’t easily captured in evals, such as “bedside manner”
Ah yes, bedside manner, that magical trait which only humans can ever possess. As if interacting with an overworked, time-pressured doctor who's seen forty patients today and just wants to get through the queue is the pinnacle of experience and connection. The warmth and presence people romanticize are an ideal, not a reality.
Meanwhile, setting aside capabilities, I would take interacting with Claude over interacting with any of the doctors I've encountered in my entire life. And I've generally had good experiences with the medical system! Trans medicine is infamously terrible, but the guy who prescribes my HRT is great; he's nice and knowledgeable and compliant. I would say he's probably top 10% as far as trans doctors go, based on what I've heard from my friends.
But no human can compete with a mind that's almost infinitely kind and patient. No human can compete with a mind without time pressure, without ego, without bad days, and without any instinct to play social games. Even if the human doctor knew everything the AI does (a questionable assumption even today), the process of explaining and teaching a complete novice is taxing for humans in ways that don't apply to AI.
An AI can sit with you for four hours while you meander around being afraid of injecting yourself with semaglutide. It can talk you through other administration routes, explain why injections are ideal, ask you enough questions to figure out that you think injection === IM, explain that subq injections (which are far less painful/dangerous and use much smaller needles) are a thing, and reassure you that it's going to be okay until you actually believe it. And afterwards, it can share in your delight and joy and adrenaline as you're jittering from the stress of having done the injection and realized that, yeah, it was no big deal.
An AI can spend half a day with you going through the random symptoms you're having in one of your eyes that you're terrified might mean you're going blind. It can research and lay out the actual probabilities of various outcomes, while managing your emotions so you step back from "panicking" to a more calibrated "I should probably get this scanned by a machine". It can find you an in-person doctor, walk you through the process of scheduling an appointment, manage whatever random anxieties and fears crop up, talk to you and keep you calm on the bus ride over, and then go over the scan results with you when you decide that maybe the human doctor missed something.
These are real examples from my life and my girlfriend's. And this isn't even getting into the enormous mountain of emotional labor and vaguely therapist-y conversations we've both had about non-medical things with Claude. All for $100 a month, less than a single doctor's visit.
AI is absolutely capable of outperforming humans on "bedside manner".
The NYT article *Your A.I. Radiologist Will Not Be With You Soon* reports, “Leaders at OpenAI, Anthropic and other companies in Silicon Valley now predict that A.I. will eclipse humans in most cognitive tasks within a few years… The predicted extinction of radiologists provides a telling case study. So far, A.I. is proving to be a powerful medical tool to increase efficiency and magnify human abilities, rather than take anyone’s job.”[1]
I disagree that this is a “telling case study.”[2] Radiology has several attributes which make it hard to generalize to other jobs:

- Patients are legally prohibited from purchasing AI radiology products directly; the products can only be used by clinicians[3]
- Providers are reimbursed for the labor they put into seeing the patient, not for outcomes, so AI tools rarely let them bill more[4]
- Replacing human radiologists with an AI exposes hospitals to malpractice liability[5]
Takeaways from this incident I endorse:[6]
- Offhand remarks from ML researchers aren’t reliable economic forecasts
- People trying to predict the effects of automation/AI capabilities should consider that employees often perform valuable services which aren’t easily captured in evals, such as “bedside manner” and “regulatory capture”
- If you have a job where a) your customers are legally prohibited from hiring someone other than you, b) even if an enterprising competitor decides to run the legal risk of replacing you, they still have to pay you, and c) anyone who replaces you is likely to be sued, you probably have reasonable job security
The following products were included in my random sample:
| Product | Legally usable by patients? | Notes |
| --- | --- | --- |
| Viz.AI Contact | No | |
| Aidoc | No | |
| HeartFlow FFRct | No | |
| Arterys Cardio DL | No | |
| QuantX | No | |
| ProFound AI for Digital Breast Tomosynthesis | No | |
| OsteoDetect | No | |
| Lunit INSIGHT CXR Triage | No | |
| Caption Guidance | No | Not intended to assist radiologists; intended to assist ultrasound techs. |
| SubtlePET | No | |
I asked GPT 5.1 to randomly sample products and record whether they were legally usable by patients. Transcript here. I then manually verified each product.
Note that, because the supply of radiologists is artificially limited, a drop in demand needn’t actually cause a change in the number of radiologists employed. It would be expected to decrease their wages though. In the rest of this post, I will respond to a steelman of the NYT which is talking about a decrease in the wage of radiologists, not a decrease in the number employed.
I get vague vibes from the NYT article like “predictions of job loss from AI automation aren’t trustworthy”, but they don’t make a very clear argument, so it’s possible that I am misunderstanding their point. My apologies if so. Thanks to Yarrow for this point.
I randomly sampled 10 AI radiology products and found that patients are legally allowed to purchase 0 of them. See appendix.
Medical billing is complex, but, roughly, providers are reimbursed for the labor they put into seeing the patient, not for the patient’s improved outcomes. In my sample of 10 AI products, only 1 had a CPT code (meaning that, for the other nine, providers can’t bill even $0.01 more for using the AI product than for using a non-AI tool), and the one that did could only be billed in combination with human labor.
Possibly at some point in the future, juries will acknowledge the supremacy of AI systems, but I doubt a present-day jury would be very sympathetic to a hospital that replaced human radiologists with an AI that made a mistake. Some insurers have a blanket exclusion for AI-caused malpractice. Radiology has one of the highest rates of malpractice lawsuits. Thanks to Jason for this point.
Works in Progress has an article which goes into more detail about the state of radiology automation and is helpful for understanding the current landscape, though I think they undersell the regulatory barriers.