Comments

You switch positions throughout the essay, sometimes in the same sentence!

"Completely remove efficacy testing requirements" (Motte) "... making the FDA a non-binding consumer protection and labeling agency" (Bailey)

"Restrict the FDA's mandatory authority to labeling" logically implies they can't regulate drug safety, and can't order recalls of dangerous products. Bailey! "... and make their efficacy testing completely non-binding" back to Motte again.

"Pharmaceutical manufacturers can go through the FDA testing process and get the official “approved” label if insurers, doctors, or patients demand it, but it's not necessary to sell their treatment." Again, this implies the FDA has no safety regulatory powers.

"Scott’s proposal is reasonable and would be an improvement over the status quo, but it’s not better than the more hardline proposal to strip the FDA of its regulatory powers." Bailey again!

This is a Motte and Bailey argument.

The Motte is 'remove the FDA's ability to regulate drugs for efficacy.'

The Bailey is 'remove the FDA's ability to regulate drugs at all.'

The FDA doesn't just regulate drugs for efficacy; it regulates them for safety too. This undercuts your arguments about off-label prescriptions: those drugs were still approved by the FDA as safe.

Relatedly, I'll note you did not address Scott's point on factory safety.

If you actually want to make the hardline position convincing, you need to clearly state and defend that the FDA should not regulate drugs for safety.

The differentiation between CDT as a decision theory and FDT as a policy theory is very helpful in dispelling confusion. Well done.

However, why do you consider EDT a policy theory? It's just picking actions with the highest conditional utility. It does not model a 'policy' in the optimization equation.
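For concreteness, EDT's criterion is usually written as something like the following (my notation, not necessarily the post's):

```latex
a^* = \arg\max_{a \in A} \; \mathbb{E}[U \mid a]
    = \arg\max_{a \in A} \sum_{o} P(o \mid a)\, U(o)
```

The variable being optimized is a single action, conditioned on as evidence; no policy appears anywhere in the equation.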

Also, the ladder analogy here is unintuitive.

This doesn't make sense to me. Why am I not allowed to update on still being in the game?

I noticed that in your problem setup you deliberately excluded n=6 from the prior distribution. That feels like cheating to me; n=6 seems like a perfectly valid hypothesis.

After seeing the first chamber come up empty, that should definitively update me away from n=6. Why can't I update away from n=5?
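To make the update concrete, here is a minimal Bayesian sketch in Python. The 6-chamber revolver, the hypothesis space n = 0..6 (number of loaded chambers), and the uniform prior are my illustrative assumptions, not necessarily the post's setup:

```python
# Hedged sketch: hypotheses n = number of loaded chambers in a 6-chamber
# revolver. Unlike the post's setup, n=6 is included in the prior here,
# to show how the observation handles it.
from fractions import Fraction

hypotheses = range(7)  # n = 0..6 bullets
prior = {n: Fraction(1, 7) for n in hypotheses}  # uniform prior, illustrative only

def p_empty(n):
    """Likelihood of surviving (drawing an empty chamber) given n bullets."""
    return Fraction(6 - n, 6)

# Bayes update after observing one empty chamber
unnorm = {n: prior[n] * p_empty(n) for n in hypotheses}
total = sum(unnorm.values())
posterior = {n: unnorm[n] / total for n in hypotheses}

print(posterior[6])             # 0: surviving the first pull rules out n=6 entirely
print(posterior[5] < prior[5])  # True: n=5 is down-weighted, but not eliminated
```

Surviving the first pull has likelihood zero under n=6, so that hypothesis is eliminated outright, while n=5 merely loses probability mass; that asymmetry is why the two updates behave differently.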

Counterpoint: robotaxis already exist: https://www.nytimes.com/2023/08/10/technology/driverless-cars-san-francisco.html

You should probably update your priors.

Nope.

According to the CDC pulse survey you linked (https://www.cdc.gov/nchs/covid19/pulse/long-covid.htm), the long covid metrics are trending down across the "currently experiencing," "any limitations," and "significant limitations" categories.

How is this in the wrong place?

Nice. This also matches my earlier observation that the epistemic failure is one of not anticipating one's change in values. If you do anticipate it, you won't agree to this money pump.

I agree that the type of rationalization you've described is often practically rational, and at most a minor crime against epistemic rationality. If anything, the epistemic crime here is not anticipating that your preferences will change after you've made a choice.

However, I don't think this case is what people have in mind when they critique rationalization.

The more central case is when we rationalize decisions that affect other people; for example, Alice might make a decision that maximizes her preferences and disregards Bob's, but after the fact she'll invent reasons that make her decision appear less callous: "I thought Bob would want me to do it!"

While this behavior might be practically rational from Alice's selfish perspective, she's being epistemically unvirtuous by lying to Bob, degrading his ability to predict her future behavior.

Perhaps you could use specific terminology, such as "preference rationalization," to differentiate your case from the more central one?

I can use a laptop to hammer in a nail, but it's probably not the fastest or most reliable way to do so.
