
My understanding is that off-label often means the potential patient falls outside the cohort of patients included in the approved clinical trials. We don’t usually perform clinical trials on children or pregnant women, for instance. Alternatively, strong scientific evidence may show that a drug works on a disease related to the actual target. It may well make sense to use drugs off-label where the clinician can be comfortable that the benefits outweigh the possible harms. In other cases, of course, it would be extremely poor medicine. In any case, having statistically significant, validated evidence that a drug actually does something useful is non-negotiable IMO.

It is true that most pharma companies concentrate on indications that supply returns to offset the cost of development. The FDA does have a mechanism for Orphan Drug approval for rare diseases, under which the registration requirements are significantly lowered. According to this site, 41 orphan drug approvals were made in 2023. Whether this mechanism is good enough to encourage work on rare diseases within the larger pharmaceutical industry is a good question. I wonder how many of these drugs, or their precursors, originated in academic labs and were then spun out to a start-up or sold on?

Two things happen in the pharmaceutical industry today despite the FDA:

  1. Many drug candidates (compounds with IND status sanctioned by the FDA) are pushed into clinical investigation prematurely by venture-capital-funded biotechs, candidates that more established and careful pharma companies would stay away from. These have a high rate of failure in the clinic. This is not fraud, by the way; it is usually a combination of hubris, inexperience, and a response to the necessity of rapid returns.
  2. Marketing wins over clinical efficacy, unless the difference is large. Tagamet was the first drug for stomach ulcers, released in the late ’70s. It was rapidly overtaken by Zantac in the ’80s through aggressive marketing, despite minimal clinical benefit. Today there is a large industry of medical writers, sponsored by the pharmaceutical industry, whose job it is to present and summarise the clinical findings on a new drug in the most favourable way possible without straying into actual falsehood.

The scientists working at the sharp end of drug discovery who fervently believe that what they do benefits mankind (this is, I believe, a gratifyingly large proportion of them) generally respect the job the FDA does, despite the hoops it forces us to jump through. Without the FDA keeping us honest, the medicines market would be swimming with heavily marketed but inadequately tested products of dubious medicinal value. Investors would be less choosy about backing respected, well-thought-out science when placing their money. True innovation would actually be stifled, because true innovation in drug discovery only shows its value once you’ve done the hard (and expensive) yards to prove medical benefit over existing treatments. Honest, well-enforced regulation forces us to do those hard yards and take no short cuts.

In 2023, 55 new drugs were approved by the FDA, hardly a sign that innovation is flagging. Without regulation the figure might be ten times higher, but clinicians would be left swimming in a morass of claims and counter-claims without good guidance (currently, generally provided by the FDA) on which treatments should be applied in which situations.


Poorly regulated health-orientated companies selling products that have little or no value? Seems unlikely… Oh wait, what about Theranos?

A thought-provoking post. Regarding peer-reviewed science, I can offer the perspective that anonymous peer review is quite often not nice at all. Having said that, unless a paper is extremely poor, adversarial reviews are rarely needed. A good, critical, constructive review can point out severe problems without raising the hackles of the author(s) unnecessarily, and is more likely to get them dealt with properly than an overly adversarial review. This works so long as the process is private, the reviewer is truly anonymous, and the reviewer has the power to prevent bad work being published, even if it comes from a respected figure in the field. Of these three criteria it is the last that I’d have most doubts about, even in well-edited journals.

I’m not claiming this view to be particularly well informed, but it seems a reasonable hypothesis that the industrial revolution required the development, dispersal and application of new methods of applied mathematics. For this to happen there needed to be an easy-to-use number system with a zero and a decimal point. Use of calculus would seem to be an almost essential mathematical aid as well. Last but not least, there needed to be a sizeable collaborative, communicative and practically minded scientific community who could discuss, criticise and disseminate applied mathematical ideas and apply them in physical experiments. All three of these were extant in Britain in the late 17th century, the last exemplified by the Royal Society. These, combined with the geologically bestowed gifts of coal and iron ore, set Britain up to be in the best position to initiate the Industrial Revolution.

Now, can a proper historian of science critique this and show how this view is incorrect?

Anecdotal, but in the UK, in 1986, as a just-graduated PhD I bought a 3-bedroom house for less than 4 times my salary. At present a similar house in a similar location costs roughly 10 times a starting PhD salary. House ownership for most young people in the UK is becoming a distant and ever-delayed dream.

“Design is much more powerful than evolution since individually useless parts can be developed to create a much more effective whole. Evolution can't flip the retina or reroute the recurrent laryngeal nerve even though those would be easy changes a human engineer could make.”

But directed evolution of a polymeric macromolecule (e.g. repurposing an existing enzyme to process a new substrate) is, practically speaking, so much easier than designing and making a bespoke macromolecule to do the same job. Synthesis and testing of many evolutionary candidates is quick and easy, so many design/make/test cycles can be run quickly. This is what is happening at the forefront of the artificial enzyme field.
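The design/make/test cycle above can be sketched as a simple greedy mutate-and-select loop. This is only an illustrative toy: the sequence, the mutation scheme, and especially the `fitness` function are hypothetical stand-ins (in reality "test" is a wet-lab activity assay, not a string comparison).

```python
import random

TARGET = "MKTAYIAKQR"  # hypothetical optimum for the new substrate
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def fitness(seq):
    # Toy stand-in for an experimental activity assay:
    # fraction of positions matching the hypothetical optimum.
    return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

def mutate(seq, n_mut=1):
    # "Make": introduce random point mutations.
    seq = list(seq)
    for pos in random.sample(range(len(seq)), n_mut):
        seq[pos] = random.choice(AMINO_ACIDS)
    return "".join(seq)

def directed_evolution(start, rounds=50, library_size=20):
    best, best_fit = start, fitness(start)
    for _ in range(rounds):                                   # design/make/test cycles
        library = [mutate(best) for _ in range(library_size)] # "make" a variant library
        for variant in library:                               # "test" each variant
            f = fitness(variant)
            if f > best_fit:                                  # "design": keep the winner
                best, best_fit = variant, f
    return best, best_fit

random.seed(0)
evolved, score = directed_evolution("A" * len(TARGET))
print(evolved, score)
```

Because each round only keeps variants that score at least as well as the current best, activity never decreases across cycles, which is why running many cheap cycles beats attempting a single bespoke design.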

So my personal viewpoint (and I could be proved wrong) is that Bing hasn’t the capability to suffer in any meaningful way, but is capable (though not necessarily sentiently capable) of manipulating us into thinking it is suffering. 

Whilst it may be that Bing cannot suffer in the human sense, it doesn’t seem obvious to me that more advanced AIs, still no more than neural nets, cannot suffer in a way analogous to humans. No matter what the physiological cause of human suffering, it surely has to translate into a pattern of nerve impulses around an architecture of neurons that has most likely been purposed to give rise to the unpleasant sensation of suffering. That architecture of neurons presumably arose for good evolutionary reasons. The point is that there is no reason an analogous architecture could not be created within an AI, which could then cause suffering similar to human suffering when presented with an appropriate stimulus. The open question is whether such an architecture could arise incidentally, or whether it has to be hardwired in by design. We don’t know enough to answer that, but my money is on the latter.
