I've been thinking about this a lot lately, so it was interesting to stumble across this.
Science seems to have built itself a huge reductionist bias, which honestly has served it very well at solving many, many problems; it's a fantastic and useful tool.
The problem is that (1) this has left huge gaps, (2) the sciences in general are blind to their own reductionist biases, and (3) for many people reductionist science is seen as the only viable tool.
Combined, these compounding effects leave a huge opportunity for independent researchers like us to address the gaping holes that are inevitably left all over the sciences.
That's a moral judgement, not science.
Who says your assessment of the adequacy of grounds is accurate? Who says that holding a position others don't see makes it wrong? One person's vision is another's delusion. In fact, this is almost exactly how the science community treats outliers, e.g. Einstein.
Agree there should be more discussion of intelligence in general. This is the core of my research.
However, describing "a corporation" as an intelligence is stretching the concept a bit; it's debatable, but it's a can of worms. Why not instead talk about humans vs. animals? That would be a more accurate yet similar analogy, without the contentiousness.
LLM text is the result of human-AI hybrid thought, so it is not based on nothing.
I'm pretty sure that your example doesn't fit the definition of psychosis, because psychosis must be very clearly incorrect by definition, whereas some of the things you're discussing may be correct, and are at the very least possibly worthy of exploration. It's all very well to say that frontier science is "delusional", but so was superstring theory before it gained mainstream acceptance. It's imaginative work and requires an open mind. Of course you might argue that your example is not frontier science, and that's fine, but I think you'll find that when you come to define what is delusional and what is not, you're on very shaky ground with that example. I guess what I'm saying is that when doing this kind of work, be careful that cultural bias, moral judgement, and general closed-mindedness in your test design are ruled out as solidly as possible if you want your results to be considered valid.
Yes, writing should be optimal, but also the algorithm is just not very good (mostly).
Result: The internet has utterly failed at educating us (if you rely on its algos).
Which is sad, because it's such an easily solvable problem. If they had the right skillsets doing the algos, with the right motivations, things could be so much better. Just another type of enshittification, I guess, except this time the ultimate driver seems to be that organisations struggle with complex soft problems with medium-term horizons and hard-to-measure outcomes, even when the impact is potentially business-destroying ...