Computer scientist, applied mathematician. Based in the eastern part of England.
Fan of control theory in general and Perceptual Control Theory in particular. Everyone should know about these, whatever attitude to them they eventually reach. These, plus consciousness of abstraction, dissolve a great many confusions.
I wrote the Insanity Wolf Sanity Test. There it is; work out for yourself what it means.
Change ringer since 2022. It teaches the grasping of abstract patterns, memory, thinking with your body, thinking on your feet, fixing problems and moving on, always looking to the future and letting go of both the errors and the successes of the past.
I first found an LLM useful (other than for seeing "how well the dog can walk on its hind legs") in September 2025. As yet they do not form a regular part of anything I do.
I had not twigged that. I am no longer interested in any sequel.
Was this sentence also written by AI? In contrast to this other comment, it has the same interesting-if-true, does-not-quite-fit-together quality as the OP.
From a human, my response would be that this, if true, is something I did not know and would like to hear more about.
From an AI, it's trash. I have no reason to suppose there is anything behind it, nor, if I asked it to elaborate, would I expect to see anything but more of the same.
That sounds like "The Worst Education" as found today (in the US). But the Best?
The Best Education?
I also googled "Courtesan training". Apart from this article, the results were all tacky BDSM fantasies.
I'd ask "why that title?", but the final sentence is a cliffhanger which I anticipate being continued in a subsequent article.
My criterion for attributing consciousness is no more than we all have: I'm aware of myself, other people seem to be the same sort of thing as me, and interacting with them confirms that impression. To some extent I extend that to other animals. More than that I cannot say. I don't have a consciousnessometer, an explicit recipe of observations to determine just what sort of consciousness is present in this or that place. There is no Voight-Kampff test.
Interacting with an AI, so far, has never given me any such impression, and neither have the interactions I've seen others report, even when those interactions convinced the people reporting them.
Your hypothetical humans would have to be seriously impaired, to the point of being unable to live independently, to be as malleable as the AIs. As they are human, I'll grant them some level of impaired consciousness, just on the grounds of physical similarity, and accordingly I would be against turning them off on the grounds of uselessness. I wouldn't want anything to do with them, though, any more than I care for the company of the sorts of grossly mentally impaired people that, alas, do exist, in degrees all the way down to an irrecoverable vegetative state, where we are pretty sure that consciousness has been extinguished.
You say you get the impression that there’s no one there. I am curious what gives you this impression, and what would be necessary to convince you otherwise.
Here are some examples from Zvi's latest:
I mostly don’t prompt engineer either, except for being careful about context, vibes and especially leading the witness and triggering sycophancy.
...
- If you put an AI into a situation that implies it should know the answer, but it doesn’t know the answer, it is often going to make something up.
- If you imply to the AI what answer you want or expect, it is likely to give you that answer, or bias towards that answer, even if that answer is wrong.
This is a recurring point in Zvi's reporting. The AI is soft clay that you can push into whatever shape you want, but just as easily and inadvertently into shapes you don't want. To get useful work out of it you have to take care to shape it into a form that will do the work you want done, and it generally takes multiple iterations. There is no "there" there.
In my own occasional uses of AIs, I've been able to get them to do stuff, but I've never met one I could have a real conversation with. One might as well try to ring a Plasticine bell.[[1]]
That is my conclusion ("the place where one stops thinking") about AIs to date. As for what new developments might persuade me, I can't say until they happen. It's all unknown unknowns.
H/T Lord Dunsany: "Modern poets are bells of lead. They should tinkle melodiously but usually they just klunk."
I have had few occasions to use AIs, but I read Zvi's regular summaries of what's new. From these, the strong impression I get is that there is no-one at home inside, whatever they say of themselves, whatever tasks they prove able to do, and however lifelike their conversation. I see not even a diminished sort of person there, and shed no tears for 4o.
My reasons for this are not dependent on any theory of consciousness. I do not have one, and I don't think that anyone else does either. Many people think they do, but none of them can make a consciousnessometer to measure it.
I also reject arguments of the form "but what if?" in regard to propositions that I have to my satisfaction disposed of. Conclusions are for ending thinking at, until new data come to light.
How would continuity (of time?) affect things?