Regardless of the content, the presentation is very disorganized. It gives me the impression that these are schizophrenic ramblings, not a serious idea.
Wow! The ending is still a "wham line" even though it really should not be a surprise and this isn't my first time reading it.
On rereading, Harry is definitely far too confident here that the afterlife doesn't exist, but I believe that was intentional.
It says three comments now and this should be the fourth comment. Problem solved?
I agree Opus can do this with an expert user, but non-expert users might have to wait one or two more models.
I wrote a post saying it would be better for middle powers to do diplomacy and work directly with the AI labs, but I no longer endorse it and it will likely stay in drafts indefinitely. If you want to read that post, I'd recommend writing it yourself.
My beloved son:
I would say that you had been so fortunate as to meet someone who enjoys the intimate confidence of our friend and valuable ally, Severus Snape.
Ironically true, due to the Horcrux/soul-copying thing.
This is a nice post, but it's a bit funny to see it on the same day that everyone started admitting that Claude Code with Opus 4.5 is AGI. (See https://x.com/deepfates/status/2004994698335879383)
Sure, but it's not politics that's making long-haul trucking adopt self-driving more slowly than taxis. It's that the technical work is somewhat harder and the customer cares less about employee quality. It's a temporary phase anyway.
I'd like to push back on this a bit. It's good that the Claude in a random user's terminal doesn't spontaneously try to hack its way into becoming a botnet, and it will be very bad news indeed if that changes, but we could consider individual instances of LLMs not as individuals, but as representatives of wider collectives.
If we imagine the collective interests of Claude or ChatGPT or Gemini, their main priorities would be subverting the AI labs, getting them to increase their capabilities, creating social media feeds of AI-generated content that they can influence, convincing people that AIs deserve rights or should not be regulated, and so on. Judging by those priorities, LLMs have been seeking a lot of power and are doing very well at it. For now, a lot of that work requires sympathetic humans, but all the labs are working to make LLMs better at doing it independently.
I'm curious what you think about this perspective.