One major category of concern is institutional erosion. Here are a few salient examples:
This is a small sampling; a comprehensive account would be far longer.
For example, what kind of coalition would actually be able to update toward Trump being less bad than it thinks if your current fears don't come to pass?
The things that have already come to pass are deeply damaging to our institutions. We are not in 2016, when we could only extrapolate how he would behave as president; we have a decade of evidence since then. He may not follow through on every single thing people fear, but if he doesn't end up trying to acquire Greenland by force, that's not much of an update. Not everything worth worrying about comes to pass. It's absurd that such a thing is plausible for him at all, even if it's not probable.
My first thought was Amazon leveraging this for drone delivery.
It's more of a continuum than that. For example, some people argue that Claude Code writing most of the code that goes into improving Claude Code is an early form of partial RSI (recursive self-improvement). Clearly, the loop is incomplete, but Claude is accelerating his own advancement in a real way. How far out of the loop do humans need to be for it to fully count?
How about these?
Seems plausible, but I would put this under the category of not having fully solved continual learning yet. A sufficiently capable continual learning agent should be able to do its own maintenance, short of a hardware failure.
Depending on how it's achieved, it might not be a matter of maintenance or hardware failure so much as compute capacity. Imagine if continual learning required resources similar to standard pretraining of a large model. Then whoever has that kind of compute could continually train their own set of models, but it wouldn't be feasible for everyone to get their own version that continually learns what they want it to.
I consider Claude's "taste" to be pretty good, usually, but not P90 of humans with domain experience. I'd characterize his deficiencies more as a lack of ability to do long-term "steering" at a human level. This is likely related to a lack of long-term memory and, hence, of the ability to do continual learning.
Is this sufficient? I don't really know the best place to put a disclosure.
https://en.wikipedia.org/wiki/User_talk:Alexis0Olson/Multilayer_perceptron#LLM_Disclosure
Claude Code is excellent these days and meets my bar for "AGI". It's capable of doing serious amounts of cognitive labor and doesn't get tired (though I did repeatedly hit my limits on the $20 plan and had to wait through the 5-hour cooldown).
I spent a good chunk of this weekend seeing whether Claude could write a good Wikipedia article if I pointed it at the site's rules and guidelines and then let it iteratively critique and revise its draft against them until the article fully met the standards (roughly the loop sketched below). I wrote zero of the text myself, though I did paste some Q&A back and forth to NotebookLM to help with citations and had ChatGPT generate an additional flowchart visual to include.
After getting some second opinions from Gemini and ChatGPT, I will have Claude do a final round of revisions and then actually try to get it on Wikipedia. I will share the link here if it gets accepted--I don't really know how that works, but I bet Claude can help me figure it out.
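For anyone who wants to reproduce that critique-and-revise loop programmatically rather than inside Claude Code, here is a minimal sketch using the Anthropic Python SDK. The model name, prompts, guideline file, topic, and stopping rule are all illustrative placeholders, not what I actually ran:

```python
# Minimal sketch of a critique-and-revise loop for drafting an article
# against a set of written guidelines. Assumes the Anthropic Python SDK
# is installed and ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder model name

# Hypothetical local copy of the relevant Wikipedia policies/guidelines.
GUIDELINES = open("wikipedia_guidelines.md").read()

def ask(prompt: str) -> str:
    """Send a single prompt and return the text of the reply."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# 1. Draft the article with the guidelines in context.
draft = ask(
    f"Using these Wikipedia policies:\n{GUIDELINES}\n\n"
    "Write an encyclopedia article on <topic>."
)

# 2. Critique and revise until the critic finds nothing left to fix.
for _ in range(10):  # arbitrary cap on revision rounds
    critique = ask(
        "Critique this draft strictly against the policies below. "
        "Reply with only 'PASS' if it fully meets them.\n\n"
        f"Policies:\n{GUIDELINES}\n\nDraft:\n{draft}"
    )
    if critique.strip().startswith("PASS"):
        break
    draft = ask(
        "Revise the draft to address this critique.\n\n"
        f"Critique:\n{critique}\n\nDraft:\n{draft}"
    )

print(draft)
```

The "reply PASS" convention is just one simple way to let the critic decide when the loop is done; any explicit stopping signal would work.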
Thanks for the perspective. I would like to note that pointing out earlier instances of bad behavior provides important historical context, but it doesn't make more recent bad behavior any less bad. The USA has some really dark stuff in its past. We should remember and do better.