Thank you for sharing this.
The date of AI Takeover is not the day the AI takes over. The point of no return isn't when we're all dead – it's when the AI has lodged itself into the world firmly enough that humans' faltering attempts to dislodge it would fail.
Isn't that arguably in the past? The economic and political forces driving the race for AI are already sufficient to resist most foreseeable attempts to impede it. AI is already embedded, and desired. AI with agency on top of that process is one more step, making it even more irreversible.
This has been heavily downvoted. I'm not sure why, so if anyone has feedback about what I said that wasn't correct, or how I said it, that feedback is more than welcome.
These two entities are distinct and must be treated as such. I've started calling the first entity "Static GPT" and the second entity "Dynamic GPT", but I'm open to alternative naming suggestions.
After a bit of fiddling, GPT suggests "GPT Oracle" and "GPT Pandora".
It's tempting to seek out smaller, related problems that are easier to solve when faced with a complex issue. However, fixating on these smaller problems can cause us to lose sight of the larger issue's root causes. For example, in the context of AI alignment, focusing solely on preventing bad actors from accessing advanced tool AI isn't enough. The larger problem of solving AI alignment must also be addressed to prevent catastrophic consequences, regardless of who controls the AI.
Wouldn't it be challenging to create relevant digital goods if the training set had no references to humans and computers? Also, wouldn't the existence and properties of humans and computers be deducible from other items in the dataset?
Is there some sort of support group for those of us who take seriously the idea that our civilization is in a dead end, but can't do much to help on the front lines?
Well, obviously, it won't be consolation enough, but knowing I'm not alone in feeling like this does give me some human warmth inside.
As a bystander who can understand this, and who finds the arguments and conclusions sound, I must say I feel very hopeless and "kinda" scared at this point. I'm living in at least an environment, if not a world, where even explaining something comparatively simple, like how life extension is a net good, is a struggle. Explaining or discussing this is definitely impossible. I've tried with the cleverer, more transhumanist/rationalist-minded people I know, and it just doesn't click for them. On the contrary, I find people like to push in the other direction, as if it were a game.
And at the same time, I realize it is unlikely I can contribute anything remotely significant to a solution myself. So I can only spectate. This is literally maddening, especially when almost everyone seems to underreact.