This has been heavily downvoted. I'm not sure why, so if anyone has feedback about what I said that wasn't correct, or how I said it, that feedback is more than welcome.
These two entities are distinct and must be treated as such. I've started calling the first entity "Static GPT" and the second entity "Dynamic GPT", but I'm open to alternative naming suggestions.
After a bit of fiddling, GPT suggests "GPT Oracle" and "GPT Pandora".
When faced with a complex issue, it's tempting to seek out smaller, related problems that are easier to solve. However, fixating on these smaller problems can cause us to lose sight of the larger issue's root causes. For example, in the context of AI alignment, focusing solely on preventing bad actors from accessing advanced tool AI isn't enough. The larger problem of solving AI alignment must also be addressed to prevent catastrophic consequences, regardless of who controls the AI.
Wouldn't it be challenging to create relevant digital goods if the training set had no references to humans and computers? Also, wouldn't the existence and properties of humans and computers be deducible from other items in the dataset?
Is there some sort of support group for those of us who take seriously the idea that our civilization is at a dead end, but can't do much to help on the frontlines?
Well, obviously, it won't be consolation enough, but knowing I'm not alone in feeling like this does give me some human warmth.
As a bystander who can understand this, and who finds the arguments and conclusions sound, I must say I feel very hopeless and "kinda" scared at this point. I'm living in an environment, if not a world, where even explaining something comparatively simple, like how life extension is a net good, is a struggle. Explaining or discussing this is definitely impossible. I've tried with the cleverer, more transhumanist/rationalist-minded people I know, and it just doesn't click for them; on the contrary, I find people tend to push in the other direction, a... (read more)
This might sound absurd, but I legit think that there's something that most people can do. Being something like radically publicly honest and radically forgiving and radically threat-aware, in your personal life, could contribute to causing society in general to be radically honest and forgiving and threat-aware, which might allow people poised to press the Start button on AGI to back off.
ETA: In general, try to behave in a way such that if everyone behaved that way, the barriers to AGI researchers noticing that they're heading towards ending the wor... (read more)
If it's any consolation, you would not feel more powerful or less scared if you were me.