Notice that you can't create your feared scenario without "it" and "itself". That is, the AI must not simply be accomplishing a process, but must also have a sense of self - that this process, run this way, is "me", and "I" am accomplishing "my" goals, and so "I" can copy "myself" for safety. No matter how many tasks can be done to a super-human level when the model is executed, the "unboxing" we're all afraid of relies totally on "myself" arising ex nihilo, right? Has that actually changed in any appreciable way with this news? If so, how?
I think people in these sorts of spheres have an anti-anthropocentrism argument that goes...