I realised that the level of suffering and the fidelity of the simulation don't need to be correlated, but I didn't make that distinction explicit.
Most think that you need dedicated cognitive structures to generate a subjective I; if that's so, then there's no room for conscious simulacra that feel things the simulator doesn't.
The point you brought up seemed to rest heavily on Hinton's claims, so his opinions on timelines and AI progress should be quite important.
Do you have any recent source on his claims about AI progress?
So, in your model, how much of the progress towards AGI can be made just by adding more compute + more data + working memory + algorithms that 'just' keep up with the scaling?
Specifically, do you think that self-reflective thought already emerges from adding those?
Can you cite any source that provides evidence for that conclusion?
The process of evolution optimised the structures of the brain themselves across generations; training is only equivalent to the development of the individual. The structures of the brain don't seem to be determined by development alone, and that's one reason why I said "apparent complexity". From Yudkowsky:
- "Metacognitive" is the optimization that builds the brain - in the case of a human, natural selection; in the case of an AI, either human programmers or, after some point, the AI itself.
Yeah, but I would need more specificity than just an example of a brain with a different design.
But you've generalised your position on perspective beyond conscious beings. My understanding is that perspective is not reducible to non-perspective facts in the theory because the perspective is contingent, but nothing there explicitly refers to consciousness.
You can, mutatis mutandis, adopt a different perspective in the description of a problem and arrive at the right conclusion. There's no appeal to a phenomenal perspective there.
The epistemic limitations of minds that correspond to the idea of a perspective-centric epistemology and metaphysics come from facts about brains.
Your claims about the limitations on knowing about consciousness and free will, based on the primitivity of perspective, seem quite arbitrary to me.
The perspective that we are taking is a primitive, but I don't understand why you connect that with consciousness, given that the perspective is completely independent of any claims about it being conscious. I don't see how to link the two non-arbitrarily; the mechanisms of consciousness exist regardless of the perspective taken. The epistemic limitations come from facts about brains, not from an underlying notion of perspective.
And in the case of free will, there's no reason why we cannot have a third-person account of what we mean by free will. There's no problematic loop.
People already turn these things into agents easily, and they already contain goal-driven subagent processes.
Sorry, what is this referring to exactly?
There's not much context to this claim by Yoshua Bengio, but while searching Google News I found a Spanish online newspaper article* in which he claims that:
*https://www.larazon.es/sociedad/20221121/5jbb65kocvgkto5hssftdqe7uy.html