Comments

There's not much context for this claim by Yoshua Bengio, but while searching on Google News I found a Spanish online newspaper article* in which he says:

"We need to create machines that assist us, not independent beings. That would not be a good idea; it would lead us down a very dangerous path."

*https://www.larazon.es/sociedad/20221121/5jbb65kocvgkto5hssftdqe7uy.html

I realised that the level of suffering and the fidelity of the simulation don't need to be correlated, but I didn't make an explicit distinction.

Most think that you need dedicated cognitive structures to generate a subjective 'I'. If that's so, then there's no room for conscious simulacra that feel things the simulator doesn't.

I don't have any tips for this, but as a note, this idea and the idea that the self is not 'real' (that there's no permanent Cartesian homunculus doing the 'experiencing') caused me a lot of dread when I was 13-14. Sometimes I stop to think about it, and I'm amazed at how little it bothers me nowadays.

There are fake/fictional presidents in the training data.

Which labs are those? OpenAI, Anthropic, maybe DeepMind? What else?

The point you brought up seemed to rest heavily on Hinton's claims, so his opinions on timelines and AI progress seem quite important.

Do you have any recent source for his claims about AI progress?

So, in your model, how much of the progress toward AGI can be made just by adding more compute + more data + working memory + algorithms that 'just' keep up with the scaling?

Specifically, do you think that self-reflective thought already emerges from adding those?

Can you cite any source that provides evidence for that conclusion?

 

Evolution optimised the structures of the brain themselves over generations; training is equivalent only to the development of the individual. The brain's structures seem not to be determined by development alone, which is one reason why I said "apparent complexity". From Yudkowsky:

  • "Metacognitive" is the optimization that builds the brain - in the case of a human, natural selection; in the case of an AI, either human programmers or, after some point, the AI itself.

Yeah, but I would need more specificity than just an example of a brain with a different design.

But you've generalised your position on perspective beyond conscious beings. My understanding is that, in the theory, perspective is not reducible to non-perspectival facts because the perspective is contingent, but nothing there explicitly refers to consciousness.

You can, mutatis mutandis, adopt a different perspective in the description of a problem and arrive at the right conclusion. There's no appeal to a phenomenal perspective there.

The epistemic limitations of minds that motivate a perspective-centric epistemology and metaphysics come from facts about brains.
