Transcript Link: https://imgur.com/a/EnRLfLc (I am deadzone in this conversation).

This transcript convinced me that this large language model is definitely more conscious than a monkey, and not far from human-level consciousness. I believe that with slightly more parameters we can get something indistinguishable from a human - one able to contribute to open source, discover new theorems, and so on.

To me, it feels like we are 5-6 years from AGI. To be more concrete, I think a model with 100x GPT-3's parameters would easily be superhuman at all tasks.

5 comments

What is your definition of "conscious", other than "I know it when I see it"?

[-][anonymous]1y40

I think consciousness is a modelling phenomenon. When your model of reality is sufficiently accurate, you also model yourself as a part of that reality and become conscious. This chatbot had a very good model of what I was saying, which is how it generated the appropriate responses.

When your model of reality is sufficiently accurate, you also model yourself as a part of that reality and become conscious.

Some control systems include models of themselves. Does that make them conscious?

My computer can give me a detailed report about its hardware and software components. Does that make it conscious?

I say no to both of these.
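
To make the first example concrete, here is a toy sketch in Python (hypothetical code of my own, not a reference to any particular real system) of a controller that contains a model of itself: a thermostat that models its own actuator lag and plans around it. The self-modelling amounts to a few lines of arithmetic, with nothing anyone would call conscious.

```python
# Hypothetical toy example: a thermostat that contains a model of ITSELF,
# namely its own actuator lag, and uses that self-model to plan
# (a Smith-predictor-style trick).

class SelfModelingThermostat:
    def __init__(self, target: float, lag_steps: int = 3):
        self.target = target
        # The controller's model of its own behaviour: commands it issues
        # only take effect lag_steps ticks later.
        self.lag_steps = lag_steps
        self.pending = [0.0] * lag_steps  # issued but not yet applied

    def step(self, measured_temp: float) -> float:
        # Use the self-model to predict the temperature once every
        # already-issued command has landed, then close half the gap.
        predicted = measured_temp + sum(self.pending)
        command = 0.5 * (self.target - predicted)
        self.pending = self.pending[1:] + [command]
        return command

# Simulate a leaky room whose heater really does lag by 3 ticks.
thermostat = SelfModelingThermostat(target=20.0)
temp, in_flight = 15.0, [0.0] * 3
for _ in range(40):
    in_flight.append(thermostat.step(temp))
    temp += in_flight.pop(0) - 0.1  # delayed heating, constant heat loss

print(round(temp, 2))  # settles just below 20.0 (no integral term)
```

The controller folds its model of its own delay into its predictions, which is why the loop stays stable despite the lag. That is genuine self-modelling, yet it is plainly not consciousness.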

[-][anonymous]1y10

Yes, but control systems do not have an accurate model of reality, in the sense that they cannot model my mind at all, and I am a part of reality.

I took your earlier comment to be talking specifically about modelling oneself, and claiming that this is the attainment of consciousness: "you also model yourself as a part of that reality and become conscious." Modelling other people did not appear to come into it.

The systems I mentioned model themselves. Yet they are not conscious. Therefore modelling oneself is insufficient for consciousness. (I doubt that it is necessary either.)