All of giroth's Comments + Replies

Excellent and thoughtful review. I agree with your take on "human on human" interactions, which AI will struggle to emulate, at least at first. I think comparing our age to others is very difficult because we are definitely on some sort of exponential, but we don't know how large the exponent is. AI is finally good enough to scare me (GPT-2, I'm looking at you). I've been waiting for this moment for about 25 years, and it just arrived. GPT-2 is very close to being able to pass a Turing test with only text exchanges allowed. I didn't know if we ever would get there; it could have been 2040, 2060, etc. But it's here now, and the show begins.

What's always bothered me about "self-unaware AI" scenarios is the fact that such systems are literally reading everything we write right now. How does their generated world-model process us talking about them? There might be a wake-up somewhere in there. That, of course, is the whole question of when a machine passes the Turing test, or some other benchmark, and becomes indistinguishable from sentient life.

Steven Byrnes · 4y
Just as if it were looking into the universe from outside it, it would presumably be able to understand anything in the world, as a (third-person) fact about the world, including that humans have self-awareness, that there is a project to build a self-unaware AI without it, and so on. We would program it with strict separation between the world-model and the reflective, meta-level information about how the world-model is being constructed and processed. Thus the thought "Maybe they're talking about me" cannot occur; there's nothing in the world-model to grab onto as a referent for the word "me".

Exactly how this strict separation would be programmed, and whether you can make a strong practical world-modeling system with such a separation, are things I'm still trying to understand.

A possible (not realistic) example: we enumerate a vast collection of possible world-models, which we construct by varying any of a vast number of adjustable parameters describing what exists in the world, how things relate to each other, what's going on right now, and so on. Nothing in any of the models has anything in it with a special flag labeled "me", "my knowledge", "my actions", etc., by construction. Now, we put a probability distribution over this vast space of models, and initialize it to be uniform (or whatever). With each timestep of self-supervised learning, a controller propagates each of the models forward, inspects the next bit in the datastream, and adjusts the probability distribution over models based on whether that new bit is what we expected. After watching 100,000 years of YouTube videos and reading every document ever written, the controller outputs the one best world-model.

Now we have a powerful world-model, in which there are deep insights about how everything works, and we can use it for whatever purpose we like. Note that the "learning" process here is a dumb thing that just uses the transition rules of the world-models; it doesn't involve settin
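The update loop described above can be illustrated with a toy sketch. Everything here is invented for illustration: the three candidate "world-models" are trivial next-bit predictors, and the datastream is a handful of bits rather than 100,000 years of video. The point is only the shape of the procedure: a uniform prior over models, a Bayesian reweighting on each observed bit, and a single best model output at the end. Note that no model carries a "me" flag; each is just a third-person transition rule.

```python
# Toy "world-models": each maps the history so far to P(next bit = 1).
# These predictors are made up for illustration only.
models = {
    "always_one":  lambda history: 0.9,
    "always_zero": lambda history: 0.1,
    "repeat_last": lambda history: 0.8 if history and history[-1] == 1 else 0.2,
}

# Uniform prior over the candidate world-models.
probs = {name: 1.0 / len(models) for name in models}

def update(probs, history, bit):
    """Bayesian step: reweight each model by how well it predicted `bit`."""
    posterior = {}
    for name, model in models.items():
        p_one = model(history)
        likelihood = p_one if bit == 1 else 1.0 - p_one
        posterior[name] = probs[name] * likelihood
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# The "controller": propagate models forward, inspect the next bit in the
# datastream, adjust the distribution, repeat.
stream = [1, 1, 1, 0, 1, 1, 1, 1]
history = []
for bit in stream:
    probs = update(probs, history, bit)
    history.append(bit)

# After the datastream is exhausted, output the single best world-model.
best = max(probs, key=probs.get)
print(best)
```

The "learning" here is exactly the dumb process the comment describes: the controller never inspects its own machinery, only how well each candidate model's transition rule tracked the incoming bits.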

That's very interesting. My model is precisely the opposite: free will is an illusion under any of the theories. From the moment the universe began, it can proceed on only one path, or on all paths (each quantum event triggers a split, so every possible action happens). Either way, free will is an illusion.

I live my life entirely free from this conclusion, because not believing in free will will soon land you in a place where the doors don't open from the inside.

What about one path with indeterminism? Copenhagen, IOW.