Gunnar_Zarncke

Software engineering, parenting, cognition, meditation, other


Comments

When discussing the GPT-4o model, my son (20) said that it enables higher-bandwidth communication with LLMs; he called it "a symbiosis." We discussed that there are further stages beyond this, like Neuralink. I think there is a small chance that this (a close interaction between a human and a model) can be extended in such a way that it becomes aligned the same way a human is internally aligned, as follows:

This assumes some background about Thought Generator, Thought Assessor, and Steering System from brain-like AGI.

The model is already the Thought Generator. The human already has a Steering System; it is not directly accessible, but it can plausibly be reverse-engineered. What is missing is the Thought Assessor: something that learns to predict how well the model satisfies the Steering System.
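A minimal sketch of what such a Thought Assessor could look like, assuming explicit human approval signals stand in for the Steering System and using toy, hypothetical names and features (ThoughtAssessor, featurize); this is an illustration of the idea, not a proposal of the actual mechanism:

```python
# Toy "Thought Assessor": a small learned model that predicts how well a
# candidate output from the Thought Generator (the LLM) satisfies the human
# Steering System, using explicit human approval labels as training signal.
import numpy as np

def featurize(thought: str) -> np.ndarray:
    """Toy stand-in for an embedding of a generated thought."""
    return np.array([len(thought) / 100.0,
                     thought.count("?") / 5.0,
                     float("help" in thought.lower())])

class ThoughtAssessor:
    """Logistic model: predicts the probability that the human approves a thought."""
    def __init__(self, dim: int, lr: float = 0.1):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr = lr

    def predict(self, x: np.ndarray) -> float:
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def update(self, x: np.ndarray, approved: float) -> None:
        # One gradient step on log-loss against the human's approval signal.
        err = self.predict(x) - approved
        self.w -= self.lr * err * x
        self.b -= self.lr * err

assessor = ThoughtAssessor(dim=3)

# Online loop: the human (Steering System) labels generated thoughts,
# and the assessor learns to anticipate those labels.
feedback = [("Can I help you plan the week?", 1.0),
            ("Here is an overwhelming wall of text " * 5, 0.0)]
for thought, approved in feedback * 50:
    assessor.update(featurize(thought), approved)

# Later, rank fresh candidates from the Thought Generator before showing them.
candidates = ["Shall I help summarize this?", "Lengthy unsolicited monologue " * 4]
print(sorted(candidates, key=lambda t: -assessor.predict(featurize(t))))
```

In practice the features would be model embeddings and the labels would come from richer behavioral or physiological feedback, but the shape of the loop (generate, assess against the human's steering signal, iterate) is the point.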

Staying close to the human may be better than finding global solutions, or it may at least allow smaller-scale optimization and iteration.

Now, I don't think this is automatically safe. The human Steering System is already running outside its specs, and a powerful model can find the breaking points (the same way global commerce has found the breaking points of our appetites). But these are problems we already have, and this setup provides a "scale model" for working on them.

The Big Rule Adjustment

This section is really the timeless part of the post and comes much too late; it probably deserves its own post. I'd like to see a link to where Zvi has "said this before."

I recommend making this into a full link-post. I agree about the relevance for AI alignment. 

For what it's worth, I think we will soon see "robots" or LLMs or some such systems that have meta-consciousness or self-consciousness. There are reports of LLMs passing the mirror test, and if they can do that and argue the case (I have seen pretty advanced arguments about reflection too), then you have meta-consciousness as well.

Sentience: This feels like a continuous thing that gets less and less sophisticated as we go up the information history. In each generation, the code gets a little better at using the laws of physics and chemistry to preserve itself.

I think that getting better at using the laws of physics to reproduce is some stage before sentience. Sentience as defined by Singer is about responses to pleasure and pain stimuli, which is a specific adaptation that requires specific neural pathways that are not present, e.g., in bacteria. I'm fine with adding another layer before sentience, let's call it reproduction, and maybe that one is continuous as you suggest, but it stretches what people call consciousness. Sure, you can define consciousness to include that layer, and maybe that is what people call panpsychism, but to me that seems more like expanding a definition via an affect heuristic.

But the last two still feel too strong. I will think more about it.

I'm not sure what "the last two" refers to. :confused:

Answer by Gunnar_Zarncke

I like the idea! Granting that you have it at some point, the question is when it starts. Let's look at that for different aspects of consciousness:

  • Sentience (Bentham, Singer): Behavioral responses to pleasure or pain stimuli and physiological measures. This is observable across animal species, from mammals to some invertebrates, and it should be known when responses to such stimuli start in the embryo.

  • Wakefulness: Measurable in virtually all animals with a central nervous system by physiological indicators such as EEG, REM, and muscle tone. The fetus is known to have a sleep-wake rhythm, but I don't know when it starts.

  • Dennett's Intentionality: Treating living beings as if they have beliefs and desires makes good predictions for many animal species, especially social ones like primates, cetaceans, and birds. Infants show goal-directed behaviors right after birth, and I remember ultrasound images that show babies sucking their thumb. I think we can identify when the nervous system first becomes capable of goal direction.

  • Dehaene's Phenomenal Consciousness: A perception or thought is conscious if you can report on it. As this requires language, or measuring neural patterns similar to those of humans during comparable reports, I think this starts when communicable representations of perceptions first form; for toddlers that is around age one at the earliest, with baby sign language.

  • Gallup's Self-Consciousness: Recognition of oneself, e.g., in a mirror. Requires sufficient sensory resolution and intelligence to form a self-model. Ditto.

  • Rosenthal's Meta-Consciousness: This is investigated through introspective reports on self-awareness of cognitive processes or self-reflective behaviors. Requires more abstraction. Maybe at age five?

Hm. You could make quizzes yourself, but that took some effort. It seems the paiq quizzes are standardized and easy to make. Nice. Many OkCupid tests were more like MBTI tests. Here is where people are discussing one of the bigger ones.

People try new dating platforms all the time. It's what Y Combinator calls a tarpit. The problem sounds solvable, but the solution is elusive.

As I have said elsewhere: Dating apps are broken because the incentives of the usual core approach don't work.

On the supply side: Misaligned incentives (keep users on the platform) and opaque algorithms lead to bad matches.

On the demand side: Misaligned incentives (first impressions, low cost to exit) and no plausible deniability lead to predators being favored.

People start dating portals all the time. If you start with a targeted group that gets high value from it, you can plausibly overcome the network effect. Otherwise, you couldn't start any network app, and the biggest one would automatically win. So I think your argument proves too much.

The quizzes sound like something OkCupid also used to have, as does everything that reduces the need for first impressions. I hope they keep it.
