ScienceBall
Comments

Neuroscience of human social instincts: a sketch
ScienceBall · 3mo · 30

About the example in section 6.1.3: Do you have an idea of how the Steering Subsystem can tell that Zoe is trying to get your attention with her speech? It seems to me that this requires both (a) identifying that the speech is trying to get someone's attention, and (b) identifying that the speech is directed at you. (Well, I guess (b) implies (a) if you weren't visibly paying attention to her beforehand.)

About (a): If the Steering Subsystem doesn't know the meaning of words, then how can it tell that Zoe is trying to get someone's attention? Is there some way to tell from the sound of the voice? Or is it enough to know that there were no voices before and Zoe has just started talking, so she's probably trying to get someone's attention in order to talk to them? (But that doesn't cover all the cases in which Zoe might try to get someone's attention.)

About (b): If you were facing Zoe, then you could tell that she was talking to you. If she said your name, then the Steering Subsystem might recognize your name (having used interpretability to get it from the Learning Subsystem?) and know she was talking to you. Are there any other ways the Steering Subsystem could tell whether she was talking to you?

I'm not sure how many false positives vs. false negatives evolution will "accept" here, so I'm not sure how precise a check to expect.
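To make my question more concrete, here's a toy sketch in Python of the kind of check I'm imagining. Everything here (the cue names, the signals, the and/or structure) is my own invention for illustration, not something from the post:

```python
from dataclasses import dataclass

@dataclass
class SpeechEvent:
    # Hypothetical word-meaning-free cues a Steering Subsystem might have.
    loudness: float
    baseline_loudness: float
    follows_silence: bool          # no voices before; Zoe just started talking
    speaker_facing_me: bool
    matches_own_name_sound: bool   # would need "my name" from the Learning Subsystem

def seems_attention_getting(e: SpeechEvent) -> bool:
    # (a) Is this speech trying to get *someone's* attention?
    return e.loudness > e.baseline_loudness or e.follows_silence

def seems_directed_at_me(e: SpeechEvent) -> bool:
    # (b) Is it directed at *me*?
    return e.speaker_facing_me or e.matches_own_name_sound

def zoe_wants_my_attention(e: SpeechEvent) -> bool:
    # How loose or strict these checks should be depends on how many
    # false positives vs. false negatives evolution will "accept".
    return seems_attention_getting(e) and seems_directed_at_me(e)
```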

adamzerner's Shortform
ScienceBall · 4mo · 12

I couldn't click into this from the front page by clicking the zone where the text content would normally go, but I could click into it by clicking the reply-count icon in the top-right corner. (That wouldn't have worked when there were zero replies, though.)

Slopworld 2035: The dangers of mediocre AI
ScienceBall · 5mo · 10

> The UK government also heavily used AI chatbots to generate diagrams and citations for a report on the impact of AI on the labour market, some of which were hallucinated.

This link is broken.

[Intuitive self-models] 2. Conscious Awareness
ScienceBall · 7mo · 30

Thank you for writing this series.

I have a couple of questions about conscious awareness, and a question about intuitive self-models in general. They might be out-of-scope for this series, though.

Questions 1 and 2 are just for my curiosity. Question 3 seems more important to me, but I can imagine that it might be a dangerous capabilities question, so I acknowledge you might not want to answer it for that reason.

  1. In 2.4.2, you say that things can only get stored in episodic memory if they were in conscious awareness. People can sometimes remember events from their dreams. Does that mean that people have conscious awareness during (at least some of) their dreams?
  2. Is there anything you can say about what unconsciousness is? That is, why is there nothing in conscious awareness during this state? Is the cortex not thinking any (coherent?) thoughts? (I have not studied unconsciousness.)
  3. About the predictive learning algorithm in the human brain: what types of incoming data does it have access to, and what types of incoming data is it building models to predict? I understand that it would be predicting data from your senses of vision, hearing, touch, etc. But when it comes to building an intuitive self-model, does it also have data that directly represents what the brain algorithm is doing (at some level)? Or does it have to infer the brain algorithm from its effects on the external sense data (e.g. motor control changing what you're looking at)? (See the toy sketch after this list.)
    1. In the case of conscious awareness, does the predictive algorithm receive "the thought currently active in the cortex" as an input to predict? Or does it have to infer the thought when trying to predict something else?
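To pin down what I'm asking in questions 3 and 3.1, here's a toy sketch in Python contrasting the two possible data flows. The framing and all names here are mine, purely for illustration:

```python
def training_streams_hypothesis_a(vision, audio, touch, active_thought):
    # Hypothesis A: the currently active cortical thought is itself one of
    # the streams the predictive learner receives and is trained to predict.
    return {"vision": vision, "audio": audio, "touch": touch,
            "thought": active_thought}

def training_streams_hypothesis_b(vision, audio, touch):
    # Hypothesis B: only external sense data comes in; "what the brain
    # algorithm is doing" has to be inferred as a latent variable, because
    # positing it helps predict the external streams (e.g. predicting how
    # motor control will change what you're looking at).
    return {"vision": vision, "audio": audio, "touch": touch}
```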