I'm getting the impression that "consciousness" is inherently not well defined; that is, there is no singular thing we can point to that will meaningfully determine whether or not something is "conscious".
In this sense, consciousness might be a red herring. A similar but more concrete question worth asking: what behaviours would an AI agent have to exhibit for you to want it to be granted fundamental rights/autonomy? Or otherwise for it to be intrinsically unethical to create and run an instance of it?
That makes sense—everything in context. I wouldn't want to go around assuming that I can just tease anyone who is experiencing psychological distress, but I think I do have a sense of specific circumstances where it feels appropriate. And hey, I cannot remember the last time I looked like an asshole, so I'm probably overdue anyway.
Reading this has made something click for me, I think.
The other day a friend of mine had what he felt was an extremely embarrassing moment, although really it was not nearly as bad as he thought. I had this blog series in mind while we were assuring him it was fine, and it didn't quite connect with him. I knew it wouldn't, but I also didn't really know what else to do, so I felt awkward, all while worrying that my awkwardness would make things worse.
Part of it is that I was hiding information, in that I actually found the situation interesting and slightly fun but I didn't feel secure in demonstrating that because I was afraid of standing out, failing the bid and making myself look like an asshole. But now I'm realising that going all-in on how I really felt and approaching it with a sense of playfulness would have both been more honest and probably would have defused the situation better.
I have frequently had the experience of wanting to console someone who is experiencing an emotional difficulty, but something about my attempt feels performative and effortful even though I do actually care. In hindsight I think that I am hiding some of my authentic experience because I feel like I'm supposed to Take Their Emotions Seriously, and that any positivity or playfulness would come across as being dismissive. I think I'm starting to understand where the disconnect is and how I could better handle these situations.
Another excellent post. This particular post has clarified the framework for me enough that I could imagine it impacting my interactions with people.
It seems like this is formalising things that people tend to gain a partial intuition for through social interaction.
How much of this is prescriptive vs. descriptive? I could use music theory to explain why a song sounds good, but in most cases music theory works better as a post-hoc explanation than as instructions for how to write good music. Do you think this framework is useful for learning how to change people's expectations/beliefs/attention/etc., or is it a description of something that could be learnt just as well without the framework?
I very much enjoyed this short story.
Before I read the spoiler text at the end, I was confused about the postscript. While the main story has a very clear metaphor and intention, the postscript completely diverges from that and instead sets up the intro to a clichéd YA science-fantasy action novel; it could have been written by James Patterson. I wonder if modern LLMs would do any better.
Edit: I tried it with ChatGPT. It gave a more realistic opener that matches the text better, but it was too explicit in calling back to phrases directly used in the text, like someone trying to show off how much they remember. Plausibly this could be fixed with the right prompting.
I'm very much enjoying the series so far.
I find this topic very interesting, but it's hard for me to tell what these techniques would look like in practice. It might help to include more examples of what they look like. Or maybe it's the kind of thing that I would need to experience personally to get it?
How do you know how much respect the other person is giving you, so that you can successfully bid for attention? Is this just a matter of experience?
Very interesting post. I'll be interested to see how this fits in with other psychological frameworks that have been posited on this forum, like Chipmonk's and Steven Byrnes'.
Some of what you've said so far resonates with me—I have had the experience of a particular instance of suffering dissolving pretty quickly once I noticed that the thing I was observing and the suffering I felt from it are distinct and can be separated from each other. Some of this seems unlike anything I've heard before (like the Attention-Respect-Security model) and I'm curious to see how this works in practice.
I think the compliment sandwich can be useful as a stepping stone to good communication. That said, I think of it as a narrow formulation of a more general (and less precisely defined) approach to conversation that I might point to with phrases like "work with people where they are at" and "be aware of the emotions that your words induce in other people". There was an article on LessWrong that I can't find, arguing that clear communication is worded to pre-emptively avoid likely misunderstandings and misconceptions. The idea I'm pointing to is like that, but concerning the emotional interpretation of your words rather than the literal meaning. I think this can apply just as much to the rationalist community as to any other community (although I haven't had any conversations with rationalists, so I don't know for sure).
Like literary and conversational techniques in general, if it is followed as a hard rule then it risks coming across as formulaic and hence inauthentic. However, I can imagine that it might be useful to adopt the compliment sandwich as a rule until you gain a deeper understanding of the underlying mechanics.
I very much appreciate this post, because it strongly resonates with my own experience of laziness and willpower. Reading this post feels less like learning something new and more like an important reminder.
I suspect that "enlightenment" is probably a bundle of different things rather than one discrete thing, and maybe what it means depends on the culture and even on how an individual relates to the world. This is based on the heuristic that when you dig into the nature of mental states, they tend not to fall into neat categories that are the same from person to person.
However, there are people alive today who claim to be "awakened", who were certainly already self-aware, and who still describe a dramatic change in their perception of the world. The descriptions tend to fall along similar lines, and include:
This sounds like there's something more going on than gaining consciousness, and in some ways points in the opposite direction. It is often described as more of an "unlearning" than a learning.