«Membranes/Boundaries» enthusiast! Everything on my account relates to this!
Hot stuff coming soon!
But I don't get why we don't view an observation of events we caused, which another person doesn't like, as an attack.
I think an individual who is skilled at distinguishing what is theirs (what they, and only they, can observe; what they, and only they, can control) from what isn't theirs would not register this as an attack.
In practice, though, I think many people are bad at this skill, so when they hear someone say "(You're doing a bad thing!)", they interpret it not as just a remark coming from somewhere outside of themselves, but as something coming from inside of them— as a statement that already has merit.
Meanwhile, if you were to instead say "(You've been late: objective fact)", you haven't tried to make their judgement for them (as you would have in the previous example). Then they can step through the logic themselves: "Have I been late? Yes. Okay, what does that mean? …".
(Praise can be harmful for the same reason.)
I'm writing a post about what I think is the generator and compressed representation of things like NVC; subscribe to my posts on my profile to get notified when I post it.
Thank you for compiling this! I've been working with a counselor for the past year using basically this method with great results, though I've only found this book and post today. Great to see support for the idea.
Related (H/T @Roman Leventov - comment):
Dalton Sakthivadivel showed here that boundaries (i.e., sparse couplings) do exist and are "ubiquitous" in high-dimensional (i.e., complex) systems.
I think that boundaries […] are an undeniably important concept, usable for inferring ethical behaviour. But I don't think a simple "winning" deontology is derivable from this concept.
I see
I'm currently preparing an article where I describe that from the AI engineering perspective, deontology, virtue ethics, and consequentialism
please lmk when you post this. i've subscribed to your lw posts too
FWIW, I don't think the examples given necessarily break «membranes» as a "winning" deontological theory.
A surgeon intruding into the boundaries of a patient is an ethical thing to do.
If the patient has consented, there is no conflict.
(Important note: consent does not always nullify membrane violations. In this case it does, but there are many cases where it doesn't.)
If an AI automated the entire economy, waited until humanity completely lost the ability to run civilisation on its own, and then suddenly stopped all maintenance of the automated systems that support human lives, watching humans die out because they cannot support themselves, that would be "respecting humans' boundaries", but it would also be an evil treacherous turn.
I think a way to properly understand this might be: if Alice makes a promise to Bob, she is essentially giving Bob a piece of herself, and that changes how he plans for the future and whatnot. If she revokes that on terms not part of the original agreement, she has stolen something from Bob, and that is a violation of membranes?
If the AI promises to support humans under an agreement, then breaks that agreement, that is theft.
Messing with Hitler's boundaries (i.e., killing him) in 1940 would be an ethical action from the perspective of most systems that may care about that (individual humans, organisations, countries, communities).
In a case like this I wonder if the theory would also need something like "minimize net boundary violations", kind of like how some deontologies make murder okay sometimes.
But then this gets really close to utilitarianism and that's gross imo. So I'm not sure. Maybe there's another way to address this? Maybe I see what you mean
Okay, I'll try to summarize your main points. Please let me know if this is right.
Have I missed anything? I'll respond after you confirm.
Also, would you please share any key example(s) of #2?
Do you think this relates to «Boundaries» for formalizing a bare-bones morality?
this world where people successfully adapt to superintelligent AI services is a totalitarian police state
& Davidad's Night Watchman
Some of the examples in this post don't make sense to me. For example, where is the membrane in "work/life balance"? Or where is the membrane in "personal space" (see Duncan's post, which is linked)?
I think there's a thing that is "social boundaries", which is like preferences— and there's also a thing like "informational or physical membranes", which happens to use the same word "boundaries", but is much more universal than preferences. Personally, I think these two things are worth regarding as separate concepts.
Personally, I like to think about membranes as a predominantly homeostatic autopoietic thing. Agents maintain their membranes. They do not "set" boundaries, they ARE boundaries.
[I explain this disagreement a bit more in this post.]
IME the process outlined in this book is absolutely right. However, one piece of the framework seems weird to me: it seems to suggest that emotions are the cause of schemas/beliefs:
This doesn't match my intuitions… imo it's more like the reverse: beliefs are the cause of emotions. Where do 'intense emotions' come from?
Maybe emotions are best thought of as doing 'prioritization' or serving some other essential function, but I don't think they're at the bottom.