"Membranes" enthusiast! Everything on my account relates to this!
I am currently in Prague at the Epistea Residency for rationality research, continuing my philosophy (rationality + AI safety) research on membranes. I will be back in the US in December. (I might be in the Bay Area then, but that's entirely dependent on getting funding.)
I have some drafts I've been working on for several months that I hope to post... eventually.
I agree with this. What if it were actually possible to formalize morality? (Cf. «Boundaries» for formalizing an MVP morality.) Inner alignment seems like it would be a lot easier with a good outer alignment function!
So it seems to me that the trauma comes from believing that you have been violated, because authority figures or parents or society tell you that you have, not from being violated itself.
need to think about this more, gee
I like this as a partial explanation for the root cause of unenlightenment/un-rationality (though I can think of other factors that probably have larger effects).
Also, have you read The Courage to Be Disliked? Separation of Concerns, Gordian Knot... very similar.
I'm surprised that there aren't many organizations that hire people who are incubating new/weird/unusual agendas. I think FAR does this, but they seem pretty small.
Otherwise, for independent research, it's kinda just LTFF (?)
As for how well LTFF does this: well, I applied a few months ago, and they finally got back to me recently (about 10 weeks later) and rejected my application, with no feedback and no questions. I'm not even sure they understood what my agenda is.
Rather, it's in terms of the grasshopper (an object/physical system), its physiological properties (age, point in the circadian cycle, level of hunger, temperature, etc.), and its distances and exposures to the relevant stimuli, such as the topology of light and shade around it, gradients of certain attractive smells, and so on. So, under this level of coarse-graining, we treat these as the "grasshopper's properties".
hm… how would an AI system locate these properties to begin with?
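To make the question concrete, here's a minimal sketch (in Python; every name here is hypothetical, not from the quoted post) of what one such coarse-grained bundle of "grasshopper's properties" might look like once located:

```python
from dataclasses import dataclass

# Hypothetical illustration only: one possible coarse-graining of the
# grasshopper's state, following the quoted description. The hard part
# the question above asks about is how an AI would discover this carving
# of the world into object + properties in the first place.

@dataclass
class GrasshopperProperties:
    # physiological properties
    age_days: float
    circadian_phase: float   # 0.0-1.0, point in the circadian cycle
    hunger: float            # 0.0 (sated) to 1.0 (starving)
    temperature_c: float
    # distances/exposures to the relevant stimuli
    light_exposure: float    # local light/shade topology, summarized
    smell_gradient: float    # strength of an attractive smell gradient

example = GrasshopperProperties(
    age_days=40.0, circadian_phase=0.6, hunger=0.3,
    temperature_c=29.5, light_exposure=0.8, smell_gradient=0.1,
)
```

The sketch only shows what a located property bundle could look like; it says nothing about the discovery procedure, which is exactly the open question.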
I haven't written the following on LW before, but I'm interested in finding the setup that minimizes conflict.
(An agent can create conflict for itself when it perceives its sense of self as too large in some regard, and/or as too small in some regard.)
Also I must say: I really don't think it's (straightforwardly) subjective.
Related: "Membranes" is better terminology than "boundaries" alone
Thanks for writing this. I also think that what we want from pseudokindness is captured by membranes/boundaries.
Why is this post not tagged Front Page?