As a nonaccommodationist vegan (I hadn't heard that term before; it probably applies to me, though there are plain readings of it that wouldn't), I think you're right that we do tend to be crazier. It's a fringe view, and people get there for a whole host of reasons, many of which come with "baggage" in some form or other. Many have trauma that impacts them deeply in a lot of ways, some healthy (intolerance of harm), some unhealthy (see all the negative side effects of trauma). Others are simply contrarians who like being edgy or fringe. Others are looking for some extreme lens through which to view the world where they're the hero/"good guy" and everyone else is evil. Those are the main crazy nonaccommodationist vegan archetypes I've seen and unpacked, but I'm sure there are others.
That doesn't make it ok to exploit animals though.
Note that the reference is specifically about older patients and about diagnosing a specific form of "crazy". Does it generalize to all forms of "crazy"? I don't know; I haven't looked into it at all. I was just curious, wanted to read the citation, and thought this was worth noting.
From the conclusion: "Disorientation to time is a useful guide to the presence and severity of dementia or delirium in older hospital patients."
"I was rolling my eyes about how they'd now found a new way of being the story's subject"
That reads to me like it's still rolling eyes at a status overreach, just a slightly different one than most people would roll their eyes at.
Unfortunately I don't have a super fleshed-out perspective on this, and I stopped reading after looking at the causal graph, so I could be totally missing something important (although I did skim the rest and searched for the text "influen" to find things related to this thought). I know that's not the best way to engage, so feel free to downvote, take this with a grain of salt, and sorry in general for a lower-effort comment.
The step in the graph from "I get higher reward on this training episode" -> "I have influence through deployment" doesn't really sit right with me. Is it only true for the final model (or few models) that actually get selected and deployed? It shifts the vibe from a neutral connotation to something more negative. It definitely seems like one possible outcome of the selection process, but for some reason it sits weird in my head, like something someone who already thinks AI is going to turn power-seeking would come up with, even though it's kind of a jump. I wish I could articulate this better, but I'm not trying to spend a ton of time teasing out my thoughts. I wasn't going to comment, but I didn't see anyone else saying something along these lines, and it seemed worth putting out there.
Obviously I could well be wrong, or have missed something obvious, or I'm just biased the other way towards "LLMs are not that malicious", but idk, it just gave me a tiny note of discord.
This reminds me of https://www.lesswrong.com/posts/5FAnfAStc7birapMx/the-hostile-telepaths-problem
I haven't read this yet, but the general idea reminds me of The Living Fossils: https://thelivingfossils.substack.com/p/the-fossil-record-so-far
I didn't know there was (going to be?) an epilogue to planecrash, but it didn't leave me nearly as thirsty for more as HPMOR did. With HPMOR, I wanted to see what everyone would do next, since they're still pretty young, whereas with planecrash, it felt like everything I was curious about was explored to my satisfaction. Sure, we don't get a lot of specifics on the new society (or societies) on Golarion, but that's pretty fine with me. It would be interesting to see what The Future holds, or where the language guy ends up, but the former feels right as a mystery, while the latter seemed pretty well foreshadowed.
See also: https://en.m.wikipedia.org/wiki/XY_problem
Yeah, it's strange. I wouldn't be surprised if people attracted to LessWrong tend to have less robust theory of mind, i.e., if we tend to be more autistic (I'm probably dipping my toes on the spectrum but haven't been diagnosed, and many of my closest friends are autistic). That could lead to a theory-of-mind version of the breakfast question (which, to be clear, looks like a pretty racist meme; I've only seen it on that Know Your Meme page, and I think the ties to race are gross. The point I'm trying to make is not race-related at all), where if you ask "How would you feel if you were someone else?" people say "What do you mean? I'm not them, I'm me."
I also posted it on the EA forum, and it did a lot better there https://forum.effectivealtruism.org/posts/fcM7nshyCiKCadiGi/how-the-veil-of-ignorance-grounds-sentientism
"a superintelligent AI would change its utility function to the simplest one, as described here. But I don't see why it shouldn't do that. What do you think about this?"
I don't think a superintelligent AI would change its utility function as you describe; I think the constraints of its existing utility function would be way too ingrained, and it would not want to change it in those ways. While I think the idea you're putting forward makes sense and gets us closer to an "objective" morality, I think you're on the same path as Eliezer's "big mistake" of thinking that a superintelligent AI would just want to have an ideal ethics, which isn't a given (I think he talks about it somewhere in here: https://www.readthesequences.com/Book-V-Mere-Goodness). For example, the current path of LLM-based AI is essentially a conglomeration of human ethics based on what we've written and passed into the training data. It tends not to be more ethical than us, and in fact early AI chatbots that learned from people interacting with them could easily become very racist.
"By the way, have I convinced you to accept total hedonistic utilitarianism?"
Well, I already thought that suffering is what roots ethics at the most basic level, so in a sense yes; but I also think we do better at that by using higher-level heuristics rather than trying to calculate everything out, so in that sense, no?
Watching Okja was a key part of me realizing how important animal rights are.
Knowing in the back of my mind that factory farming is bad? I sleep.
Watching a CGI mythical pig-like creature go through it? Real shit (aka look into things more and get a deeper understanding of the truth, inspect my values, then act accordingly).