Anirandis


I'm interested in arguments about the energy-efficiency (and maximum intensity, if those aren't the same thing) of pain and pleasure. I'm looking for any considerations or links regarding (1) the suitability of "H=D" (equal efficiency, and possibly equal intensity) as a prior; (2) whether, given this prior, we have good a posteriori reasons to expect a skew in either the positive or negative direction; and (3) the conceivability of modifying human minds to experience "super-bliss" commensurate with the badness of the worst possible outcome, since the achievable intensities of human experience hinge on these considerations.
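For what it's worth, here is a minimal sketch of how I'm reading the "H=D" notation in (1). The symbols are my own gloss (roughly, peak "hedonium" and "dolorium" efficiency in the sense of Tomasik's essay listed below), not something taken verbatim from the linked pieces:

```latex
% My gloss on the "H=D" prior (notation assumed, not quoted from the linked essays):
%   H = maximum intensity of pleasure producible per unit of energy ("hedonium")
%   D = maximum intensity of suffering producible per unit of energy ("dolorium")
\[
  \text{prior (1):}\quad \frac{H}{D} = 1
  \qquad\qquad
  \text{question (2):}\quad \text{does the evidence favour } \frac{H}{D} > 1 \text{ or } \frac{H}{D} < 1\,?
\]
```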

 

Picturing extreme torture - or even reading accounts of much less extreme suffering - pushes me towards suffering-focused ethics. But I don't hold a particularly strong normative intuition here, and I suspect what intuition I do have stems primarily from the perceived difference in intensities, which of course I have to be careful with. I'd be greatly interested in any insights here, even brief intuition pumps, that I wouldn't already be familiar with.

 

Stuff I've read so far:

Are pain and pleasure equally energy-efficient?

Simon Knutsson's reply

Hedonic Asymmetries

A brief comment chain with a suffering-focused EA on the EA Forum, where some arguments for a negative skew were made that I'm uncertain about
 

I don't think misaligned AI drives the majority of s-risk (I'm not even sure that s-risk is higher conditioned on misaligned AI), so I'm not convinced that it's a super relevant communication consideration here. 

I'm curious what does drive it, in that case; and what proportion of it affects humans (whether currently existing people or future minds)? Things like spite-based threat commitments from a misaligned AI warring with humanity seem like a substantial source of s-risk to me.

What does the distribution of these non-death dystopias look like? There's an enormous difference between 1984 and maximally efficient torture; do you have a rough guess of what the probability distribution looks like if you condition on an irreversibly messed-up but non-death future?

I'm a little confused by the agreement votes on this comment - it seems to me that the consensus around here is that s-risks in which currently existing humans suffer maximally are very unlikely to occur. This seems like an important practical question; could the people who agreement-upvoted elaborate on why they find this kind of thing plausible?

 

The examples discussed in e.g. the Kaj Sotala interview linked further down the chain tend to involve things like "suffering subroutines", for instance.

I have a disturbing feeling that arguing that a future AI should "preserve humanity for Pascal's-mugging-type reasons" trades off x-risk for s-risk. I'm not sure that any of the aforementioned cases encourage the AI to maintain lives worth living.

 

Because you're imagining AGI keeping us in a box? Or because you think P(humans are deliberately tortured | AGI) is substantial and this post increases it?

Presumably it'd take less manpower to review each article the AI has written (i.e. read the citations & make sure the article accurately describes the subjects) than it would to write articles from scratch. I'd guess this holds even if the claims seem plausible & fact-checking requires a somewhat detailed read-through of the sources.

Cheers for the reply! :)

 

integrate these ideas into your mind and it's complaining loudly that you're going too fast (although it doesn't say it quite that way, I think this is a useful framing). Stepping away, focusing on other things for a while, and slowly coming back to the ideas is probably the best way to be able to engage with them in a psychologically healthy way that doesn't overwhelm you

I do try! When thinking about this stuff starts to overwhelm me, I can try to put it all on ice, though usually some booze is required to manage that, TBH.

But of course it's also plausible that destructive conflict between aggressive civilizations leads to horrifying outcomes for us

 

Also, wouldn't you expect s-risks from this to be very unlikely, given that (1) civilizations like this are very unlikely to have substantial measure over the universe's resources, (2) transparency makes bargaining far easier, and (3) few technologically advanced civilizations would care about human suffering in particular, as opposed to e.g. an adversary running emulations of their own species?
