Comments

"Things we can't talk about" in a relationship is another form of technical debt

I was a negative utilitarian for two weeks because of a math error

So I was like, 

If the neuroscience of human hedonics is such that we experience pleasure at about a +1 valence and suffering at about a -2.5 valence, 

And therefore an AI building a glorious transhuman utopia would get us to 1 gigapleasure, and an endless S-risk hellscape would get us to 2.5 gigapain, 

And we don’t know what our future holds, 

And, although the most likely AI outcome is still overwhelmingly “paperclips”, 

If our odds are 1:1 between ending up in Friendship Is Optimal heaven versus UNSONG hell,

You should kill yourself (and everyone else) swiftly to avoid that EV-negative bet.

 

(noting the mistake is left as an exercise for the reader) 
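
Purely as a sketch of the arithmetic as stated above (using only the numbers given, with 1:1 odds; locating where the reasoning goes wrong is still the reader's exercise), the naive expected value would be

$$\mathrm{EV} = 0.5 \cdot (+1) + 0.5 \cdot (-2.5) = -0.75 \text{ gigavalence}$$

which is how you talk yourself into the "EV-negative bet" framing.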
 

Where did you hear that TTS inspired dath ilan's Governance? 

Move Over, Insect Suffering: The Insect Vibing Hypothesis

I’m pretty bullish on “insect suffering” actually being the hedonic safe haven for the planet’s moral portfolio.

So as a K-selected species, our lives are pretty valuable, in terms of parental investment, time to reproductive fruition, and how long we expect to live.  As such, the neuroscience of human motivation is heavily tilted towards avoiding harm; I think the loss-aversion studies say that people feel gains and losses at about +1/-2.5 valences, so loss is felt much more sharply. (And maybe the average human life is hedonically net-negative for this reason; I go back and forth on that.)
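
Spelling out the back-of-the-envelope arithmetic behind that worry, purely for illustration and assuming equal counts of equally sized gains and losses, the running total would be

$$n \cdot (+1) + n \cdot (-2.5) = -1.5\,n < 0$$

so under those (strong) assumptions the ledger never comes out positive.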

But for an r-selected species, we see all these so-reckless-they’re-suicidal behaviors. A fly is hell-bent on landing on our food despite how huge and menacing we are.  It really wants that food! A single opportunity to feed is huge relative to the fly’s expected lifespan, and if the well-fed fly can go breed, then it’s gonna pop out a thousand kids. Evolutionary jackpot! 

But how much must the fly enjoy that food, and how little must it fear death, for us to see the behaviors we see?

I suspect the r-selected species are actually experiencing hedonically positive lives, and, serendipitously, outnumber us a bajillion to one. 

Earth is a happy glowing ball of joyously screwing insects, and no sad apes can push that into the negative.

Is the average human life experientially negative, such that buying three more years of existence for the planet is ethically net-negative?

(I predict that would help with AI safety, in that it would swiftly provide useful examples of reward hacking and misaligned incentives)

I imagine that WW3 would be an incredibly strong pressure, akin to WW2, causing governments to finally sit up and take notice of AI.

And then they'd spend several trillion dollars running Manhattan Project Two: Manhattan Harder, racing each other to be the first to get AI. 

And then we die even faster, and instead of being converted into paperclips, we're converted into tiny American/Chinese flags.

I suspect that some people would be reassured by hearing bluntly, "Even though we've given up hope, we're not giving up the fight."

Not sarcastically! I wanted to have a Hard Mode available for those whose fasting was going well. 

Vavilov et al. certainly did it with seeds available.

I propose we surround ourselves with edible seeds, too. 
