Comments

Ratios · 1mo · 122

S-risks are barely discussed on LW. Is that because:

  • People think they are so improbable that they're not worth mentioning.
  • People are scared to discuss them.
  • People want to avoid creating hyperstitious textual attractors.
  • Other reasons?
Ratios · 1mo · 5-5

Damn, reading Connor's letter to Roon had a psychoactive influence on me; I got Ayahuasca flashbacks. There are some terrifying and deep truths lurking there.

Ratios · 2mo · 40

It's not related to the post's main point, but the U-shaped happiness finding seems questionable. Other analyses suggest happiness simply declines with age, and in general this type of research shouldn't be trusted.

The U-shaped happiness curve is wrong: many people do not get happier as they get older (theconversation.com)

Ratios · 7mo · 55

Oh, come on, it's clear that the Yudkowsky post was downvoted because it was bashing Yudkowsky and not because the arguments were dismissed as "dumb." 

Ratios · 7mo · 21

Thank you for your response, Caerulea. Many of the emotions and thoughts you mentioned resonate with me. I truly hope you find peace and a sense of belonging. For myself, I've found solace in understanding that my happiness isn't really determined by external factors, and that I'm not to blame or responsible for the way the world is. It's possible to find happiness in your own bubble, provided you have the necessary resources – which can sometimes be a challenge.

Ratios · 7mo · 10

Because you have a pretty significant data point (one that spans millions of years) on Earth, and nothing else is going on (to the best of our knowledge). The question now is how much weight you want to give to this data point. Reserving judgment means almost ignoring it. For me, it seems more reasonable to update towards a net-negative universe.

Ratios · 7mo · 30

I agree that looking at reality honestly is probably quite detrimental to happiness or mental health. That's why many people opt out of these conversations using methods like downvoting, sneering, or denying basic facts about reality. Their aim is likely to avoid the realization that we might be living in a world that is somewhat hellish. I've seen this avoidance many times, even in rationalist spaces, although rationalists are generally better at facing it than others, and some, like Brian Tomasik and Nate Soares, even address it directly.

I've spent a lot of time thinking about these issues – not necessarily a wise choice. I'd humbly advise you to reconsider going down this rabbit hole. I haven't penned down my conclusions yet, which are a bit idiosyncratic (I don't strictly identify as a negative utilitarian). But to summarize, if you believe that conscious experience is paramount and that pain and suffering are inherently bad, then our world is probably net negative. This perspective isn't just about humans; it's about foundational principles like the laws of physics and evolution.

It doesn't seem like much of a stretch to argue that things are already way beyond the threshold and that it is too late to salvage the situation.

Interestingly, I still harbor hope. Maybe, for consciousness to emerge from nothing, life had to endure the brutal phase of Darwinian Evolution. But the future could be so bright that all the preceding suffering might be viewed as a worthy sacrifice, not a tragedy. Think of the pain a mother experiences during childbirth as a metaphor (but this birth has lasted millions of years). Alternatively, consciousness might vanish, or the world could become truly hellish, even more than its current state. The outcome isn't clear, but I wouldn't exclude any of these options.

Ratios · 7mo · 10

You don't need a moral universe; you just need one where joy outweighs suffering for conscious beings ("agents"). There are many ways in which that can happen:

  1. Starting in a mostly hostile world but converging quickly towards a benevolent reality created by the agents.
  2. Existing in a world where the distribution of bad vs. good external things the agent can encounter is roughly balanced.
  3. Existing in a hostile world in which the winning strategy is leeching onto a specific resource (which grants internal satisfaction once reached).

I'm sure you can think of many other examples. Again, it's not clear to me intuitively that the existence of these worlds is as improbable as you claim.

Ratios · 7mo · 32

You're right about my misunderstanding. Thanks for the clarification.

I don't think the median moment is the correct KPI if the distribution has high variance, and I believe that is the case with pain and pleasure experiences. Extreme suffering is so bad that most people would need a lot of "normal" time to compensate for it. I would guess that most people would not trade torture for life extension at a 1:1 ratio, and probably not even at 1:10. (E.g., you get tortured for X time and get your life extended by aX time in return.)
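To make the point concrete, here's a toy sketch (with made-up numbers, purely illustrative) of how a high-variance distribution can have a positive median moment while the total is strongly negative:

```python
from statistics import mean, median

# Hypothetical life: 9,999 ordinary moments worth +1 each,
# plus one episode of extreme suffering worth -100,000.
moments = [1] * 9_999 + [-100_000]

print(median(moments))  # the typical (median) moment is positive: 1
print(mean(moments))    # but the average is strongly negative: -9.0001
```

Judging this life by its median moment calls it good; judging it by the sum calls it very bad, which is why the median is a misleading KPI here.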

See, for example:
A Happy Life Afterward Doesn't Make Up for Torture - The Washington Post
