I am sceptical of the role you describe for alcohol, and the dynamics around it, as a form of lie detector, but I know there's a range of social dynamics I haven't necessarily been exposed to in my culture.

I have been in various groups that drink heavily on occasion, but I've never seen any evidence that people who abstain are viewed as having something to hide.

I think alcohol might make people more honest, but usually about things they already wanted to divulge and merely lacked the courage or sense of emotional intimacy that alcohol can provide. It's hard for me to imagine alcohol playing a similar role as a lie detector for significant factual information people strongly want to hide.

Could you offer any examples of where a real lie detector would be valuable in friendships or potential friendships?

A lot of the things I might want to know seem challenging to address via a lie detector. "Will you do anything violent, steal, or intentionally damage my property?" People likely to do those things might honestly intend not to.

I could see it potentially being useful for people having sex more on the casual side.


high-trust friend groups

I'm having a hard time imagining a scenario in which I would find this valuable in my friend groups. If I were ever unsure whether I could trust the word of a friend on an important matter, I'd think that would represent deeper issues than a mere lack of information a scan of their brain could provide. Perhaps I'm naive, or particular in some way in how I filter people.

Do you have examples for how this would aid friendships? Or the other domains you mentioned?

I could see it being very valuable but I also find the idea very frightening, and I am not someone who lies.

And it says something about EITHER the unreliability of intuitions beyond run-of-the-mill situations, or about the insane variance in utility functions across people (and likely time)

I don't think it's really all that complicated. I suspect that you haven't experienced the extent of negative valence that would be sufficient to update you toward understanding how bad suffering can get.

It would be like if you've never smelled anything worse than a fart, and you're trying to gauge the mass of value of positive smells against the mass of value of negative smells. If you were trying to estimate what it would be like in a small room full of decaying dead animals and ammonia, or how long you'd willingly stay in that room, your intuitions would completely fail you.

but only minor changes in value toward the tails.

I have experienced qualia that are just slightly net negative, where non-existence felt preferable, all else equal. Then I've experienced states of qualia that are immensely worse than that. The distance between those two states is certainly far greater than the distance between neutral and extreme pleasure/fulfillment/euphoria, etc. Suffering can just keep getting worse and worse, far beyond the point at which all you can desire is to cease existing.


I think one reason I don't like that sort of thing is that there's more ambiguity in "what it took to win the game."

It's hard to know whether an artificial advantage is proportional to the skill gap. If I win, I won't know the extent to which I should attribute that win to good play (that I ought to be proud of, and that will impress others), vs. attributing it to a potentially greater-than-1/N chance of winning (that I came by artificially).

If greater skill is the absolute advantage that leads me to a win, I will discount the achievement on account of having an absolute advantage, but I'll still feel satisfied that I have achieved a relatively higher skill level.

If an improperly calibrated handicap is the absolute advantage that leads me to a win, it's a win I'd likewise discount on account of the absolute advantage, but in this case I'd garner no satisfaction from having an (artificial) absolute advantage.

Worse still, the win might feel insulting or condescending if I was given a disproportionately large advantage due to my friends'/competitors' underestimation of my expected quality of play.

My win will also not necessarily give my competitors an update as to whether they underestimated my expected quality of play.

If the expectation is that I will win 1/N times, they won't update on my skill level if I win. (Maybe very slightly, and more over the course of many games.)

If I win when the odds are against me, people update significantly on my expected quality of play.

It feels good to know people are updating favourably on my expected quality of play.

Interesting. It is an abstract hypothetical, but I do think it's useful, and it reveals something about how far apart we are in our intuitions/priors.

I wouldn't choose to live a year in the worst possible hell in exchange for 1000 years in the greatest possible heaven. I don't think I would even take the deal in exchange for an infinite amount of time in the greatest possible heaven.

I would conclude that the experience of certain kinds of suffering reveals something significant about the nature of consciousness that can't be easily inferred, if it can be inferred at all.

I’m more confident that I’d spend a year as a bottom-5% happy human in order to get a year in the top-5%

I would guess that the difference between the 0.001st percentile of happiness and the 5th percentile is larger than the difference between the 5th percentile and the 100th. So in that sense it's difficult for me to consider that question.

None of these are actual choices, of course. So I’m skeptical of using these guesses for anything important

I think even if they're abstract, semi-coherent questions, they're very revealing, and I think they're very relevant to the prioritization of s-risks, allocating resources, and issues such as animal welfare.

It makes it easier for me to understand how otherwise reasonable seeming people can display a kind of indifference to the state of animal agriculture. If someone isn't aware of the extent of possible suffering, I can see why they might not view the issue with the same urgency.

Would you spend a year in the worst possible hell in exchange for a year in the greatest possible heaven?

I think this is a good summary of a lot of the arguments for increased population, even if my view is different.

I think most of the benefits you're describing flow from a very tiny fraction of all humans.

Given the returns to specialization, populations have traditionally had to grow in order to support the efforts of that tiny fraction. However, it's not necessarily the case that in the coming years increasing population will be the only way to increase the number of specialized individuals producing massive value.

Automation will make it easier to specialize.

The rate of suicide is really quite low. You ARE being offered the choice between an unknown length of continued experiences and the cessation of such.

I think the expected value of the rest of my life is positive (I am currently pretty happy), especially considering impacts external to my own consciousness. If that stops being the case, I have the option.

There's also strong evolutionary reasons to expect suicide rates to not properly reflect the balance of qualia.

As embedded agents, our views are contingent on our experiences, and there is no single truth to this question.

It's hard to know exactly what this is implying. Sure, it's based on personal experience that's difficult to extrapolate and aggregate, etc. But I think it's a very important question, potentially the most important question, and worth some serious consideration.

People are constantly making decisions based on their marginal valuations of suffering and wellbeing, and the respective depths and heights of each end of the spectrum. These decisions can and do have massive ramifications.

So that I can try to understand your view better: would you choose to spend one year in the worst possible hell if it meant you got to spend the next year in the greatest possible heaven?

Given my understanding of your expressed views, you would accept this offer. If I'm wrong about that, knowing that would help with my understanding of the topic. If you think it's an incoherent question, that would also improve my understanding.

Feel free to disengage, I just find limited opportunities to discuss this. If anyone else has anything to contribute I'd be happy to hear it.

Thanks for answering. I would personally expect this intuition and introspection to be sensitive to contingent factors, like the range of experiences you've had. Would you agree?

Personally my view leans more in the other direction, although it's possible I'm losing something in misunderstanding the complexity variable.

If my life experience leads me to the view that "suffering is worse than wellbeing is good," and your life experiences lean toward the opposite view, should those two data points be given equal weight? I personally would give more weight to accounts of the badness of suffering, because I see a fundamental asymmetry there, but would you say that's a product of bias from my set of experiences?

If I were to be offered 300 years of overwhelmingly positive complex life in exchange for another ten years of severe anhedonic depression, I would not accept that offer. It wouldn't even be a difficult choice.

Assuming you would accept that offer for yourself, would you accept that offer on behalf of someone else?
