wallowinmaya

wallowinmaya's Comments

Melatonin: Much More Than You Wanted To Know

Regarding how melatonin might cause more vivid dreams: I found the theory put forward here quite plausible:

There are user reports that melatonin causes vivid dreams. Actually, all sleep aids appear to some users to produce more vivid dreams.

What is most likely happening is that the drug modifies the sleep cycle so the person emerges from REM sleep (when dreams are most vivid) to waking quickly – more quickly than when no drug is used. The user subjectively reports the drug as producing vivid dreams.

Anti-tribalism and positive mental health as high-value cause areas

Great that you're thinking about this issue! A few sketchy thoughts below:

I) As you say, autistic people seem to be more resilient with regard to tribalism. And autistic tendencies arguably correlate with involvement in rationality communities. So intuitively, it seems that something like higher rationality and greater awareness of biases could be useful for reducing tribalism. Or is there another way of making people "more autistic"?

Given this and other observations (e.g., autistic people seem to have lower mental health, on average), it seems a bit hasty to focus on increasing general mental health as the most effective intervention for reducing tribalism.

II) Given our high uncertainty about what causes tribalism and how to reduce it most effectively, it seems that more research in this area could be one of the most effective cause areas.

I see at least two avenues for such research:

A) More "historical" and correlational research. First, we might want to operationalize 'tribalism' or identify some proxies for it (any ideas?). Then we could do some historical studies and look for potential correlates. It would be interesting to study to what extent increasing economic inequality, the advent of social media, and other forces have historically correlated with levels of tribalism.

B) Potentially more promising would be experimental psychological research aimed at identifying causal factors and mediators of tribalism. For example, one could present subjects with various interventions and then see which interventions reduce (or increase!) tribalism. Potential interventions include i) changing people's mood (e.g., showing them happy videos), ii) increasing the engagement of controlled cognitive processes (System 2) (e.g., by priming them with the CRT), iii) decreasing the engagement of such processes (e.g., via cognitive load), iv) using de-biasing techniques, and v) decreasing or increasing their sense of general security (e.g., by presenting them with threatening or scary images or scenarios). There are many more possible interventions.

C) Another method would be correlational psychological research. Roughly, one could give subjects a variety of personality tests and other psychological scales (e.g., Big Five, CRT) and examine what correlates with tribalistic tendencies (a toy sketch of such an analysis follows after this list).

D) Another idea would be to develop some sort of "tribalism scale" which could lay the groundwork for further psychological research.
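
To make (C) and (D) a bit more concrete, here is a minimal sketch of the kind of correlational analysis I have in mind, assuming we already had a validated tribalism scale (purely hypothetical here) plus per-subject scores on instruments like the Big Five and the CRT. All column names and numbers are made up for illustration:

```python
# Toy sketch: correlate per-subject questionnaire scores with a hypothetical
# "tribalism" score. The data and column names are invented for illustration.
import pandas as pd

# Each row is one subject; all values are made up.
df = pd.DataFrame({
    "openness":          [3.2, 4.1, 2.8, 3.9, 4.5],   # Big Five facet (1-5)
    "conscientiousness": [3.8, 2.9, 4.2, 3.1, 3.6],   # Big Five facet (1-5)
    "crt_score":         [1, 3, 0, 2, 3],              # CRT items correct (0-3)
    "tribalism":         [4.0, 2.1, 4.6, 2.8, 1.9],    # hypothetical scale (1-5)
})

# Pairwise Pearson correlations of each predictor with the tribalism score.
correlations = df.corr()["tribalism"].drop("tribalism")
print(correlations.sort_values())
```

With real data one would of course use far more subjects and proper significance tests, but the basic shape of the analysis would look like this.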

Of course, first one should do a more thorough literature review on this topic. It seems likely that there already exists some good work in this area.

--------

Even more sketchy thoughts:

III) Could it be that some forms of higher mental health actually increase tribalism? Tribalism also goes along with a feeling of belonging to a "good" group/tribe that fights against the bad tribe. Although at times frustrating, this might contribute to a sense of certainty and of "having a mission or purpose". Personally, I feel quite depressed and frustrated by not being able to wholeheartedly identify with any major political force, because they all currently seem pretty irrational in many areas. Of course, higher mental health will probably reduce your need to belong to a group and thus might still reduce tribalism.

IV) Studies (there was another one which I can't find at the moment) seem to indicate that social media posts (e.g. on Twitter or Facebook) involving anger or outrage spread more easily than posts involving other emotions like sadness or joy. So maybe altering the architecture of Facebook or Twitter would be particularly effective (e.g., tweaking the news feed algorithm so that posts with a lot of anger reactions get less traction; a toy sketch is below). Of course, this is pretty unlikely to be implemented, and it also has disadvantages in cases of justified outrage. Maybe encouraging people to create new social networking sites that somehow alleviate those problems would be useful, but that seems pretty far-fetched.
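
To illustrate the kind of algorithmic tweak I mean, here is a minimal sketch of a re-ranking rule that penalizes anger-heavy posts. The scoring formula, the field names, and the penalty weight are all assumptions made up for this sketch, not anything Facebook or Twitter actually uses:

```python
# Toy sketch: down-rank posts whose reactions are dominated by anger.
# The formula and the penalty factor are invented for illustration.

def feed_score(base_engagement: float, anger_reactions: int,
               total_reactions: int, anger_penalty: float = 0.5) -> float:
    """Reduce a post's ranking score in proportion to its share of anger reactions."""
    if total_reactions == 0:
        return base_engagement
    anger_share = anger_reactions / total_reactions
    return base_engagement * (1.0 - anger_penalty * anger_share)

# Two posts with equal raw engagement, one of them mostly anger-driven.
print(feed_score(100.0, anger_reactions=80, total_reactions=100))  # 60.0
print(feed_score(100.0, anger_reactions=5, total_reactions=100))   # 97.5
```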

A Step-by-step Guide to Finding a (Good!) Therapist

Can one also use the service Reflect if one is not located in the Bay Area? Or do you happen to know of similar services outside the Bay Area or the US? Thanks a lot in advance.

LW 2.0 Open Beta Live

The open beta will end with a vote of users with over a thousand karma on whether we should switch the lesswrong.com URL to point to the new code and database

How will you alert these users? (I'm asking because I have over 1000 karma but I don't know where I should vote.)

S-risks: Why they are the worst existential risks, and how to prevent them

One of the more crucial points, I think, is that positive utility is – for most humans – complex and its creation is conjunctive. Disutility, in contrast, is disjunctive. Consequently, the probability of creating the former is smaller than that of creating the latter – all else being equal (of course, all else is not equal).

In other words, the scenarios leading towards the creation of (large amounts of) positive human value are conjunctive: to create a highly positive future, we have to eliminate (or at least substantially reduce) physical pain and boredom and injustice and loneliness and inequality (at least certain forms of it) and death, etc. etc. etc. (You might argue that getting "FAI" and "CEV" right would accomplish all those things at once (true) but getting FAI and CEV right is, of course, a highly conjunctive task in itself.)

In contrast, disutility is much more easily created and essentially disjunctive. Many roads lead towards dystopia: sadistic programmers, failing AI safety wholesale (or "only" value-loading, extrapolation, or stable self-modification), a totalitarian regime taking over, etc. etc.
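
A toy numerical illustration of the conjunctive-vs.-disjunctive asymmetry (the probabilities below are made up purely to show the structure of the argument, not estimates):

```python
# Toy illustration: a "good" outcome that requires five independent ingredients
# vs. a "bad" outcome that only requires one of five independent failure modes.
# All probabilities are arbitrary placeholders.
p_ingredient = 0.8   # probability of getting each required ingredient right
p_failure = 0.2      # probability of each individual failure mode
n = 5

p_good = p_ingredient ** n            # all conjuncts must hold: ~0.33
p_bad = 1 - (1 - p_failure) ** n      # at least one disjunct occurs: ~0.67
print(p_good, p_bad)
```

Even when each individual ingredient is quite likely, the conjunction ends up less probable than the corresponding disjunction of failure modes, all else being equal.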

It's also not a coincidence that even the most untalented writer with the most limited imagination can conjure up a convincing dystopian society. Envisioning a true utopia in concrete detail, on the other hand, is nigh impossible for most human minds.

Footnote 10 of the above-mentioned s-risk article makes a related point (emphasis mine):

"[...] human intuitions about what is valuable are often complex and fragile (Yudkowsky, 2011), taking up only a small area in the space of all possible values. In other words, the number of possible configurations of matter constituting anything we would value highly (under reflection) is arguably smaller than the number of possible configurations that constitute some sort of strong suffering or disvalue, making the incidental creation of the latter ceteris paribus more likely."

Consequently, UFAIs such as paperclippers are more likely to incidentally create large amounts of disutility than utility (factoring out acausal considerations), e.g. because creating simulations is instrumentally useful for them.

Generally, I like how you put it in your comment here:

In terms of utility, the landscape of possible human-built superintelligences might look like a big flat plain (paperclippers and other things that kill everyone without fuss), with a tall sharp peak (FAI) surrounded by a pit that's astronomically deeper (many almost-FAIs and other designs that sound natural to humans). The pit needs to be compared to the peak, not the plain. If the pit is more likely, I'd rather have the plain.

Yeah. In a nutshell, supporting generic x-risk reduction (which also reduces extinction risks) is in one's best interest if and only if one's own normative trade ratio of suffering vs. happiness is less suffering-focused than one's estimate of the ratio of expected future happiness to expected future suffering (feel free to replace "happiness" with utility and "suffering" with disutility). If one is more pessimistic about the future, or if one needs large amounts of happiness to trade off small amounts of suffering, one should rather focus on s-risk reduction instead. Of course, this simplistic analysis leaves out issues like cooperation with others, neglectedness, tractability, moral uncertainty, acausal considerations, etc.
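
As a very rough formalization of that condition (the linear model and the example numbers are my own assumptions for illustration, not anything from the post): let the trade ratio be the number of units of happiness one requires to offset one unit of suffering; then generic x-risk reduction looks attractive roughly when expected future happiness exceeds the trade ratio times expected future suffering.

```python
# Toy sketch of the decision rule sketched above; the linear "utility" model
# and the example numbers are assumptions for illustration only.

def prefer_generic_xrisk_reduction(expected_happiness: float,
                                   expected_suffering: float,
                                   trade_ratio: float) -> bool:
    """True if the expected future looks net positive under one's own
    happiness-vs-suffering trade ratio (units of happiness needed to
    offset one unit of suffering)."""
    return expected_happiness > trade_ratio * expected_suffering

# A strongly suffering-focused person (trade_ratio = 100) vs. a symmetric one,
# given the same empirical estimate of the future (10 : 1 in favor of happiness).
print(prefer_generic_xrisk_reduction(10.0, 1.0, trade_ratio=100.0))  # False
print(prefer_generic_xrisk_reduction(10.0, 1.0, trade_ratio=1.0))    # True
```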

Do you think that makes sense?

S-risks: Why they are the worst existential risks, and how to prevent them

The article that introduced the term "s-risk" was shared on LessWrong in October 2016. The content of the article and the talk seem similar.

Did you simply not come across it or did the article just (catastrophically) fail to explain the concept of s-risks and its relevance?

Requesting Questions For A 2017 LessWrong Survey

Here is another question that would be very interesting, IMO:

“For what value of X would you be indifferent about the choice between A) creating a utopia that lasts for one hundred years and whose X inhabitants are all extremely happy, cultured, intelligent, fair, just, benevolent, etc. and lead rich, meaningful lives, and B) preventing one average human from being horribly tortured for one month?"

Requesting Questions For A 2017 LessWrong Survey

I think it's great that you're doing this survey!

I would like to suggest two possible questions about acausal thinking/superrationality:

1)

Newcomb’s problem: one box or two boxes?

  • Accept: two boxes
  • Lean toward: two boxes
  • Accept: one box
  • Lean toward: one box
  • Other

(This is the formulation used in the famous PhilPapers survey.)

2)

Would you cooperate with or defect against other community members in a one-shot Prisoner’s Dilemma?

  • Definitely cooperate
  • Leaning toward: cooperate
  • Leaning toward: defect
  • Definitely defect
  • Other

I think that these questions are not only interesting in and of themselves, but that they are also highly important for further research I'd like to conduct. (I can go into more detail if necessary.)

Net Utility and Planetary Biocide

First of all, I don't think that morality is objective as I'm a proponent of moral anti-realism. That means that I don't believe that there is such a thing as "objective utility" that you could objectively measure.

But, to use your terms, I also believe that there currently exists more "disutility" than "utility" in the world. I'd formulate it this way: I think there exists more suffering (disutility, disvalue, etc.) than happiness (utility, value, etc.) in the world today. Note that this is just a consequence of my own personal values, in particular my "exchange rate" or "trade ratio" between happiness and suffering: I'm (roughly) utilitarian, but I give more weight to suffering than to happiness. But this doesn't mean that there is "objectively" more disutility than utility in the world.

For example, I would not push a button that creates a city with 1000 extremely happy beings but where 10 people are being tortured. But a utilitarian with a more positive-leaning trade ratio might want to push the button because the happiness of the 1000 outweighs the suffering of the 10. Although we might disagree, neither of us is "wrong".

Similar reasoning applies with regard to the "expected value" of the future. Or, to use a less confusing term: the ratio of expected happiness to expected suffering in the future. Crucially, this question has both an empirical and a normative component. The expected value (EV) of the future for a person will depend both on her normative trade ratio and on her empirical beliefs about the future.

I want to emphasize, however, that even if one thinks that the EV of the future is negative, one should not try to destroy the world! There are many reasons for this, so I'll just pick a few: First of all, it's extremely unlikely that you will succeed, and you would probably only cause more suffering in the process. Secondly, planetary biocide is one of the worst possible things one can do according to many value systems. I think it's extremely important to be nice to other value systems and to promote cooperation among their proponents. If you attempted to implement planetary biocide, you would cause distrust, probably violence, and the breakdown of cooperation, which would only increase future suffering, hurting everyone in expectation.

Below, I list several more relevant essays that expand on what I've written here and which I can highly recommend. Most of these link to the Foundational Research Institute (FRI), which is not a coincidence, as FRI's mission is to identify cooperative and effective strategies to reduce future suffering.

I. Regarding the empirical side of future suffering

II. On the benefits of cooperation

III. On ethics

The Library of Scott Alexandria

Great list!

IMO, one should add Prescriptions, Paradoxes, and Perversities to the list, maybe to the section "Medicine, Therapy, and Human Enhancement".
