Sherrinford's Shortform

by Sherrinford · 2nd May 2020 · 19 comments

It would be great if people first did some literature research before presenting their theory of life, the universe and everything. If they did not find any relevant literature, they should say so.

I considered looking for any studies or documentation about whether blog and website posts are improved by prior research or references.  But then I got distracted, so I just wrote this comment instead.

At least you didn't write a long longform post :)

Currently reading Fooled by Randomness, almost 20 years after it was published. I have read about a third of it so far. Up to now, it seems neither very insightful nor dense; all the insights (or observations) seem to be what you can read in the (relatively short) Wikipedia article. It is also not extremely entertaining.

I wonder whether it was a revealing, revolutionary book back in the day, or whether it reads differently to people with a certain background (or lack thereof), such that my impression is, in some sense, biased. I also wonder whether Taleb's other books are better, but given the praise that FbR seems to have received, I guess it is unlikely that The Black Swan is fundamentally different from FbR.

I read Black Swan early in my introduction to heuristics and biases, in my teens. I remember that the book was quite illuminating for me, though I disliked Taleb's narcissism and his disrespect for the truth. I don't think it was so much "insightful" as helping me internalize a few big insights. The book's content definitely overlaps a lot with beginner rationality, so you might not find it worthwhile after all. I read a bit of FbR and about half of Antifragile as well, but I found those much less interesting.

An aside: Taleb talks about general topics. It's hard to say new things in that market (it's saturated), and the best parts of his new insights have already become part of the common lexicon.

New results published in Cell suggest that SARS-CoV-2 enters the body via the nasal mucosa, then reaches deeper parts of the lung via body fluids, and possibly the brain. A second part of the same study suggests that people who had SARS or MERS may have partial immunity against SARS-CoV-2. (Disclaimer: I only read a newspaper summary.)

The results of Bob Jacobs's LessWrong survey are quite interesting. It's a pity the sample is so small.

The visualized results (link in his post) are univariate, but I would like to highlight some things:

- 49 out of 56 respondents identify as "White".
- 53 out of 59 were born male, and 46 out of 58 identify as male cisgender.
- 47 out of 59 identify as heterosexual (comparison: https://en.wikipedia.org/wiki/Demographics_of_sexual_orientation).
- 1 out of 55 works in a "blue collar" profession.
- Most respondents identify as "left of center" in some sense. At the same time, 30 out of 55 identify as "libertarian" (though multiple answers were allowed).
- 31 out of 59 respondents think they are at least "upper middle class"; 22 out of 59 think the family they were raised in was "upper middle class". (Background: in social science surveys, wealthy people usually underestimate their position, and poor people overestimate theirs, though to a lesser extent.)

I would not have guessed the left-of-center identification, and I would have slightly underestimated the share of male (cisgender).

I would not have guessed the left-of-center identification

If you have 9 people who identify as left-wing and 1 person who identifies as right-wing, many people will hysterically denounce the entire group as "extreme right", based on the fact that the 1 person wasn't banned.

Furthermore, if you have people who identify as left-wing, but don't fully buy the current Twitter left-wing orthodoxy, they too will be denounced by some as "extreme right".

This skews the perception.

I don't think that fits what I am talking about:

  1. The survey answers were not binary. Your first claim does not distinguish extremes and moderates.
  2. The survey was anonymous. You cannot ban anonymous people.
  3. I see no reason why people should have overstated their leftishness.
  4. If your statement is meant to explain why my perception differs from the result, it does not fit: based on posts and comments, my perception would have been relatively more right-wing, less liberal / social democratic / green etc.
  5. I don't see where left-wing LessWrongers are denounced as right-wing extremists. In particular, I don't see how this would explain people identifying as left-wing in the survey.

My model is that in the USA most intelligent people are left-wing. Especially when you define "left-wing" to mean half of the political spectrum, not just the extreme. And there seem to be many Americans on Less Wrong, just like on most English-speaking websites.

(Note that I am not discussing here why this is so. Maybe the left-wing is inherently correct. Or maybe the intelligent people are just more likely to attend universities where they get brainwashed by the establishment. I am not discussing the cause here, merely observing the outcome.)

So, I would expect Less Wrong to be mostly left-wing (in the 50% sense). My question is, why were you surprised by this outcome?

I don't see where left-wing LessWrongers are denounced as right-wing extremists.

For example, "neoreaction" is the only flavor of politics mentioned in the Wikipedia article about LessWrong. The article does not claim that it is the predominant political belief, and it even says that Yudkowsky disagrees with them. Nonetheless, it is the only political opinion mentioned in connection with Less Wrong. (This is about making associations rather than making arguments.) So a reader who does not know how to read between the lines properly might leave with the impression that LW is mostly right-wing. (Which is exactly the intended outcome, in my opinion.) And Wikipedia is not the only place where this game of associations is played.

"My model is that in the USA most intelligent people are left-wing. Especially when you define "left-wing" to mean half of the political spectrum, not just the extreme."

I agree. (I assume that by political spectrum you refer to something "objective"?)

And there seem to be many Americans on Less Wrong, just like on most English-speaking websites.

Given the whole Bay Area thing, I would have expected a higher share. In the survey, 37 out of 60 say they are residing in the US.

So, I would expect Less Wrong to be mostly left-wing (in the 50% sense). My question is, why were you surprised by this outcome?

Having been in this forum for a while, my impressions based on posts and comments led me to believe that fewer than 50% of people on LessWrong would place themselves at values 1-5 of a 1-10 scale from left-wing to right-wing. In fact, 41 out of 56 did so.

For example, "neoreaction" is the only flavor of politics mentioned in the Wikipedia article about LessWrong. The article does not claim that it is the predominant political belief, and it even says that Yudkowsky disagrees with them. Nonetheless, it is the only political opinion mentioned in connection with Less Wrong. (This is about making associations rather than making arguments.) So a reader who does not know how to read between the lines properly might leave with the impression that LW is mostly right-wing. (Which is exactly the intended outcome, in my opinion.) And Wikipedia is not the only place where this game of associations is played.

The Wikipedia article, as far as I can see, explains in that paragraph where the neoreactionary movement originated. I don't agree on the "intended outcome", or rather, I do not see why I should believe that.

The Wikipedia article, as far as I can see, explains in that paragraph where the neoreactionary movement originated.

It's not true, though! The article claims: "The neoreactionary movement first grew on LessWrong, attracted by discussions on the site of eugenics and evolutionary psychology".

I mean, okay, it's true that we've had discussions of eugenics and evolutionary psychology, and it's true that a few of the contrarian nerds who enthusiastically read Overcoming Bias back in the late 'aughts were also among the contrarian nerds who enthusiastically read Unqualified Reservations. But "first grew" (Wikipedia) and "originated" (your comment) really don't seem like a fair summary of that kind of minor overlap in readership. No one was doing neoreactionary political theorizing on this website. Okay, I don't have an exact formalization of what I mean by "no one" in the previous sentence, because I haven't personally read and remembered every post in our archives; maybe there are nonzero posts with nonnegative karma that could be construed to match this description. Still, in essence, you can only make the claim "true" by gerrymandering the construal of those words.

And yet the characterization will remain in Wikipedia's view of us—glancing at the talk page, I don't expect to win an edit war with David Gerard.

Interesting. I had maybe read the Wikipedia article a long time ago, but it did not leave any impression in my memory. Now rereading it, I did not find it dramatic, but I see your point.

Tbh, I still do not fully understand how Wikipedia works (that is, I do not have a model of who determines how an article develops). And "originated" (okay, maybe that is only almost, not fully, identical to "first grew") is just what I got from the article. The problem with the association is that it is hard to determine definitively what even makes things mentionable, but once somebody publicly has to distance himself from something, this indicates a public kind of association.

Reading further into the article, my impression is that it indeed cites things that count as sources by Wikipedia's standards. If the impression of LessWrong is distorted, then this may be a problem of which kinds of things on LessWrong are covered by media publications? Or maybe it is all just selective citing, but then it should be possible to cite other things.

In theory, Wikipedia strives to be impartial. In practice, the rules are always only as good as the judges who uphold them. (All legal systems involve some degree of human judgment somewhere in the loop, because it is impossible to write a set of rules that covers everything and doesn't allow some clever abuse. That's why we talk about the letter and the spirit of the law.)

How do you become a Wikipedia admin? You need to spend a lot of time editing Wikipedia in a way other admins consider helpful, and you need to be interested in getting the role. (There are probably a few more technical details I forgot.) The good thing is that by doing a lot of useful work you send a costly signal that you care about Wikipedia. The bad thing is that if a certain political opinion becomes dominant among the existing admins, there is no mechanism to fix this bias; it's actually the other way round, because edits disagreeing with the consensus will be judged as harmful, and will probably disqualify their author from becoming an admin in the future.

I don't assume bad faith from most Wikipedia editors. Being wrong about something feels the same from inside as being right; and if other people agree with you, that is usually a good sign. But if you have a few bad actors who can play it smart, who can pretend that their personal grudges are how they actually see the world... considering that other admins already see them as part of the same team, and the same political bias means they already roughly agree on who the good guys and the bad guys are... it is not difficult to defend their decisions in front of a jury of their peers. An outsider has no chance in this fight, because the insider is fluent in the local lingo. Whatever they want to argue, they can find a wiki-rule pointing in that direction; of course it would be just as easy for them to find a wiki-rule pointing in the opposite direction. (For example: if you want to edit an article about something you are personally involved with, you have a "conflict of interest", which is a bad thing; if I want to do the same thing, my personal involvement makes me a "subject-matter expert", which is a good thing. Your repetitive editing of the article to make your point is "vandalism"; my repetitive editing of the article to make the opposite point is "reverting vandalism".) And then the other admins will nod and say: "of course, if this is what the wiki-rules say, our job is to obey them".

The specific admin who is so obsessed with Less Wrong is David Gerard from RationalWiki. He has kept a grudge for almost a decade, since he added Less Wrong to his website as an example of pseudoscience, mostly because of the quantum physics sequence. After it was explained to him that "many worlds" is actually one of the mainstream interpretations among scientists, he failed to say oops, continued in the spirit of "well, maybe I was technically wrong about the quantum thing, but still...", and has spent the last decade trying to find and document everything that is wrong with Less Wrong. (Roko's Basilisk: a controversial comment that was posted on LW once, deleted by Eliezer along with the whole thread, then posted on RationalWiki as "this is what people at Less Wrong actually believe" -- because the fact that it was deleted is somehow proof that deep inside we actually agree with it but don't want the world to know. Neoreaction: a small group of people who enjoyed debating their edgy beliefs on Less Wrong, were considered entertaining for a while, then became boring and were kicked out. Again, the fact that they were not kicked out sooner is evidence of something dark.) Now look at who makes most edits on the Wikipedia page about Less Wrong: it's David Gerard. If you go through the edit history and look at the individual changes, most of them are small and innocent, but they are all in the same direction: the basilisk and neoreaction must remain in the article, no matter how minuscule they are from the perspective of someone who actually reads Less Wrong; on the other hand, mentions of effective altruism must be kept as short as possible. All of this is technically true and defensible, but... I'd argue that the Less Wrong described by the Wikipedia article does not resemble the Less Wrong its readers know, and that we have David Gerard and his decade-long work to thank for this fact.

If the impression of LessWrong is distorted, then this may be a problem of which kinds of things on LessWrong are covered by media publications?

True, but most of the information in the media originates from RationalWiki, where it was written by David Gerard. A decade ago, RationalWiki used to rank quite high on Google, if I remember correctly; any journalist who did a simple background check would find it. Then he or she would ask about the juicy things in the interview, and regardless of the answer, the juicy things would be mentioned in the article. Which means that the next journalist would find them both at RationalWiki and in the previous article, and would again devote part of the interview to them, reinforcing the connection. It is hard to find an article about Less Wrong that does not mention Roko's Basilisk, despite the fact that it is discussed here rarely, and usually in the context of "guys, I have read about this thing called Roko's Basilisk in the media, and I can't find anything about it here, could you please explain to me what this is about?"

Part of this is the clickbait nature of media: given the choice between discussing neoreaction and discussing the technical details of the latest decision theory, it doesn't matter which topic is more relevant to Less Wrong per se; they know their audience doesn't care about the latter. And part of the problem with Wikipedia is that it is downstream of clickbait journalism. They try to use more serious sources, but sometimes there is simply no other source on the topic.

Thanks for the history overview! Very interesting. Concerning the Wikipedia dynamics, I agree that this is plausible, as it is a plausible development for nearly every volunteer organization, in particular one that tries to be grassroots-democratic. The Wikipedia-media problem is known (https://xkcd.com/978/), though in this particular case I was a bit surprised by the "original research" and "reliable source" distinction; many of the cited articles did not seem very "serious". On the other hand, during this whole "lost in hyperspace" episode, I also found "A frequent poster to LessWrong was Michael Anissimov, who was MIRI's media director until 2013." (https://splinternews.com/the-strange-and-conflicting-world-views-of-silicon-vall-1793857715), which was news to me. In internet years, all this is so long ago that I did not have any such associations. (I would rather have expected LessWrong to be notable for demanding the dissolution of the WHO, but probably that is not yet clickbaity enough.)

You would hope that people actually saw steelmanning as an ideal to follow. If that was ever true, the corona pandemic and the policy response seem to have killed the demand for it. It seems to have become acceptable to attribute just about any kind of seemingly-wrong behavior to either incredible stupidity or incredible malice, both proving that all institutions are completely broken.

I like the word "institurions". Some mix of institutions, intuitions, and centurions, and I agree that they're completely broken.

:-) Thanks. But I corrected it.

Among EA-minded people interested in preventing climate change, it seems Clean Air Task Force (CATF) is seen very favorably. Why? The "Climate Change Cause Area Report" by Founders Pledge (PDF) gives an overview.

CATF's work is introduced as follows:

"It was founded in 1996 with the aim of enacting federal policy reducing the air pollution caused by American coal-fired power plants. This campaign has been highly successful and has been a contributing factor to the retirement of a large portion of the US coal fleet." (p. 5)

On p. 88, you will read:

"Do they have a good track record? CATF have conceived of and led several successful advocacy campaigns in the US, which have had very large public health and environmental benefits. According to our rough model, through their past work, they have averted a tonne of CO2e for around $1.

Is their future work cost-effective? Going forward, CATF plans to continue its work on power plant regulation and to advocate for policy support for innovative but neglected low carbon technologies.

Given their track record and the nature of their future projects, we think it is likely that a donation to CATF would avert a tonne of CO2e for $0.10-$1."

On p. 91:

"CATF was founded in 1996 to advocate for regulation of the damaging air pollution produced by the US coal fleet, initially focusing on sulphur dioxide (SO2) and nitrogen oxides (NOx). They later advocated for controls on mercury emissions. The theory of change was that the cost of emission controls for conventional pollutants and mercury would result in the retirement or curtailment of coal plant operation, resulting in reductions in CO2 (and other) emissions. CATF conceived of the campaign goal, designed the strategy, and led the campaign, in turn drawing in philanthropic support and recruiting other environmental NGOs to the campaign."

How does the evaluation work? A spreadsheet with an evaluation shows benefits of the policy impact.

Where do the numbers come from? The spreadsheet states "subjective input" in several cells. The "Climate Change Cause Area Report" by Founders Pledge (p. 129 ff.) states that "CATF is typical of research and policy advocacy organisations in that it has worked on heterogeneous projects. This makes it difficult to evaluate all of CATF's past work, as this would require us to assess their counterfactual impact in a range of different contexts in which numerous actors are pushing for the same outcome." The report then asks, e.g., how much CATF "brought the relevant regulation forward", and the answers seem to rely strongly on CATF's own assessment. Nonetheless, it makes assessments like "Our very rough realistic estimate is therefore that CATF brought the relevant regulation forward by 12 months. The 90% confidence interval around this estimate is 6 months to 2 years." On p. 91 you can read: "Through each of these mechanisms, CATF increased the probability that regulation was introduced earlier in time. Our highly uncertain realistic estimate is that through their work, CATF brought regulation on US coal plants forward by 18 months, with a lower bound of 9 months and a higher bound of 4 years. CATF believe this to be a major underestimate, and have told us that they could have brought the relevant regulation forward by ten years."

While it is of course fine to give subjective estimates, they should be taken with a grain of salt. The comparison seems to rely much more heavily on such subjectivity than evaluations of charities with concrete, repeatedly applied health interventions do.
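To make the structure of such an evaluation concrete, here is a toy sketch of how a "regulation brought forward" estimate turns into a cost-per-tonne figure. This is not the Founders Pledge spreadsheet; the function and all input numbers below are illustrative assumptions, chosen only to show how the subjective inputs (years brought forward, share of credit attributed to the charity) drive the result.

```python
def cost_per_tonne(spending_usd, annual_tonnes_averted,
                   years_brought_forward, attribution_share):
    """Cost per tonne CO2e = spending / counterfactual tonnes averted.

    Counterfactual tonnes = the annual emissions reduction the regulation
    causes, times how many years earlier it happened, times the share of
    that acceleration credited to the charity (a subjective input).
    """
    tonnes = annual_tonnes_averted * years_brought_forward * attribution_share
    return spending_usd / tonnes

# Scenario analysis mirroring the report's lower-bound / realistic /
# upper-bound style. All inputs are made up for illustration:
for label, years in [("lower bound", 0.75), ("realistic", 1.5),
                     ("upper bound", 4.0)]:
    c = cost_per_tonne(
        spending_usd=20e6,           # hypothetical campaign budget
        annual_tonnes_averted=50e6,  # hypothetical annual CO2e reduction
        years_brought_forward=years, # the key subjective estimate
        attribution_share=0.5,       # subjective: half the credit to CATF
    )
    print(f"{label}: ${c:.2f} per tonne CO2e")
```

Note how the bottom line scales linearly with the two subjective inputs: doubling either the assumed acceleration or the attribution share halves the cost per tonne, which is why the self-reported "ten years" figure would change the conclusion so dramatically.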

What, if anything, could be biased?

Additional to the (probably unavoidable) reliance on self-information, the following paragraph made me wonder:

"CATF have told us that at the time the campaign was conceived, major environmental organisations were opposed to reopening the question of plant emissions after the Clean Air Act Amendments of 1990, as they feared the possibility that legislative debate would unravel other parts of the Act. This is based on conversations at the time with the American Lung Association, Environmental Defense Fund, and the Natural Resources Defense Council."

How can we know whether such fears were justified ex ante? How do we guard against survivorship or hindsight bias?