Open thread, Mar. 20 - Mar. 26, 2017

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "


Okay, so I recently made this joke about future Wikipedia article about Less Wrong:

[article claiming that LW opposes feelings and support neoreaction] will probably be used as a "reliable source" by Wikipedia. Explanations that LW didn't actually "urge its members to think like machines and strip away concern for other people's feelings" will be dismissed as "original research", and people who made such arguments will be banned. Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games. This Wikipedia article will be quoted by all journals, and your families will be horrified by what kind of a monster you have become. All LW members will be fired from their jobs.

A few days later I actually looked at the Wikipedia article about Less Wrong:

In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence. This idea came to be known as "Roko's basilisk," based on Roko's idea that merely hearing about the idea would give the hypothetical AI system stronger incentives to employ blackmail. Yudkowsky deleted Roko's posts on the topic, calling it "stupid". Discussion of Roko's basilisk was banned on LessWrong for several years before the ban was lifted in October 2015.

The majority of the LessWrong userbase identifies as atheist, consequentialist, white and male.

The neoreactionary movement is associated with LessWrong, attracted by discussions on the site of eugenics and evolutionary psychology. In the 2014 self-selected user survey, 29 users representing 1.9% of survey respondents identified as "neoreactionary". Yudkowsky has strongly repudiated neoreaction.

Well... technically, the article admits that at least Yudkowsky considers the basilisk stupid and disagrees with neoreaction. Connotationally, it suggests that the basilisk and neoreaction make up 50% of what is worth mentioning about LW, because that's the fraction of the article those topics get.

Oh, and David Gerard is actively editing this page. Why am I so completely unsurprised? His contributions include:

  • making a link to a separate article for Roko's basilisk (link), which luckily didn't materialize;
  • removing suggested headers "Rationality", "Cognitive bias", "Heuristic", "Effective altruism", "Machine Intelligence Research Institute" (link) saying that "all of these are already in the body text"; but...
  • adding a header for Roko's basilisk (link);
  • shortening a paragraph on LW's connection to effective altruism (link) -- by the way, the paragraph is completely missing from the current version of the article;
  • an edit war emphasising that it is finally okay to talk on LW about the basilisk (link, link, link, link, link);
  • restoring the deleted section on basilisk (link) saying that it's "far and away the single thing it's most famous for";
  • adding neoreaction as one of the topics discussed on LW (link), later removing other topics competing for attention (link), and adding a quote that LW "attracted some readers and commenters affiliated with the alt-right and neoreaction, that broad cohort of neofascist, white nationalist and misogynist trolls" (link);

...in summary, removing or shortening mentions of cognitive biases and effective altruism, and adding or developing mentions of basilisk and neoreaction.

Sigh.

EDIT: So, looking back at my prediction that...

Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games.

...I'd say I was (1) right about the basilisk; (2) partially right about the white supremacism, which at this moment is not mentioned explicitly (yet! growth mindset), but the article says that the userbase is mostly white and male, and discusses eugenics; and (3) wrong about the computer games. 50% success rate!

Yikes. The current version of the WP article is a lot less balanced than the RW one!

Also, the edit warring is two way...someone wholesale deleted the Rs B section.


Problem is, this is probably not good news for LW. Tomorrow, the RB section will most likely be back, possibly with a warning on the talk page that the evil cultists from LW are trying to hide their scandals.

can we fix this please?

Edit: I will work on it.

I'd suggest being careful about your approach. If you lose this battle, you may not get another chance. David Gerard most likely has 100 times more experience with wiki battling than you. Essentially: when you come up with a strategy, sleep on it, and then try imagining how a person already primed against LW would read your words.

For example, expect that any edit made by anyone associated with LW will be (1) traced back to their identity and LW account, and consequently (2) reverted, as a conflict of interest. And everyone will be like "ugh, these LW guys are trying to manipulate our website", so the next time they are not going to even listen to any of us.

Currently my best idea -- I haven't made any moves yet, just thinking -- is to post a reaction to the article's Talk page, without even touching the article. This would have two advantages: (1) No one can accuse me of being partial, because that's what I would openly disclose first, and because I would plainly say that as a person with a conflict of interest I shouldn't edit the article myself. Kinda establishing myself as the good guy who follows the Wikipedia rules. (2) A change to the article could be simply reverted by David, but he is not allowed to remove my reaction from the talk page, unless I make a mistake and break some other rule. That means that even if I lose the battle, people editing the article in the future will be able to see my reaction. This is a meta move: the goal is not to change the article, but to convince the impartial Wikipedia editors that it should be changed. If I succeed in convincing them, I don't have to make the edit myself; someone else will. On the other hand, if I fail to convince them, any edit would likely be reverted by David, and I have neither the time nor the will to play wiki wars.

What would be the content of the reaction? Let's start with the assumption that on Wikipedia no one gives a fuck about Less Wrong, rationality, AI, Eliezer, etc.; to most people this is just an annoying noise. By drawing their attention to the topic, you are annoying them even more. And they don't really care about who is right, only who is technically correct. That's the bad news. The good news is that they equally don't give a fuck about RationalWiki or David. What they do care about is Wikipedia, and following the rules of Wikipedia. Therefore the core of my reaction would be this: David Gerard has a conflict of interest about this topic; therefore he should not be allowed to edit it, and all his previous edits should be treated with suspicion. The rest is simply preparing my case, as well as I can, for the judge and the jury, who are definitely not Bayesians, and want to see "solid", not probabilistic arguments.

The argument for David's conflict of interest is threefold. (1) He is a representative (admin? not sure) of RationalWiki, which in some sense is LessWrong's direct competitor, so it's kinda like having a director of Pepsi Cola edit the article on Coca Cola, only at a million times smaller scale. How are these two websites competitors? They both target the same niche, which is approximately "a young intelligent educated pro-science atheist, who cares a lot about his self-image as 'rational'". They have "rational" in their name; we have it pretty much everywhere except in the name; we compete for being the online authorities on the same word. (2) He has a history of, uhm, trying to associate LW with things he does not like. He made (not sure about this? certainly contributed a lot to) the RW article on Roko's Basilisk several years ago; LW complained about RW already in 2012. Note: It does not matter for this point whether RW or LW was actually right or wrong; I am just trying to establish that these two have several years of mutual dislike. (3) This would be most difficult to prove, but I believe that most sensational information about LW was actually inspired by RW. I think most mentions of Roko's Basilisk could be traced back to their article. So what David is currently doing on Wikipedia is somewhat similar to citogenesis... he writes something on his website, media find it and include it in their sensationalist reports, then he "impartially" quotes the media for Wikipedia. On some level, yes, the incident happened (there was one comment, which was once deleted by Eliezer -- as if nothing similar ever happened on any online forum), but the whole reason for its "notability" is, well, David Gerard; without his hard work, no one would give a fuck.

So this is the core, and then there are some additional details. Such as: it is misleading to tell readers what 1% of the LW survey identify as, without even mentioning the remaining 99%. Clearly, "1% neoreactionaries" is supposed to give LW a right-wing image, which adding "also, 4% communists, and 20% socialists" (I am just making the numbers up at the moment) would immediately disprove. And there is the general pattern of David's edits: increasing the length of the parts talking about the basilisk and neoreaction, and decreasing the length of everything else.

My thoughts so far. But I am quite a noob as far as wiki wars are concerned, so maybe there is an obvious flaw in this that I haven't noticed. Maybe it would be best if a group of people could cooperate in precise wording of the comment (probably at a bit more private place, so that parts of the debate couldn't be later quoted out of context).

It's worth noting that David Gerard was a LW contributor with a significant amount of karma: http://lesswrong.com/user/David_Gerard/

This isn't what "conflict of interest" means at Wikipedia. You probably want to review WP:COI, and I mean "review" it in a manner where you try to understand what it's getting at rather than looking for loopholes that you think will let you do the antisocial thing you're contemplating. Your posited approach is the same one that didn't work for the cryptocurrency advocates either. (And "RationalWiki is a competing website therefore his edits must be COI" has failed for many cranks, because it's trivially obvious that their true rejection is that I edited at all and disagreed with them, much as that's your true rejection.) Being an advocate who's written a post specifically setting out a plan, your comment above would, in any serious Wikipedia dispute on the topic, be prima facie evidence that you were attempting to brigade Wikipedia for the benefit of your own conflict of interest. But, y'know, knock yourself out in the best of faith, we're writing an encyclopedia here after all and every bit helps. HTH!

If you really want to make the article better, the guideline you want to take to heart is WP:RS, and a whacking dose of WP:NOR. Advocacy editing like you've just mapped out a detailed plan for is a good way to get reverted, and blocked if you persist.

Is any of the following not true?

  • You are one of the 2 or 3 most vocal critics of LW worldwide, for years, so this is your pet issue, and you are far from impartial.

  • A lot of what the "reliable sources" write about LW originates from your writing about LW.

  • You are cherry-picking facts that describe LW in a certain light: For example, you mention that some readers of LW identify as neoreactionaries, but fail to mention that some of them identify as e.g. communists. You keep adding Roko's basilisk as one of the main topics about LW, but remove mentions of e.g. effective altruism, despite the fact that there is at least 100 times more debate on LW about the latter than about the former.

The first two would suggest I'm a subject-matter expert, and particularly the second if the "reliable sources" consistently endorse my stuff, as you observe they do. This suggests I'm viewed as knowing what I'm talking about and should continue. (Be careful your argument makes the argument you think it's making.) The third is that you dislike my opinion, which is fine, but also irrelevant. The final sentence fails to address any WP:RS-related criterion. HTH!

The first two would suggest I'm a subject-matter expert

Why? Are the two or three most vocal critics of evolution also experts? Does the fact that newspapers quote Michio Kaku or Bill Nye on the dangers of global warming make them climatology experts?

Oh, I see, it's one of those irregular words:

I am a subject-matter expert
you have a conflict of interests

despite hearing that one a lot at RationalWiki, it turns out the big Soros bucks are thinner on the ground than many a valiant truthseeker thinks

In case it wasn't obvious (it probably was, in which case I apologize for insulting your intelligence, or more precisely I apologize so as not to insult your intelligence), TheAncientGeek was not in fact making a claim about you or your relationship with deep-pocketed malefactors but just completing the traditional "irregular verb" template.

That's fine :-) It ties in with what I commented above, i.e. conspiracists first assuming that disagreement must be culpable malice.

I think you must somehow have read what I wrote as the exact reverse of what I intended. (Unless you are calling yourself a conspiracist.) TAG is not assuming that anything must be culpable malice, he is just finishing off a joke left 2/3 done.

That's the joke, when a conspiracist calls one a "paid shill".

No one called anyone a paid shill.

Perhaps I am just being particularly dim at the moment. Perhaps you're being particularly obtuse for some reason. Either way, probably best if I drop this now.

Or just what words mean in the context in question, keeping in mind that we are indeed speaking in a particular context.

[here, let me do your homework for you]

In particular, expertise does not constitute a Wikipedia conflict of interest:

https://en.wikipedia.org/wiki/Wikipedia:Conflict_of_interest#External_roles_and_relationships

While editing Wikipedia, an editor's primary role is to further the interests of the encyclopedia. When an external role or relationship could reasonably be said to undermine that primary role, the editor has a conflict of interest. (Similarly, a judge's primary role as an impartial adjudicator is undermined if she is married to the defendant.)

Any external relationship—personal, religious, political, academic, financial or legal—can trigger a COI. How close the relationship needs to be before it becomes a concern on Wikipedia is governed by common sense. For example, an article about a band should not be written by the band's manager, and a biography should not be an autobiography or written by the subject's spouse.

Subject-matter experts are welcome to contribute within their areas of expertise, subject to the guidance on financial conflict of interest, while making sure that their external roles and relationships in that field do not interfere with their primary role on Wikipedia.

Note "the subject doesn't think you're enough of a fan" isn't listed.

Further down that section:

COI is not simply bias

Determining that someone has a COI is a description of a situation. It is not a judgment about that person's state of mind or integrity.[5] A COI can exist in the absence of bias, and bias regularly exists in the absence of a COI. Beliefs and desires may lead to biased editing, but they do not constitute a COI. COI emerges from an editor's roles and relationships, and the tendency to bias that we assume exists when those roles and relationships conflict.[9] COI is like "dirt in a sensitive gauge."[10]

On experts:

https://en.wikipedia.org/wiki/Wikipedia:Expert_editors

Expert editors are cautioned to be mindful of the potential conflict of interest that may arise if editing articles which concern an expert's own research, writings, discoveries, or the article about herself/himself. Wikipedia's conflict of interest policy does allow an editor to include information from his or her own publications in Wikipedia articles and to cite them. This may only be done when the editors are sure that the Wikipedia article maintains a neutral point of view and their material has been published in a reliable source by a third party. If the neutrality or reliability are questioned, it is Wikipedia consensus, rather than the expert editor, that decides what is to be done. When in doubt, it is good practice for a person who may have a conflict of interest to disclose it on the relevant article's talk page and to suggest changes there rather than in the article. Transparency is essential to the workings of Wikipedia.

i.e., don't blatantly promote yourself, run it past others first.

You're still attempting to use the term "conflict of interest" when what you actually seem to mean is "he disagrees with me therefore should not be saying things." That particular tool, the term "conflict of interest", really doesn't do what you think it does.

The way Wikipedia deals with "he disagrees with me therefore should not be saying things" is to look at the sources used. Also, "You shouldn't use source X because its argument originally came from Y which is biased" is not generally a winning argument on Wikipedia without a lot more work.

Before you then claim bias as a reason, let me quote again:

https://en.wikipedia.org/wiki/Wikipedia:Identifying_reliable_sources#Biased_or_opinionated_sources

Wikipedia articles are required to present a neutral point of view. However, reliable sources are not required to be neutral, unbiased, or objective. Sometimes non-neutral sources are the best possible sources for supporting information about the different viewpoints held on a subject.

Common sources of bias include political, financial, religious, philosophical, or other beliefs. Although a source may be biased, it may be reliable in the specific context. When dealing with a potentially biased source, editors should consider whether the source meets the normal requirements for reliable sources, such as editorial control and a reputation for fact-checking. Editors should also consider whether the bias makes it appropriate to use in-text attribution to the source, as in "Feminist Betty Friedan wrote that...", "According to the Marxist economist Harry Magdoff...," or "Conservative Republican presidential candidate Barry Goldwater believed that...".

So if, as you note, the Reliable Sources regularly use me, that would indicate my opinions would be worth taking note of - rather than the opposite. As I said, be careful you're making the argument you think you are.

(I don't self-label as an "expert", I do claim to know a thing or two about the area. You're the one who tried to argue from my opinions being taken seriously by the "reliable sources".)

No one is actually suggesting that either "expertise" or "not being enough of a fan" constitutes a conflict of interest, nor are those the attributes you're being accused of having.

On the other hand, the accusations actually being made are a little unclear and vary from occasion to occasion, so let me try to pin them down a bit. I think the ones worth taking seriously are three in number. Only one of them relates specifically to conflicts of interest in the Wikipedia sense; the others would (so far as I can see) not be grounds for any kind of complaint or action on Wikipedia even if perfectly correct in every detail.

So, they are: (1) That you are, for whatever reasons, hostile to Less Wrong (and the LW-style-rationalist community generally, so far as there is such a thing) and keen to portray it in a bad light. (2) That as a result of #1 you have in fact taken steps to portray Less Wrong (a.t.Lsr.c.g.s.f.a.t.i.s.a.t.) in a bad light, even when that has required you to be deliberately misleading. (3) That your close affiliation with another organization competing for mindshare, namely RationalWiki, constitutes a WP:COI when writing about Less Wrong.

Note that #3 is quite different in character from a similar claim that might be made by, say, a creationist organization; worsening the reputation of the Institute for Creation Research is unlikely to get more people to visit RationalWiki and admire your work there (perhaps even the opposite), whereas worsening the reputation of Less Wrong might do. RW is in conflict with the ICR, but (at least arguably) in competition with LW.

For the avoidance of doubt, I am not endorsing any of those accusations; just trying to clarify what they are, because it seems like you're addressing different ones.

I already answered #3: the true rejection seems to be not "you are editing about us on Wikipedia to advance RationalWiki at our expense" (which is a complicated and not very plausible claim that would need all its parts demonstrated), but "you are editing about us in a way we don't like".

Someone from the IEET tried to seriously claim (COI Noticeboard and all) that I shouldn't comment on the deletion nomination for their article - I didn't even nominate it, just commented - on the basis that IEET is a 501(c)3 and RationalWiki is also a 501(c)3 and therefore in sufficiently direct competition that this would be a Wikipedia COI. It's generally a bad and terrible claim and it's blitheringly obvious to any experienced Wikipedia editor that it's stretching for an excuse.

Variations on #3 are a perennial of cranks of all sorts who don't want a skeptical editor writing about them at Wikipedia, and will first attempt not to engage with the issues and sources, but to stop the editor from writing about them. (My favourite personal example is this Sorcha Faal fan who revealed I was editing as an NSA shill.) So it should really be considered an example of the crackpot offer, and if you find yourself thinking it then it would be worth thinking again.

(No, I don't know why cranks keep thinking implausible claims of COI are a slam dunk move to neutralise the hated outgroup. I hypothesise a tendency to conspiracist thinking, and first assuming malfeasance as an explanation for disagreement. So if you find yourself doing that, it's another one to watch out for.)

I already answered #3

No, you really didn't, you dismissed it as not worth answering and proposed that people claiming #3 can't possibly mean it and must be using it as cover for something else more blatantly unreasonable.

I understand that #3 may seem like an easy route for anyone who wants to shut someone up on Wikipedia without actually refuting them or finding anything concrete they're doing wrong. It is, of course, possible that that Viliam is not sincere in suggesting that you have a conflict of interest here, and it is also possible (note that this is a separate question) that if he isn't sincere then his actual reason for suggesting that you have is simply that he wishes you weren't saying what you are and feels somehow entitled to stop you for that reason alone. But you haven't given any, y'know, actual reasons to think that those things are true.

Unless you count one of these: (1) "Less Wrong is obviously a nest of crackpots, so we should expect them to behave like crackpots, and saying COI when they mean 'I wish you were saying nice things about us' is a thing crackpots do". Or (2) "This is an accusation that I have a COI, and obviously I don't have one, so it must be insincere and match whatever other insincere sort of COI accusation I've seen before". I hope it's clear that neither of those is a good argument.

Someone from the IEET tried to seriously claim [...]

I read the discussion. The person in question is certainly a transhumanist but I don't see any evidence he is or was a member of the IEET, and the argument he made was certainly bad but you didn't describe it accurately at all. And, again, the case is not analogous to the LW one: conflict versus competition again.

first assuming malfeasance as an explanation for disagreement

I agree, that's a bad idea. I don't quite understand how you're applying it here, though. So far as I can tell, your opponents (for want of a better word) here are not troubled that you disagree with them (e.g., they don't deny that Roko's basilisk was a thing or that some neoreactionaries have taken an interest in LW); they are objecting to your alleged behaviour: they think you are trying to give the impression that Roko's basilisk is important to LWers' thinking and that LW is a hive of neoreactionaries, and they don't think you're doing that because you sincerely believe those things.

So it's malfeasance as an explanation for malfeasance, not malfeasance as an explanation for disagreement.


I repeat that I am attempting to describe, not endorsing, but perhaps I should sketch my own opinions lest that be thought insincere. So here goes; if (as I would recommend) you aren't actually concerned about my opinions, feel free to ignore what follows unless they do become an issue.

  • I do have the impression that you wish LW to be badly thought of, and that this goes beyond merely wanting it to be viewed accurately-as-you-see-it. I find this puzzling because in other contexts (and also in this context, in the past when your attitude seemed different) the evidence available to me suggests that you are generally reasonable and fair. (Yes, I have of course considered the possibility that I am puzzled because LW really is just that bad and I'm failing to see it. I'm pretty sure that isn't the case, but I could of course be wrong.)

  • I do not think the case that you have a WP:COI on account of your association with RationalWiki, still less because you allegedly despise LW, is at all a strong one, and I think that if Viliam hopes that making that argument would do much to your credibility on Wikipedia then his hopes would be disappointed if tested.

  • I note that Viliam made that suggestion with a host of qualifications about how he isn't a Wikipedia expert and was not claiming with any great confidence that you do in fact have a COI, nor that it would be a good idea to say that you do.

  • I think his suggestion was less than perfectly sincere in the following sense: he made it not so much because he thinks a reasonable person would hold that you have a conflict of interest, as because he thinks (sincerely) that you might have a COI in Wikipedia's technical sense, and considers it appropriate to respond with Wikipedia technicalities to an attack founded on Wikipedia technicalities.

  • The current state of the Wikipedia page on Less Wrong doesn't appear terribly bad to me, and to some extent it's the way it is because Wikipedia's notion of "reliable sources" gives a lot of weight to what has attracted the interest of journalists, which isn't your fault. But there are some things that seem ... odd. Here's the oddest:

    • Let's look at those two refs (placed there by you) for the statement that "the neoreactionary movement takes an interest in Less Wrong" (which, to be sure, could be a lot worse ... oh, I see that you originally wrote "is associated with Less Wrong" and someone softened it; well done, someone). First we have a TechCrunch article. Sum total of what it says is that "you may have seen" neoreactionaries crop up "on tech hangouts like Hacker News and Less Wrong". I've seen racism on Facebook; is Facebook "associated with racism" in any useful sense? Second we have a review of "Neoreaction: a basilisk" claiming "The embryo of the [neoreactionary] movement lived in the community pages of Yudkowsky’s blog LessWrong", which you know as well as I do to be flatly false (and so do the makers and editors of WP's page on neoreaction, which quite rightly doesn't even mention Less Wrong). These may be Reliable Sources in the sense that they are the kind of document that Wikipedia is allowed to pay attention to. They are not reliable sources for the claim that neoreaction and Less Wrong have anything to do with one another, because the first doesn't say that and the second says it but is (if I've understood correctly) uncritically reporting someone else's downright lie.

    • I have to say that this looks exactly like the sort of thing I would expect to see if you were trying to make Less Wrong look bad without much regard for truth, and using Wikipedia's guiding principles as "cover" rather than as a tool for avoiding error. I hope that appearance is illusory. If you'd like to convince me it is, I'm all ears.

Viliam started with a proposal to brigade Wikipedia. This was sufficiently prima facie bad faith that I didn't, and still don't, feel any obligation to bend over backwards to construct a kernel of value from his post. You certainly don't have to believe me that his words 100% pattern match to extruded crank product from my perspective, but I feel it's worth noting that they do.

I feel answering his call for brigade with a couple of detailed link- and quote-heavy comments trying to explain what the rules actually are and how they actually work constituted a reasonable effort to respond sincerely and helpfully on my part, and offer guidance on how not to 100% pattern match to extruded crank product in any prospective editor's future Wikipedia endeavours.

If you have problems with the Wikipedia article, these are best addressed on the article talk page, and 0% here. (Readers attempting this should be sure to keep to the issues and not attempt to personalise issues as being about other editors.)

Anything further will be repeating ourselves, I think.

Viliam started with a proposal to brigade Wikipedia.

No, he didn't. He started with a description of something he might do individually. Literally the only things he says about anyone else editing Wikipedia are (1) to caution someone who stated an intention of doing so not to rush in, and (2) to speculate that if he does something like this it might be best for a group of people to cooperate on figuring out how to word it.

(More generally as a Wikipedia editor I find myself perennially amazed at advocates for some minor cause who seem to seriously think that Wikipedia articles on their minor cause should only be edited by advocates, and that all edits by people who aren't advocates must somehow be wrong and bad and against the rules. Even though the relevant rules are (a) quite simple conceptually (b) say nothing of the sort. You'd almost think they don't have the slightest understanding of what Wikipedia is about, and only cared about advocating their cause and bugger the encyclopedia.)

but in the context of Wikipedia, you should after all keep in mind that I am an NSA shill.

Should we expect more anti-rationalism in the future? I believe that we should, but let me outline what actual observations I think we will make.

Firstly, what do I mean by 'anti-rationalism'? I don't mean that in particular people will criticize LessWrong. I mean it in the general sense of skepticism towards science and logical reasoning, skepticism towards technology, and hostility to rationalistic methods applied to things like policy, politics, economics, and education.

And there are a few things I think we will observe first (some of which we are already observing) that will act as a catalyst for this. Number one: if economic inequality increases, I think a lot of the blame will be placed on the elite (as it always is), but in particular on the cognitive elite (which makes up an ever-increasing share of the elite). Whatever the views of the cognitive elite are will become the philosophy of evil from the perspective of the masses. Because the elite are increasingly made up of very high intelligence people, many of whom have a connection to technology or Silicon Valley, we should expect that the dominant worldview of that environment will increasingly contrast with the worldview of those who haven't benefited, or at least do not perceive themselves to benefit, from the increasing growth and wealth driven by those people. What's worse, it seems that even if economic gains benefit those at the very bottom too, if inequality still increases, that is the only thing that will get noticed.

The second issue is that as technology improves, our powers of inference increase, and privacy defenses become weaker. It's already the case that we can predict a person's behavior to some degree and use that knowledge to our advantage (whether you're trying to sell them something, grant or deny them a loan, judge whether they would be a good employee, or predict whether they will commit a crime). There's already a push-back against this, in the sense that certain variables correlate with things we don't want them to, like race. This implies that the standard definition of privacy, in the sense of simply not having access to specific variables, isn't strong enough. What's desired is not being able to infer the values of certain variables, either, which is a much, much stronger condition. This is a deep, non-trivial problem that is unlikely to be solved quickly - and it runs into the same issues as all problems concerning discrimination do, which is how to define 'bias'. Is reducing bias at the expense of truth even a worthy goal? This shifts the debate towards programmers, statisticians and data scientists, who are left with the burden of never making a mistake in this area. "Weapons of Math Destruction" is a good example of the way this issue gets treated.

We will also continue to observe a lot of ideas from postmodernism being adopted as part of the political ideology of the left. Postmodernism is basically the antithesis of rationalism, and is particularly worrying because it is a very adaptable and robust meme. And an ideology that essentially claims that rationality and truth are not even possible to define, let alone discover, is particularly dangerous if it is adopted as the mainstream mode of thought. So if a lot of the above problems get worse, I think there is a chance that rationalism will get blamed, as it has been within the framework of postmodernism.

The summary of this is: As politics becomes warfare between worldviews rather than arguments for and against various beliefs, populist hostility gets directed towards what is perceived to be the worldview of the elite. The elite tend to be more rationalist, and so that hostility may get directed towards rationalism itself.

I think a lot more can be said about this, but maybe that's best left to a full post, I'm not sure. Let me know if this was too long / short or poorly worded.

(I thought the post was reasonably written.)

Can you say a word on whether (and how) this phenomenon you describe ("populist hostility gets directed towards what is perceived to be the worldview of the elite") is different from the past? It seems to me that this is a force that is always present, often led to "problems" (eg, the Luddite movement), but usually (though not always) the general population came around more in believing the same things as "the elites".

The process is not different from what occurred in the past, and I think this was basically the catalyst for anti-Semitism in the post-Industrial Revolution era. You observe a characteristic of a group of people who seem to be doing a lot better than you (in that case a lot of them happened to be Jewish), and so you then associate their Jewishness with your lack of success and unhappiness.

The main difference is that society continues to modernize and technology improves. Bad ideas for why some people are better off than others become unpopular. Actual biases and unfairness in the system gradually disappear. But despite that, inequality remains and in fact seems to be rising. What happens is that the only thing left to blame is instrumental rationality. I imagine that people will look as hard as they can for bias and unfairness for as long as possible, and will want to see it in people who are instrumentally rational.

In a free society, (and even more so as a society becomes freer and true bigotry disappears) some people will be better off just because they are better at making themselves better off, and the degree to which people vary in that ability is quite staggering. But psychologically it is too difficult for many to accept this, because no one wants to believe in inherent differences. So it's sort of a paradoxical result of our society actually improving.

I think a lot more can be said about this, but maybe that's best left to a full post, I'm not sure. Let me know if this was too long / short or poorly worded.

Writing style looks fine. My quibbles would be with the empirical claims/predictions/speculations.

Is the elite really more of a cognitive elite than in the past?

Strenze's 2007 meta-analysis (previously) analyzed how the correlations between IQ and education, IQ and occupational level, and IQ and income changed over time. The first two correlations decreased and the third held level at a modest 0.2.

Will elite worldviews increasingly diverge from the worldviews of those left behind economically?

Maybe, although just as there are forces for divergence, there are forces for convergence. The media can, and do, transmit elite-aligned worldviews just as they transmit elite-opposed worldviews, while elites fund political activity, and even the occasional political movement.

Would increasing inequality really prevent people from noticing economic gains for the poorest?

That notion sounds like hyperbole to me. The media and people's social networks are large, and can discuss many economic issues at once. Even people who spend a good chunk of time discussing inequality discuss gains (or losses) of those with low income or wealth.

For instance, Branko Milanović, whose standing in economics comes from his studies of inequality, is probably best known for his elephant chart, which presents income gains across the global income distribution, down to the 5th percentile. (Which percentile, incidentally, did not see an increase in real income between 1988 and 2008, according to the chart.)

Also, while the Anglosphere's discussed inequality a great deal in the 2010s, that seems to me a vogue produced by the one-two-three punch of the Great Recession, the Occupy movement, and the economist feeding frenzy around Thomas Piketty's book. Before then, I reckon most of the non-economists who drew special attention to economic inequality were left-leaning activists and pundits in particular. That could become the norm once again, and if so, concerns about poverty would likely become more salient to normal people than concerns about inequality.

Will the left continue adopting lots of ideas from postmodernism?

This is going to depend on how we define postmodernism, which is a vexed enough question that I won't dive deeply into it (at least TheAncientGeek and bogus have taken it up). If we just define (however dodgily) postmodernism to be a synonym for anti-rationalism, I'm not sure the left (in the Anglosphere, since that's the place we're presumably really talking about) is discernibly more postmodernist/anti-rationalist than it was during the campus/culture wars of the 1980s/1990s. People tend to point to specific incidents when they talk about this question, rather than try to systematically estimate change over time.

Granted, even if the left isn't adopting any new postmodern/anti-rationalist ideas, the ideas already bouncing around in that political wing might percolate further out and trigger a reaction against rationalism. Compounding the risk of such a reaction is the fact that the right wing can also operate as a conduit for those ideas — look at yer Alex Jones and Jason Reza Jorjani types.

Is politics becoming more a war of worldviews than arguments for & against various beliefs?

Maybe, but evidence is needed to answer the question. (And the dichotomy isn't a hard and fast one; wars of worldviews are, at least in part, made up of skirmishes where arguments are lobbed at specific beliefs.)

Postmodernism is basically the antithesis of rationalism, and is particularly worrying because it is a very adaptable and robust meme.

Rationalists (Bay area type) tend to think of what they call Postmodernism[*] as the antithesis to themselves, but the reality is more complex. "Postmodernism" isn't a short and cohesive set of claims that are the opposite of the set of claims that rationalists make, it's a different set of concerns, goals and approaches.

And an ideology that essentially claims that rationality and truth are not even possible to define, let alone discover, is particularly dangerous if it is adopted as the mainstream mode of thought.

And what's worse is that bay area rationalism has not been able to unequivocally define "rationality" or "truth". (EY wrote an article on the Simple idea of Truth, in which he considers the correspondence theory, Tarski's theory, and a few others without settling on a single correct theory).

Bay area rationalism is the attitude that sceptical (no truth) and relativistic (multiple truths) claims are utterly false, but it's an attitude, not a proof. What's worse still is that sceptical and relativistic claims can be supported using the toolkit of rationality. "Postmodernists" tend to be sceptics and relativists, but you don't have to be a "postmodernist" to be a relativist or sceptic, as non-bay-area, mainstream rationalists understand well. If rationalism is to win over "postmodernism", then it must win rationally, by being able to demonstrate its superiority.

[*] "Postmodernists" call themselves poststructuralists, continental philosophers, or critical theorists.

"Postmodernists" call themselves poststructuralists, continental philosophers, or critical theorists.

Not quite. "Poststructuralism" is an ex-post label and many of the thinkers that are most often identified with the emergence of "postmodern" ideas actually rejected it. (Some of them even rejected the whole notion of "postmodernism" as an unhelpful simplification of their actual ideas.) "Continental philosophy" really means the 'old-fashioned' sort of philosophy that Analytic philosophers distanced themselves from; you can certainly view postmodernism as encompassed within continental philosophy, but the notions are quite distinct. Similarly, "critical theory" exists in both 'modernist'/'high modern' and 'postmodern' variants, and one cannot understand the 'postmodern' kind without knowing the 'modern' critical theory it's actually referring to, and quite often criticizing in turn.

All of which is to say that, really, it's complicated, and that while describing postmodernism as a "different set of concerns, goals and approaches" may hit significantly closer to the mark than merely caricaturing it as an antithesis to rationality, neither really captures the worthwhile ideas that 'postmodern' thinkers were actually developing, at least when they were at their best. (See, the big problem with 'continental philosophy' as a whole is that you often get a few exceedingly worthwhile ideas mixed in with heaps of nonsense and confused thinking, and it can be really hard to tell which is which. Postmodernism is no exception here!)

Rationalists (Bay area type) tend to think of what they call Postmodernism[*] as the antithesis to themselves, but the reality is more complex. "Postmodernism" isn't a short and cohesive set of claims that are the opposite of the set of claims that rationalists make, it's a different set of concerns, goals and approaches.

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth. And the 'goal' of postmodernism is to break apart and criticize everything that claims to be able to do those things. You would be hard pressed to find a better example of something diametrically opposed to rationalism. (I'm going to guess that with high likelihood I'll get accused of not understanding postmodernism for saying that).

And what's worse is that bay area rationalism has not been able to unequivocally define "rationality" or "truth". (EY wrote an article on the Simple idea of Truth, in which he considers the correspondence theory, Tarki's theory, and a few others without resolving on a single correct theory).

Well yeah, being able to unequivocally define anything is difficult, no argument there. But rationalists use an intuitive and pragmatic definition of truth that allows us to actually do things. Then what happens is they get accused by postmodernists of claiming to have the One and Only True and Correct Definition of Truth and Correctness, and of claiming that we have access to the Objective Reality. The point is that as soon as you allow for any leeway here at all (some in-between area between having 100% access to a true objective reality and having 0% access), you basically obtain rationalism. Not because it starts from the principle that there is an objective reality that is possible to Truly Know, or that there are facts we know to be 100% true, but only that there are sets of claims we have some degree of confidence in, and other sets of claims whose degree of confidence we might want to calculate based on the first set.

Bay area rationalism is the attitude that sceptical (no truth) and relativistic (multiple truths) claims are utterly false, but it's an attitude, not a proof.

It happens to be an attitude that works really well in practice, but the other two attitudes can't actually be used in practice if you adhere to them fully. They would only be useful for denying anything that someone else believes. I mean, what would it mean to actually hold two beliefs to be completely true but also that they contradict? In probability theory you can have several non-zero degrees of confidence that add up to one, but it's unclear whether this is the same thing as relativism in the sense of "multiple truths". I would guess that it isn't, and that "multiple truths" really means holding two incompatible beliefs to both be true.
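That probabilistic point can be made concrete with a small sketch (the hypotheses and numbers below are invented purely for illustration, not taken from the thread): you can hold non-zero credence in several mutually exclusive hypotheses, and a Bayesian update preserves the constraint that those credences sum to one, which is quite different from calling two incompatible claims both "true".

```python
# Degrees of confidence over mutually exclusive hypotheses (made-up numbers).
priors = {"H1": 0.6, "H2": 0.3, "H3": 0.1}
assert abs(sum(priors.values()) - 1.0) < 1e-9  # credences must sum to 1

# How likely some observed evidence E would be under each hypothesis.
likelihoods = {"H1": 0.2, "H2": 0.7, "H3": 0.5}

# Bayes' rule: posterior is proportional to prior * likelihood, renormalized.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posteriors = {h: p / total for h, p in unnorm.items()}

# The posteriors still sum to 1: partial confidence in several incompatible
# hypotheses at once, but never two of them "fully true" simultaneously.
assert abs(sum(posteriors.values()) - 1.0) < 1e-9

# By contrast, "multiple truths" in the strong sense would amount to
# P(A) = 1 and P(not A) = 1, which no probability distribution allows.
```

The contrast is the whole point: graded confidence is coherent under the probability axioms, while assigning full credence to two contradictory claims is not.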

If rationalism is to win over "postmodernism", then it must win rationally, by being able to demonstrate its superiority.

Except that you can't demonstrate superiority of anything within the framework of postmodernism. Within rationalism it's very easy and straightforward.

I imagine the reason that some rationalists might find postmodernism to be useful is in the spirit of overcoming biases. This in and of itself I have no problem with - but I would ask what you consider postmodern ideas to offer in the quest to remove biases that rationalism doesn't offer, or wouldn't have access to even in principle?

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth.

The actual ground-level stance is more like: "If you think that you know some sort of objective reality, etc., it is overwhelmingly likely that you're in fact wrong in some way, and being deluded by cached thoughts." This is an eminently rational attitude to take - 'it's not what you don't know that really gets you into trouble, it's what you know for sure that just ain't so.' The rest of your comment has similar problems, so I'm not going to discuss it in depth. Suffice it to say, postmodern thought is far more subtle than you give it credit for.

If someone claims to hold a belief with absolute 100% certainty, that doesn't require a gigantic modern philosophical edifice in order to refute. It seems like that's setting a very low bar for what postmodernism actually hopes to accomplish.

If someone claims to hold a belief with absolute 100% certainty, that doesn't require a gigantic modern philosophical edifice in order to refute.

The reason why postmodernism often looks like that superficially is that it specializes in critiquing "gigantic modern philosophical edifice[s]" (emphasis on 'modern'!). It takes a gigantic philosophy to beat a gigantic philosophy, at least in some people's view.

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth.

Citation needed.

Well yeah, being able to unequivocally define anything is difficult, no argument there

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

But rationalists use an intuitive and pragmatic definition of truth that allows us to actually do things.

Engineers use an intuitive and pragmatic definition of truth that allows them to actually do things. Rationalists are more in the philosophy business.

It happens to be an attitude that works really well in practice,

For some values of "work". It's possible to argue in detail that predictive power actually doesn't entail correspondence to ultimate reality, for instance.

I mean, what would it mean to actually hold two beliefs to be completely true but also that they contradict?

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

Except that you can't demonstrate superiority of anything within the framework of postmodernism

That's not what I said.

but I would ask what you consider postmodern ideas to offer in the quest to remove biases that rationalism doesn't offer, or wouldn't have access to even in principle?

There's no such thing as postmodernism and I'm not particularly in favour of it. My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

Citation needed.

Citing it is going to be difficult; even the Stanford Encyclopedia of Philosophy says "That postmodernism is indefinable is a truism." I'm forced to cite philosophers who are opposed to it because they seem to be the only ones willing to actually define it in a concise way. I'll just reference this essay by Dennett to start with.

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

I'm not sure I understand what you're referring to here.

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

That's called lying.

There's no such thing as postmodernism

You know exactly what I mean when I use that term, otherwise there would be no discussion. It seems that you can't even name it without someone saying that's not what it's called, it actually doesn't have a definition, every philosopher who is labeled a postmodernist called it something else, etc.

If I can't define it, there's no point in discussing it. But it doesn't change the fact that the way the mainstream left has absorbed the philosophy has been in the "there is no objective truth" / "all cultures/beliefs/creeds are equal" sense. This is mostly the sense in which I refer to it in my original post.

My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

I'd like to hear more about this. By "Bay Area rationalism", I assume you are talking about a specific list of beliefs like the likelihood of intelligence explosion? Or are you talking about the Bayesian methodology in general?

Citing it is going to be difficult,

To which the glib answer is "that's because it isn't true".

I'm forced to cite philosophers who are opposed to it because they seem to be the only ones willing to actually define it in a concise way. I'll just reference this essay by Dennett to start with.

Dennett gives a concise definition because he has the same simplistic take on the subject as you. What he is not doing is showing that there is an actual group of people who describe themselves as postmodernists and hold those views. The use of the term "postmodernist" is a bad sign: it's a term that works like "infidel" and so on, a label for an outgroup, and an ingroup's views on an outgroup are rarely bedrock reality.

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

I'm not sure I understand what you're referring to here.

When we, the ingroup, can't define something it's Ok, when they, the outgroup, can't define something, it shows how bad they are.

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

That's called lying.

People are quite psychologically capable of having compartmentalised beliefs, that sort of thing is pretty ubiquitous, which is why I was able to find an example from the rationalist community itself. Relativism without contextualisation probably doesn't make much sense, but who is proposing it?

There's no such thing as postmodernism

You know exactly what I mean when I use that term, otherwise there would be no discussion.

As you surely know, I mean that there is no group of people who both call themselves postmodernists and hold the views you are attributing to postmodernists.

It seems that you can't even name it without someone saying that's not what it's called, it actually doesn't have a definition, every philosopher who is labeled a postmodernist called it something else, etc.

It's kind of diffuse. But you can talk about scepticism, relativism, etc, if those are the issues.

If I can't define it, there's no point in discussing it. But it doesn't change the fact that the way the mainstream left has absorbed the philosophy has been in the "there is no objective truth" / "all cultures/beliefs/creeds are equal" sense.

There's some terrible epistemology on the left, and on the right, and even in rationalism.

My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

I'd like to hear more about this. By "Bay Area rationalism", I assume you are talking about a specific list of beliefs like the likelihood of intelligence explosion? Or are you talking about the Bayesian methodology in general?

I mean Yudkowsky's approach. Which flies under the flag of Bayesianism, but doesn't make much use of formal Bayesianism.

I have a feeling that perhaps in some sense politics is self-balancing. You attack things that are associated with your enemy, which means that your enemy will defend them. Assuming you are an entity that only cares about scoring political points, if your enemy uses rationality as an applause light, you will attack rationality, but if your enemy uses postmodernism as an applause light, you will attack postmodernism and perhaps defend (your interpretation of) rationality.

That means that the real risk for rationality is not that everyone will attack it. As soon as the main political players all turn against rationality, fighting rationality will become less important for them, because attacking things the others consider sacred will be more effective. You will soon get rationality apologists saying "rationality per se is not bad, it's only rationality as practiced by our political opponents that leads to horrible things".

But if some group of idiots chooses "rationality" as their applause light and does it completely wrong, and everyone else therefore turns against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as: 1984-style obedience to the Communist Party -- as the official applause light of his regime. In such a world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party. Imagine trying to teach people x-rationality in that universe.)

I don't think it's necessary for 'rationality' to be used as an applause light for this to happen. The only things needed, in my mind, are:

  • A group of people who adopt rationality and are instrumentally rationalist become very successful, wealthy and powerful because of it.
  • This groups makes up an increasing share of the wealthy and powerful, because they are better at becoming wealthy and powerful than the old elite.
  • The remaining people, who aren't as wealthy or successful or powerful and who haven't adopted rationality, observe what the successful group does and associate whatever it does and says with the tribal characteristics and culture of the successful group. The fact that they haven't adopted rationality makes them more likely to do this.

And because the final bullet point is always what occurs throughout history, the only difference - and really the only thing necessary for this to happen - is that rationalists make up a greater share of the elite over time.

But if some group of idiots chooses "rationality" as their applause light and does it completely wrong, and everyone else therefore turns against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as: 1984-style obedience to the Communist Party -- as the official applause light of his regime. In such a world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party.

Somewhat ironically, this is exactly the sort of cargo-cultish "rationality" that originally led to the emergence of postmodernism, in opposition to it and calling for some much-needed re-evaluation and skepticism around all "cached thoughts". The moral I suppose is that you just can't escape idiocy.

Not exactly. What happened at first was that Marxism - which, in the early 20th century, became the dominant mode of thought for Western intellectuals - was based on rationalist materialism, until it was empirically shown to be wrong by some of the largest social experiments mankind is capable of running. The question for intellectuals who were unwilling to give up Marx after that time was how to save Marxism from empirical reality. The answer to that was postmodernism. You'll find that in most academic departments today, those who identify as Marxists are almost always postmodernists (and you won't find them in economics or political science, but rather in the English, literary criticism and social science departments). Marxists of the rationalist type are pretty much extinct at this point.

I broadly agree, but you're basically talking about the dynamics that resulted in postmodernism becoming an intellectual fad, devoid of much of its originally-meaningful content. Whereas I'm talking about what the original memeplex was about - i.e. what people like the often-misunderstood Jacques Derrida were actually trying to say. It's even clearer when you look at Michel Foucault, who was indeed a rather sharp critic of "high modernity", but didn't even consider himself a post-modernist (whereas he's often regarded as one today). Rather, he was investigating pointed questions like "do modern institutions like medicine, psychiatric care and 'scientific' criminology really make us so much better off compared to the past when we lacked these, or is this merely an illusion due to how these institutions work?" And if you ask Robin Hanson today, he will tell you that we're very likely overreliant on medicine, well beyond the point where such reliance actually benefits us.

postmodernism becoming an intellectual fad, devoid of much of its originally-meaningful content. Whereas I'm talking about what the original memeplex was about

So you concede that everyone you're harassing is 100% correct, you just don't want to talk about postmodernism? So fuck off.

This may be partially what has happened with "science" but in reverse. Liberals used science to defend some of their policies, conservatives started attacking it, and now it has become an applause light for liberals--for example, the "March for Science" I keep hearing about on Facebook. I am concerned about this trend because the increasing politicization of science will likely result in both reduced quality of science (due to bias) and decreased public acceptance of even those scientific results that are not biased.