This seems like a great summary to me (an outsider). My own stab at it: this rationality movement seems to be about the wide application of science + analytic philosophy, especially philosophy of science (though most of those involved don't know much philosophy, so don't realise this). (Cf. EA, which is about the application of the philosophy of ethics, especially, of course, utilitarianism.)
The novel aspect seems to be mainly the membership and the application, i.e. extending beyond normal science/technology and academia.
And the distinction from post-rationality (about which I know little) seems somewhat like early vs. late Wittgenstein (i.e. formal analysis such as logic and maths vs. more hand-wavy nuance incorporating social function, etc.).
I think the error is not just that they generalised incorrectly, but that they didn’t know enough to be justified in doing so. So it combines overconfidence and over/misgeneralising.
The word ‘sophomoric’ includes some of the right connotations. One definition says ‘Overconfident but immature or poorly informed’.
So though ‘sophomoric’ is not quite specific enough by itself, maybe it could be used to coin a new phrase, e.g. ‘sophomoric bias’ or ‘sophomoric generalisation’.
Interesting post. Maybe others have mentioned this, but one difference with startup founders is charisma: founders often lack it (e.g. tech nerds are famously uncharismatic), though of course it helps to have it.
Also, incidentally, this post highlights how different the US and the UK (and various other Western countries) are in their attitudes to religion; in the UK church attendance is tiny, and open religiosity is almost universally seen as weird and embarrassing. So this whole church-planting thing seems very odd.
Years ago a small part of my work involved proof-reading successive editions of a book (a 500-page manual). I would write my suggested changes on a printout - not typo corrections, but improvements to wording & content requiring thought & expertise.
Once when doing this I had a slight sense of déjà vu after correcting a page, so I looked up the same marked-up page in an earlier edition I had proof-read a year or more before. Not only had I marked the exact same changes (which had mistakenly not been implemented), but I had used almost identical pen-strokes, including, both times, changing a word, then thinking better of it and crossing out my change in favour of a different suggestion. So I had clearly gone through an identical thought process over several minutes for the whole page. (I still have both pages somewhere.)
I wondered at the time if psychologists had ever studied this kind of thing.
No offence to JW, but incidentally, is there a term for the common cognitive bias where someone who knows a lot about X assumes (incorrectly) that the same applies to a superficially similar thing Y that they know little about? Something more specific than mere ‘overconfidence’ or ‘overgeneralising’.
I think your point is roughly what I thought, viz.: isn’t this just loss aversion?
Mostly agree, but a way in which the post might partly map onto the UK is this:
Governments know they’ll lose power in a few years, at which point any controversial legislation they enacted can be reversed by the opposing side’s ‘dictatorship’. I.e. the other major faction does have a veto, just a delayed one. So there is still a benefit in seeking consensus.
(Often the non-government party will feign strong opposition to legislation to make headlines and look important, but will not actually reverse it when they subsequently get into power.)
Contrariwise, it seems odd that stone tool making is not a popular hobby, given what a crucial activity it was for 99% of our history.
Which suggests maybe we rapidly unevolved interest in it soon after the Stone Age.
We had many other handicrafts which continued to be useful and so persisted, even to this day: some only lost their usefulness very recently with industrialisation, and continue for now as hobbies not yet affected by evolution (e.g. knitting). But stone tools are not among them.
In case no-one else has raised this point:
From the AI’s perspective, modifying the AI’s goals counts as an obstacle. If an AI is optimizing a goal, and humans try to change the AI to optimize a new goal, then unless the new goal also maximizes the old goal, the AI optimizing goal 1 will want to avoid being changed into an AI optimizing goal 2, because this outcome scores poorly on the metric “is this the best way to ensure goal 1 is maximized?”.
Is this necessarily the case? Can’t the AI (be made to) try to maximise its goal knowing that the goal may change over time, hence not trying to stop it from being changed, just being prepared to switch strategy if it changes?
A footballer can score a goal even with moving goalposts. (Albeit yes it’s easier to score if the goal doesn’t move, so would the footballer necessarily stop it moving if he could?)
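To make the question concrete, here is a minimal toy sketch of my own (not from the post; the goals, outcomes, and payoffs are all invented for illustration). It contrasts an agent that scores the action "allow my goal to be changed" by its current goal with one that scores it by whichever goal will be in force afterwards:

```python
# Toy model (illustrative assumptions only): two outcomes, two goals,
# and the choice between resisting or allowing a goal change.

def goal_1(outcome):
    # Original goal: only values paperclips.
    return {"paperclips": 10, "staples": 0}[outcome]

def goal_2(outcome):
    # Replacement goal: only values staples.
    return {"paperclips": 0, "staples": 10}[outcome]

def outcome_of(action):
    # If the agent resists, it keeps steering towards goal_1's optimum;
    # if it allows the change, it will later steer towards goal_2's optimum.
    return "paperclips" if action == "resist" else "staples"

actions = ("resist", "allow")

# Agent A scores every action by its *current* goal (goal_1),
# as in the quoted argument.
scores_a = {a: goal_1(outcome_of(a)) for a in actions}

# Agent B scores each action by whichever goal will be in force at the end
# (a "goal-pointer" design, roughly the kind of agent the question asks about).
def goal_in_force(action):
    return goal_1 if action == "resist" else goal_2

scores_b = {a: goal_in_force(a)(outcome_of(a)) for a in actions}

print(scores_a)  # {'resist': 10, 'allow': 0}  -> A prefers to resist the change
print(scores_b)  # {'resist': 10, 'allow': 10} -> B is merely indifferent
```

Under these made-up numbers the goal-pointer agent doesn't resist the change, but it also isn't positively motivated to cooperate with it, which is roughly the point at which the corrigibility discussion picks up the problem.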
There are ways of showing that you are probably being honest in such situations, thereby making yourself more credible than those who are not: namely, setting out your own weaknesses. For example, in business plans for investors this can be done in a SWOT analysis (which includes listing weaknesses and threats, as well as how you aim to deal with them).
People who claim to have no weaknesses, or at least do not mention any, or who only admit to slight weaknesses (and not to obvious larger ones), lack credibility.