Claim: a typical rationalist is likely to be relying too much on legibility, and would benefit from sometimes not requiring an immediate explicit justification for their beliefs.
Claim: circle geometry should be removed from the high school maths syllabus and replaced with statistics, because stats is used in science, business, and machine learning, while barely anyone needs circle geometry.
Claim: this thread would be better (although it's already great) if people added confidence levels to their claims at the beginning, and updated them at the end of the discussion. (Confidence level: 75%)
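For concreteness, updating a stated confidence level can be read as a Bayes-rule move on odds. A purely illustrative worked example (the numbers are not part of the claim):

$$\text{posterior odds} = \text{prior odds} \times \frac{P(E \mid \text{claim})}{P(E \mid \neg\text{claim})}$$

Starting at 75% (3:1 odds), counter-evidence that's twice as likely if the claim is false (likelihood ratio 1:2) moves you to 3:2 odds, i.e. 60%.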
Claim: LW commenter GPT2 is a bot that generates remarkably well-formed comments, but devoid of actual thought or meaning. Confidence: 20% that there's no or minimal human intervention, 90%+ that it's computer-generated text, though a human might be seeding, selecting, and posting the results.
Subclaim: this should be stopped, either by banning/blocking the user or by allowing readers to block it.
Update: based on a comment, I have increased my estimate that it's fully automated to 95%+. I look forward to learning what the seed corpus is, and whether i...
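For readers curious how such a bot could work end to end, here's a minimal sketch using the publicly released GPT-2 model via the Hugging Face `transformers` library. The model name, prompt, and sampling settings are illustrative assumptions on my part; the actual bot's setup (including its seed corpus) is exactly what's unknown above.

```python
# Hypothetical sketch: generating forum-style comments with vanilla GPT-2.
from transformers import pipeline, set_seed

set_seed(42)  # fix the RNG so sampling is reproducible
generator = pipeline("text-generation", model="gpt2")

# "Seeding": the prompt steers generation toward on-topic text.
prompt = "Claim: instrumental and epistemic rationality often diverge because"
candidates = generator(
    prompt,
    max_new_tokens=60,
    num_return_sequences=3,
    do_sample=True,  # sample, so the three candidates differ
)

# "Selecting": a human (or a simple filter) picks one candidate to post.
for i, c in enumerate(candidates):
    print(f"--- candidate {i} ---\n{c['generated_text']}\n")
```

Even this unmodified setup produces grammatical, on-register text, which is why "well-formed but devoid of meaning" is consistent with full automation.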
In a five-year-old contrarian thread I stated that "there is no territory, it's maps all the way down." There was a quality discussion thread with D_Malik about it, too. Someone also mentioned it on Reddit, but that didn't go nearly as well. Since then, various ideas of postrationality have become more popular, but this one remains highly controversial. I still stand by the claim.
Mod note: I decided to promote this post to the frontpage, which does mean frontpage guidelines apply, though I think overall we can be pretty flexible in this thread. Depending on how it goes, we might want to promote future threads like this to the frontpage or leave them as personal blog posts.
Related: street epistemology. It's a practice similar to Socratic questioning ("invented" by Peter Boghossian in his book 'A Manual for Creating Atheists').
Here's a live example (and two more channels; these also have lectures about it).
Claim: Instrumental and Epistemic rationality often diverge, and rationalists don't win as much because they don't give this fact enough weight.
Claim: The "classical scenario" of AI foom as promoted by e.g. Bostrom and Yudkowsky is more plausible than the scenario depicted in Drexler's Comprehensive AI Services.
This post was popular, but the idea never got picked up. Let's have an experimental open thread this month!
The rules:
Let top-level comments be debatable claims, first-tier responses be questions, second-tier responses be answers, and deeper tiers alternate between further responses and answers. Try to go as deep as possible; I'd expect an actual update to become increasingly likely the longer the conversation continues.