1. I did think about that when I wrote it, and it's a bit strong. (I set myself a challenge to write and publish this in 15 min, so didn't spend any more time optimising the title.) Other recommendations welcome. Thinking about the actual claim though, I find myself quite confident that something in this direction is right. (A larger uncertainty is whether it is the best thing for us to sink resources into, compared to other interventions.)
2. Agree that there seems to be lots of black-box wisdom embedded in the institutions and practices of religions, and it could be cool to try to unwrap it and import some good lessons.
I will note though that there's a difference between:
I haven't been to a dojo (except briefly as a kid) so don't have a clear model what it's about.
Not sure how I feel about the part on "you must face off against an opponent, and you run the risk of getting hurt". I think I disagree, and might write up why later.
Thanks for describing that! Some questions:
1) What are some examples of what "practicing CFAR techniques" looks like?
2) To what extent are dojos expected to do "new things" vs. repeated practice of a particular thing?
For example, I'd say there's a difference between a gym and a... marathon? match? I think there's more of the latter in the community at the moment: attempting to solve particular bugs using whatever means are necessary.
I didn't know about weekly dojos and have never attended any, that sounds very exciting. Tell me more about what happens at the Berlin weekly dojo events?
Also, to clarify, I meant both "pubs" and "gyms" metaphorically -- i.e. lots of what happens on LessWrong is like a pub in the above sense, whereas other things, like the recent exercise prize, are like a gym.
Consider the difference between a pub and a gym.
You go to a pub with your rationalist friends to:
You go to a gym to:
One key distinction here is effort.
Going to a pub is low effort. Going to a gym is high effort.
In fact, going to the gym requires such a high effort that most people have a constant nagging guilt about doing it. They proceed to set up accountability systems with others, hire personal trainers, use habit installer apps, buy gym memberships as commitment devices, use clever hacks to always have their gym bag packed and ready to go, introspect on their feelings of anxiety about it and try to find work-arounds or sports which suit them, and so forth...
People know gyms are usually a schlep, yet they also know going there is important, so they accept that they'll have to try really hard to build systems which get them exercising.
However, things seem different for rationality. I've often heard people say "this rationality stuff doesn't seem very effective; people just read some blog posts and go to a workshop or two, but don't really seem more effective than other mortals".
But we wouldn't be surprised if someone said "this fitness stuff doesn't seem very effective, some of my friends just read some physiology bloggers and then went to a 5-day calisthenics bootcamp once, but they're not in good shape at all". Of course they aren't!
I think I want to suggest two improvements:
1) On the margin, we should push more for a cultural norm of deliberate practice in the art of rationality.
It should be natural to get together with your friends once a week and use OpenPhil's calibration app, do Thinking Physics problems, practice CFAR techniques, etc...
2) But primarily: we build gyms.
Gyms are places where hundreds of millions of dollars of research have gone into designing equipment specifically allowing you to exercise certain muscles. There are also changing rooms to help you get ready, norms around how much you talk (or not) to help people focus, personal trainers who can give you advice, saunas and showers to help you relax afterwards...
For rationality, we basically have nothing like this. Each time you want to practice rationality, you have to start by inventing your own exercises.
The only example I know of is Kocherga, which seems great. But I don't know a lot about what they're doing, and ideally we should have rationality gyms either online or in every major hub, not just Moscow.
In some sense, markets have a particular built-in interpretability: for any trade, someone made that trade, and so there is at least one person who can explain it. And any larger market move is just a combination of such smaller trades.
This is different from things like the huge recommender algorithms running YouTube, where it is not the case that for each recommendation, there is someone who understands that recommendation.
However, the above argument fails in more nuanced cases:
IIRC neuroeconomics is quite different: it studies how humans make and represent economic decisions (e.g. "we've found an fMRI signal in the orbitofrontal cortex that's correlated with the expected value of this decision"), which is different from modelling the internal physiological functions of a body as an entire economy with various supply chains and equilibrium states.
For people who are into forecasting, I made a Foretold notebook where you can predict which posts will end up in the final Best of 2018 book.
I think this paper might be relevant: https://users.cs.duke.edu/~conitzer/predictionWINE09.pdf
Abstract. A potential downside of prediction markets is that they may incentivize agents to take undesirable actions in the real world. For example, a prediction market for whether a terrorist attack will happen may incentivize terrorism, and an in-house prediction market for whether a product will be successfully released may incentivize sabotage. In this paper, we study principal-aligned prediction mechanisms: mechanisms that do not incentivize undesirable actions. We characterize all principal-aligned proper scoring rules, and we show an "overpayment" result, which roughly states that with n agents, any prediction mechanism that is principal-aligned will, in the worst case, require the principal to pay Θ(n) times as much as a mechanism that is not. We extend our model to allow uncertainties about the principal's utility and restrictions on agents' actions, showing a richer characterization and a similar "overpayment" result.
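For anyone unfamiliar with the "proper scoring rules" the abstract mentions: a scoring rule is proper if a forecaster maximizes their expected score by reporting their true belief. A minimal sketch in Python (my own illustration, not from the paper) using the log scoring rule for a binary event:

```python
import numpy as np

def log_score(report: float, outcome: bool) -> float:
    # Reward is the log of the probability the forecaster assigned
    # to the outcome that actually happened.
    return float(np.log(report if outcome else 1.0 - report))

def expected_score(report: float, true_p: float) -> float:
    # Expected score, taken under the forecaster's own belief true_p.
    return true_p * log_score(report, True) + (1.0 - true_p) * log_score(report, False)

true_p = 0.7  # the forecaster privately believes the event has 70% probability
reports = [0.5, 0.6, 0.7, 0.8, 0.9]
scores = [expected_score(r, true_p) for r in reports]

# For a proper scoring rule, the honest report maximizes expected score.
best_report = reports[int(np.argmax(scores))]
```

The paper's question is then which of these rules additionally avoid rewarding agents for *causing* the outcome they predicted, and what that extra alignment costs the principal.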
That paper seems quite different from this post in important ways.
In particular, the gist of the OP seems to be something like "showing that pre-formal intuitions about instrumental convergence persist under a certain natural class of formalisations". Notably, it does so using formalism closer to standard machine learning research.
The paper you linked seems to me to instead assume that this holds true, and then apply that insight in the context of military strategy. Without speculating about the merits of that, it seems like a different thing which will appeal to different readers, and if it is important, it will be important for somewhat different reasons.