CFAR’s new focus, and AI Safety

This post makes me very happy. It emphasizes points I wanted to discuss here a while ago (e.g., collective thinking and the change of focus) but didn't have the confidence to.

In my opinion, we should devote more time to hypothesis testing on both individual and collective rationality. Many suggestions for improving individual rationality have been advanced on LW; the problem is that we don't know how effective these techniques are. Would it be possible to test them at CFAR or at LW meetups? I've seen posts about rationality drugs, to take one example, and even though some people shared their experiences, conducting an actual study would let us collect data while avoiding experimental asymmetries and response bias. One obvious remaining bias is selection bias, but since our goal is to produce results relevant to this community, the next problem is picking individuals roughly representative of the whole community. (Rationality drugs themselves are beside the point here.) More importantly, does this general idea seem stupid to you? Is it one of those "good in theory, bad in practice" ideas?

I am very interested in collective problem-solving, but I haven't found insightful resources on it. Do you know of any?

P.S.: English is not my mother tongue; don't be surprised if this post is imprecise or contains grammatical mistakes.