Huh, thanks for the heads up. If you use an ad-blocker, try pausing that and refreshing. Meanwhile, I'll have someone look into it.
FYI, this is not what the word "corrigibility" means in this context. (Or, at least, it's not how we at MIRI have been using it, and it's not how Stuart Russell has been using it, and it's not a usage that I, as one of the people who originally brought that word into the AI alignment space, endorse...
> By your analogy, one of the main criticisms of doing MIRI-style AGI safety research now is that it's like 10th century Chinese philosophers doing Saturn V safety research based on what they knew about fire arrows.
This is a fairly common criticism, yeah. The point of the post is that MIRI-style AI...
Yes, precisely. (Transparency illusion strikes again! I had considered it obvious that the default outcome was "a few people are nudged slightly more towards becoming AI alignment researchers someday", and that the outcome of "actually cause at least one very talented person to become AI alignment r...
I don't claim that it developed skill and talent in all participants, nor even in the median participant. I do stand by my claim that it appears to have had drastic good effects on a few people, though, and that it led directly to MIRI hires, at least one of which would not have happened otherwise :-...
Thanks! :-p It's convenient to have the 2015 fundraisers end before 2015 ends, but we may well change how fundraisers work next year.