CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR;
- Whether we should change our name to be less general;
- How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists" type workshops; why we do a lot of recruiting/education for AI alignment research, and why we wouldn't be happy doing only that.
- How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (individual speech is often more interesting than institutional speech, and certainly easier to produce on the fly in a non-bureaucratic way), and we may give more than one answer to questions where our individual viewpoints differ from one another's!
(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

I've been developing a game. Mechanically, it's about developing accurate theories: generating theories, probing specimens, firing off experiments, figuring out where the theories go wrong, and refining them into fully general laws of nature reliable enough to produce perfect solutions to complex problem statements. This might make it sound complicated, but it does all of that with relatively few components. Here's a screenshot of the debug build of the game over a portion of the visual design scratchpad (ignore the bird thing, I was just doodling): https://makopool.com/fcfar.png
The rule/specimen/problem statement is the thing on the left; the experiments/solutions the player has tried are on the right. You can sort of see in the scratchpad that I'm planning to change how the rule is laid out, to make it more central and to make the tree structure as clear as possible (there's currently an animation that jiggles the branches in a way that I think makes the structure clear, but it doesn't look as good this way).
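The probe-theorize-refine loop described above can be sketched in miniature. Everything below is hypothetical; the real game's rules and representations aren't shown in the post. It's only meant to illustrate the mechanic of forming a plausible theory and hunting for the anomalous cases where it breaks:

```python
# Hypothetical sketch of the game's core loop: a hidden rule, experiments
# that probe it, and a player theory that looks right until it doesn't.

def hidden_rule(specimen):
    """The 'law of nature' the player must reverse-engineer (made up here)."""
    return sum(specimen) % 3 == 0

def run_experiment(specimen):
    """Fire off an experiment: observe how the hidden rule treats a specimen."""
    return hidden_rule(specimen)

def theory_v1(specimen):
    """A player's first theory: plausible given early data, but wrong."""
    return len(specimen) % 3 == 0

# Probe some specimens and collect the anomalies where theory and reality split.
probes = [(1, 2), (1, 1, 1), (2, 2, 2), (3,), (1, 2, 3)]
anomalies = [p for p in probes if theory_v1(p) != run_experiment(p)]
print(anomalies)  # the anomalous cases worth exploring further
```

The refinement step is then exactly what the post describes: stare at the anomalies, find the border between theory and reality, and generalize.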
It might turn out to be something like a teaching tool. It illuminates a part of cognition I think we're all very interested in: not just comprehension, but directed creative problem-solving, which it tests or trains (I would love to know which). It seems to reliably teach how frequently, and how inevitably, our right-seeming theories turn out to be wrong.
Playtesting it has been... kind of profound. I'll see a playtester develop a wrong theory, and I'll see directly that there was no other way it could have gone. They could not have simply chosen to reserve judgement and not be wrong. They came up with a theory that made sense given the data they'd seen, and they had to be wrong. It is now impossible for me to fall for it when I'm presented with assertions like "It's our best theory, and it's only wrong 16% of the time." To coin an idiom: you could easily hide the curvature of the earth behind an error rate that high. I know this because I've watched all of my smartest friends try their best to get at the truth and end up with something else instead.
The game will have to teach people to listen closely to anomalous cases and explore their borders until they find the final simple truth. People who aren't familiar with that kind of thinking tend to give up on the game very quickly; people who are familiar with it tend to find it very rewarding. It would be easy, and utterly pointless, for me to only try to reach the group who already knows most of what the game has to show them. I really, really hope I have the patience to struggle through figuring out how to reach the group who doesn't yet understand why the game is fun, instead. That could fail to happen; I've burned out before.
My question: what do you think of that, what do you think of The Witness, and do you have any suggestions for how I could figure out whether the game has the intended effects as a teaching tool?
I've heard of Zendo and I've been looking for someone to play Eleusis with for a while heh (maybe I'll be able to get the local EA group to do it one of these days).
Fun isn't a generic substance. Fun is subjective. A person's sense of fun is informed by something. If you've internalised the rationalist ethos, if your gut trusts your mind, if you know deeply that rationality is useful and that trai... (read more)