CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR;
- Whether we should change our name to be less general;
- How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists" type workshops; why we do a lot of recruiting/education for AI alignment research, but wouldn't be happy doing only that;
- How our curriculum has evolved; how it relates to and differs from the Less Wrong Sequences; where we hope to go with our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including: me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ from one another's!
(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]

Riceissa's question was brief, so I'll add a bunch of my thoughts on this topic.
I also remember there was something of a hush around the broader x-risk network on the topic of timelines, sometime around FLI's second AI conference. Since then I've received weird mixed signals about what people think, often delivered in hushed tones of being very worried or scared. The explicit content is of a similar type to Sam Altman's line "if you believe what I believe about the timeline to AGI and the effect it will have on the world, it is hard to spend a lot of mental cycles thinking about anything else" but is rarely accompanied by an explanation of the reasoning that led to that view.
I think that you can internalise models of science, progress, computation, ML, and geopolitics, and start to feel like "AGI being built" is part of your reality, your world-model, and then figure out what actions you want to take in the world. I've personally thought about it a bit and come to some of my own conclusions, and I've generally focused on plans designed for making sure AGI goes well. This is the important and difficult work of incorporating abstract, far ideas into your models of near-mode reality.
But it also seems to me that a number of x-risk people saw many of the leaders getting scared, and that is why they believe the timeline is short. This is how a herd turns around and runs in fear from an oncoming jaguar: most members of the herd don't stop to check for themselves, they trust that everyone else is running for good reason. More formally, it's known as an information cascade. This is often the rational thing to do when people you trust act as if something dangerous is coming at you; you don't stop and actually pay attention to the evidence yourself.
(I personally experience such herd behaviour commonly when using the train systems in the UK. When a train is cancelled and 50 people are waiting beside it to get on, I normally don't see the board that announces which platform the replacement train leaves from, as it's only visible to a few of the people, but very quickly all 50 people are moving to the new platform. I also see it when getting off a train at an unfamiliar station, where most people don't really know which way to walk to get out of the building: stepping off the train, is it left or right? The first few people tend to make a judgement, and basically everyone else follows them. I've sometimes done it myself: been the first off, started walking confidently in a direction, and had everyone start confidently following me. It always feels a little magical for a moment, because I know I just took a guess.)
But the unusual thing about our situation is that when you ask the leaders of the pack why they think a jaguar is coming, they're very secretive about it. In my experience many clued-in people will explicitly recommend not sharing information about timelines. I'm thinking about OpenPhil, OpenAI, MIRI, FHI, and so on. I don't think I've ever talked to people at CFAR about timelines.
To add more detail to my claim that it's considered 'the' decision-relevant variable by many, here are two quotes. Ray Arnold is a colleague and a friend of mine, and two years ago he wrote a good post on his general updates about such subjects, which said the following:
Qiaochu also talked about it as the decision-relevant question:
Ray talks in his post about how much of his belief on this topic comes from trusting another person closer to the action, which is a perfectly reasonable thing to do, though I'll point out again that it's also (if lots of people do it) herd behaviour. Qiaochu talks about how he never figured out the timeline to AGI with an explicit model, even though he takes short timelines very seriously, which also sounds like a process that involves trusting others a bunch.
It's okay to keep secrets, and in a number of cases it's of crucial importance. Much of Nick Bostrom's career is about how some information can be hazardous, and about how not all ideas are safe at our current level of wisdom. But it's important to note that "short timelines" is a particular idea that has had the herd turn around and run in fear to solve an urgent problem, while there have been a lot of explicit recommendations not to give people the information they'd need to make a good judgment about it. And those two things together are always worrying.
It's also very unusual for this community. We've been trying to make things go well with respect to AGI for over a decade, and until recently we've put all our reasoning out in the open. Eliezer and Bostrom published so much. And yet now this central decision node, "the decision-relevant variable", is hidden from the view of most people involved. It's quite strange, and generally the sort of thing that is at risk of abuse by whatever process is deciding what the 'ground truth' is. I don't believe the group of people being secretive about AI timelines have spent anywhere near as much time thinking about the downsides of secrecy, or put in the work to mitigate them. Of course I can't really tell, given the secrecy.
All that said, as you can see in the quotes/links that I and Robby provided elsewhere in this thread, I think Eliezer has made the greatest attempt of basically anyone to explain how he models timelines, and he wrote very explicitly about his updates after AlphaGo Zero. And the Fire Alarm post was really, really great. In my personal experience the things in the quotes above are fairly consistent with how Eliezer reasoned about timelines before the deep learning revolution.
I think a factor that is likely to be highly relevant is that companies like DeepMind face a natural incentive to obscure understanding of their progress and to be the sole arbiters of what is going to happen. I know that they're very careful about requiring all visitors to their offices to sign NDAs, and about requiring employees to get permission for any blog posts about AI they plan to publish on the internet. I'd guess a substantial amount of this effect comes from there, but I'm not sure.
Edit: I edited this comment a bunch of times because I initially wrote it quickly, and didn't quite like how it came out. Sorry if anyone was writing a reply. I'm not likely to edit it again.
Edit: I think it's likely I'll turn this into a top level post at some point.