I have seen/heard from at least two sources something to the effect that MIRI/CFAR leadership (and Anna in particular) has very short AI timelines and a high probability of doom (and apparently high confidence in these beliefs). Here is the only public example that I can recall seeing. (Of the two examples I can specifically recall, this is not the better one, but the other was not posted publicly.) Is there any truth to these claims?


FWIW, I don't feel this way about timelines anymore. I've gotten a lot more pessimistic about such estimates; they seem to be mostly just noise.

Jalex Stark · 6mo

Yes, I agree that the space of things to be uncertain about is multidimensional. We project the uncertainty onto a one-dimensional space parameterized by "probability of <event> by <time>". It would be surprising for a sophisticated person to show a market of 49 @ 51 on this event. (Unpacking the jargon: showing this market means being willing to buy for 49, or sell for 51, a contract which is worth 100 if the hypothesis is true and 0 if it is false.)

(It's somewhat similar to saying that your 2-sigma confidence interval around the "true probability" of the event is 49 to 51. The market language can be interpreted with just decision theory, while the confidence-interval idea also requires some notion of statistics.)

My interpretation of the second-hand evidence about Shane Legg's opinion suggests that Shane would quote a market like 40 @ 60. (The only thing I know about Shane is that they apparently summarized their belief as 50% a number of years ago and haven't publicly changed their opinion since.)
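The two-sided quote described above can be made concrete with a short sketch. This is my own illustration, not from the comment; the function name and bid/ask framing are hypothetical.

```python
# Illustration (my own sketch) of "showing a market" on a binary
# contract that pays 100 if the event happens and 0 otherwise.
# Quoting "bid @ ask" means you stand ready to buy at `bid` and
# sell at `ask`.

def implied_interval(bid, ask):
    """Probability band implied by a two-sided quote on a 0/100
    binary contract: you'd only buy at `bid` if you think
    P(event) >= bid/100, and only sell at `ask` if you think
    P(event) <= ask/100."""
    return bid / 100, ask / 100

# A tight "49 @ 51" market pins your probability near 0.50:
print(implied_interval(49, 51))  # (0.49, 0.51)

# A wide "40 @ 60" market admits much more uncertainty
# around the same 50% midpoint:
print(implied_interval(40, 60))  # (0.4, 0.6)
```

The point of the comparison: both quoters might summarize their belief as "50%", but the width of the market they would show reveals very different levels of confidence.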
philh · 6mo

Perhaps I'm misinterpreting you, but I feel like this was intended as disagreement? If so, I'd appreciate clarification. It seems basically correct to me, and consistent with what I said previously. I still think that: if, in 2011, you gave 10% probability by 2018 and 50% by 2028; and if, in 2019, you still give 50% by 2028 (as an explicit estimate, i.e. you haven't just not-given an updated estimate); then this is surprising, even acknowledging that 50% is probably not very precise in either case.
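The surprise here can be checked with a quick calculation of my own, using the numbers in the comment: conditioning the 2011 estimates on the event not having occurred by 2018 says the by-2028 estimate should fall, so re-quoting 50% in 2019 implies an upward revision somewhere.

```python
# Check of the update described in the comment (the numbers are from
# the comment; the arithmetic is my own illustration).
# In 2011: 10% by 2018, 50% by 2028.

p_by_2018 = 0.10
p_by_2028 = 0.50

# P(by 2028 | not by 2018) = P(event in 2018-2028 window) / P(not by 2018)
p_2028_given_not_2018 = (p_by_2028 - p_by_2018) / (1 - p_by_2018)
print(round(p_2028_given_not_2018, 3))  # 0.444, noticeably below 0.50
```

So a pure "no news, just condition on survival to 2019" update would give roughly 44%, not 50%; still saying 50% means something else must have pushed the estimate back up.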

We run the Center for Applied Rationality, AMA

by AnnaSalamon · 1 min read · 19th Dec 2019 · 329 comments

112


CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).

Topics that may be interesting include (but are not limited to):

  • Why we think there should be a CFAR.
  • Whether we should change our name to be less general.
  • How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists" type workshops. Why we both do a lot of recruiting/education for AI alignment research and wouldn't be happy doing only that.
  • How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.

Several CFAR staff members will be answering questions, including: me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ from one another's!

(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)

[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]
