I've been working on a thing with Paul Christiano that might interest some of you: the AI Impacts project.

The basic idea is to apply the evidence and arguments that are kicking around in the world and in various disconnected discussions to the big questions regarding a future with AI. For instance, questions like these:

  • What should we believe about timelines for AI development?
  • How rapid is AI development likely to be near human-level?
  • How much advance notice should we expect to have of disruptive change?
  • What are the likely economic impacts of human-level AI?
  • Which paths to AI should be considered plausible or likely?
  • Will human-level AI tend to pursue particular goals, and if so what kinds of goals?
  • Can we say anything meaningful about the impact of contemporary choices on long-term outcomes?

For example, we have recently investigated technology's general proclivity for abrupt progress, surveyed existing AI surveys, and examined the evidence from chess and other applications regarding how much smarter Einstein is than an intellectually disabled person, among other things.

Some more on our motives and strategy, from our about page:

Today, public discussion on these issues appears to be highly fragmented and of limited credibility. More credible and clearly communicated views on these issues might help improve estimates of the social returns to AI investment, identify neglected research areas, improve policy, or productively channel public interest in AI.

The goal of the project is to clearly present and organize the considerations which inform contemporary views on these and related issues, to identify and explore disagreements, and to assemble whatever empirical evidence is relevant.

The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence. These posts are intended to be continuously revised in light of outstanding disagreements and to make explicit reference to those disagreements.

In the medium run we'd like to provide a good reference on issues relating to the consequences of AI, as well as to improve the state of understanding of these topics. At present, the site addresses only a small fraction of the questions one might be interested in, so it is only suitable for particularly risk-tolerant or topic-neutral reference consumers. However, if you are interested in hearing about (and discussing) such research as it unfolds, you may enjoy our blog.

If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form.

Crossposted from my blog.


17 comments

The first link, AI Impacts, is broken.

Thanks, fixed.

Thanks, I think there is a lot of instrumental value in collecting high-quality answers to these questions. I look forward to reading more, and to pointing people to this site.

Looking at the very bottom of the AI Impacts home page, the disclaimer looks rather unfriendly.

I'd suggest petitioning to change it to the LessWrong variety.

Here is the text: To the extent possible under law, the person who associated CC0 with AI Impacts has waived all copyright and related or neighboring rights to AI Impacts. This work is published from: United States.

What do you mean by 'unfriendly'?

If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form.

My comment is intended as helpful feedback. If it is not helpful I'd be happy to delete it.

Your original feedback seems helpful but your follow-up doesn't. You could have said "I don't know" or "I have nothing further to add on that point".

I mean unfriendly in the ordinary sense of the word. Maybe uninviting would be as good.

Perhaps a careful reading of that disclaimer would be friendly or neutral - I don't know. My quick reading of it was: by interacting with AI Impacts you could be waiving some sort of right. To be honest, I don't know what CC0 is.

I have nothing further to add to this.

Ah, I see. Thanks. We just meant that Paul and I are waiving our own rights to the content - it's like Wikipedia in the sense that other people are welcome to use the content. We should perhaps make that clearer.


"Waived all"? You mean "assigned all", right?

I don't think so—the copyright rights to AI Impacts are waived, in the sense that we don't have them.


The text I was questioning (see above) would have the contributor waive copyright without assigning it, which ends up placing the contributed work in the public domain. If that is the intention I find it a little surprising.

Yes, it's in the public domain.


Cool, so I can take all the content of the site and re-purpose it as I see fit, including changing attributions or using it in derivative work without attribution. That's what you had in mind, right?

Yes. What is the problematic case?

These are very important questions that are coming up as points of disagreement between FLI and some AI researchers (triggered by recent announcements). Interested in knowing if you are collaborating with FLI in some form.

We haven't been so far.