Tom Davidson from Forethought Research and I have a new paper responding to some recent skeptical takes on the Singularity Hypothesis (e.g. this one). Roughly half the paper is philosophical and the other half is empirical. Both halves argue that we should take the Singularity Hypothesis more seriously than many...
I wanted to share a new paper from the special issue on AI safety that I'm editing, which takes up the influential idea that evolutionary theory gives us some reason to think that the project of value alignment is bound to fail and (in my opinion) shows that it has...
Just wanted to share a new paper on AI consciousness with Simon Goldstein that members of this community might be interested in. Here's the abstract: It is generally assumed that existing artificial systems are not phenomenally conscious, and that the construction of phenomenally conscious artificial systems would require significant technological...
As some of you may know, I'm editing a special issue of the journal Philosophical Studies on AI safety (along with @Dan H). I thought I'd share the first paper from the issue, which deals with some issues in AI safety theory that have been frequently discussed on LessWrong....
This post was written by Simon Goldstein, associate professor at the Dianoia Institute of Philosophy at ACU, and Cameron Domenico Kirk-Giannini, assistant professor at Rutgers University, for submission to the Open Philanthropy AI Worldviews Contest. Both authors are currently Philosophy Fellows at the Center for AI Safety. Abstract: Recent advances...
This is a draft written by Cameron Domenico Kirk-Giannini, assistant professor at Rutgers University, and Simon Goldstein, associate professor at the Dianoia Institute of Philosophy at ACU, as part of a series of papers marking the midpoint of the Center for AI Safety Philosophy Fellowship. Dan helped post to the Alignment Forum....