Tom Davidson from Forethought Research and I have a new paper responding to some recent skeptical takes on the Singularity Hypothesis (e.g. this one). Roughly half the paper is philosophical and the other half is empirical; both halves argue that the Singularity Hypothesis deserves to be taken more seriously than it has been of late. I'm sharing it here because (i) I think it will be of general interest, and (ii) the project is still in progress and I would appreciate feedback from the community.
Here's the abstract:
The singularity hypothesis posits a period of rapid technological progress following the point at which AI systems become able to contribute to AI research. Recent philosophical criticisms of the singularity hypothesis offer a range of theoretical and empirical arguments against the possibility or likelihood of such a period of rapid progress. We explore two strategies for defending the singularity hypothesis from these criticisms. First, we distinguish between weak and strong versions of the singularity hypothesis and show that, while the weak version is nearly as worrisome as the strong version from the perspective of AI safety, the arguments for it are considerably more forceful and the objections to it are significantly less compelling. Second, we discuss empirical evidence that points to the plausibility of strong growth assumptions for progress in machine learning and develop a novel mathematical model of the conditions under which strong growth can be expected to occur. We conclude that the singularity hypothesis in both its weak and strong forms continues to demand serious attention in discussions of the future dynamics of growth in AI capabilities.
Note that this is an academic paper, and the goal is to have it published in a journal. So, for example, our ability to go into detail on some topics is limited by space constraints, and we can't introduce arguments that rest on priors of our own invention rather than published empirical evidence.
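To give a rough sense of the formal shape that "strong growth" claims take, here is a minimal textbook-style sketch, assuming the standard hyperbolic-growth formulation from the literature on accelerating returns. This is illustrative only, not the model developed in the paper, and the symbols $C$, $k$, and $\delta$ are my notation for this sketch rather than the paper's:

```latex
% Illustrative only: a standard hyperbolic-growth model, NOT the model from the paper.
% C(t): AI capability; k > 0: a productivity constant; delta: feedback strength.
% Suppose capability improvements feed back into the rate of further progress:
\[
  \frac{dC}{dt} = k\,C^{1+\delta}, \qquad k > 0.
\]
% Solving by separation of variables gives
\[
  C(t) = \bigl(C_0^{-\delta} - \delta k t\bigr)^{-1/\delta},
\]
% which for delta > 0 diverges in finite time at
\[
  t^{*} = \frac{1}{\delta k\, C_0^{\delta}}
\]
% (a "singularity" in the mathematical sense), while delta = 0 yields ordinary
% exponential growth and delta < 0 yields sub-exponential growth. On this framing,
% a "strong growth" assumption amounts to delta > 0: returns to capability that
% more than offset the increasing difficulty of further progress.
```

Of course, the divergence at $t^{*}$ is a property of the idealized model; in practice physical limits would bind first, which is one reason the distinction between weaker and stronger readings of the hypothesis is worth drawing carefully.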