Stefan_Schubert

Comments

Another way to frame this, then, is that "For any choice of AI difficulty, faster pre-takeoff growth rates imply shorter timelines."

I agree. Notably, that sounds more like a conceptual and almost trivial claim.

I think that the original claims sound deeper than they are because they slide between a true but trivial interpretation and a non-trivial interpretation that may not be generally true.

Thanks.

My argument involved scenarios with fast take-off and short timelines. There is a clarificatory part of the post that discusses the converse case, of a gradual take-off and long timelines:

Is it inconsistent, then, to think both that take-off will be gradual and timelines will be long? No – people who hold this view probably do so because they think that marginal improvements in AI capabilities are hard. This belief implies both a gradual take-off and long timelines.

Maybe a related clarification could be made about the fast take-off/short timeline combination.

However, this claim also confuses me a bit:

No – people who hold this view probably do so because they think that marginal improvements in AI capabilities are hard. This belief implies both a gradual take-off and long timelines.

The main claim in the post is that gradual take-off implies shorter time-lines. But here the author seems to say that according to the view "that marginal improvements in AI capabilities are hard", gradual take-off and longer timelines correlate. And the author seems to suggest that that's a plausible view (though empirically it may be false). I'm not quite sure how to interpret this combination of claims.

For every choice of AGI difficulty, conditioning on gradual take-off implies shorter timelines.

What would you say about the following argument?

  • Suppose that we get AGI tomorrow because of a fast take-off. If so, timelines will be extremely short.
  • If we instead suppose that take-off will be gradual, then it seems impossible for timelines to be that short.
  • So in this scenario (this choice of AGI difficulty), conditioning on gradual take-off doesn't seem to imply shorter timelines.
  • So that's a counterexample to the claim that for every choice of AGI difficulty, conditioning on gradual take-off implies shorter timelines.

I'm not sure whether it does justice to your reasoning, but if so, I'd be interested to learn where it goes wrong.
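To make the contrast between the two readings concrete, here is a toy numerical sketch. This is my own construction, not something from the original post or the comments above: it simply treats AGI difficulty as a fixed capability threshold D and take-off speed as a growth rate g, so that the timeline is the time at which the trajectory g·t reaches D. All names and numbers are illustrative assumptions.

    # Toy model (illustrative only): fixed difficulty threshold D, linear capability
    # growth at rate g, timeline = time at which g * t first reaches D.
    def timeline(difficulty_d: float, growth_rate_g: float) -> float:
        """Years until a capability trajectory growing at rate g reaches threshold D."""
        return difficulty_d / growth_rate_g

    fast_g, gradual_g = 10.0, 1.0  # arbitrary growth rates standing in for take-off speeds

    for difficulty in (1.0, 5.0, 20.0):  # arbitrary difficulty levels
        print(f"D={difficulty:>4}: fast take-off -> {timeline(difficulty, fast_g):.2f} yrs, "
              f"gradual take-off -> {timeline(difficulty, gradual_g):.2f} yrs")

On this mechanical reading, every fixed D yields an earlier AGI date under fast take-off, which is the trivially true interpretation; the counterexample above picks a D so low that AGI arrives "tomorrow" under fast take-off, and conditioning on gradual take-off at that same D cannot give a shorter timeline.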

Holden Karnofsky defends this view in his latest blog post.

I think it’s too quick to think of technological unemployment as the next problem we’ll be dealing with, and wilder issues as being much further down the line. By the time (or even before) we have AI that can truly replace every facet of what low-skill humans do, the “wild sci-fi” AI impacts could be the bigger concern.

A related view is that less advanced/narrower AI will be able to do a fair number of tasks, but not enough to create widespread technological unemployment until very late, when very advanced AI quite quickly causes lots of people to become unemployed.

One consideration is how long it will take for people to actually start using new AI systems (it tends to take some time for new technologies to be widely adopted). I think some have speculated that this time lag may shrink as AI becomes more advanced (as AI becomes involved in the deployment of other AI systems).

Scott Alexander has written an in-depth reply to Hreha's article:

The article itself mostly just urges behavioral economists to do better, which is always good advice for everyone. But as usual, it’s the inflammatory title that’s gone viral. I think a strong interpretation of behavioral economics as dead or debunked is unjustified.

See also Alex Imas's and Chris Blattman's criticisms of Hreha (on Twitter).

I think that though there's been a welcome surge of interest in conceptual engineering in recent years, the basic idea has been around for quite some time (though under different names). In particular, Carnap argued already in the 1940s and 1950s that we should "explicate" rather than "analyse" concepts. In other words, we shouldn't just try to explain the meaning of pre-existing concepts, but should develop new and more useful concepts that partially replace the old ones.

Carnap’s understanding of explication was influenced by Karl Menger’s conception of the methodological role of definitions in mathematics, exemplified by Menger’s own explicative definition of dimension in topology.
...
Explication in Carnap’s sense is the replacement of a somewhat unclear and inexact concept C, the explicandum, by a new, clearer, and more exact concept, the explicatum.

See also Logical Foundations of Probability, pp. 3-20.

Potentially relevant new paper:

The logic of universalization guides moral judgment
To explain why an action is wrong, we sometimes say: “What if everybody did that?” In other words, even if a single person’s behavior is harmless, that behavior may be wrong if it would be harmful once universalized. We formalize the process of universalization in a computational model, test its quantitative predictions in studies of human moral judgment, and distinguish it from alternative models. We show that adults spontaneously make moral judgments consistent with the logic of universalization, and report comparable patterns of judgment in children. We conclude that alongside other well-characterized mechanisms of moral judgment, such as outcome-based and rule-based thinking, the logic of universalizing holds an important place in our moral minds.

A new paper may give some support to arguments in this post:

The smart intuitor: Cognitive capacity predicts intuitive rather than deliberate thinking
Cognitive capacity is commonly assumed to predict performance in classic reasoning tasks because people higher in cognitive capacity are believed to be better at deliberately correcting biasing erroneous intuitions. However, recent findings suggest that there can also be a positive correlation between cognitive capacity and correct intuitive thinking. Here we present results from 2 studies that directly contrasted whether cognitive capacity is more predictive of having correct intuitions or successful deliberate correction of an incorrect intuition. We used a two-response paradigm in which people were required to give a fast intuitive response under time pressure and cognitive load and afterwards were given the time to deliberate. We used a direction-of-change analysis to check whether correct responses were generated intuitively or whether they resulted from deliberate correction (i.e., an initial incorrect-to-correct final response change). Results showed that although cognitive capacity was associated with the correction tendency (overall r = .13) it primarily predicted correct intuitive responding (overall r = .42). These findings force us to rethink the nature of sound reasoning and the role of cognitive capacity in reasoning. Rather than being good at deliberately correcting erroneous intuitions, smart reasoners simply seem to have more accurate intuitions.