Yeah, I think so. But since those people generally find AI less important (there's both less of an upside and less of a downside) they generally participate less in the debate. Hence there's a bit of a selection effect hiding those people.
There are some people who arguably are in that corner who do participate in the debate, though - e.g. Robin Hanson. (He thinks some sort of AI will eventually be enormously important, but that the near-term effects, while significant, will not be at the level people on the right side think).
Looking at the 2x2 I posted, I wonder if you could call the lower left corner something relating to "non-existential risks". That seems to capture their views. It might be hard to come up with a catchy term, though.
The upper left corner could maybe be called "sceptics".
Not exactly what you're asking for, but maybe a 2x2 could be food for thought.
Realist and pragmatist don't seem like the best choices of terms, since they pre-judge the issue a bit in the direction of that view.
I think psychologists-scientists should have unusually good imaginations about the potential inner workings of other minds, which many ML engineers probably lack.
That's not clear to me, given that AI systems are so unlike human minds.
Tell your fellow psychologist (or zoopsychologist) about this; maybe they will be incentivised to make a switch and do some ground-laying work in the field of AI psychology.
Do you believe that (conventional) psychologists would be especially good at what you call AI psychology, and if so, why? I guess other skills (e.g. knowledge of AI systems) could be important.
I think that's exactly right.
I think that could be valuable.
It might be worth testing quite carefully for robustness - asking multiple different questions probing the same issue, and seeing whether the responses converge. My sense is that people's stated opinions about risks from artificial intelligence, and existential risks more generally, could vary substantially depending on framing. Most people haven't thought a lot about these issues, which likely contributes to that. I think a problem with some studies on these issues is that researchers over-generalise from highly framing-dependent survey responses.
I wrote an extended comment in a blog post.
Summing up, I disagree with Hobbhahn on three points.

1. I think the public would be more worried about harm that AI systems cause than he assumes.
2. I think that economic incentives aren't quite as powerful as he thinks they are, and I think that governments are relatively stronger than he thinks.
3. He argues that governments' response will be very misdirected, and I don't quite buy his arguments.

Note that 1 and 2/3 seem quite different: 1 is about how much people will worry about AI harms, whereas 2 and 3 are about the relative power of companies/economic incentives and governments, and about government competency. It's notable that Hobbhahn is more pessimistic on both of those relatively independent axes.
Another way to frame this, then, is that "For any choice of AI difficulty, faster pre-takeoff growth rates imply shorter timelines."
I agree. Notably, framed that way it sounds more like a conceptual, almost trivial claim.
I think that the original claims sound deeper than they are because they slide between a true but trivial interpretation and a non-trivial interpretation that may not be generally true.
My argument involved scenarios with fast take-off and short timelines. There is a clarificatory part of the post that discusses the converse case, of a gradual take-off and long timelines:
Is it inconsistent, then, to think both that take-off will be gradual and timelines will be long? No – people who hold this view probably do so because they think that marginal improvements in AI capabilities are hard. This belief implies both a gradual take-off and long timelines.
Maybe a related clarification could be made about the fast take-off/short timelines combination.
However, this claim also confuses me a bit:
No – people who hold this view probably do so because they think that marginal improvements in AI capabilities are hard. This belief implies both a gradual take-off and long timelines.
The main claim in the post is that gradual take-off implies shorter timelines. But here the author seems to say that, according to the view "that marginal improvements in AI capabilities are hard", gradual take-off and longer timelines go together. And the author seems to suggest that that's a plausible view (though empirically it may be false). I'm not quite sure how to interpret this combination of claims.