Like priests, for example. It's simply not supposed to happen. What else?


nim

Jul 15, 2023


Politicians do not appear to expect their roles to be superseded by better automation.

Great, thank you.

Dagon

Jul 15, 2023


If you look at jobs that have not changed in the modern age (say, since 1910), you'll get a good idea. And there are none. Priests are a good example of a job which still kind of exists, but is very different both in the fraction of the population doing it and in its day-to-day activities.

There are LOTS of similar jobs, where face-to-face human contact is a large part of the value, or human accountability to other humans (someone to yell at if something goes wrong). Literally NONE of those jobs were unaffected by the internet, and none will be unchanged by AI, over time.

One key reframing is to realize that "make more efficient" IS THE SAME THING as "steal jobs". Whenever a robot makes someone's work easier, faster, or otherwise better, it means the world needs fewer of those workers for a given level of output.

9 comments

Thank you, I stand corrected. What other occupations would you think are "safe" in the public's mind?

Rent-seekers.

(Or, if you like: capitalists.)

Anyone who’s perceived as making money not by doing some work (however intellectual, emotional, physical, or otherwise “human” in any way you like), but by virtue of property ownership (where the “property” may be capital in the traditional sense—real estate, factories, etc.—or it may be the rights to extract rents for various things, e.g. copyrights and other IP). It’s hard to see how AI can replace such people[1], and there’s a widespread perception that even after all jobs are done by computers, the people who own things and are thereby able to extract rents will still be able to do so.


  1. Though it may make them irrelevant. For example, if everyone has access to AI that can generate art / entertainment on demand, then the copyright to popular franchises is worthless. But people do not typically think of such things, in my experience (for various reasons). ↩︎

Thank you (and also I have never really thought about this, so if you have more to say, please do.)

mishka

Like priests

:-) This might depend on denomination (and on whether robots are perceived to be conscious).

Cf., for example, Good News from the Vatican

There are some sources that put a substantial dent in your example. A recent one that's relevant to current AI trends is an experimental GPT-backed church service (article via Ars Technica) in Germany as part of a convention of Protestants, but some years ago there was already Mindar, a robot Buddhist preacher (article via CNN) in Japan, via a collaboration between the Kodaiji temple and a robotics professor at Osaka University.

I don't mean that this is not happening. I mean that nobody (whom I have read) views this as something to be concerned about.

I think it would be helpful to have more context around the initial question, then. Do I infer correctly that your model of the phenomenon under discussion includes something like “there exists broad media consensus around which jobs literally can ever versus literally cannot ever be done (in a sufficiently competent and socially accepted way) by machines[1]”? Because I'm not sure such a consensus exists meaningfully enough to draw boundary conclusions from, especially because to the extent that there is one, the recent spate of developments seems to have blown up its stability for a while yet. It would've made more sense to ask what jobs machines can never do twenty years ago, in which case fields like writing and visual art would have been popular examples—examples where we now have clear economic-displacement unrest.

As for the specific example, the “I'm terrified!” quote by Rabbi Joshua Franklin in this other CNN article about a GPT-generated sermon seems like it's in the general vein of “machines will steal jobs”. I'm not sure whether the intent of your question would consider this truer counterevidence to the first example (perhaps because it's an article published by a major mass media organization), cherrypicking (perhaps because it wasn't positioned in such a way as to get into the popular mindset the same way as some other jobs—I don't know whether this is true), or irrelevant (perhaps because you're pointing at something other than the trend of mass media articles that I associate with the “machines will steal jobs” meme).

There's also a separate underlying filter here regarding which jobs are salient in various popular consciousnesses in the first place, and I'm not sure how that's meant to interact with the question either…


  1. I'm using “machines” rather than “robots” here primarily because I think pure software replacements are part of the same discursive trends and are substantially similar in the “human vs automation” tradeoff ways despite physical differences and related resource differences. ↩︎

No, that other counterexample is fine.

And yes, I am more interested in the separate underlying filter than in what the machines can actually do—that is, in what people consider as something that actually matters, as opposed to stuff like high-precision surgery or image manipulation or whatever. But this doesn't seem well-defined, so I'd rather try narrower indirect questions.