I've been on the lookout for new jobs recently, and one thing I've noticed is that the market seems flooded with ads for AI-related positions. I don't mean work on building models (or aligning them, alas), but rather work on building applications that use generative AI or other recent advances to make new software products. My impression is, first, that there's probably something of a bubble, because I doubt many of these ideas can deliver on their promises, especially given how heavily they rely on still fairly unreliable LLMs. And second, that while the jobs are well paid and sound fun, I'm not sure how I feel about them. They all essentially aim at automating away other jobs, one way or another. Whether that is a good thing depends on what else happens around it, and on the specific job and the quality of the work - a good automated GP for diagnosis would probably do a lot of good, but a rushed one might be net negative, and automating creative work is IMO just the wrong road to go down in general if we want good AI futures.

What are your intuitions about this? Which kinds of AI jobs do you think have the most potential for overall positive or negative value to society?


Chris_Leong


Developing skills related to AI puts you in a better position to make AI go well. At least for me, this outweighs the other concerns that you've mentioned.

Note: This doesn't mean that you should take a job that advances fundamental AI capabilities. That would probably be net-negative, as things are already moving far too fast for society to adapt. But it sounds like you're mostly considering jobs related to AI applications, so I'd say go for it.

Jay Bailey


I think that there are two questions one could ask here:

  • Is this job bad for x-risk reasons? I would say the answer is "probably not" - if you're not pushing the frontier but only commercialising already-available technology, your contribution to x-risk is negligible at worst. Maybe you're very slightly adding to the generative AI hype, but that ship has somewhat sailed at this point.

  • Is this job bad for other reasons? That seems like something you'd have to answer for yourself based on the particulars of the job. It also involves some philosophical/political priors that are probably pretty specific to you. Like: is automating away jobs good most of the time? Argument for yes - it frees people up to do other work and increases what society can accomplish overall. Argument for no - it takes away people's jobs, disrupts lives, and some people can't adapt to the change.

I'll avoid giving my personal answer to the above, since I don't want to bias you. I think you should ask how you feel about this category of thing in general, and then decide how picky to be about these AI jobs based on that. If they're mostly good, you can just avoid particularly scummy fields and otherwise go for it. If they're mostly bad, you shouldn't take one unless there's a particularly ethical area you can contribute to.

dr_s

I suppose I'm mostly also looking for aspects of this I might have overlooked, or an inside perspective on the details from someone with relevant experience. I tend to err a bit on the side of caution, but ultimately I believe that "staying pure" is rarely a road to doing good (at most it's a road to not doing bad, which is relatively easy if you just do nothing at all). Some of the problems with automation would have applied to many of the previous rounds of it, and those ultimately came out mostly good, I think - but it also somehow feels like This Time It's Different (then again, I do tend to skew towards pessimism and seeing all the possible ways things can go wrong...).

Jay Bailey
I guess my way of thinking of it is: you can automate tasks, jobs, or people.

Automating tasks seems probably good. You remove busywork from people, but their job is comprised of many more things than that task, so people aren't at risk of losing their jobs. (Unless you only need 10 units of productivity and each person is now producing 1.25 units, so you end up with 8 people instead of 10 - but a lot of teams could also put 12.5 units of productivity to good use.)

Automating jobs is... contentious. It's basically the tradeoff I talked about above.

Automating people is bad right now. Not only are you eliminating someone's job, you're eliminating most other things this person could do at all. This person has had society pass them by, and I think we should either not do that or make sure this person still has sufficient resources and social value to thrive in society despite being automated out of an economic position. (If I were confident society would do this, I might change my tune about automating people.)

So I would ask myself: what type of automation am I doing? Am I removing busywork, replacing jobs entirely, or replacing entire skillsets? (Note: you are probably not doing the last one. Very few, if any, are - the tech does not seem to be there yet. But maybe the company is setting itself up to do so as soon as it is, or something.) And once you figure out what type you're doing, you can ask how you feel about that.
dr_s
A fair point. I suppose part of my doubt, though, is exactly that: are most of these applications going to automate jobs, or merely tasks? And to what extent does contributing to either advance the know-how that might eventually help automate people?
Jay Bailey
I don't know about the first one - I think you'll have to analyse each job and decide that for yourself. I suspect the answer to your second question is "basically nil". Unless you are working on state-of-the-art advances in A) frontier models or B) maybe agent scaffolds, you are not speeding up the knowledge required to automate people.

Dagon


It's definitely overhyped. I hesitate to call it a bubble - it's more like the normal software business model with a new cover. Tons of projects and startups with pretty tenuous business models and improbable grand visions, most of which will peter out after a few years. But that has been going on for decades, and will likely continue until true AI makes it all irrelevant.

Most of these jobs are less interesting and less impactful than they claim, which makes the ethical considerations far less important. My advice is to focus on the day-to-day experience and what you can learn there. Pick one where you like the people and get to actually build something, rather than just rearranging existing crap.

dr_s

it's more like the normal software business model with a new cover

True enough, though it's also the fact that these projects seem to have almost entirely displaced everything else that makes me suspect we're close to a bubble regime. VCs are just throwing money at anything that involves AI.

Most of these jobs are less interesting, and less impactful than they claim.

Well, I mean, they could be somewhat impactful in expectation. One out of a hundred might become big, and you don't know which (in fact, I suspect 1% would be a good success rate...).