Most concerns about AI tend to boil down to:

  1. Loss of control to AI systems - What if AI were smarter than us and took over the world?
  2. Concentration of power - What if AI gave too much power to someone bad?

I'm surprised I haven't heard consideration of a third, more basic risk.

Will this technology be good?

Suppose you're in the Southern United States in 1793 and you believe that an important moral question—perhaps the most important moral question—is the labor ethics of how cotton is processed. I suspect I don't need to go into detail about why you might think this.

Proceeding directly from this belief, an obvious good suggests itself: what if a machine could process cotton instead of people? Then at least the people who processed cotton wouldn't have to anymore.[1]

Imagine you work hard and, through grit and luck, it works! 

What is your confidence interval on how much better this makes life?

I hope your interval included negative numbers. For some reason, I never hear negative numbers for how much better life would be if AI could do lots of jobs for us.

This is in fact exactly what happened. Eli Whitney hoped to reduce enslaved labor by creating a replacement machine, but it worked backward: by making processing cheap, the gin made cotton growing far more profitable, and slavery expanded.[2]

Isn't this "concentration of power"?

No:

  1. The gains from the gin were no more centralized than the previous production surplus.
  2. The problem wasn't the gin's effect on processing. It was the indirect effect on cotton growing, and the centralization of cotton growing didn't change at all.
  3. The harm would have occurred regardless of centralization. The problem wasn't centralization; it was more cotton growing.

What can AI do that is bad?

I don't have clear answers. But the cotton gin should be enough of a cautionary tale about economics and unintended consequences.

A start is "tricking people". The central motive of all economic activity is to modify human behavior, usually involving a step where someone gives you money. Training a net costs money. How will you make that money back? The net will help you change people's behavior so that they give you their money. Not every way to change someone's behavior is good.

Another angle is "social change". If there's an invention that makes turning dirty water into clean water cheaper, it can change the math of economic activity, and that can bubble up into indirect social changes. AI is more direct. Its main successes have been text and images: abstract goods whose sole purpose is to feed directly into people's brains. It can change how people make decisions, directly, and in fact it already does, in ads.

You've probably already thought hard about applications, and whether they'll be good or bad. But a meta point is: AI applications tend to go straight through people's brains more than most innovations in, say, physics. And inventions that change people's minds are the scariest, the most volatile, and the most likely to have unexpected effects.

  1. ^

    AI analog: what if a machine could do lots of different jobs now done by people?

  2. ^

    https://en.wikipedia.org/wiki/Eli_Whitney#:~:text=Whitney%20believed%20that%20his%20cotton,the%20end%20of%20southern%20slavery.


3 comments:

I think I already knew the point this post was making, but I actually hadn't known the story of the Cotton Gin specifically being aimed at helping alleviate slavery. (Or maybe I had but I'd forgotten). It's a pretty compelling example.

(It seemed like the post was deliberately avoiding going into the object level of that example, maybe because it was kinda political, but fwiw I think the post would have been better if it had one sentence spelling out the outcome)

I hope your interval included negative numbers. For some reason, I never hear negative numbers for how much better life would be if AI could do lots of jobs for us.

Fwiw, concern about unemployment is AFAICT the primary thing I hear most people worried about AI talking about, and my impression is that on LessWrong this is less common, or gets a lot of pushback, because the lay people who normally make these kinds of arguments are thinking about them through a fairly simplistic lens, and/or missing the bigger picture of "AI might just kill everyone before unemployment even becomes a major issue."

I'm not sure there's a single analogous bad thing, like unemployment. I think the bigger point is: it's scary, AI especially is volatile, and it's very unclear in retrospect whether technologies are good, for many reasons other than "they centralize power".

A more direct analogy might be: suppose AI does what people hope it does. What happens next? It's unfair to say about the cotton gin, "Imagine the manual labor were replaced with a machine," and stop there. Specifically, prices will move and people will respond to those price changes. More generally, the environment will change, and people will adapt their behavior to those changes.

It's not clear there aren't general principles that can be drawn. For example, any technology that makes it easier to turn dirty water into clean water will, first order, cause there to be more clean water. Second order, it will probably cause more areas of land to be settled. We aren't sure about all the complex unforeseen consequences, but this seems like a good general rule of thumb. More land settled generally means more people and economic activity.

So...one possible scenario would be: "all intellectual tasks requiring long education times and a talent for abstract reasoning have been taken over by the AI, thus allowing the creation of a perfect social system, and humans are redirected completely from those tasks"

Here is a degenerate scenario:

"Humans who engage in abstract reasoning are often the cause of rebellions at worst, and of technological revolutions that require social changes at best. Our social system is perfect, but sometimes fragile; therefore, the humans who can do independent intellectual tasks and abstract reasoning are superfluous at best and harmful at worst"

"The leader has a specific amount of education, no one would dare call him ignorant, and he is certainly not superfluous; therefore, the amount of education he received is the perfect amount. Kill anyone who has had more, and anyone who seeks more than him, because obviously, they intend revolution against our perfect system"

Equatorial Guinea didn't need the AI justification to reach the third scenario in the post-colonial era--and the election that put that leader into power was democratic.

https://en.m.wikipedia.org/wiki/Francisco_Macías_Nguema
