There are fast takeoff scenarios, but there are also scenarios where AI slowly takes more and more power in society until other stakeholders lack the power to move against AI agents. If we want to prepare for such a world, it might be important to reduce the power that AI holds within organizations and increase the power that humans have within existing organizations.

Amazon gets criticized for firing people based on algorithmic decisions without a human making the call. Would it be valuable to pick this fight and argue for human decision-making when it comes to allocating power (which is what hiring decisions are ultimately about)?


2 Answers

Probably not a useful canary for AI takeover. Coarse-grained hiring decisions are visible enough that humans in the companies are going to watch very closely, and not permanently cede power. It will simply never happen that senior or executive employees are automatically dismissed (well, at least until the AI controls the board of directors and the CxO positions).

The worrisome uses of AI are more impactful and less visible: investment decisions, supplier selection, customer lockouts (based on "suspicious" activity that isn't prosecutable fraud), etc.

Amazon using an (unknown, secret) algorithm to hire or fire Flex drivers is not an instance of "AI", not even in the buzzword sense of AI = ML. For all we know it's doing something trivially simple, like combining a few measured properties (how often they're on time, etc.) with a few manually assigned weights and thresholds. Even if it's using ML, it's far more likely to be a bog-standard Random Forest model trained on 100k rows with no tuning than a scary powerful language model with a runaway growth trend.
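To make the point concrete, here is a minimal sketch of the kind of "trivially simple" logic described above: a few measured properties combined with manually assigned weights and a threshold. Every feature name, weight, and cutoff here is invented for illustration; nothing is known about Amazon's actual system.

```python
# Hypothetical weighted-score deactivation rule. All names, weights,
# and thresholds below are invented for illustration only.

def driver_score(on_time_rate, completion_rate, customer_rating):
    """Combine a few metrics into one score with fixed manual weights."""
    weights = {"on_time": 0.5, "completion": 0.3, "rating": 0.2}
    return (weights["on_time"] * on_time_rate
            + weights["completion"] * completion_rate
            + weights["rating"] * (customer_rating / 5.0))  # rating on a 0-5 scale

DEACTIVATION_THRESHOLD = 0.7  # invented cutoff

def should_deactivate(on_time_rate, completion_rate, customer_rating):
    """Flag a driver whose combined score falls below the threshold."""
    return driver_score(on_time_rate, completion_rate, customer_rating) < DEACTIVATION_THRESHOLD
```

A system like this involves no learning at all, yet from the outside it is indistinguishable from "an algorithm firing people", which is part of why the case makes a poor proxy for AI risk.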

Even if some laws are passed about this, they'd likely expand in the directions of "Bezos is literally an evil overlord [a quote from the linked article], our readers/voters love to hate him, we should hurt him some more", and "we already have laws establishing protected characteristics in hiring/firing/housing/etc.; if black-box ML models can't prove they're not violating the law, then they're not allowed". The latter has a very narrow domain of applicability, so it would not affect AI risk.

What possible law or regulation, now or in the future, would differentially impede dangerous AI (on the research path leading to AGI) rather than all other software, or even all other ML? A law that equally impedes all ML would never get enough support to pass; a law that could be passed would have to use some narrow discriminating wording that programmers could work around most of the time, and so would accomplish very little.

4 comments

My guess would be that the impact this directly has on AI Safety is negligible, and we should focus more on how it impacts the economy and such.

We lack moves with likely non-negligible impacts on AI Safety. A key question is whether we can find dignified moves that at least seem to go in the right direction.

My guess is that as a cause area it would be fairly easy to rally people behind: the problems are very clear, even in the present day, and the countless viral posts complaining about it indicate some amount of unmet demand. So it may be worth aiming for if you value a larger chance of getting tangible results.

It's also a good foundation for later arguing to restrict other AI uses, once you have already pushed through successful regulation.
