I generally concur with MIRI's latest on safety. AI progress seems bad. 

AI development seems pretty tied to GPU (and TPU?) production at this point. If GPU production dropped by 95% for a few years, or if GPUs stopped getting faster for a few years, would that affect AI timelines at all? Possibly not; I don't know how Google's / DeepMind's / OpenAI's spending splits between hardware and researchers. If hardware costs are small relative to researcher costs, then it stands to reason that a shock to the hardware market wouldn't slow things down much.
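To make that intuition concrete, here is a toy back-of-envelope sketch (all figures hypothetical, not drawn from any lab's actual spending): the same GPU price shock barely dents a researcher-heavy budget but multiplies a compute-heavy one.

```python
# Toy back-of-envelope: the same GPU price shock hits a compute-heavy lab
# far harder than a researcher-heavy one. All numbers are hypothetical.

def budget_increase(hardware_share, gpu_price_multiplier):
    """Fractional budget increase when GPU prices rise by the multiplier,
    given the share of baseline spending that goes to hardware."""
    researcher_share = 1.0 - hardware_share
    new_total = researcher_share + hardware_share * gpu_price_multiplier
    return new_total - 1.0

SHOCK = 4.0  # hypothetical: suppose a 95% supply drop quadruples prices

for share, label in [(0.1, "researcher-heavy lab"),
                     (0.7, "compute-heavy lab")]:
    print(f"{label}: budget rises {budget_increase(share, SHOCK):.0%}")
# researcher-heavy lab: budget rises 30%
# compute-heavy lab: budget rises 210%
```

This is just the spending-split point in arithmetic form; the real question is what those shares actually are at the big labs.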

Production of this hardware is highly centralized (Taiwan Semiconductor), and so is R&D (AMD, Nvidia). If the leaders of these organizations were miraculously convinced by argument to act against their business interests in a heroic way, maybe that would buy everyone a few months or a year. Maybe not; I'd be interested in more analysis of the above.

Just thought this seemed worth pointing out since I had not seen it discussed. 

1 comment

You're essentially asking NVIDIA to stop functioning as a business at all. That doesn't seem viable, and any company that defected from the heroic sacrifice would benefit massively; the defectors might also be the companies or countries least interested in safety.

As far as outside-the-box scenarios go: if crypto prices continued to rise exponentially, and it turned out that every type of crypto except the old, trustworthy Proof of Work algorithms was flawed, then a larger and larger share of the planet's computation would be spent on the blockchain, right? While having almost all of the planet's compute solving arbitrarily difficult math problems is in some ways a nightmare, it could effectively delay AI research by making the hardware less available. (Or it could do the opposite, creating stronger incentives to develop faster GPUs earlier...)
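A toy model of that mechanism (all numbers hypothetical): under Proof of Work, rational miners keep adding GPUs until per-GPU mining revenue falls to per-GPU operating cost, so the fleet absorbed by mining scales roughly linearly with coin price.

```python
# Toy Proof-of-Work equilibrium: miners add GPUs until revenue per GPU
# equals its daily operating cost. All numbers are hypothetical.

DAILY_ISSUANCE = 900     # hypothetical: coins minted per day
COST_PER_GPU_DAY = 2.0   # hypothetical: $/day to run one mining GPU

def equilibrium_gpu_fleet(coin_price):
    """GPUs deployed once mining revenue per GPU matches its daily cost."""
    daily_mining_revenue = coin_price * DAILY_ISSUANCE
    return daily_mining_revenue / COST_PER_GPU_DAY

for price in [1_000, 10_000, 100_000]:
    print(f"coin at ${price:,}: ~{equilibrium_gpu_fleet(price):,.0f} GPUs mining")
# Each 10x in price pulls ~10x more GPUs into mining, which either starves
# AI labs of hardware or (the flip side above) funds faster GPU development.
```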