A strict and swift non-permanent pause is certainly one option. Use very aggressive regulation. *Perspires slightly.*
Pause all development and research everywhere, all at once. Or, don't stop totally and don't be anti-accelerationist; just slow the rate of acceleration a bit.
Unpause when sufficient safety research has been conducted, safety measures have been identified and agreed upon, and can be practically implemented.
This should take approximately 5-50 years. If the world can mobilize the way it did during the COVID-19 pandemic, 5 years feels reasonable.
The world isn’t so bad that we can’t wait for AGI - or whatever the actual goal of the rat race is.
I would suggest that, once paused, the following strategy be used to unpause.
"If we stop and don't develop AGI before our geopolitical enemies because we're afraid of a tiny risk of an extinction, they will develop it regardless, then one of two things happen: either global extinction, or our extinction in our enemies' hands. Which is why we must develop it first. If it goes well, we extinguish them before they have a chance to do it to us. If it goes bad, it'd have gone bad anyway in their or our hands, so that case doesn't matter."
This has become a common description of why AI companies and governments are moving quickly. In general, I agree with the description, but I specifically struggle with this portion of it:
“Which is why we must develop it first. If it goes well, we extinguish them before they have a chance to do it to us.”
I’m assuming that - and please correct me if I’m misinterpreting here - “extinguish” here means something along the lines of, “remove the ability to compete effectively for resources (e.g. customers or other planets)” not “literally annihilate”.
If I got that totally wrong, no need to read on.
If that’s roughly correct, well, so what? How does being “first” actually solve the misaligned AGI problem? “Global extinction” as you put it.
Being first doesn’t come with the benefit of forcing all subsequently created AGI to be aligned / safe. The government or corporation in second (third, fourth, etc.) place surely can and probably will continue to attempt to build an AGI. They’re probably even more likely to create one in a more reckless manner by trying to catch up as quickly as possible.
I agree with your assessment. I live in Boston and conducted some research on this same topic a couple of years back.
I took a look at the state sanitary code too. I thought it might be prohibitive, but was surprised to find that it really isn't.
The Massachusetts state sanitary code (410.420 (D) Minimum Square Footage) provides:
(1) Every dwelling unit shall contain at least 150 square feet of habitable floor space for its first occupant, and at least 100 square feet of habitable floor space for each additional occupant.
(2) A rooming unit shall contain a minimum of 100 square feet of habitable floor space when: (a) The unit contains one single room for living and sleeping only; and (b) Is occupied by no more than one person.
(3) In every residence, each room used for sleeping purposes by one occupant shall contain at least 70 square feet of floor space.
(4) In every residence, each room used for sleeping by more than one occupant shall contain at least 50 square feet of floor space for each occupant.
My assumption had been that maximum occupancy requirements would be so restrictive as to prevent most families from living with housemates. This was based on a presumed strong, positive correlation between families seeking to live with unrelated housemates to offset their rent and families living in homes with a relatively high number of people per square foot of living area.
Families seeking to offset the cost of living with unrelated housemates are also probably seeking to offset the cost of living in other ways too, right? One such way would be not splurging on more living space than is necessary. Less space typically equals lower living cost.
But living in smaller spaces results in more people per square foot, placing the occupants closer to the state sanitary code's maximum occupancy thresholds.
I still feel these to be reasonable assumptions.
But they aren't actually dependent on the state’s thresholds. So, are most families seeking to add housemates to offset their rent actually near or over the actual state maximum occupant / square foot thresholds?
Some back-of-the-envelope math suggests no.
Imagine a family of 3 living in a ~750 square foot home with two bedrooms, each 100 square feet. Based on the minimum square footage requirements under the state sanitary code, that household could compliantly support 4 individuals: the shared-bedroom rule of 50 square feet per occupant allows 2 people per bedroom, which binds before the dwelling-unit rule does.
If the bedrooms were 150 square feet each - let's assume this home now just has a smaller kitchen and living room to compensate - the house could compliantly support 6 people.
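The arithmetic above can be sketched in code. This is a minimal sketch, not legal advice: it assumes the only binding provisions are the dwelling-unit rule (150 sq ft for the first occupant, 100 for each additional) and the sleeping-room rules (70 sq ft for a room sleeping one person, 50 sq ft per person for a shared room) quoted from 410.420(D) above; the function name and its parameters are my own invention.

```python
def max_compliant_occupants(unit_sqft, bedroom_sqfts):
    """Rough max occupancy under 410.420(D), given total habitable
    square footage and a list of bedroom square footages."""
    # Dwelling-unit rule: 150 + 100 * (n - 1) <= unit_sqft
    unit_cap = 1 + (unit_sqft - 150) // 100

    # Sleeping-room rules: 50 sq ft per person in a shared room,
    # 70 sq ft minimum for a room sleeping exactly one person.
    bedroom_cap = 0
    for sqft in bedroom_sqfts:
        if sqft >= 100:       # large enough to share: sqft // 50 people
            bedroom_cap += sqft // 50
        elif sqft >= 70:      # large enough for one person only
            bedroom_cap += 1

    return min(unit_cap, bedroom_cap)

print(max_compliant_occupants(750, [100, 100]))  # 4
print(max_compliant_occupants(750, [150, 150]))  # 6
```

For the 750 sq ft example, the dwelling-unit rule alone would allow 7 occupants; it's the bedroom sizes that bind, which is why enlarging the bedrooms (at the expense of the kitchen and living room) raises the cap from 4 to 6.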
I recommend determining the maximum occupants based on your living space's square footage. Or maybe a friend's. You'll likely be surprised how many folks can compliantly (as far as the sanitary code is concerned, anyway) live in a given space.
I see - thanks for the explanation!
The US had nuclear weapons before any other country. Other countries have these weapons now. The p(boom) was quite high at some points, but no nation was annihilated with this technology.
Admittedly, nuclear weapons are not a perfect analog for AI for many reasons, but I think it is a reasonable one.
With this in mind, I wanted to ask out of curiosity: what do you think the % risk would need to be for annihilation to actually occur?