RHollerith

Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com

My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and my knowledge that MIRI has enough funding to employ many researchers) was keeping my probability down to about .4. (I am glad I found out about Eliezer's assessment.)

Currently I am willing to meet with almost anyone on the subject of AI extinction risk.

Last updated 26 Sep 2023.

Comments

The problem is that the public correctly perceives that economic growth and technological progress make the average life better, so it is hard to get political support for any measures to slow them down. I can think of two policy proposals that already have a lot of support that we could throw our weight behind. Most supporters of these proposals are unaware of their significant progress-slowing effects, which is the only reason the proposals are as popular as they are. I don't want to say more in public because it would make us AI decelerationists look bad to casual readers, but I welcome PMs on the subject.

To directly answer your question: yes, if you value the survival of our species rather than just the experiences of the current generation of humans, a general slowdown of the economy and of human technology would be a good thing given the current situation around AI research. Unless you have some plan for actually effecting a slowdown that is a lot more effective (and less cynical and almost-dishonorable) than what I suggested in the previous paragraph, though, there are better courses of action for us to focus our attention on.

I'm assuming that the only effective way of slowing down 'progress' is to get laws passed.

Yes, I am pretty sure that the fewer additional people skilled in technical AI work, the better. In the very unlikely event that, before the end, someone or some group actually comes up with a reliable plan for how to align an ASI, we certainly want a sizable number of people able to understand the plan relatively quickly (i.e., without first needing to prepare themselves through a year of study), but IMHO we already have that.

"The AI project" (the community of people trying to make AIs that are as capable as possible) probably needs many thousands of additional people with technical training to achieve its goal. (And if the AI project doesn't need those additional people, that is bad news because it probably means we are all going to die sooner rather than later.) Only a few dozen or a few hundred researchers (and engineers) will probably make substantial contributions toward the goal, but neither the apprentice researchers themselves, their instructors or their employers can tell which researchers will ever make a substantial contribution, so the only way for the project to get an adequate supply of researchers is to train and employ many thousands. The project would prefer to employ even more than that.

I am pretty sure it is more important to restrict the supply of researchers available to the AI project than it is to have more researchers who describe themselves as alignment researchers. It's not flat impossible that the AI-alignment project will bear fruit before the end, but it is very unlikely. In contrast, if not stopped somehow (e.g., by the arrival of helpful space aliens or some other miracle), the AI project will probably succeed at its goal. Most people pursuing careers in alignment research are probably doing more harm than good because the AI project tends to be able to use any results they come up with. MIRI is an exception to the general rule, but MIRI has chosen to stop its alignment research program on the grounds that it is hopeless.

Restricting the supply of researchers for the AI project by warning talented young people not to undergo the kinds of training needed by the AI project increases the length of time left before the AI project kills us all, which increases the chances of a miracle such as the arrival of the helpful space aliens. Also, causing our species to endure 10 years longer than it would otherwise endure is an intrinsic good even if it does not instrumentally lead to our long-term survival.

I hope that the voluminous discussion on exactly how bad each of the big AI labs are doesn't distract readers from what I consider the main chances: getting all the AI labs banned (eventually) and convincing talented young people not to put in the years of effort needed to prepare themselves to do technical AI work.

The PR repercussions would be enormous

To add to this: Just Stop Oil can afford to make people angry (to an extent) and to cause gridlock on the streets of London because the debate about climate change has been going on long enough that most people who might have an influence on fossil-fuel policy have already formed a solid opinion -- and even Just Stop Oil probably cannot afford the reputational consequences of killing, e.g., Exxon executives.

As long as most of the possible decision makers have yet to form a solid opinion about whether AI research needs to be banned, we cannot afford the reputational effects of violent actions or even criminal actions. Note that the typical decision maker in, e.g., Washington, D.C., is significantly more disapproving of criminal behavior (particularly, violent criminal behavior) than most of the people you know.

Our best strategy is to do the hard work that enables us to start using state power to neutralize the AI accelerationists. We do not want to do anything hasty that would cause the power of the state (specifically the criminal justice system) to be used to neutralize us.

Making it less cool “to be AI” is an effective intervention, but crime is not a good way to effect that.

I got the distinct impression (from reading comments written by programmers) that Microsoft had a hard time hiring programmers starting in the late 1990s and persisting for a couple of decades because the company was perceived by many young people as a destructive force in society. Of course, Microsoft remains a very successful enterprise, but there are a couple of factors that might make the constricting effect of making it uncool to work for an AI lab stronger than the constricting effect of making it uncool to work for Microsoft. First, it takes a lot more work (i.e., acquiring skills and knowledge) to become able to make a technical contribution to the effort to advance AI than it takes to become able to contribute at Microsoft. (Programming was easy to learn for many of the people who turned out to be good hires at Microsoft.) Second, the work required to get good enough at programming to contribute at Microsoft was (and is) readily transferable to other jobs that were not (and are not) considered destructive to society. In contrast, if you've spent the last 5 to 7 years in a full-time effort to become able to contribute to the AI acceleration effort, there's nowhere else you can use those skills: you basically have to start over career-wise. The hope is that if enough people notice this before putting in those 5 to 7 years of effort, a significant fraction of the most talented ones will decide to do something else (because they don't want to invest the effort only to be unable to reap the career rewards, whether because of their own moral qualms or because AI progress has been banned by governments).

Some people will disagree with my assertion that people who invested a lot of time getting the technical knowledge needed to contribute to the AI acceleration effort will not be able to use that knowledge anywhere else: they think that they can switch from being capability researchers to being alignment researchers. Or they think they can switch from employment at OpenAI to employment at some other, "good" lab. I think that is probably an illusion. I think about 98% of the people calling themselves alignment researchers are (contrary to their hopes and beliefs) actually contributing to AI acceleration. And I think that all the AI labs are harmful -- at least all of them that are spending $billions on training models. I feel bad saying so in what is essentially a footnote to a comment on some other subject, but I'm too tired today to do any better.

But my point is that an effective intervention is to explain that (namely, the destructiveness of AI acceleration and the non-transferability of the technical skills used in AI acceleration) to young people before they put years of work into preparing themselves to do AI engineering or research.

I tentatively agree that in theory there should be no difference in fuel efficiency at the task of remaining in the air, i.e., providing lift.
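To make "the task of remaining in the air" concrete, here is the textbook actuator-disk (momentum-theory) estimate of the power an ideal rotor needs to hover; it is a minimal idealization for illustration, not a model of any particular helicopter or tiltrotor:

% ideal hover: induced velocity v_i and induced power P_hover
v_i = \sqrt{\frac{T}{2\rho A}}, \qquad P_{\text{hover}} = T\,v_i = \frac{T^{3/2}}{\sqrt{2\rho A}}

where T is thrust (equal to aircraft weight in hover), \rho is air density, and A is rotor disk area. On this idealization, the fuel cost of merely providing lift is governed by weight and disk area, so two aircraft putting similar weight on similar disk area pay a similar cost just to stay aloft; the savings mentioned below show up once the aircraft has to cover distance.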

The reason the US military is switching from helicopters to VTOLs for transporting soldiers is that VTOLs are more fuel-efficient at making trips of more than 100 miles or so. Of course, the way they do that is by covering ground faster than a helicopter.

Although I like the lively presentation, my initial reaction is that this post doesn't persuade me.

If Ukraine and its allies succeed in inducing a soldier to leave the Russian army, that frees up money (namely, the salary the Kremlin had been paying the soldier who left) that Russia can use to hire another soldier, so the only monetary cost you've imposed on the Kremlin is the cost of training the soldier who left.

Currently, money is the most important resource for the Kremlin (probably by a large margin), and if the supply of young Russian men ever becomes the critical resource, then the scale of the war will have increased so much that we should probably start calling it World War III.

Answer by RHollerith

I would like the software I use to maintain my notes and to-do lists to look as good as a text-heavy web page does. I.e., I want something like a text editor, but with better typography. (The way it is now, I use Emacs to maintain notes and lists. At least it lets me specify the width of the left and right margins, and I like how easy it is to customize.)
 

[This comment is no longer endorsed by its author]

Your "given a lab environment where we strongly expect an AI to be roughly human level" seem to assume the thing to be proven.

Suppose a lab is trying out a novel AI architecture or even just a significant algorithmic innovation. You can inspect the source code for the training algorithm, inspect the trillions of weights that result from the training, and ask questions of the designers. How do you estimate its capabilities without giving the AI a lot of access to things it can use to take over (e.g., the ability to write code and have it run, or access to gullible humans)?

If you have an approach I think is promising, I'll drop everything and lobby funders to give you a big salary to work on your approach.

The problem with the simulated environment is making it accurate enough that the AI cannot tell that it is in a simulated environment. Even if you can fool the AI with 99% probability, the 1% of cases in which the AI is not fooled will tend to be the ones where the AI is very dangerous indeed.

The hard part is predicting the intelligence of an AI before you unleash it on the world.
