Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com
My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Till then, my confidence in MIRI (and my knowledge that MIRI has enough funding to employ many researchers) was keeping my probability down to about .4. (I am glad I found out about Eliezer's assessment.)
Currently I am willing to meet with almost anyone on the subject of AI extinction risk.
Last updated 26 Sep 2023.
Increasing the volume of alignment research helps only if alignment research does not help capabilities researchers, but my impression is that most alignment research done so far has helped capabilities researchers approximately as much as it has helped alignment researchers. Just because a line of research is described as "alignment research" does not automatically cause it to help alignment researchers more than capabilities researchers.
In summary, I don't consider your (a) a cause for hope: the main problem is that increasing capabilities to the point of disaster (extinction or the like) is easier than solving the alignment problem, and your (a) does not ameliorate that problem much, if at all, for the reason I just explained.
Suppose AI assistants similar to Claude transform the economy. Now what? How is the risk of human extinction reduced?
Yes, alignment researchers have become more capable, and yes, the people trying to effect an end or a long pause of AI "progress" have become more capable, but so have those trying to effect more AI "progress".
Also, rapid economic change entails rapid changes in most human interpersonal relationships, which is hard on people and, according to some, is the main underlying cause of addiction. Addicts aren't likely to come to understand that AI "progress" is very dangerous, even if they are empowered by AI assistants.
Your argument supports the assertion that if AI "progress" stopped now, we'd be better off than we would've been without any (past) AI progress, but of course that is very different from being optimistic about the outcome of AI tech's continuing to "progress".
My worry is that one or two people loyal to the red team leave the site, which makes people on the blue team feel more free to use the site to criticize the red team, causing more red teamers to leave (and attracting blue-team zealots who filter everything through an ideological lens) in a positive feedback loop. That loop ends in a site with the same problem that Bluesky and many subreddits already have, namely, that the zealots produce large quantities of low-quality writing, which drowns out the high-quality contributions and discourages many who could make high-quality contributions from even starting to contribute.
ADDED. Since LW is currently very far from the state Bluesky is in, perhaps it would've been more persuasive for me to argue that if LW started to have even half as many low-effort political comments as Hacker News, many people would probably stop reading LW, or at least that is my worry.
You used the word "government", but to be clear, the Supreme Court and the states are also the government!
In British English, "the government" means the executive branch, and the entire thing (including the judiciary and the legislature) is called the state.
And while I have your attention, allow me to echo the person you are replying to: it would be ideal if a reader could not even tell from your comments which party you prefer (since you run the site), and the great-grandparent comment is pretty strong evidence of which one you do.
I made the same prediction a month ago:
https://www.lesswrong.com/posts/tsconpYZiPAQc7CHQ/?commentId=QzWaC7Efejzu29b4h
Maybe I failed to write something that reasonable people could parse.
"The prediction market"? I am confused what you mean by that.
I mean a technology and its implementations, similar to how "the telephone" refers to a technology.
I should have written, "I wouldn't be surprised if prediction markets start growing much faster than they have been growing over the last 3 decades or so", to avoid taking a position on whether they are currently important.
This soldier spent 2 years fighting for Ukraine, including 6 months recently as an operator of FPV drones, and he is also skeptical that drones will revolutionize military affairs during the next few years. I don't recall the details of his arguments, but my recollection is that he does provide some argumentation in this interview.