A basic primer on why AI might lead to human extinction, and why solving the problem
is difficult. Scott Alexander walks readers through a series of questions, drawing on evidence from recent progress in machine learning.
Epistemic status: After a couple hours of arguing with myself, this still feels potentially important, but my thoughts are pretty raw here.
Hello LessWrong! I’m an undergraduate student studying at the University of Wisconsin-Madison, and part of the new Wisconsin AI Safety Initiative. This will be my first “idea” post here, though I’ve lurked on the forum on and off for close to half a decade by now. I’d ask you to be gentle, but I think I’d rather know how I’m wrong! I’d also like to thank my friend Ben Hayum for going over my first draft and WAISI more broadly for creating a space where I’m finally pursuing these ideas in a more serious capacity. Of course, I’m not speaking for anyone but myself here.
With that...
All the smart people agitating for a 6-month moratorium on AGI research seem to have unaccountably lost their ability to do elementary game theory. It's a faulty idea regardless of what probability we assign to AI catastrophe.
Our planet is full of groups of power-seekers competing against each other. Each one of them could cooperate (join in the moratorium), defect (publicly refuse), or stealth-defect (proclaim that they're cooperating while stealthily defecting). The call for a moratorium amounts to saying to every one of those groups: "you should choose to lose power relative to those who stealth-defect". It doesn't take much decision theory to predict that the result will be a covert arms race conducted in a climate of fear by the most secretive and paranoid among the power...
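To make the dominance argument concrete, here is a toy Python sketch. The payoff numbers are entirely invented; only their ordering matters, and the point is simply that under these assumptions stealth-defection is never worse than cooperating, whichever strategy the other group picks.

```python
# Toy illustration of the moratorium dilemma described above. The payoff
# numbers are made up; only their ordering matters for the argument.

STRATEGIES = ["cooperate", "defect", "stealth_defect"]

def payoff(mine: str, other: str) -> int:
    """Relative power gained by `mine` against `other` (hypothetical values)."""
    capability_gain = {"cooperate": 0, "defect": 2, "stealth_defect": 2}
    # Openly defecting draws public scrutiny; stealth-defecting does not.
    visibility_cost = {"cooperate": 0, "defect": 1, "stealth_defect": 0}
    return capability_gain[mine] - capability_gain[other] - visibility_cost[mine]

for mine in STRATEGIES:
    worst = min(payoff(mine, other) for other in STRATEGIES)
    print(f"{mine:>14}: worst-case relative payoff = {worst}")
# Under these made-up numbers, stealth_defect is never worse than cooperate,
# whatever the other group does -- hence the predicted covert arms race.
```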
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
...AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary...
Can you be more specific about what you don't agree with? Which parts can't happen, and why?
Slimrock Investments Pte. Ltd. is listed on the Alameda County Recorder's records as associated with Lightcone's recent purchase of the Rose Garden Inn in Berkeley.
"Assignment of Rents" implies that they are the lender who provided the capital to purchase the property. There is not much information about them on the internet. They appear to be a holding company incorporated in Singapore.
However, I was able to find them in a list of creditors in the bankruptcy proceedings for FTX:
What is Lightcone's relationship to Slimrock, and is there any specific reason that the purchase of the Rose Garden Inn was financed through them rather than a more mundane/pedestrian lender?
This makes sense; thanks for the quick reply.
There are a lot of FTX creditors, and it's not surprising to me that the best financing option for Lightcone would be an EA rather than a commercial bank. Given they're an EA, it's also not a shock that they would have had some financial interaction with FTX; many people had financial interactions with FTX, including many prominent EAs. (You can see that the screenshot lists them as creditor 2,229, and there were many more after them in that document!)
In the early 21st century, the climate movement converged around a "2°C target", shown in Article 2(1)(a) of the Paris Climate Accords:
The 2°C target helps facilitate coordination between nations, organisations, and individuals.
The AI governance community should converge around a similar target.
In this article, I propose a target of...
Unfortunately we may already have enough compute, and it will be difficult to enforce a ban on decentralized training (which isn't competitive yet, but likely could be with more research).
It's a 3-hour-23-minute episode.
[I might update this post with a summary once I'm done listening to it.]
Yes, unfortunately, Eliezer's delivery suffered in many places from assuming that listeners have a lot of prior knowledge/context.
If he wishes to become a media figure going forward (which looks to me like the optimal thing for him to do at this point), this is one of the most important aspects of his rhetoric to improve. Pathos (the emotional content) is already very good, IMO.
Editor's note: this post is several years out of date and doesn't include information on modern systems like GPT-4, but is still a solid layman's introduction to why superintelligence might be important, dangerous and confusing.
1: What is superintelligence?
A superintelligence is a mind that is much more intelligent than any human. Most of the time, it’s used to discuss hypothetical future AIs.
1.1: Sounds a lot like science fiction. Do people think about this in the real world?
Yes. Two years ago, Google bought artificial intelligence startup DeepMind for $400 million; DeepMind added the condition that Google promise to set up an AI Ethics Board. DeepMind cofounder Shane Legg has said in interviews that he believes superintelligent AI will be “something approaching absolute power” and “the number one risk for...
It's still my go-to for laymen, but as I looked at it yesterday I did sure wish there was a more up-to-date one.
This post is a container for my short-form writing. See this post for meta-level discussion about shortform.
My understanding is that they used to have a lot more special-purpose modules than they do now, but their "occupancy network" architecture has replaced a bunch of them. So they have one big end-to-end network doing most of the vision, which hands a volumetric representation over to the collection of smaller special-purpose modules for path planning. But path planning is the easier part (easier to generate synthetic data for, and easier to detect beforehand that something is going wrong and send a take-over alarm).
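For readers unfamiliar with the handoff being described, here is a rough Python sketch of the data flow: a single large vision model emits a volumetric occupancy grid, and smaller downstream modules plan over it. This is not Tesla's actual code or API; every name, shape, and rule below is an illustrative assumption.

```python
# Hypothetical sketch of the architecture described above: one large vision
# network produces a volumetric occupancy grid, which smaller special-purpose
# modules then use for path planning. Names, shapes, and logic are illustrative only.
import numpy as np

def occupancy_network(camera_frames: np.ndarray) -> np.ndarray:
    """Stand-in for the end-to-end vision model: camera frames -> 3D occupancy grid."""
    # A real system would run a learned network; here we return an empty grid.
    return np.zeros((200, 200, 16), dtype=bool)   # x, y, height voxels

def plan_path(occupancy: np.ndarray) -> list[tuple[int, int]]:
    """Stand-in for a smaller planning module consuming the volumetric output."""
    drivable = ~occupancy.any(axis=2)             # a cell is free if no voxel is occupied
    x = occupancy.shape[0] // 2                   # trivial planner: drive straight ahead
    path = []
    for y in range(occupancy.shape[1]):
        if not drivable[x, y]:
            break                                 # obstacle ahead: stop / raise take-over alarm
        path.append((x, y))
    return path

frames = np.zeros((8, 3, 720, 1280), dtype=np.uint8)  # fake multi-camera input
grid = occupancy_network(frames)
print(len(plan_path(grid)))                            # 200: every cell ahead is free
```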
The sequence [2, 4, 6] is valid. Test other sequences to discover what makes a sequence valid. When you think you know, write down your guess, reveal the rule, and see how it compares.
(You should try to deduce the truth using as few tests as possible; however, your main priority is getting the rule right.)
You can play my implementation of the 2-4-6 problem here (should only take a few minutes). For those of you who already know the solution but still want to test your inductive reasoning skills, I've made some more problems which work the same way but apply different rules.
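In case it helps to see the mechanics, here is a minimal Python sketch of how such a tester could be structured. It is not the linked implementation, and the example rule is a hypothetical stand-in rather than the rule from the post or the classic puzzle, so nothing is spoiled.

```python
# A minimal sketch (not the linked implementation) of a 2-4-6-style tester:
# the rule is a hidden predicate, and the player only learns whether each
# probe sequence is valid. The example rule below is a stand-in.
from typing import Callable, Sequence

def make_tester(hidden_rule: Callable[[Sequence[float]], bool]) -> Callable[[Sequence[float]], bool]:
    """Wrap the rule so the player sees only valid/invalid, never the rule itself."""
    def test(seq: Sequence[float]) -> bool:
        return hidden_rule(seq)
    return test

# Stand-in rule for demonstration: every number in the sequence is even.
example_rule = lambda seq: all(x % 2 == 0 for x in seq)

test = make_tester(example_rule)
for probe in ([2, 4, 6], [1, 2, 3], [10, 5, 1]):
    print(probe, "->", "valid" if test(probe) else "invalid")
# [2, 4, 6] -> valid; [1, 2, 3] -> invalid; [10, 5, 1] -> invalid
```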
I knew about the 2-4-6 problem from HPMOR, so I really liked the opportunity to try it out myself. These are my results on the four other problems:
Problem 1
Number of guesses: 8 (3 valid, 5 invalid)
Guess: "A sequence of integers whose sum is non-negative"
Result: Failure

Problem 2
Number of guesses: 39 (23 valid, 16 invalid)
Guess: "Three ordered real numbers where the absolute difference between neighbouring numbers is decreasing."
Result: Success

Problem 3
Number of guesses: 21 (15 valid, 6 invalid)
Guess...
Agreed, this would make it super easy to front-run you.