Updated! Excuse the delay
I buy that… so many of the folks funded by Emergent Ventures are EAs, so directly arguing against AI risk might alienate his audience
Still, this Straussian approach is a terrible way to have a productive argument
My mistake! Fixed
Many thanks for the update… and if it’s true that you could write the very best primer, that sounds like a high-value activity
I don’t understand the asteroid analogy though. Does this assume the impact is inevitable? If so, I agree with taking no action. But in any other case, doing everything you can to prevent it seems like the single most important way to spend your days
Many thanks! It looks like EA was the right angle... found some very active English-speaking EA groups right next to where I'll be
I bet you're right that a perceived lack of policy options is a key reason people don't write about this to mainstream audiences
Still, I think policy options exist
The easiest one is adding the right types of AI capabilities research to the US Munitions List, so they're covered under ITAR laws. These are mind-bogglingly burdensome to comply with (so it's effectively a tax on capabilities research). They also make it illegal to share certain parts of your research publicly
It's not quite the secrecy regime that Eliezer is looking for, but it's a big step in that direction
I think 2, 3, and 8 are true but pretty easy to overcome. Just get someone knowledgeable to help you
4 (low demand for these essays) seems like a calibration question. Most writers probably would lose their audience if they wrote about it as often as Holden. But more than zero is probably ok. Scott Alexander seems to be following that rule, when he said he was summarizing the 2021 MIRI conversations at a steady drip so as not to alienate the part of his audience that doesn’t want to see that
I think 6 (look weird) used to be true, but it’s not any more. It’s hard to know for sure without talking to Kelsey Piper or Ezra Klein, but I suspect they didn’t lose any status for their Vox/NYT statements
I agree that it's hard, but there are all sorts of possible moves (like LessWrong folks choosing to work at this future regulatory agency, or putting massive amounts of lobbying funds into making sure the rules are strict)
If the alternative (solving alignment) seems impossible given 30 years and massive amounts of money, then even a really hard policy seems easy by comparison
How about solving a ban on gain-of-function research first, and then moving on to much harder problems like AGI? A victory on this relatively easy case would yield a lot of valuable experience, or, alternatively, allow foolish optimists to have their dangerous optimism broken over shorter time horizons.
Eliezer gives alignment a 0% chance of succeeding. I think policy, if tried seriously, has >50%. So it's a giant opportunity that's gotten way too little attention
I'm optimistic about policy for big companies in particular. They have a lot to lose from breaking the law, they're easy to inspect (because there's so few), and there's lots of precedent (ITAR already covers some software). Right now, serious AI capabilities research just isn't profitable outside of the big tech companies
Voluntary compliance is also a very real thing. Lots of AI researchers a... (read more)
Look at gain-of-function research to see the result of a government moratorium on research. At first Baric feared that the moratorium would end his research. Then the NIH declared that his research wasn't officially gain-of-function and continued funding him.
Regulating gain-of-function research away is essentially easy mode compared to AI.
A real Butlerian jihad would be much harder.
It sounds like Eliezer is confident that alignment will fail. If so, the way out is to make sure AGI isn’t built. I think that’s more realistic than it sounds
1. LessWrong is influential enough to achieve policy goals
Right now, the Yann LeCun view of AI is probably more mainstream, but that can change fast. LessWrong is upstream of influential thinkers. For example:
- Zvi and Scott Alexander read LessWrong. Let’s call folks like them Filter #1
- Tyler Cowen reads Zvi and Scott Alexander. (Filter #2)
- Malcolm Gladwell, a mainstream influencer, reads Tyler Cowen... (read more)
I tend to agree that Eliezer (among others) underestimates the potential value of US federal policy. But on the other hand, note No Fire Alarm, which I mostly disagree with but which has some great points and is good for understanding Eliezer's perspective. Also note (among other reasons) that policy preventing AGI is hard because it needs to stop every potentially feasible AGI project but: (1) defining 'AGI research' in a sufficient manner is hard, especially when (2) at least some companies naturally want to get around such regulations, and (3) at least ... (read more)
Is there a good write-up of the case against rapid tests? I see Tom Frieden’s statement that rapid tests don’t correlate with infectivity, but I can’t imagine what that’s based on
In other words, there’s got to be a good reason why so many smart people oppose using rapid tests to make isolation decisions
Could you spell out your objection? It’s a big ask, having to read a book just to find out what you mean!
Short summary: Biological anchors are a bad way to predict AGI. It’s a case of “argument from comparable resource consumption.” Analogy: human brains use 20 Watts. Therefore, when we have computers with 20 Watts, we’ll have AGI! The 2020 OpenPhil estimate of 2050 is based on a biological anchor, so we should ignore it.
Lots of folks made bad AGI predictions by asking: (1) how much computing power does AGI require, and (2) when will that much computing power be available?
To find (1), they use a “biological anchor,” like the computing power of the human brain, or the tota... (read more)
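The forecasting move being criticized can be sketched in a few lines. This is purely illustrative: the function name and every number below are hypothetical placeholders, not the actual OpenPhil figures.

```python
# Sketch of a "biological anchor" forecast: pick an anchor amount of
# compute, then extrapolate hardware price-performance trends to find
# the year when that much compute becomes affordable.
import math

def anchor_forecast(anchor_flop, flop_per_dollar_now, budget_dollars,
                    doubling_time_years, start_year=2020):
    """Year when `budget_dollars` buys `anchor_flop` of compute,
    assuming price-performance doubles every `doubling_time_years`."""
    affordable_now = flop_per_dollar_now * budget_dollars
    if affordable_now >= anchor_flop:
        return start_year
    doublings_needed = math.log2(anchor_flop / affordable_now)
    return start_year + doublings_needed * doubling_time_years

# Placeholder inputs: a 1e30 FLOP anchor, 1e17 FLOP per dollar today,
# a $1e9 budget, and a 2.5-year doubling time.
year = anchor_forecast(1e30, 1e17, 1e9, 2.5)
```

The point of the critique is that the whole output hinges on the anchor chosen in the first argument; the extrapolation machinery is trivial by comparison.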
What particular counterproductive actions by the public are we hoping to avoid?
Zvi just posted EY's model
I should’ve been clearer… export controls don’t just apply to physical items. Depending on the specific controls, it can be illegal to publicly share technical data, including source code, drawings, and sometimes even technical concepts
This makes it really hard to publish papers, and it stops you from putting source code or instructions online
Why isn’t there a persuasive write-up of the “current alignment research efforts are doomed” theory?
EY wrote hundreds of thousands of words to show that alignment is a hard and important problem. And it worked! Lots of people listened and started researching this
But that discussion now claims these efforts are no good. And I can’t find good evidence, other than folks talking past each other
I agree with everything in your comment except the value of showing EY’s claim to be wrong:
I agree. This wasn’t meant as an object-level discussion of whether the “alignment is doomed” claim is true. What I’d hoped to convey is that, even if the research is on the wrong track, we can still massively increase the chances of a good outcome, using some of the options I described
That said, I don’t think Starship is a good analogy. We already knew that such a rocket can work in theory, so it was a matter of engineering, experimentation, and making a big organization work. What if a closer analogy to seeing alignment solved was seeing a proof of P=NP this year?
In fact, what I’d really like to see from this is Leverage and CFAR’s actual research, including negative results
What experiments did they try? Is there anything true and surprising that came out of this? What dead ends did they discover (plus the evidence that these are truly dead ends)?
It’d be especially interesting if someone annotated Geoff’s giant agenda flowchart with what they were thinking at the time and what, if anything, they actually tried
Also interested in the root causes of the harms that came to Zoe et al. Is this an inevitable consequence of Leverage’s beliefs? Or do the particular beliefs not really matter, and it’s really about the social dynamics in their group house?
I don’t agree with the characterization of this topic as self-obsessed community gossip. For context, I’m quite new and don’t have a dog in the fight. But I drew memorable conclusions from this that I couldn’t have gotten from more traditional posts
First, experimenting with our own psychology is tempting and really dangerous. Next time, I’d turn up the caution dial way higher than Leverage did
Second, a lot of us (probably including me) have an exploitable weakness brought on by high scrupulosity combined with openness to crazy-sounding ideas. Next time, I’d b... (read more)
So is this an accurate summary of your thinking?
Really enjoyed this. I’m skeptical, because (1) a huge number of things have to go right, and (2) some of them depend on the goodwill of people who are disincentivized to help
Most likely: the Vacated Territory flounders, much like Birobidzhan (which is a really fun story, by the way: in the 1930s, the Soviet Union created a mostly-autonomous colony for its Jews in Siberia. Masha Gessen tells the story here)
In September 2021, the first 10,000 Siuslaw Syrians touched down in Siuslaw National Forest, land that was previously part of Oregon.
It was a... (read more)