aaguirre

Physics Professor at UC Santa Cruz, and co-founder of the Future of Life Institute, Metaculus, and the Foundational Questions Institute


Comments

Great article Garrison!

I found that the vitriolic debate between the people worried about extinction and those worried about AI’s existing harms hides the more meaningful divide — between those trying to make AI more profitable and those trying to make it more human.

Bravo.

Matthew, I think you're missing a pretty important consideration here, which is that all of these policy/governance actions did not "just happen" -- a huge amount of effort has been put into them, much of it by the extended AI safety community, without which I think we would be in a very different situation. So I take almost the opposite lesson from what's happening: concerted effort to try to govern AI might actually succeed -- but we should be doubling down on what we are doing right and learning from what we are doing wrong, not being complacent.

(comment crossposted from EA forum)

Very interesting post! But I'd like to push back. The important things about a pause, as envisaged in the FLI letter, for example, are that (a) it actually happens, and (b) the pause is not lifted until there is an affirmative demonstration that the risk has been addressed. The FLI pause call was not, in my view, on the basis of any particular capability or risk, but because of the out-of-control race to run ever-larger giant scaling experiments without any reasonable safety assurances. This pause should still happen, and it should not be lifted until there is a way in place to assure that safety. Many of the things FLI hoped could happen during the pause are happening — there is huge activity in the policy space developing standards, governance, and potentially regulations. It's just that now those efforts are racing the un-paused technology.

In the case of "responsible scaling" (for which I think the ideas of "controlled scaling" or "safety-first scaling" would be better), what I think is very important is that there not be a presumption that the pause will be temporary, and lifted "once" the right mitigations are in place. We may well hit a point (and may be there now) where it is pretty clear that we don't know how to mitigate the risks of the next generation of systems we are building (and it may not even be possible), and new bigger ones should not be built until we can do so. An individual company pausing "until" it believes things are safe is subject to the exact same competitive pressures that are driving scaling now — both against pausing, and in favor of lifting a pause as quickly as possible. If the limitations on scaling come from the outside, via regulation or oversight, then we should ask for something stronger: before proceeding, show to those outside organizations that scaling is safe. The pause should not be lifted until or unless that is possible. And that's what the FLI pause letter asks for.

If anyone would like to be funded to do actual high quality research on this topic, I strongly encourage application to FLI's Humanitarian Impacts of Nuclear War grant program. For decades there have been barely any careful studies because there is barely any research funding or support. It's quite possible the effects are not as bad as currently predicted, but it's quite possible they are worse — the modern nuclear winter studies find worse effects than the early ones in the 80s did (though fortunately the arsenals are much smaller now).

It seems quite important to me to have a clear-eyed view of what the results of "small" and "large" nuclear wars are like. An all-out nuclear war between the US and Russia currently would probably involve on the order of 1,900 warheads on each side, which is still a stupendous number. (See the Bulletin of the Atomic Scientists' Nuclear Notebook for some pretty detailed arsenal numbers.) If something starts, I'm deeply pessimistic about a nuclear war between these two remaining "limited", much as I'd like to believe otherwise.

I think this depends a lot on the use case. I envision for the most part this would be used in/on large known clusters of computation, as an independent check on computation usage and a failsafe. In that case it will be pretty easy to distinguish from other uses like gaming or cryptocurrency mining. If we're in the regime where we're worried about sneaky efforts to assemble lots of GPUs under the radar and do ML with them, then I'd expect there would be pattern analysis methods that could be used as you suggest, or the system could be set up to feed back more information than just computation usage.

The purpose of the COMPUTE token and blockchain here would be to provide a publicly verifiable ledger of the computation done by the computational cores. It would not be integral to the scheme but would be useful for separating the monitoring and control, as detailed in the post. I hope it is clear that a token as a tradeable asset is not at all important to the core idea.
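To illustrate what "publicly verifiable ledger" means here in the simplest possible terms — this is a hypothetical sketch of my own, not the design detailed in the post, and all names in it are made up — one could imagine nothing more than a hash-chained log of compute-usage reports that any outside party can re-verify:

```python
# Hypothetical sketch: an append-only, hash-chained ledger of compute-usage
# reports. Each entry commits to the previous one, so any retroactive edit
# is publicly detectable by anyone holding a copy of the chain.
import hashlib
import json

def entry_hash(body: dict) -> str:
    # Deterministic serialization before hashing
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

class ComputeLedger:
    def __init__(self):
        self.entries = []

    def append_report(self, core_id: str, period: str, flop_count: float) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "GENESIS"
        body = {"core_id": core_id, "period": period,
                "flop_count": flop_count, "prev_hash": prev}
        entry = dict(body, hash=entry_hash(body))
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Auditors recompute the chain independently and check consistency.
        prev = "GENESIS"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or entry_hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Example: a monitoring service appends usage reports; outsiders verify them.
ledger = ComputeLedger()
ledger.append_report(core_id="cluster-A-gpu-0007", period="2023-10-01", flop_count=3.2e19)
ledger.append_report(core_id="cluster-A-gpu-0007", period="2023-10-02", flop_count=2.9e19)
print(ledger.verify())  # True unless an entry has been altered
```

A blockchain adds decentralized agreement about which chain is canonical; the point is that the verifiable-ledger aspect, not a tradeable token, is what carries the weight.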

Very cool, thanks for the pointer!

There's no single metric or score that is going to capture everything. Metaculus points as the central platform metric were devised to — as danohu says — reward both participation and accuracy. Both are quite important. It's easy to get a terrific Brier score by cherry-picking questions. (Pick 100 questions that you think have 1% or 99% probability. You'll get a few wrong but your mean Brier score will be ~(few)*0.01. Log score is less susceptible to this.) You can also get a fair number of points for just predicting the community prediction — but you won't get that many, because as a question's point value increases (which it does with the number of predictions), more and more of the score is relative rather than absolute.
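To spell out the cherry-picking arithmetic, here is a minimal sketch in Python; the numbers are just the hypothetical 100-question example above, not anything from the actual Metaculus scoring code:

```python
# Sketch: expected Brier score when cherry-picking near-certain questions.
# Assumes 100 questions that each resolve "yes" with true probability 0.99,
# and a forecaster who simply predicts 0.99 on every one of them.
p_true = 0.99      # actual chance each question resolves yes
forecast = 0.99    # the cherry-picker's prediction on every question
n = 100

# Brier score per question: (forecast - outcome)^2
brier_if_right = (forecast - 1) ** 2   # 0.0001
brier_if_wrong = (forecast - 0) ** 2   # 0.9801

expected_misses = n * (1 - p_true)     # ~1 question missed out of 100
expected_mean_brier = p_true * brier_if_right + (1 - p_true) * brier_if_wrong

print(f"Expected misses out of {n}: {expected_misses:.1f}")
print(f"Expected mean Brier score: {expected_mean_brier:.4f}")  # ~0.01
```

The resulting mean Brier score of roughly 0.01 looks terrific, but it comes entirely from question selection rather than forecasting skill — which is why the relative component of the score matters.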

If you want to know how good a predictor is, points are actually pretty useful IMO, because someone who is near the top of the leaderboard is both accurate and highly experienced. Nonetheless, more ways of comparing people to each other would be useful. You can look at someone's track record in detail, but we're also planning to roll out more ways to compare people with each other. None of these will be perfect; there's simply no single number that will tell you everything you might want — why would there be?

I am not an expert on the Outer Space Treaty either, but, also going by anecdotal evidence, I have always heard it described as being of considerable benefit and a remarkable achievement of diplomatic foresight during the Cold War. However, I would welcome any published criticisms of the Outer Space Treaty you wish to provide.

It's important to note that the treaty was originally ratified in 1967 (as in, ~two years before landing on the Moon, ~5 years after the Cuban Missile Crisis). If you critique a policy for its effects long after its original passage (as with reference to space mining, or as others have with the effects of Section 230 of the CDA passed in 1996), your critique is really about the government(s) failing to update and revise the policy, not about the enactment of the original policy. Likewise, it is important to run the counterfactual to the policy never being enacted. In this circumstance, I'm not sure how you envision a breakdown in US-USSR (and other world powers) negotiations on the demilitarization of space in 1967 would have led to better outcomes.

You're certainly entitled to your (by conventional standards) pretty extreme anti-regulatory view of e.g. the FDA, IRBs, environmental regulations, etc., and to your prior that regulations are in general highly net negative. I don't share those views but I think we can probably agree that there are regulations (like seatbelts, those governing CFCs, asbestos, leaded gasoline, etc.) that are highly net positive, and others (e.g. criminalization of some drugs, anti-cryptography, industry protections against class action suits, etc.) that are nearly completely negative. What we can do to maximize the former and minimize the latter is a discussion worth having, and a very important one.

In the present case of autonomous weapons, I again think the right reference class is that of things like the bioweapons convention and the space treaty. I think these, also, have been almost unreservedly good: made the world more stable, avoided potentially catastrophic arms races, and left industries (like biotech, pharma, the space industry, the arms industry) perfectly healthy and arguably (especially for biotech) much better off than they would have been with a reputation mixed up in creating horrifying weapons. I also think in these cases, as with at least some AWs like antipersonnel WMDs, there is a pretty significant asymmetry, with the negative effects (of no regulation) having a tail into extremely bad outcomes, while the negative effects of well-structured regulations seem pretty mild at worst. Those are exactly the sorts of regulations/agreements I think we should be pushing on.

Very glad I wrote up the piece as I did, it's been great to share and discuss it here with this community, which I have huge respect for!
