Senator Bernie Sanders is planning to introduce legislation that would ban the construction of new AI data centers. You can find his video announcement here, and here is the transcript:
...Thanks very much for joining me. I will soon be introducing legislation calling for a moratorium on the construction of new data centers.
Now, as a result, I've been called a luddite, anti-innovation, anti-progress, pro-Chinese, among many other things. So why am I doing that? Why am I calling for a moratorium on the construction of new data centers?
Bottom line: We are at the beginning of the most profound technological revolution in world history. That's the truth. This is a revolution which will bring unimaginable changes to our world. This is a revolution which will impact our economy with massive job displacement. It will threaten our democratic institutions. It will impact our emotional well-being, and what it even means to be a human being. It will impact how we educate and raise our kids. It will impact the nature of warfare, something we are seeing right now in Iran.
Further, and frighteningly, some very knowledgeable people fear that what was once seen as science fiction could soon become reality...
It's the kind of action that, when universalized, does indeed end the AGI death race! It is, in an important sense, proposing an end to the AGI death race.
If everyone stopped building datacenters, you really would have made a lot of progress toward stopping the death race (and of course, the algorithm that produces a ban on datacenter construction would probably not stop there).
I think this is a common misconception. I'm pretty sure algorithmic progress will eventually reach a point where what currently takes a datacenter becomes possible on a single machine with a slightly longer training period. If that same algorithm runs on a datacenter it would produce something superhuman, so cutting down to single-GPU training would then not be enough to stop things completely. Algorithmic progress is a slow slog of "grad student descent", so it likely takes quite a bit longer, and maybe it takes enough longer to figure out alignment. But it doesn't stop the death race, it just slows it down. Actually stopping it would require shredding all silicon big enough even to run the fully trained AI, which doesn't seem to be in the cards. I'm not saying datacenter construction is good or should continue, or that this won't buy time, but I think people are wishful-thinking about how much time it buys.
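To put rough numbers on this, a minimal BOTEC sketch in Python; every figure in it (frontier run size, GPU throughput, rate of algorithmic progress) is an illustrative assumption on my part, not a measured quantity:

```python
import math

# BOTEC: years of algorithmic progress until a frontier-scale training run
# fits on one GPU. All numbers are illustrative assumptions.
frontier_flop = 1e26                 # assumed effective FLOP of a frontier datacenter run
gpu_flops = 1e15                     # assumed throughput of one high-end GPU, FLOP/s
train_seconds = 3 * 365.25 * 86400   # allow a "slightly longer" 3-year single-GPU run
doublings_per_year = 1.0             # assumed algorithmic-efficiency doublings per year

single_gpu_flop = gpu_flops * train_seconds   # ~9.5e22 FLOP
gap = frontier_flop / single_gpu_flop         # ~1,000x shortfall today
years = math.log2(gap) / doublings_per_year   # years of algo progress to close it
print(f"gap: {gap:,.0f}x -> ~{years:.0f} years of algorithmic progress")
```

Under these made-up numbers the single machine is only ~10 doublings behind, which is the sense in which a datacenter pause slows the race rather than stopping it.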
Agree qualitatively (and possibly quantitatively). However, there's a quite large knock-on effect, which is a strong bundle of signals of "AGI is bad, don't make AGI". These signals move in various directions between different entities, carrying various messages, but they generally push against AGI. (E.g. signaling legitimacy of the Stop position; the US signaling to other states; society signaling to would-be capabilities researchers; Congress self-signaling "we're trying to ban this whole thing and will continue to add patches to ban dangerous stuff"; etc.)
Thinking this through step by step in the framework of the AI Futures Model:
First, I'll check what the model says, then I'll reconstruct the reasoning behind why it predicts that.
By default, with Daniel's parameters, Automated Coder (AC) happens in 2030 and ASI happens 1.33 years later, in 2031.
If I stop experiment and training compute growth at the start of 2027, then the model predicts Automated Coder in 2039 rather than 2030, so 4x slower in calendar time (exactly matching habryka's guess). It also looks to have well over a 5-year takeoff from AC to ASI, as opposed to the default of 1.33 years.
I got this by plugging this modified version of our time series into this unreleased branch of our website.
However, this is highly sensitive to the timing of the compute growth pause, because it's a shock to the flow rather than the stock. E.g., if I instead stop growth at the start of 2029, as in this worksheet, then AC happens in Mar 2031, taking ~2.2 years instead of ~1.2, so slowing things down by <2x. It does still slow down takeoff from AC to ASI to 4 years, so by ~3x (and this is probably at least a slight underestimate, because we don't model hardware R&D automation).
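As a quick sanity check on those ratios, a tiny sketch (the 2030.2 default-AC date is my interpolation from the "~1.2 years" figure; everything else is quoted above):

```python
# Sanity-checking the slowdown ratios; dates are the model outputs cited above.
def slowdown(pause_start, ac_default, ac_paused):
    """Time-to-AC with the compute pause, relative to time-to-AC without it."""
    return (ac_paused - pause_start) / (ac_default - pause_start)

print(slowdown(2027.0, 2030.0, 2039.0))  # pause at start of 2027: 4.0x slower
print(slowdown(2029.0, 2030.2, 2031.2))  # pause at start of 2029: ~1.8x, i.e. <2x
print(4 / 1.33)                          # AC-to-ASI takeoff: ~3x slower
```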
Now I'll re...
This is cool! I'm sad he spends so much of his time criticising the good part (AI doing tonnes of productive labour). I say this not because I want to demand every ally agree with me on every point, but because I want to disavow early the beliefs that political expediency might tempt me to endorse.
It seems to me a meaningfully open question whether automating all human labor will end up net benefiting humans, even assuming we survive; of course it might, but I think much more dystopian outcomes also seem plausible. Markets tend to benefit humans because the price signals we send tend to correlate with our relative needs, and hence with our welfare; I think it is not obvious that this correlation will persist once humans become unable to generate economic value.
It seems the pro-Trump Polymarket whale may have had a real edge after all. The Wall Street Journal reports (paywalled link, screenshot) that he's a former professional trader who commissioned his own polls from a major polling firm, using an alternate methodology (the neighbor method, i.e. asking respondents whom they expect their neighbors to vote for) that he thought would be less biased by preference falsification.
I didn't bet against him, though I strongly considered it; feeling glad this morning that I didn't.
I don't remember anyone proposing "maybe this trader has an edge", even though incentivising such people to trade is the mechanism by which prediction markets work. Certainly I didn't, and in retrospect it feels like a failure not to have had 'the multi-million dollar trader might be smart money' as a hypothesis at all.
Knowing now that he had an edge, I feel like his execution strategy was suspect. Polymarket prices went from 66c while his orders were going in back to 57c over the 5 days before the election. He could have extracted a bit more money from the market if he had forecast the volume correctly and traded against it proportionally.
On one hand, I feel a bit skeptical that some dude outperformed approximately every other pollster and analyst by having a correct inside-view belief about how existing pollsters were messing up, especially given that he won't share the surveys. On the other hand, this sort of result is straightforwardly predicted by Inadequate Equilibria: an entire industry had the affordance to be arbitrarily deficient in what most people would think was its primary value-add, because it had no incentive toward accuracy (no skin in the game), and as soon as someone with an edge could make outsized returns on it (via real-money prediction markets), he outperformed all the experts.
On net I think I'm still <50% that he had a correct belief about the size of Trump's advantage that was justified by the evidence he had available to him, but even being directionally-correct would have been sufficient to get outsized returns a lot of the time, so at that point I'm quibbling with his bet sizing rather than the direction of the bet.
Norvid on Twitter made the apt point that we will need to see the actual private data before we can really judge. It's not unusual for lucky people to backrationalize their luck as a sure win.
I think it is probably possible in principle to train superintelligence on a laptop, and I worry that this inconvenient fact is often elided in discourse about halting AI. It is extremely helpful that for now, AI training is so absurdly inefficient that non-proliferation strategies roughly as light-touch as the IAEA—e.g., bans on AI data centers, or powerful GPUs—might suffice to seriously slow AI progress. And I think humanity would be foolish not to take advantage of this relatively cheap temporary opportunity to slow AI progress, so that we can buy as much time as we can to figure out how to improve our chances of surviving the creation of superintelligence. But I do think superintelligence is likely to be created eventually regardless, at least absent non-proliferation regimes drastically more costly/invasive than the IAEA; relatedly, I do expect that the long-term survival of life will still probably require solving the alignment problem eventually.
I think the global treaty offers additional hope, which also wants to be capitalized on, in that it strongly signals "we don't want to make AGI". This can ramify through society, academic institutions, parties with big-money philanthropists, student groups, etc. Quoting from here:
A professor doing cutting-edge domain-nonspecific AI research should read in the paper that this is very bad; then should have students stop signing up for classes and research; and have student protests; and should be shunned by colleagues; and should have administration pressure them to switch areas; and then they should get their government funding cut. It should feel like what happens if you announce "Hey everyone! I'm going to go work in advertising for a bunch of money, convincing teenagers to get addicted to cigarettes!", but more so.
Regarding
the long-term survival of life will still probably require solving the alignment problem eventually.
Does it? What if we just don't, and find other robust ways to prevent the creation of AGI, and other ways to have a very hopeworthy future?
Very rough BOTEC, corrections welcome:
A thing to keep in mind:
That's a sizable gap, but not all that crazy in the grand scheme of things? Intuitively, one would have to have a pretty detailed (and accurate) understanding of things to (calibratedly) predict "yes, train on datacenter; no, train on laptop". Does that make any sense?
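For concreteness, one way such a BOTEC might go; this is a minimal sketch, and all figures below are illustrative assumptions rather than the numbers behind the comment above:

```python
import math

# Illustrative datacenter-vs-laptop compute gap. All figures are assumptions.
datacenter_flops = 1e5 * 1e15   # assume ~100k accelerators at ~1e15 FLOP/s each
laptop_flops = 1e13             # assume a decent consumer GPU

gap = datacenter_flops / laptop_flops
print(f"throughput gap: {gap:.0e}x")  # 1e+07x
print(f"~{math.log10(gap):.0f} orders of magnitude, "
      f"or ~{math.log2(gap):.0f} efficiency doublings to close")
```

Seven-ish orders of magnitude is a lot, but counted in doublings it is the kind of gap that sustained algorithmic progress could plausibly close, which I take to be the sense in which it is "not all that crazy".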
I’m not sure what you’re getting at with all your “just”s. Like, it doesn’t seem like we can “just” get a data centre ban. Why would these other bans be easier? Probably you don’t mean that they would be, but I’m confused about what you do mean.
Similarly, I don’t understand which worldview “where you can’t use technological progress to make it harder to unilaterally deploy AI” you’re talking about. In particular, I don’t see such a worldview expressed in the comment you’re replying to. I’d guess you think it’s a consequence of the “drastically more costly/invasive” qualifier, but the connection is a little remote for me to follow.
For example, the IAEA has heavily curtailed research into how to build nuclear weapons more cheaply and efficiently, which seems like it applies pretty straightforwardly to algorithmic progress.
IIUC, it’s legal everywhere on Earth to do basic research that might eventually lead to a new, much more inexpensive and hard-to-monitor method to enrich uranium to weapons grade.
I’m thinking mainly of laser isotope enrichment, which was first explored in the 1970s. No super-inexpensive method has turned up, thankfully. (The best-known approach seems to be in the same ballpark as gas centrifuges in terms of cost, specialty parts etc., or if anything somewhat worse. Definitely not radically simpler and cheaper.) But I think there’s a big space of possible techniques, and meanwhile people in academia keep inventing new types of lasers and new optical excitation and separation paradigms. I don’t think there’s any general impossibility proof that kg-scale uranium enrichment in a random basement with only widely-available parts can’t ever get invented someday by this line of research.
(If it did, it probably wouldn’t be the death of nonproliferation because you can still try to monitor and control ...
Nah, my model allows ASI without massive compute at any point in the process, see “Foom & Doom 1: ‘Brain in a box in a basement’” (esp. §1.3), and maybe also “The nature of LLM algorithmic progress” §4.
Arguments criticizing the FDA often seem to weirdly ignore the "F." For all I know food safety regulations are radically overzealous too, but if so I've never noticed (or heard a case for) this causing notable harm.
Overall, my experience as a food consumer seems decent—food is cheap, and essentially never harms me in ways I expect regulators could feasibly prevent (e.g., by giving me food poisoning, heavy metal poisoning, etc). I think there may be harmful contaminants in food we haven't discovered yet, but if so I mostly don't blame the FDA for that lack of knowledge, and insofar as I do it seems an argument they're being under-zealous.
Criticizing FDA food regulations is a niche; it is hard to criticize 'the unseen', especially when it's mostly about pleasure and the FDA is crying: 'We're saving lives! Won't someone think of the children? How can you disagree, just to stuff your face? Shouldn't you be on a diet anyway?'
But if you go looking, you'll find tons of it: pasteurized cheese and milk being a major flashpoint, as apparently the original unpasteurized versions are a lot tastier. (I'm reminded of things like beef tallow for fries or Chipotle - how do you know how good McDonald's french fries used to taste before an overzealous crusader destroyed them if you weren't there 30+ years ago? And are you really going to stand up and argue 'I think that we should let people eat fries made with cow fat, because I am probably a lardass who loves fries and weighs 300 pounds, rather than listen to The Science™'?) There's also the recent backfiring of overzealous allergy regulations, which threatens to cut off a large fraction of the entire American food supply to people with sesame & peanut allergies, due solely to the FDA. (Naturally, of course, the companies get the blame.) Similarly, I read food industry peop...
I was surprised to find a literature review about probiotics which suggested they may have significant CNS effects. The tl;dr of the review seems to be: 1) You want doses of at least 10^9 or 10^10 CFU, and 2) You want, in particular, the strains B. longum, B. breve, B. infantis, L. helveticus, L. rhamnosus, L. plantarum, and L. casei.
I then sorted the top 15 results on Amazon for "probiotic" by these desiderata, and found that this one seems to be best.
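In case it's useful, here is a hypothetical sketch of that sorting procedure; the product data is invented, and the scoring is just one way to operationalize the review's two desiderata:

```python
# Score products by the review's desiderata: dose >= 1e9 CFU, then coverage
# of the named strains. The product data below is invented for illustration.
TARGET_STRAINS = {"B. longum", "B. breve", "B. infantis", "L. helveticus",
                  "L. rhamnosus", "L. plantarum", "L. casei"}

products = [
    {"name": "Product A", "cfu": 5e10, "strains": {"B. longum", "L. rhamnosus"}},
    {"name": "Product B", "cfu": 1e9,  "strains": {"L. casei"}},
    {"name": "Product C", "cfu": 1e8,  "strains": {"B. breve", "L. plantarum"}},
]

def score(product):
    meets_dose = product["cfu"] >= 1e9
    coverage = len(product["strains"] & TARGET_STRAINS)
    return (meets_dose, coverage, product["cfu"])

for p in sorted(products, key=score, reverse=True):
    print(p["name"], score(p))
```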
Some points of uncertainty:
For convenience, here's a slightly edited-for-clarity version of the abstract:
38 studies (all randomized controlled trials) were included: 25 in animals and 15 in humans (2 studies were conducted in both). Most studies used Bifidobacterium (e.g., B. longum, B. breve, and B. infantis) and Lactobacillus (e.g., L. helveticus and L. rhamnosus), with doses between 10^9 and 10^10 colony-forming units for 2 weeks in animals and 4 weeks in humans.
These probiotics showed efficacy in improving psychiatric disorder-related behaviors including anxiety, depression, autism spectrum disorder (ASD), obsessive-compulsive disorder, and memory abilities, including spatial and non-spatial memory.
Because many of the basic science studies showed some efficacy of probiotics on central nervous system function, this background may guide and promote further preclinical and clinical studies. Translating animal studies to human studies has obvious limitations but also suggests possibilities. Here, we provide several suggestions for the translation of animal studies. More experimental designs with both behavioral and neuroimaging measures in healthy volunteers and patients are needed in the future.
Possibly another good example of scientists failing to use More Dakka. The mice studies all showed solid effects, but then the human studies used the same dose range (10^9 or 10^10 CFU) and only about half showed effects! I googled for negative side effects of probiotics, and the Healthline result really had to stretch to find anything bad. I wonder if, as much larger organisms, we should just be jacking up the dosage quite a bit.
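The mass-scaling intuition is easy to make explicit (body masses below are ballpark figures I'm assuming):

```python
# Per-kilogram dose comparison behind the "jack up the dosage" intuition.
mouse_kg, human_kg = 0.02, 70   # assumed ballpark body masses
dose_cfu = 1e10                 # top of the range used in both mice and humans

mouse_per_kg = dose_cfu / mouse_kg   # 5e11 CFU/kg
human_per_kg = dose_cfu / human_kg   # ~1.4e8 CFU/kg
print(f"humans get ~{mouse_per_kg / human_per_kg:,.0f}x less per kg")  # ~3,500x
```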
In the early 1900s the Smithsonian Institution published a book each year, which mostly just described their organizational and budget updates. But they each also contained a General Appendix at the end, which seems to have served a function analogous to the modern "Edge" essays—reflections by scientists of the time on key questions of interest. For example, the 1929 book includes essays speculating about what "life" and "light" are, how insects fly, etc.
Apparently Ötzi the Iceman still has a significant amount of brain tissue. Conceivably memories are preserved?
I found LinkedIn's background breakdown of DeepMind employees interesting; fewer neuroscience backgrounds than I would have expected.