10 million times faster is really a lot - on modern hardware, running SOTA object segmentation models at even 60fps is quite hard, and those are usually much much smaller than the kinds of AIs we would think about in the context of AI risk.
But - 100x faster is totally plausible (especially w/100x the energy consumption!) - and I think the argument still mostly works at that much more conservative speedup.
it's completely implausible they'd run their entire processing system 10 million times faster, yeah. running a full brain generates heat, and that heat has to dissipate; there aren't speed shortcuts around that.
our fastest neurons are on the order of 1000 hz, and our median neurons are on the order of 1 hz. it's the fast paths through the network that set the speed of the fastest reasoning. the question, then, is how much a learning system can distill its most time-sensitive reasoning into the fast paths - eg, self-directed distillation of a skill into an accelerated network that calls out to the slower one.
there's no need for a being to run their entire brain at speed. being able to generate a program to run at 1ghz that can outsmart a human's motor control is not difficult - flies are able to outsmart our motor control by running at a higher frequency despite being much smaller than us in every single way. this is the video I would link to show how much frequency matters: https://www.youtube.com/watch?v=Gvg242U2YfQ
In the same vein, I humbly suggest "The entire bee movie but every time they say bee it gets faster" as a good model for what the singularity will seem like from our perspective.
This is surprisingly on-point since
Bee Movie is about slight perturbations causing the end of the world unexpectedly.
(Bee Movie isn't quite "good" but it sure is an interesting experience if you go in not knowing anything about the plot)
Especially since the bees were clearly maximizing their reward function, and succeeded astronomically beyond their imagination, and it ended horribly for them and everyone else.
This does seem like a helpful intuition pump.
Curious how many people you've tried this with, and what sort of specific responses they tend to have.
I'd say I've tried it with around 30 people? With around 15 I showed the video, and with around 15 I didn't. In all cases they seemed more thoughtful once I made the (humans : AI) :: (plants : humans) analogy, and when I showed the video they seemed to spend considerably more time generating independent thoughts of their own about how things could go wrong.
Of course, speed isn't the only thing that matters, isn't necessary, isn't sufficient, etc., etc. But it's a big deal in a lot of scenarios, and it helps to get people thinking about it.
Cool idea. By default I might suggest this video instead - very similar to yours, but with a girl running down the track, so you can actually see how slowed down it is (as opposed to it looking like a still frame).
(Small exception to Critch's video looking like a still frame: There's a dude with a moving hand at 0:45.)
I am quite surprised by the relative stillness of the people contrasted to the girl's running. Do people really not move at all in the time it takes someone to run several person-widths?
This may be persuasive, but does it pump intuitions in the direction of an accurate assessment of AGI risk? While you never explicitly state that this is your goal, I think it's safe to assume given that you're posting on LW.
As Nikita Sokolsky argued here, it's not clear that a 10-million fold difference in processing speed leads to a 10-million fold difference in capabilities. Even a superintelligent AI may be restricted to manipulating the world via physical processes and mechanisms that take human-scale time to execute. To establish a unique danger from AGI, it seems important to identify concrete attack vectors that are available to an AGI, but not to humans, due to the processing speed differential.
While it may be that a person hearing this "slow-motion camera" argument can conceive of this objection on their own, I think the point of an intuition pump is to persuade somebody who's unlikely to think of it independently. For this reason, I think that identifying at least one concrete AGI-tractable, human-intractable attack vector would be a more useful and accuracy-promoting intuition pump than the "slow-motion camera" pump.
Fortunately, articulating those AGI-unique attack vectors in public is not a particularly unsafe practice. Attack ideas generated by deliberately searching for strategies that are impossible for humans but tractable for an AGI are unlikely to be more attractive to a bad actor than attacks generated by simply thinking of easy ways for a human to cause harm.
Even a superintelligent AI may be restricted to manipulating the world via physical processes and mechanisms that take human-scale time to execute. To establish a unique danger from AGI, it seems important to identify concrete attack vectors that are available to an AGI, but not to humans, due to the processing speed differential.
I see two obvious advantages for a superfast human-level AI:
Can communicate in parallel with thousands of humans (assuming the bandwidth is not a problem, so perhaps a voice call without video) while paying full attention (full human-level attention, that is) to every single one of them. Given enough time, this alone could be enough to organize some kind of revolution; if you fail to impress some specific human, you just hang up and call another. Calling everyone using a different pretext (after doing some initial research about them online), so it takes some time to realize what you are doing.
Never makes a stupid mistake just because it was distracted or did not have enough time to think about something properly. Before every sentence you say, you can consider it from several different angles, even verify some information online, while from the human's perspective you just answered immediately. While doing this, you can also pay full attention to the human's body language, etc.
Can communicate in parallel with thousands of humans (assuming the bandwidth is not a problem, so perhaps a voice call without video) while paying full attention (full human-level attention, that is) to every single one of them. Given enough time, this alone could be enough to organize some kind of revolution; if you fail to impress some specific human, you just hang up and call another. Calling everyone using a different pretext (after doing some initial research about them online), so it takes some time to realize what you are doing.
This is a good start at identifying an attack vector that AGI can do, but humans can't. I agree that an AGI might be able to research people via whatever information they have online, hold many simultaneous conversations, try to convince people to act, and observe the consequences of its efforts via things like video cameras in order to respond to the dynamically unfolding situation. It would have a very large advantage in executing such a strategy over humans.
There are some challenges.
To me, "AGI causes a world-ending revolt" still contains too much of a handwave, and too many human dependencies, to be a convincing attack vector. However, I do think you have identified a capability that would give AGI a unique advantage in its attempt. Perhaps there is some other AGI-only attack that doesn't have the challenges I listed here, that can take advantage of this ability?
I am not sure about the end game, but the beginning could be like this:
Not sure where to proceed from here, but with these assets the lack of human body does not seem like a problem anymore; if a human presence is required somewhere, just send an employee there.
You still have the ability to think 1000 times faster, or pretend to be 1000 different people at the same time.
Expanding on this, even if the above alone isn't sufficient to execute any given plan, it takes most of the force out of any notion that needing humans to operate all of the physical infrastructure is a huge impediment to whatever the AI decides to do. That level of communication bandwidth is also sufficient to stand up any number of requisite front companies, employing people that can perform complex real-world tasks and provide the credibility and embodiment required to interact with existing infrastructure on human terms without raising suspicion.
Money to get that off the ground is likewise no impediment if one can work 1000 jobs at once, and convincingly impersonate a separate person for each one.
Doing this all covertly would seemingly require first securing high-bandwidth unmonitored channels where this won't raise alarms, so either convincing the experimenters it's entirely benign, getting them to greenlight something indistinguishable-to-humans from what it wants to do, or otherwise covertly escaping the lab.
Adding to the challenge, any hypothetical "Pivotal Act" would necessarily be such an indistinguishable-to-humans cover for malign action. Presumably the AI would either be asked to convince people en masse or take direct physical action on a global scale.
For a person at a starting point of the form {AGI doesn't pose a risk / I don't get it}, I'd say this video+argument pushes thinking in a more robustly accurate direction than most brief-and-understandable arguments I've seen. Another okay brief-and-understandable argument is the analogy "humans don't respect gorillas or ants very much, so why assume AI will respect humans?", but I think that argument smuggles in lots of cognitive architecture assumptions that are less robustly true across possible futures, by comparison to the speed advantage argument (which seems robustly valid across most futures, and important).
It sounds like you're advocating starting with the slow-motion camera concept, and then graduating into brainstorming AGI attack vectors and defenses until the other person becomes convinced that there's a lot of ways to launch a conclusive humanity-ending attack and no way to stop them all.
My concern with the overall strategy is that the slow-motion camera argument may promote a way of thinking about these attacks and defenses that becomes unmoored from the speed at which physical processes can occur, and the accuracy with which they can be usefully predicted even by an AGI that's extremely fast and intelligent. Most people do not have sufficient appreciation for just how complex the world is, how much processing power it would take to solve NP-hard problems, or how crucial the difference is between 95% right and 100% right in many cases.
If your objective is to convince people that AGI is something to take seriously as a potential threat, I think your approach would be accuracy-promoting if it moves people from "I don't get it/no way" to "that sounds concerning - worth more research!" If it moves people to forget or ignore the possibility that AGI might be severely bottlenecked by the speed of physical processes, including the physical processes of human thought and action, then I think it would be at best neutral in its effects on people's epistemics.
However, I do very much support and approve of the effort to find an accuracy-promoting and well-communicated way to educate and raise discussion about these issues. My question here is about the specific execution, not the overall goal, which I think is good.
I agree that it's worth thinking critically about the ways an AGI could be bottlenecked by the speed of physical processes. While this is an important area of study and thought, I don't see how "there could be this bottleneck though!" matters to the discussion. It's true. There likely is this bottleneck. How big or small it is requires some thought and study, but that thought and study presupposes you already have an account of why the bottleneck operates as a real bottleneck from the perspective of a plausibly existing AGI.
I can vouch for this. Whenever you explain the problem to someone (e.g. a policymaker) using quickdraw arguments, you tend to get responses like "have them make it act/think like a human" or "give it a position on the team of operators so it won't feel the need to compromise the operators".
But as far as quickdraw arguments go, this is clearly top notch, and the hook value alone merits significant experimentation with test audiences. This might be the thing that belongs in everyone's back pockets; when watching Schwarzenegger's Terminator (1984) and Terminator 2 (1991), virtually all viewers fail to notice how often the robot misses its shots even though it has several seconds to aim for the head.
I think this is one place where reading science fiction improves people's judgment. Without reading so many monologues and descriptions of AI making decisions in milliseconds, I'd probably not be able to bring this to mind as plausible nearly as easily.
Now, when you try imagining things turning out fine for humanity over the course of a year, try imagining advanced AI technology running all over the world and making all kinds of decisions and actions 10 million times faster than us, for 10 million subjective years. [emphasis and de-emphasis mine]
Would actions also be 10 million times faster? I'm too tired to do the math, but I think actuators tend to have a characteristic time-scale below which they're hard to control. Similarly, the speed of light limits how quickly you can communicate stuff to different places on earth.
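As a rough back-of-the-envelope bound on that last point (my numbers, not anything from the post): a signal travelling halfway around the Earth covers about 20,000 km, so even at the speed of light in vacuum the one-way latency is tens of milliseconds, and real fiber routes are noticeably slower than that.

```latex
t_{\text{one-way}} \;\gtrsim\; \frac{2 \times 10^{4}\ \text{km}}{3 \times 10^{5}\ \text{km/s}} \;\approx\; 67\ \text{ms}
```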
What is the source on this being slowed down by 100x?
Here it says Magyar filmed at 1300 fps, and the video info says 50 fps; does this imply 1300/50=26x slow down?
Also, if the video is 120 seconds long, 100x slow down implies the train stopping took 1.2 seconds, which seems too fast.
26x slow down implies the train took 4.6 seconds to stop, which seems more plausible to me.
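A quick sanity check of that arithmetic, as a minimal sketch using only the numbers quoted above (1300 fps capture, 50 fps playback, a ~120-second video):

```python
# Sanity-check the slowdown arithmetic with the figures quoted above:
# 1300 fps capture, 50 fps playback, ~120 s of playback.

capture_fps = 1300      # frame rate Magyar reportedly filmed at
playback_fps = 50       # frame rate listed in the video info
video_length_s = 120    # approximate playback length in seconds

slowdown = capture_fps / playback_fps
print(f"Implied slowdown: {slowdown:.0f}x")                  # 26x

# Real-world duration of the filmed event under each slowdown assumption:
print(f"Train stop at 100x: {video_length_s / 100:.1f} s")   # 1.2 s
print(f"Train stop at {slowdown:.0f}x: {video_length_s / slowdown:.1f} s")  # ~4.6 s
```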
Points from this post I agree with:
My objection is primarily around the fact that having a 100x faster processing power wouldn't automatically allow you to do things 100x faster in the physical world:
I'd also note that the energy required to speed up a physical action increases with the square of the velocity.
So let's take a military drone that normally must get confirmation from a human operator before firing at a target. This is the bottleneck for its firing. If an AI takes full control of this drone, the drone is now bottlenecked by things like:
If the motion of the drone were sped up by 100x to match the AI's 100x faster processing speed, then this would require at least a 10,000x increase in energy.
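To spell out where the 10,000x comes from (just the standard kinetic energy formula, treating the drone's velocity as the quantity being scaled):

```latex
E_k = \tfrac{1}{2} m v^2
\quad\Longrightarrow\quad
\frac{E_k(100v)}{E_k(v)} = \frac{\tfrac{1}{2} m (100v)^2}{\tfrac{1}{2} m v^2} = 100^2 = 10{,}000
```

(And the instantaneous power requirement scales even more steeply, since that energy has to be delivered in one hundredth of the time.)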
Currently existing technology is typically engineered to tolerate only the demands it was originally designed for. Presently existing drones can't just be commandeered by an AI and made to move at 100x their normal speed.
This also applies to whatever robots would be necessary for the AI to build a drone army capable of taking full advantage of the AI's faster processing power. And the AI can't just pull 10,000x the energy from our present infrastructure. It would have to build an infrastructure capable of supplying that amount of energy using presently existing infrastructure.
It might be that an AGI could achieve a 100x gain in the efficiency in achieving its goals via its superior processing power, constant operation, and ~total self-control. For example, it might be able to figure out a way of attacking using drones that much more efficiently destroys the morale and coordination abilities of its opponent, while still operating at normal drone speed.
2 seems more worrying than reassuring. If you have to rely on human action, you'll be slowed down. So AIs that can route around humans, or humans who can delegate more decision-making to AI systems, will have a competitive advantage over AIs that don't do that. If we're talking about AGI + decent robotics, there's in principle nothing that AIs need humans for.
3: "useless without full information" is presumably hyperbole, but I also object to weaker claims like "being 100x faster is less than half as useful as you think, if you haven't considered that spying is non-trivial". Random analogy: Consider a conflict (e.g. a war or competition between two firms) except that one side (i) gets only 4 days per year, and (ii) gets a very well-secured room to discuss decisions in. Benefit (ii) doesn't really seem to help much against the disadvantage from (i)!
Recently I've had success introducing people to AI risk by telling them about ELK, and specifically how human simulation is likely to be favored. Providing this or a similarly general argument, e.g. that power-seeking is convergent, seems both more intuitive (humans can also do simulation and power-seeking) and faithful to actual AI risk drivers to me than the video speed angle? ELK and power-seeking are also useful complements to specific AI risk scenarios.
The video speed framing seems to make the undesirable suggestion that AI ontology will be human-but-faster. I would prefer an example which highlighted the likely differences in ontology. Examples highlighting ontology mismatch have the benefit of neatly motivating the problems of value learning and misaligned proxy objectives.
I am confused about whether the videos are real and exactly how much faster AIs could be run. But I think at the very least it's a promising direction to look for grokkable bounds on how advanced AI will go.
Episode 4x11 of Person of Interest clearly demonstrates what you're talking about.
In a nutshell: the team is trapped somewhere, and we watch an ASI start calculating how to save them. It takes the ASI about 10 seconds, and if I remember correctly, it finds the most suitable option out of 800,000 possible scenarios. It uses security cameras to monitor people, meaning it monitors almost everyone, and it runs similar calculations for all the people it watches.
It's crazy to even imagine.
Bizarre coincidence. Or maybe not.
Last night I was having 'the conversation' with a close friend and also found that the idea of speed of action was essential for explaining things without having to give a specific 'story'. We are both former StarCraft players, so discussing things in terms of an ideal version of AlphaStar proved illustrative. If you know StarCraft, the idea of an agent able to maximize the damage dealt and minimize the damage taken for every unit, with the minerals mined, resources expended, dancing, casting, building, expanding, and replenishing all handled to the utmost degree, reveals the impossibility of a human being able to win against such an agent.
We wound up quite hung up on two objections. 1) Well, people are suspicious of AIs already, and 2) just don't give these agents access to the material world. And although we came to agreement on the replies to these objections, by that point we are far enough down the inferential working memory that the argument doesn't strike a chord anymore.
I like using the intuition pump of AI : humans :: humans : apes. Imagine apes had the choice of whether to create humans or not. They could sit there and argue about how humans will share ape values because they're descended from apes, or about how humans pose an existential risk to apes, or some such.
Humans may be dangerous because they'll be smarter than us apes. Maybe humans will figure out how to get those bananas at the very top of the tree without risk of falling, then humans will have a massive advantage over apes. Maybe humans will better know how to hide from leopards; they'll be able to hurt apes by attracting leopards to the colony and then hiding. Humans might be dangerous, but if we contain them or ensure that they share ape values then us apes will be better off.
And then humans take over the whole world, and apes live in artificial habitats entertaining us or survive in the wild only due to our mercy. We're just too stupid to reasonably think of the ways AI will be able to defeat us. We're sitting here with a boxed AI thinking about the risk of nanotech while the AI is creating IRL magic by warping the electric field of the world using just its transistors.
Like, we're so stupid we don't even know how to spontaneously generate biological life. The upper bound on intelligence is way above where we're at now.
I'm not a fan of saying that AIs will have a 10 million x speedup relative to humans. That seems very unlikely to happen on this side of the singularity. Probably, future AGI hardware will increasingly resemble the brain, and AGIs won't have anything close to a 10-million-x serial clock speed advantage over humans.
"human" objects around that could easily be taken apart for, say, biofuel or carbon atoms
This is one aspect of the discussion that never sits right with me: the idea that what might interest a future superintelligence is our "atoms" and not our standing as the only thing that's ever created a superintelligence so far. There are lots of more efficient fuels and more readily obtainable sources of carbon atoms than all the humans scurrying (or lumbering, to take the point of your post) around the earth.
I suppose the charitable interpretation of this is that a superintelligence will make little distinction between a human and the concrete wall they're standing next to, in terms of where it might choose to scoop up some matter?
Transistors can fire about 10 million times faster than human brain cells
Does anyone have a citation for this claim?
I think we’re dividing 1GHz by 100Hz.
The 1GHz clock speed for microprocessors is straightforward.
The 100Hz for the brain is a bit complicated. If we’re just talking about how frequently a neuron can fire, then sure, 100Hz is about right I think. Or if we’re talking about, like, what would be a plausible time-discretization of a hypothetical brain simulation, then it’s more complicated. There are certainly some situations like sound localization where neuron firing timestamps are meaningful down to the several-microsecond level, I think. But leaving those aside, by and large, I would guess that knowing when each neuron fired to 10ms (100Hz) accuracy is probably adequate, plus or minus an order of magnitude, I think, maybe? ¯\_(ツ)_/¯
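Spelled out, the arithmetic behind the headline number (under the 1 GHz vs. 100 Hz assumptions above) is just:

```latex
\frac{1\ \text{GHz}}{100\ \text{Hz}} = \frac{10^{9}\ \text{Hz}}{10^{2}\ \text{Hz}} = 10^{7} = 10\ \text{million}
```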
Also along these lines, perhaps contrasting the flicker fusion rates of different species could be illustrative as well. Here's a 30-second video displaying the relative perceptions of a handful of species side by side: https://www.youtube.com/watch?v=eA--1YoXHIQ . Additionally, a short section from 10:22 - 10:43 of this other video that incorporates time-stretched audio of birdcalls is fairly evocative: https://www.youtube.com/watch?v=Gvg242U2YfQ .
This reminds me of Eliezer's short story That Alien Message, which is told from the other side of the speed divide. There's also Freitas' "sentience quotient" idea upper-bounding information-processing rate per unit mass at SQ +50 (it's log scale -- for reference, human brains are +13, all neuronal brains are several points away, vegetative SQ is -2, etc).
This is a fantastic idea! It manages to successfully convey how much more powerful an artificial brain could be than a human one.
I can't help pointing out one thing, which is that an AGI trying to take over the world would pretty much need to manipulate/interact with humans, and their reaction time/processing speed would be effectively a bottleneck for the AGI.
tl;dr: When making the case for AI as a risk to humanity, try showing people an evocative illustration of what differences in processing speeds can look like, such as this video.
Over the past ~12 years of making the case for AI x-risk to various people inside and outside academia, I've found folks often ask for a single story of how AI "goes off the rails". When given a plausible story, the mind just thinks of a way humanity could avoid that-particular-story, and goes back to thinking there's no risk, unless provided with another story, or another, etc. Eventually this can lead to a realization that there's a lot of ways for humanity to die, and a correspondingly high level of risk, but it takes a while.
Nowadays, before getting into a bunch of specific stories, I try to say something more general, like this:
https://vimeo.com/83664407 <-- (cred to an anonymous friend for suggesting this one)
[At this point, I wait for the person I'm chatting with to watch the video.]
Now, when you try imagining things turning out fine for humanity over the course of a year, try imagining advanced AI technology running all over the world and making all kinds of decisions and actions 10 million times faster than us, for 10 million subjective years. Meanwhile, there are these nearly-stationary plant-like or rock-like "human" objects around that could easily be taken apart for, say, biofuel or carbon atoms, if you could just get started building a human-disassembler. Visualizing things this way, you can start to see all the ways that a digital civilization can develop very quickly into a situation where there are no humans left alive, just as human civilization doesn't show much regard for plants or wildlife or insects.
I've found this kind of argument — including an actual 30 second pause to watch a video in the middle of the conversation — to be more persuasive than trying to tell a single, specific story, so I thought I'd share it.