
Well, it's an interesting opinion piece. But I have no idea what reference to quantum computing and the Landauer limit is supposed to prove. Intelligence explosion is in part about compute overhang, sure, but we can have compute overhang without quantum computing, just by being inefficient about training.

Basically, it means that mildly superhuman intelligences trained under the Chinchilla scaling laws must take physical actions, like trying to gain more energy for more compute; they can't just get more intelligent by orders of magnitude in minutes, hours, or weeks. Such takeovers will take years to gain a significant fraction of the global economy and power.

In other words, Landauer's limit acts as a firebreak for fast takeoff: an AI can't get energy from nowhere, and physical systems do not scale fast enough for the intelligence explosion to work.

AI will take over; it just won't be as fast as Yudkowsky thinks it will be.

[This comment is no longer endorsed by its author]

Landauer's limit only 'proves' that when you stack it on a pile of assumptions a mile high about how everything works, all of which are more questionable than it. It is about as reliable a proof as saying 'random task X is NP-hard, therefore, no x-risk from AI'; to paraphrase Russell, arguments from complexity or Landauer have all the advantages of theft over honest toil...

Well, the assumptions are that everyone is using classical computers and that thermodynamics holds, so they must get energy from somewhere (albeit acquiring it with superhuman skill). That's the list of assumptions necessary to prove the Intelligence Explosion scenario wrong.

I've already accepted that a superhumanly intelligent AI can take over the world in several years, which is arguably somewhat too fast for the public to realize, and AI x-risk chances are in my own view around 30-60% this century, so this is still no joking matter. AI alignment still matters, and it still needs more researchers and money.

Now I will be wrong if quantum/reversible computers become more practical than they are currently, and companies can reliably train AI on quantum/reversible computers.

My post is more of a retrospective, as well as a claim that things take time to manifest their full impact.

EDIT: My point is, the Singularity as envisioned by Ray Kurzweil and John Smart has already happened, it just takes time from now to the end of the century. The end of the century will be very weird. It's just more continuously weird, with a discontinuity unlocking new possibilities that are then claimed by continuous effort by AGI and ASI.

[This comment is no longer endorsed by its author]

That's the list of assumptions necessary to prove the intelligence Explosion scenario wrong.

No, it's not. As I said, a skyscraper of assumptions, each more dubious than the last. The entire line of reasoning from fundamental physics is useless because all you get is vacuous bounds like 'if a kg of mass can do 5.4e50 quantum operations per second and the earth is 6e24 kg, then that bounds available operations at 3e65 operations per second' - which is completely useless, because why would you constrain it to just the earth? (Not even going to bother trying to find a classical number to use as an example - they are all, to put it technically, 'very big'.)

Why are the numbers spat out by appeal to fundamental limits of reversible computation, such as but far from limited to 3e75 ops/s, not enough to do pretty much anything compared to the status quo of systems topping out at ~1.1 exaflops, or 1.1e18 - 57 orders of magnitude below that one random guess? Why shouldn't we say "there's plenty of room at the top"? Even if there weren't, and you could 'only' go another 20 orders of magnitude, so what? What, exactly, would it be unable to do that it could if you subtracted or added 10 orders of magnitude*, and how do you know that? Why would this not decisively change economics, technology, politics, recursive AI scaling research, and everything else?

If you argue that this means it can't do something in seconds and would instead take hours, how is that not an 'intelligence explosion' in the Vingean sense of being an asymptote, happening far faster than prior human transitions that took millennia or centuries, and being a singularity past which humans cannot see nor plan? Is it not an intelligence explosion but an 'intelligence gust of warm wind' if it takes a week instead of a day? Should we talk about the intelligence sirocco instead? This is why I say the most reliable parts of your 'proof' are also the least important, which is the opposite of what you need, and serve only to dazzle and 'Eulerize' the innumerate.

* btw I lied; that multiplies to 3e75, not 3e65. Did you notice?
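For the curious, the arithmetic behind these back-of-the-envelope numbers can be checked in a few lines. This sketch assumes the bound being invoked is the Margolus-Levitin limit (at most 2E/πħ operations per second for a system of energy E), applied to the mass-energy of one kilogram and then of the Earth; the 1.1 exaflop status-quo figure is taken from the comment above.

```python
import math

# Physical constants (SI units)
hbar = 1.0546e-34      # reduced Planck constant, J*s
c = 2.998e8            # speed of light, m/s

# Margolus-Levitin bound: at most 2E / (pi * hbar) operations per second
# for a system with energy E. Use the mass-energy of 1 kg: E = m c^2.
E_kg = 1.0 * c**2
ops_per_kg = 2 * E_kg / (math.pi * hbar)
print(f"ops/s per kg:    {ops_per_kg:.2e}")      # ≈ 5.4e50, as quoted

# Scale up to the mass of the Earth (~6e24 kg)
ops_earth = ops_per_kg * 6e24
print(f"ops/s for Earth: {ops_earth:.2e}")       # ≈ 3.2e75 (not 3e65)

# Gap between that and a ~1.1 exaflop/s (1.1e18 ops/s) supercomputer
gap = math.log10(ops_earth / 1.1e18)
print(f"orders of magnitude above 1.1e18: {gap:.1f}")   # ≈ 57.5
```

The multiplication does indeed give ~3e75, confirming the footnote's admission, and the log-ratio to 1.1e18 is the "57 orders of magnitude" in the comment.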

I was talking about irreversible classical computers, where Landauer's principle bounds things much more harshly, not quantum computers relying on the much looser Margolus-Levitin limit.

To put this in perspective, there is a difference of roughly 35 orders of magnitude between the quantum computer's limit and the classical computer's limit.
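As a sketch of the classical side, Landauer's principle puts the minimum energy to erase one bit at kT ln 2, which at room temperature caps the irreversible bit erasures per second a given power budget can support. The 300 W and 5 kW figures echo the power budget mentioned elsewhere in this thread; the per-second counts are my own arithmetic, not from the original post.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact, SI definition)
T = 300.0            # room temperature, K

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln 2
e_bit = k_B * T * math.log(2)
print(f"minimum energy per bit erased: {e_bit:.2e} J")   # ≈ 2.87e-21 J

# At a fixed power budget, this caps irreversible bit erasures per second.
for watts in (300.0, 5_000.0):
    print(f"{watts:>6.0f} W -> at most {watts / e_bit:.2e} bit erasures/s")
```

Real hardware dissipates far more than kT ln 2 per operation, so these are hard ceilings, not forecasts.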

Here's a link:

While the most pessimistic conclusions are probably wrong (I think people will accept an energy cost of 300 watts to 5 kilowatts to increase computation by one or two orders of magnitude, since intelligence is very valuable and such a thing would lead to a Vernor Vinge-like singularity), this is a nice post for describing the difference.

So it actually supports my post here, since I talked about how quantum/reversible computers would favor the Intelligence Explosion story. I am pessimistic about quantum/reversible computers being practical before 2100, which is why the accelerating-change story is favored.

So you and I actually agree here; are we getting confused about something?

And black hole computers are possible, but realistically will be developed post-singularity.

[This comment is no longer endorsed by its author]

I think from an x-risk perspective the relevant threshold is: when AI no longer needs human researchers to improve itself. Currently there is no (publicly known) model which can improve itself fully automatically. The question we need to ask is, when will this thing get out of our control? Today, it still needs us.

This seems to assume that those scaling laws are a hard bound, rather than just a contingent fact about a particular machine learning method.

This assumes a fixed scaling law. One possible way of improving oneself could be to design a better architecture with a better scaling exponent.
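To make the "better scaling exponent" point concrete, here is a minimal sketch of the Chinchilla-style parametric loss L(N, D) = E + A/N^α + B/D^β. The default constants are the commonly cited fit from Hoffmann et al. (2022); the improved exponent is a purely hypothetical illustration of what a better architecture might buy at the same parameter and token budget.

```python
def chinchilla_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Parametric loss L(N, D) = E + A/N^alpha + B/D^beta.

    Defaults are the fit reported by Hoffmann et al. (2022).
    N = parameter count, D = training tokens.
    """
    return E + A / N**alpha + B / D**beta

# Roughly Chinchilla's own budget: 70B parameters, 1.4T tokens
N, D = 70e9, 1.4e12
base = chinchilla_loss(N, D)

# Hypothetical architecture with a better parameter-scaling exponent:
# the same budget now buys a lower loss.
improved = chinchilla_loss(N, D, alpha=0.40)

print(f"baseline fit loss:      {base:.3f}")
print(f"with better exponent:   {improved:.3f}")
assert improved < base
```

The point of the sketch: the "law" is a fitted curve for one training recipe, and a self-improving system could attack the exponents themselves rather than just riding the curve.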

Considering that there is, in fact, no singularity at this point, you have clearly jumped the gun on this. There is no proof that actually feasible scaling will get us even a notable fraction of the way to a singularity, so you can hardly proclaim that the no-singularity school has been shown wrong, or which pro-singularity school is right. Your reasons for believing as you do aren't even really in the essay, so it's a castle built without any foundation.

Not having good evidence is no reason at all to wilfully ignore bad-in-some-respect evidence. It's just usually not worth bothering with. Not always, and the distinction needs to be drawn the hard way.

That doesn't apply when you are trying to justify something incredibly unrelated... especially when you are clearly wrong, and clearly claim that something has happened. The singularity is so many steps and illogical leaps from where we are that there is little real evidence (all of which points away from an imminent singularity). Your claim is a completely untrue basis for the entire thing. Make a real argument; don't simply beg the question.

I'm not the OP, I'm not making claims about technological singularity in this thread. I'm making claims about the policy of dismissing unfounded claims of any kind.

I admit, I didn't actually check the name of the original poster against yours. It's not actually relevant to anything but my use of the pronoun 'you', and even that works perfectly well as a general 'you' (a quirk of English that is a bit strange). The rest of my response stands: it is a terrible argument to make about evidence. The original post is a textbook example of begging the question. Confident claims should not be made on effectively non-existent evidence.

it is a terrible argument to make about evidence

What do you mean by "argument about evidence"? I'm not arguing that the evidence is good. I'm arguing that, in general, the response to apparently terrible evidence should be a bit nuanced: there are exceptions to the rule of throwing it out, and some of them are difficult to notice. I'm not even claiming that this is one of those exceptions. So I'd call it an argument about the handling of evidence rather than an argument about evidence.

What's wrong with the argument about handling of evidence I cited?

You are definitely making an argument about the evidence's status. What's wrong is that you are smuggling in the assumption that there is evidence that is simply flawed, rather than a lack of evidence. To 'willfully ignore' evidence requires that there be real evidence! The link appears to be a non sequitur otherwise. It is hardly an isolated demand for rigor to say that, before holding a retrospective on 'why it HAPPENED, who WAS wrong, and who WAS right', there must be some evidence that the thing is happening at all. Thus, your argument is terrible.

There is a useful distinction between an argument not applying, that is being irrelevant, flawed-in-context, and it being terrible, that is invalid, flawed-in-itself.

you are smuggling in the assumption that there is evidence that is simply flawed rather than a lack of evidence

This is not an assumption that would benefit from being sneakily smuggled in. Instead this is the main claim I'm making, that even apparently missing evidence should still be considered a form of evidence, merely of a very disagreeable kind, probably not useful, yet occasionally holding a germ of truth, so not worth unconditionally dismissing.

The post I cited is targeted at impressions like this, that something is not a legitimate example of evidence, not worthy of consideration. Perhaps it's not there at all! But something even being promoted to attention, getting noticed, is itself a sort of evidence. It may be familiar to not consider that a legitimate form of evidence, but it's more accurate to say that it's probably very weak (at least in that framing, without further development), not that it should never, on general principle, interact with state of knowledge in any way (including after more consideration).

Incidentally, evidence in this context is not something that should make one believe a claim with greater strength, it could as easily make one believe the claim less (even when the piece of evidence "says" to believe it more), for that is the only reason it can be convincing in the first place. So it's not something to defend skepticism from. It just potentially shifts beliefs, or sets the stage for that eventually happening, in a direction that can't be known in advance, so its effect couldn't really be read off the label.

You did not make that claim itself. The link is not even on target for what you are claiming: it is a rant against unachievable demands for rigor, not a claim that all demands for basic coherency are wrong. The clear reason, which I already stated, for why it isn't evidence is that it is based entirely on a clearly false premise. If it doesn't apply to the question at hand, it isn't evidence! This is basic reasoning. When a thing hasn't happened, and there is no evidence it is imminent, we can't possibly have an actual retrospective, which is what this claims to be. If this were simply a post claiming that the author thinks these schools are likely to be right, and those wrong, for some actual reasons, that would be a completely different kind of thing than this, which is entirely based on a clearly false claim of fact.

which is entirely based on a clearly false claim of fact

The failure mode is (for example) systematic inability to understand a sufficiently alien worldview that keeps relying on false claims of fact. The problem is attention being trained to avoid things that are branded with falsity, bias, and fallacy, even by association. One operationalization of a solution is Ideological Turing Test, diligently training to perchance eventually succeed in pretending to be the evil peddler of bunkum. Not very appealing.

I prefer the less specifically demanding method that is charity. It's more about compartmentalizing the bunkum without snuffing it out. And it has less hazardous applications.

This is useless until it isn't, and then it's crucial.

As a clarification of my own worldview on AI and the Singularity: I am basically saying that we are in the early part of the exponential curve, and while AI's short-term effects (in, say, 5 years) are overhyped, in 50-100 years AI will change the world so much that it can be called a Singularity.

My biggest evidence comes from the WinoGrande dataset, on which GPT-3 achieves 70.3% without fine-tuning. While BERT fell back to near-chance accuracy, GPT-3 has some common sense (though worse than human common sense).

Also, GPT-3 can meta-learn languages the first time it receives the data.

And yeah, Vladimir Nesov correctly called my feelings on the no-fire-alarm issue for AI. The problem with exponentials is, in Nesov's words,

This is useless until it isn't, and then it's crucial.

And that's my theory of why AI progress looks so slow: We're in the 70s era for AI, and the people using AI are companies.

There is an opposition between skepticism and charity. Charity is not blanket promotion of sloppy reasoning, it's an insurance policy against misguided stringency in familiar modes of reasoning (exemplified by skepticism). This insurance is costly, it must pay its dues in attention to things that own reasoning finds sloppy or outright nonsense, where it doesn't agree that its stringency would be misguided. This should still be only a minor portion of the budget.

It contains the damage with compartmentalization, but reserves the option of accepting elements of alien worldviews if they ever grow up, which they sometimes unexpectedly do. The obvious danger in the technique is that you start accepting wrong things because you permitted yourself to think them first. This can be prevented with blanket skepticism (or zealous faith, as the case may be), hence the opposition.

(You are being very sloppy in the arguments. I agree with most of what deepthoughtlife said on object level things, and gwern also made relevant points. The general technique of being skeptical of your own skepticism doesn't say that skepticism is wrong in any specific case where it should be opposed. Sometimes it's wrong, but it doesn't follow in general, that's why you compartmentalize the skepticism about skepticism, to prevent it from continually damaging your skepticism.)

I'm honestly not sure you'll get anything out of my further replies since you seem to have little interest in my actual points. 

My original comment on the blatant falsehood that was the premise of the original post was as charitable as reasonably possible, considering how untrue the premise was. If there were any actual evidence supporting the claim, it could have been added by an interlocutor.

I simply pointed out its most glaring flaws in plain language without value judgment. The main premise was false for even the most charitable version of the post. To be any more charitable would have required me to rewrite the entire thing since the false premise was written so deeply into the fabric of things. If they had restricted themselves to plausible things, it is quite possible it would have been a useful post, but this wasn't.

I have neither the time nor the inclination to write out the entirety of what the argument should have been. I don't really believe in Ideological Turing Tests, just like I don't believe that Turing tests are a great measure for AI. It's not that there aren't uses for them; it's just that those are niche (though an AI that reliably passes the Turing test could make a lot of money for its creators). I don't have forever to fix bad arguments.

A basic outline of the initial argument in the original post is:

1) The singularity is moving slowly, but already upon us. We are in takeoff.

2) The takeoff will remain slow (though quick enough to be startling).

3) Thus the no-singularity school is clearly wrong.

4) Fast takeoff is also clearly false.

5) Scaling is all that matters.

6) Since scaling will be so expensive due to Landauer's principle, high-end AGI will happen, but not in private hands for a long time.

There are several more 'implications' of this that I won't bother writing because they clearly rely on the former things.

1 is meant to prove 3, 4, and 5 directly, while 2 absolutely needs 1 to be true.

2 and 5 are meant to set the stage for 6.

These were largely bare assertions (which there is a place for). I objected that point 1 was clearly false, rendering points 3, 4, 5, and 6 impossible to judge based on this argument structure and the available evidence, and point 2 clearly meaningless (since it is defined falsely: something cannot remain a certain way if it is not already that way). (Even though I agree with 4 and 6, and I could rewrite their argument to make 2 much more sensible.) Since there was no extra evidence beyond a known unsound argument, the rest of it was rendered irrelevant. The leap from 1 to 5 would be quite weak even if 1 were not false.

7) And thus we should be very scared of not having any wake-up call for when AI will become very dangerous.

7 is clearly unsupported at this point since all of the assumptions leading here are useless.

I like a good hypothetical, but I don't really have any interest in continuing to engage with things that are that wrong factually, and won't admit it.

'The moon really is made of cheese, so what does that mean for how we should approach the sun?' That is literally the level of uselessness I find this approach to detailing the state of AI, and how that relates to how we should approach alignment, to have. (Like I said, the version in my original comment was the charitable one.)

I could make an argument for or against anything they claim in this post, but it wouldn't be a response to what they actually wrote, and I don't see how that would be useful.

Cute terminology I came up with while talking about this recently: 'zoom-foom gap'

I like to call the accelerating-change period, where AI helps accelerate further AI but only by working with humans at a less-than-human-contribution level, the 'zoom', in contrast to the super-exponential artificial-intelligence-independently-improving-itself 'foom'. Thus, the period we are currently in is the 'zoom' period, and the oh-shit-we're-screwed-if-we-don't-have-AI-alignment period is the 'foom' period. The future critical juncture wherein we realize we could initiate a foom with then-present technology, but restrain ourselves because we know we haven't yet nailed AI alignment, I call the 'zoom-foom gap'. This gap could be as little as seconds, while a usually-overconfident capabilities engineer pauses just for a moment with their finger over the enter key, or as long as a couple of years, while the new model repeatedly fails the safety evaluations in its secure box despite repeated attempts to align it and thus wisely doesn't get released. 'Extending the zoom-foom gap' is thus a key point of my argument for why we should build a model-architecture-agnostic secure evaluation box.

TLDR: I like having a reason to use the term 'zoom-foom gap'.
