A traditional Turing machine doesn't distinguish between program and data; that distinction is really a hardware efficiency optimization that came from the Harvard architecture. Since many systems are Turing complete, creating an immutable program seems impossible to me.

For example, a system capable of speech could exploit the Turing completeness of formal grammars to execute de novo subroutines.

A second example: hackers were able to exploit the surprising Turing completeness of an image compression standard (JBIG2) to embed a virtual machine in what appeared to be a GIF.

https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html

I feel like an important lesson to learn from the analogy to air conditioners is that some technologies are bounded by physics and cannot improve quickly (or at all). I doubt anyone has the data, but I would be surprised if average air conditioner efficiency (BTU/h per watt) plotted over the 20th century is not a sigmoid.

For seeing through the fog of war, I'm reminded of the German Tank Problem.  

https://en.wikipedia.org/wiki/German_tank_problem

Statistical estimates were ~50x more accurate than intelligence estimates in the canonical example. When you include the strong and reasonable incentives for all participants to propagandize, it is nearly impossible to get accurate information about an ongoing conflict.
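To make the statistical side concrete, here is a minimal sketch of the standard frequentist serial-number estimator behind the German tank problem (the captured-tank count and serial number below are made-up illustration values):

```python
def german_tank_estimate(max_serial_observed: int, sample_size: int) -> float:
    """Minimum-variance unbiased estimate of the total population size,
    given serial numbers sampled without replacement: m + m/k - 1."""
    m, k = max_serial_observed, sample_size
    return m + m / k - 1

# Made-up illustration: 4 captured tanks, highest observed serial number 60.
print(german_tank_estimate(60, 4))  # 74.0
```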

I think that as rationalists, if we're going to see more clearly than conventional wisdom, we need to find sources of information that have a more fundamental basis. I don't yet know what those would be.

In reality, an AI can use algorithms that find a pretty good solution most of the time. 

If you replace "AI" with "ML", I agree with this point. And yep, this is what we can do with the networks we're scaling. But "pretty good most of the time" doesn't get you an x-risk intelligence. It gets you some really cool tools.

If the 3 sat algorithm is O(n^4) then this algorithm might not be that useful compared to other approaches. 

If 3-SAT is O(n^4), then P=NP and we're back to Aaronson's point: the fundamental structure of reality is much different than we think it is. (Did you mean 4^n? Plenty of common algorithms are quartic.)

For the many problems that are "illegible" or "hard for humans to think about" or "confusing", we are nowhere near the bound, so the AI has room to beat the pants off us with the same data. 

The assertion that "illegible" means "requiring more intelligence" rather than "ill-posed" or "underspecified" doesn't seem obvious to me.  Maybe you can expand on this? 

 

Could a superintelligence figure out relativity based on the experiences of the typical caveman? ... These clues weren't enough to lead Einstein to relativity, but Einstein was only human. 

I'm not sure I can draw the inference that it was possible to generate the theory without the key observations it explains. What I'm grasping at is how we can bound what capabilities more intelligence gives an agent. It seems intuitive to me that there must be limits, and that we can look to physics and math to try to understand them. Which leads us here:

Meaningless. Asymptotic runtime complexity is a mathematical tool that assumes an infinite sequence of ever harder problems.

I disagree. We've got a highly speculative question in front of us: "What can a machine intelligence greater than ours accomplish?" We can't really know what it would be like to be twice as smart, any more than an ape can. But if we stipulate that the machine is running on Turing-complete hardware and accept NP-hardness, then we can at least put an upper bound on the capabilities of this machine.

Concretely, I can box the machine using a post-quantum cryptographic standard and know that it lacks the computational resources to break out before the heat death of the universe. More abstractly, no AI risk scenario can require solving NP-hard problems of more than modest size. (Because of NP-completeness, this takes many problems, and many of the oft-posed risk scenarios, off the table.)
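To put rough numbers on the boxing claim, here is a back-of-the-envelope sketch; the guess rate is an assumed, deliberately generous figure, not a measurement of any real system:

```python
# Exhaustively searching a 256-bit keyspace at an assumed 10**18 guesses per second.
SECONDS_PER_YEAR = 3.15e7
keyspace = 2 ** 256               # ~1.2e77 candidate keys
guesses_per_second = 1e18         # assumption, chosen to be generous
years = keyspace / guesses_per_second / SECONDS_PER_YEAR
print(f"{years:.2e} years")       # ~3.7e51 years, far beyond any cosmological timescale
```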

I think less than human intelligence is sufficient for an x-risk because that is probably what is sufficient for a takeoff.

If less-than-human intelligence is sufficient, wouldn't humans have already done it? (Or are you saying we're doing it right now?)

How intelligent does an agent need to be to send a HTTP request to the URL /ldap://myfirstrootkit.com on a few million domains?

A human could do this, or write a bot to do this (and they've tried). But they'd also be detected, as would an AI. I don't see this as an x-risk so much as a manageable problem.

(GPT-3 needed like 1k discrete GPUs to train. Nvidia alone ships something on the order of >73,000k discrete GPUs... per year. How fast exactly do you think returns diminish

I suspect they'll diminish exponentially, because the threat requires solving problems of exponential hardness. To me, "1% of annual Nvidia GPUs" or "0.1% of annual GPU production" sounds like we're at roughly N-3, where N is the problem size we could solve by using 100% of annual GPU production.
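A small sketch of the arithmetic behind that intuition; the 2**n cost model is the assumption doing all the work here:

```python
import math

# If solving an instance of size n costs ~2**n operations, multiplying the available
# compute by a factor f only increases the solvable instance size by log2(f).
def extra_instance_size(compute_multiplier: float) -> float:
    return math.log2(compute_multiplier)

print(extra_instance_size(1_000))      # ~10: 1000x the GPUs buys ~10 more units of problem size
print(extra_instance_size(1_000_000))  # ~20
```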

how confident are you that there are precisely zero capability spikes anywhere in the human and superhuman regimes?

I'm not confident in that.  

I spent some time reading the Grinblatt paper. Thanks again for the link. I stand corrected on IQ being uncorrelated with stock prediction. One part did catch my eye:

Our findings relate to three strands of the literature. First, the IQ and trading behavior analysis builds on mounting evidence that individual investors exhibit wealth-reducing behavioral biases. Research, exemplified by Barber and Odean (2000, 2001, 2002), Grinblatt and Keloharju (2001), Rashes (2001), Campbell (2006), and Calvet, Campbell, and Sodini (2007, 2009a, 2009b), shows that these investors grossly under-diversify, trade too much, enter wrong ticker symbols, are subject to the disposition effect, and buy index funds with exorbitant expense ratios. Behavioral biases like these may partly explain why so many individual investors lose when trading in the stock market (as suggested in Odean (1999), Barber, Lee, Liu, and Odean (2009); and, for Finland, Grinblatt and Keloharju (2000)). IQ is a fundamental attribute that seems likely to correlate with wealth-inhibiting behaviors.

I went to some of the references; this one seemed like a particularly cogent summary:

https://faculty.haas.berkeley.edu/odean/papers%20current%20versions/behavior%20of%20individual%20investors.pdf

The take-home seems to be that high-IQ investors exceed the performance of low-IQ investors, but institutional investors exceed the performance of individual investors. Maybe it is just institutions selecting the smartest people, but another coherent view is that the joint intelligence of the group (the "institution") exceeds the intelligence of high-IQ individuals. We might need more data to figure it out.

We don't know that, P vs NP is an unproved conjecture. Most real world problems are not giant knapsack problems. And there are algorithms that quickly produce answers that are close to optimal. Actually, most of the real use of intelligence is not a complexity theory problem at all. "Is inventing transistors a O(n) or an O(2^n) problem?"

 

P vs. NP is unproven. But I disagree that "most real world problems are not giant knapsack problems": the Cook-Levin theorem (and the Karp reductions that followed) showed that many of the most interesting problems are NP-complete, and hence reducible to one another. I'm going to quote this paper by Scott Aaronson; it is a great read and I hope you check out the whole thing. https://www.scottaaronson.com/papers/npcomplete.pdf

Even many computer scientists do not seem to appreciate how different the world would be if we could solve NP-complete problems efficiently. I have heard it said, with a straight face, that a proof of P = NP would be important because it would let airlines schedule their flights better, or shipping companies pack more boxes in their trucks! One person who did understand was Gödel. In his celebrated 1956 letter to von Neumann (see [69]), in which he first raised the P versus NP question, Gödel says that a linear or quadratic-time procedure for what we now call NP-complete problems would have “consequences of the greatest magnitude.” For such a procedure “would clearly indicate that, despite the unsolvability of the Entscheidungsproblem, the mental effort of the mathematician in the case of yes-or-no questions could be completely replaced by machines.”

But it would indicate even more. If such a procedure existed, then we could quickly find the smallest Boolean circuits that output (say) a table of historical stock market data, or the human genome, or the complete works of Shakespeare. It seems entirely conceivable that, by analyzing these circuits, we could make an easy fortune on Wall Street, or retrace evolution, or even generate Shakespeare’s 38th play. For broadly speaking, that which we can compress we can understand, and that which we can understand we can predict. Indeed, in a recent book [12], Eric Baum argues that much of what we call ‘insight’ or ‘intelligence’ simply means finding succinct representations for our sense data. On his view, the human mind is largely a bundle of hacks and heuristics for this succinct-representation problem, cobbled together over a billion years of evolution. So if we could solve the general case—if knowing something was tantamount to knowing the shortest efficient description of it—then we would be almost like gods. The NP Hardness Assumption is the belief that such power will be forever beyond our reach.

I take the NP-hardness assumption as foundational. That being the case, a lot of talk of AI x-risk sounds to me like saying that AI will be an NP oracle. (For example, the idea that a highly intelligent expert system designing tractors could somehow "get out of the box" and threaten humanity would require a highly accurate predictive model, which would almost certainly contain one or more NP-complete subproblems.)
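To make the search space concrete, here is a minimal brute-force 3-SAT solver on a made-up toy instance; the NP-hardness assumption amounts to betting that, in the worst case, no algorithm escapes this kind of exponential scaling (the best known exact solvers improve the base of the exponent but remain exponential):

```python
from itertools import product

def brute_force_3sat(clauses, n_vars):
    """Try all 2**n_vars assignments. A literal +i means variable i is true,
    -i means variable i is false; each clause is a tuple of literals."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses):
            return assignment
    return None

# (x1 or x2 or not x3) and (not x1 or x3 or x2), over 3 variables
print(brute_force_3sat([(1, 2, -3), (-1, 3, 2)], 3))
```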

 But current weather prediction doesn't use that data. It just uses the weather satellite data, because it isn't smart enough to make sense of all the social media data. I mean you could argue that most of the good data is from the weather satellite. That social media data doesn't help much even if you are smart enough to use it. If that is true, that would be a way that the weather problem differs from many other problems.

Yes, I would argue that current weather prediction doesn't use social media data because cameras at optical wavelengths cannot sound the upper atmosphere. Physics means there is no free lunch from social media data.

I would argue that most real-world problems are observationally and experimentally bound. The seminal paper on the photoelectric effect was a direct consequence of a series of experimental results from the 19th century. Relativity is the same story. It isn't like there were measurements of the speed of light, or of the ratio of frequency to energy of photons, available in the 17th century, just waiting for someone with sufficient intelligence to find them in the era's equivalent of social media. And no amount of data on European peasants (the 17th-century equivalent of Facebook) would be a sufficient substitute. The right data makes all the difference.

A common AI risk problem, like manipulating a programmer into escalating the AI's privileges, is a less legible problem than the examples from physics, but I have no reason to think that it won't also be observationally bound. Making an attempt to manipulate the programmer, while being so accurate in the prediction that the AI is highly confident it won't be caught, would require a model of the programmer as detailed as (or more detailed than) an atmospheric model. There is no guarantee the programmer has any psychological vulnerabilities. There is no guarantee that they share the right information on social media. Even if they're a prolific poster, why would we think this information is sufficient to manipulate them?

AlphaGo went from mediocre, to going toe-to-toe with the top human Go players in a very short span of time. And now AlphaGo Zero has beaten AlphaGo 100-0. AlphaFold has arguably made a similar logistic jump in protein folding

Do you know how many additional resources this required? 

 

Cost of compute has been decreasing at exponential rate for decades, this has meant entire classes of algorithms which straightforward scale with compute also have become exponentially more capable, and this has already had profound impact on our world. At the very least, you need to show why doubling compute speed, or paying for 10x more GPUs, does not lead to a doubling of capability for the kinds of AI we care about.

Maybe this is the core assumption that differentiates our views. I think that the "exponential growth" in compute is largely the result of being on the steepest-sloped part of a sigmoid rather than on a true exponential. For example, Dennard scaling ceased around 2007, and Moore's law has been slowing over the last decade. I'm willing to concede that if compute grows exponentially indefinitely then AI risk is plausible, but I don't see any way that will happen.
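A small sketch of how easy that mistake is to make; the ceiling and growth-rate parameters are arbitrary, made-up illustration values:

```python
import math

# While far below its ceiling, a logistic (sigmoid) curve grows roughly exponentially;
# the two curves only separate sharply as the logistic saturates.
K, r, t0 = 1000.0, 0.5, 10.0   # assumed ceiling, growth rate, and inflection time

def logistic(t):
    return K / (1 + math.exp(-r * (t - t0)))

def exponential(t):
    return logistic(0) * math.exp(r * t)

for t in range(0, 21, 4):
    print(t, round(logistic(t), 1), round(exponential(t), 1))
```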

Since you bring up selection bias, Grinblatt et al 2012 studies the entire Finnish population with a population registry approach and finds that.

Thanks for the citation. That is the kind of information I was hoping for. Do you think that slightly-better-than-human intelligence is sufficient to present an x-risk, or do you think it needs some sort of takeoff or acceleration to present an x-risk?

So?

I think I can probably explain the "so" in my response to Donald below.

Overshooting by 10x (or 1,000x or 1,000,000x) before hitting 1.5x is probably easier than it looks for someone who does not have background in AI.

Do you have any examples of 10x or 1000x overshoot?  Or maybe a reference on the subject?
