MrFailSauce
MrFailSauce has not written any posts yet.

I feel like an important lesson to learn from the analogy to air conditioners is that some technologies are bounded by physics and cannot improve quickly (or at all). I doubt anyone has the data, but I would be surprised if average air-conditioner efficiency (in BTUs per watt) plotted over the 20th century is not a sigmoid.
For seeing through the fog of war, I'm reminded of the German Tank Problem.
https://en.wikipedia.org/wiki/German_tank_problem
Statistical estimates were roughly 50x more accurate than intelligence estimates in the canonical example. Once you include the strong and reasonable incentives for all participants to propagandize, it is nearly impossible to get accurate information about an ongoing conflict.
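For concreteness, here's a quick sketch of the frequentist estimator from that article (the function name is mine): given k observed serial numbers with maximum m, the minimum-variance unbiased estimate of the total is m + m/k − 1.

```python
def german_tank_estimate(serials):
    """Minimum-variance unbiased estimator of the population maximum,
    given serial numbers sampled without replacement (per the Wikipedia article)."""
    k = len(serials)   # sample size
    m = max(serials)   # largest serial number observed
    return m + m / k - 1

# Example: four captured tanks, largest serial number 60
print(german_tank_estimate([19, 40, 42, 60]))  # -> 74.0
```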
I think, as rationalists, if we're going to see more clearly than conventional wisdom, we need to find sources of information that have a more fundamental basis. I don't yet know what those would be.
In reality, an AI can use algorithms that find a pretty good solution most of the time.
If you replace "AI" with "ML", I agree with this point. And yes, this is what we can do with the networks we're scaling. But "pretty good most of the time" doesn't get you an x-risk intelligence; it gets you some really cool tools.
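To make "pretty good most of the time" concrete, here is a minimal sketch (my example, not anything from the thread): a greedy heuristic for the 0/1 knapsack problem that runs in O(n log n) and, combined with the best single item, is guaranteed at least half of the optimal value.

```python
def knapsack_greedy(items, capacity):
    """Greedy 1/2-approximation for 0/1 knapsack.
    items: list of (value, weight) with positive weights.
    Runs in O(n log n), versus O(2^n) for exhaustive search."""
    total_value = total_weight = 0
    # Take items in order of value density while they fit.
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if total_weight + weight <= capacity:
            total_value += value
            total_weight += weight
    # Falling back to the single best item guarantees >= 1/2 of optimal.
    best_single = max((v for v, w in items if w <= capacity), default=0)
    return max(total_value, best_single)

print(knapsack_greedy([(60, 10), (100, 20), (120, 30)], 50))  # -> 160 (optimal is 220)
```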
If the 3 sat algorithm is O(n^4) then this algorithm might not be that useful compared to other approaches.
If 3-SAT is O(n^4), then P = NP, and we're back to Aaronson's point: the fundamental structure of reality is much different than we think it is. (Did you mean "4^n"? Plenty of common algorithms are quartic.)
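For context on what that "if" would mean: every general 3-SAT algorithm we know takes time exponential in the number of variables, as in this brute-force sketch (my illustration). A genuine O(n^4) algorithm would put an NP-complete problem in P.

```python
from itertools import product

def brute_force_3sat(n_vars, clauses):
    """Try all 2^n assignments. Clauses use 1-indexed literals;
    a negative number means negation, e.g. (1, -2, 3)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits  # satisfying assignment found
    return None

# (x1 or not x2 or x3) and (not x1 or x2 or x3)
print(brute_force_3sat(3, [(1, -2, 3), (-1, 2, 3)]))
```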
I think less than human intelligence is sufficient for an x-risk because that is probably what is sufficient for a takeoff.
If less than human intelligence is sufficient, wouldn't humans have already done it? (or are you saying we're doing it right now?)
(How intelligent does an agent need to be to send an HTTP request to the URL ldap://myfirstrootkit.com on a few million domains?)
A human could do this, or write a bot to do this (and they've tried). But they'd also be detected, as would an AI. I don't see this as an x-risk so much as a manageable problem.
(GPT-3 needed like 1k discrete GPUs to train. Nvidia alone ships something on the order of …
I spent some time reading the Grinnblatt paper. Thanks again for the link. I stand corrected on IQ being uncorrelated with stock prediction. One part did catch my eye.
Our findings relate to three strands of the literature. First, the IQ and trading behavior analysis builds on mounting evidence that individual investors exhibit wealth-reducing behavioral biases. Research, exemplified by Barber and Odean (2000, 2001, 2002), Grinblatt and Keloharju (2001), Rashes (2001), Campbell (2006), and Calvet, Campbell, and Sodini (2007, 2009a, 2009b), shows that these investors grossly under-diversify, trade too much, enter wrong ticker symbols, are subject to the disposition effect, and buy index funds with exorbitant expense ratios. Behavioral biases like these may …
We don't know that; P vs. NP is an unproven conjecture. Most real-world problems are not giant knapsack problems, and there are algorithms that quickly produce answers that are close to optimal. Actually, most of the real use of intelligence is not a complexity-theory problem at all. "Is inventing transistors an O(n) or an O(2^n) problem?"
P vs. NP is unproven. But I disagree that "most real world problems are not giant knapsack problems". The Cook-Levin theorem showed that many of the most interesting problems are reducible to NP-complete problems. I'm going to quote this paper by Scott Aaronson, but it is a great read and I hope you check out the …
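As a toy illustration of that reducibility (my construction, with hypothetical names): graph 3-coloring, another NP-complete problem, encoded as a SAT instance, so that any SAT routine — here reusing brute force — decides the coloring question.

```python
from itertools import product

def three_coloring_to_sat(n_nodes, edges):
    """Encode graph 3-coloring as CNF clauses over integer literals,
    so any SAT solver on the output answers the coloring question."""
    var = lambda node, color: node * 3 + color + 1  # 1-indexed variables
    clauses = []
    for node in range(n_nodes):
        clauses.append([var(node, c) for c in range(3)])          # at least one color
        for c1 in range(3):
            for c2 in range(c1 + 1, 3):
                clauses.append([-var(node, c1), -var(node, c2)])  # at most one color
    for u, v in edges:
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])              # endpoints differ
    return 3 * n_nodes, clauses

def satisfiable(n_vars, clauses):
    return any(all(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)
               for bits in product([False, True], repeat=n_vars))

# A triangle is 3-colorable; K4 (the complete graph on 4 nodes) is not.
print(satisfiable(*three_coloring_to_sat(3, [(0, 1), (1, 2), (0, 2)])))  # True
print(satisfiable(*three_coloring_to_sat(4, [(i, j) for i in range(4)
                                             for j in range(i + 1, 4)])))  # False
```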
AlphaGo went from mediocre to going toe-to-toe with the top human Go players in a very short span of time. And now AlphaGo Zero has beaten AlphaGo 100-0. AlphaFold has arguably made a similar logistic jump in protein folding.
Do you know how many additional resources this required?
Cost of compute has been decreasing at an exponential rate for decades. This has meant that entire classes of algorithms which straightforwardly scale with compute have also become exponentially more capable, and this has already had a profound impact on our world. At the very least, you need to show why doubling compute speed, or paying for 10x more GPUs, does not lead to a doubling of capability for …
Since you bring up selection bias, Grinblatt et al 2012 studies the entire Finnish population with a population-registry approach and finds that …
Thanks for the citation. That is the kind of information I was hoping for. Do you think that slightly-better-than-human intelligence is sufficient to present an x-risk, or do you think it needs some sort of takeoff or acceleration to present an x-risk?
I think I can probably explain the "so" in my response to Donald below.
Overshooting by 10x (or 1,000x or 1,000,000x) before hitting 1.5x is probably easier than it looks for someone who does not have a background in AI.
Do you have any examples of 10x or 1000x overshoot? Or maybe a reference on the subject?
A universal Turing machine doesn't make a distinction between program and data; both sit on the same tape. The distinction between program and data is really a hardware efficiency optimization that came from the Harvard architecture. Since many systems are Turing complete, creating an immutable program seems impossible to me.
For example, a system capable of speech could exploit the Turing completeness of formal grammars to execute de novo subroutines.
A second example: hackers were able to exploit the surprising Turing completeness of an image compression standard to embed a virtual machine in a GIF.
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html
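A minimal sketch of that program-equals-data point (my toy example, not from the linked post): a tiny interpreter whose "program" is an ordinary mutable list, so nothing prevents the program from rewriting itself at runtime.

```python
def run(tape):
    """A toy interpreter: the program is ordinary data on the same 'tape' it edits.
    Instructions: ("print", msg), ("set", index, new_instruction), ("halt",)."""
    pc = 0
    while tape[pc][0] != "halt":
        op = tape[pc]
        if op[0] == "print":
            print(op[1])
        elif op[0] == "set":  # self-modification: overwrite another instruction
            tape[op[1]] = op[2]
        pc += 1

program = [
    ("print", "hello"),
    ("set", 3, ("print", "I rewrote my own code")),  # replaces instruction 3
    ("print", "running the original program"),
    ("halt",),   # overwritten at runtime before the interpreter reaches it
    ("halt",),
]
run(program)
```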