LESSWRONG

Andrew Keenan Richardson
ML researcher

Comments
AI 2027: What Superintelligence Looks Like
Andrew Keenan Richardson · 5mo

I have a question about this chart. It looks like you fit a super-exponential (double-exponential) function to it (dashed line); since the y-axis is logarithmic, a plain exponential would show up as a straight line, like the red one. Why didn't you fit a regular exponential instead? It seems like if you do that, the projections for the whole article come out years slower.
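To make the distinction concrete, here's a sketch with made-up numbers (not the chart's actual data) of how the two fits diverge when extrapolated: an exponential is linear in log-space, a super-exponential is curved, and a few years out the gap between them is large.

```python
import numpy as np

# Hypothetical trend data (illustrative only, not the chart's numbers):
# x = years since the start of the trend, y = some capability metric.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 3.0, 12.0, 60.0, 400.0, 3500.0])

logy = np.log10(y)

# Plain exponential: log10(y) is linear in x.
exp_fit = np.polyfit(x, logy, 1)
# Super-exponential: log10(y) curves upward, modeled here as quadratic in x.
superexp_fit = np.polyfit(x, logy, 2)

# Extrapolate both a few years past the data; the gap between the two
# predictions is what shifts the article's timeline by years.
x_future = 8.0
exp_pred = 10 ** np.polyval(exp_fit, x_future)
superexp_pred = 10 ** np.polyval(superexp_fit, x_future)
print(exp_pred, superexp_pred)
```

With upward-curving data like this, the quadratic (super-exponential) fit extrapolates far above the linear (exponential) one.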

Overall, great article. It changed my mind in the direction of making this timeline feel more plausible. 

Many Common Problems are NP-Hard, and Why that Matters for AI
Andrew Keenan Richardson · 5mo

That's outside the scope of this post, and it's also an open research question. I agree with you that, mathematically, there's considerable uncertainty. I used the word "evidence" simply to mean that we haven't yet found general polynomial-time algorithms.

Many Common Problems are NP-Hard, and Why that Matters for AI
Andrew Keenan Richardson · 5mo

Maybe I didn't articulate my point very well. These problems contain a mix of NP-hard compute requirements and subjective judgements. 

Packing is sometimes a matter of listing everything in a spreadsheet and then running a simple algorithm on it, but sometimes the spreadsheet is difficult to fully specify.
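For the easy case, the "simple algorithm" could be a standard greedy heuristic like first-fit decreasing (my own illustration, not anything specific from the post):

```python
def first_fit_decreasing(items, capacity):
    """Greedy bin-packing heuristic: place each item, largest first,
    into the first bin with room, opening a new bin when none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

# Pack six items into boxes of capacity 10.
print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
```

The heuristic isn't optimal in general (bin packing is NP-hard), but it's provably within a small constant factor of optimal, which is usually all the "spreadsheet" case needs.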

Playing Pokemon well does not strike me as an NP-hard problem. It involves pathfinding, for which efficient algorithms exist, and beyond that it is mostly well solved by a greedy approach.
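To illustrate the pathfinding point: shortest paths on a grid map of the kind these games use are found in time linear in the number of cells with breadth-first search (a generic sketch, not tied to the game's actual maps):

```python
from collections import deque

def bfs_path_length(grid, start, goal):
    """Shortest-path length on a 2D grid ('.' walkable, '#' wall),
    found in O(cells) time with breadth-first search."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # goal unreachable

grid = [".#.",
        ".#.",
        "..."]
print(bfs_path_length(grid, (0, 0), (0, 2)))  # 6: around the wall
```

Polynomial time, no NP-hardness anywhere in sight.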

Many Common Problems are NP-Hard, and Why that Matters for AI
Andrew Keenan Richardson · 5mo

I mostly agree with Gwern; they're right that there are ways around complexity classes, either through cleverness (solving a different problem) or by scaling hardware and writing more efficient programs.

They conclude their essay by saying:

at most, they demonstrate that neither humans nor AIs are omnipotent

and I think that's basically what I was trying to get at. Complexity provides a soft limit in some circumstances, and it's helpful to understand which points in the world impose that limit and which don't.

Higher-effort summer solstice: What if we used AI (i.e., Angel Island)?
Andrew Keenan Richardson · 1y

I love this idea, but I have a logistical concern: it might be difficult or impossible to reserve the island for the time we want.

AI's impact on biology research: Part I, today
Andrew Keenan Richardson · 2y

Small edit: $100,000,000 per human genome to $1,000 per genome is 5 orders of magnitude, not 6.
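The arithmetic, for anyone checking:

```python
import math

cost_then = 100_000_000  # dollars per genome, roughly Human Genome Project era
cost_now = 1_000         # dollars per genome today
orders_of_magnitude = math.log10(cost_then / cost_now)
print(orders_of_magnitude)  # 5.0
```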

I'm looking forward to Part 2! (and Part 3?)

Upcoming Changes in Large Language Models
Andrew Keenan Richardson · 2y

Yes, this is basically what people are doing.  

What Is a Major Chord?
Andrew Keenan Richardson · 3y

For those who don't want to break out a calculator, Wikipedia has it here:

https://en.wikipedia.org/wiki/Equal_temperament#Comparison_with_just_intonation

You can see the perfect fourth and perfect fifth are very close to 4/3 and 3/2 respectively. It's essentially a coincidence that these near-matches exist; we use 12 notes per octave because its equal steps land so close to these nice fractions. A major scale uses the 2-2-1-2-2-2-1 pattern because that hits the best matches with low denominators, skipping 16/15 but hitting 9/8, for example.
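If you'd rather compute it than click through, here's a quick way to reproduce the comparison; each equal-temperament semitone multiplies the frequency by 2^(1/12):

```python
# Compare 12-tone equal temperament steps to just-intonation fractions.
just_intervals = {
    "major second":   (2, 9 / 8),
    "major third":    (4, 5 / 4),
    "perfect fourth": (5, 4 / 3),
    "perfect fifth":  (7, 3 / 2),
}
for name, (semitones, just_ratio) in just_intervals.items():
    tempered = 2 ** (semitones / 12)
    print(f"{name}: tempered {tempered:.4f} vs just {just_ratio:.4f}")
```

The fifth (2^(7/12) ≈ 1.4983 vs 1.5) and fourth (2^(5/12) ≈ 1.3348 vs 1.3333) are the standout matches.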

"AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence", Clune 2019
Andrew Keenan Richardson · 6y

Imagine you had an oracle which could assess the situation an agent is in and produce a description for an ML architecture that would correctly "solve" that situation.

I think for some strong versions of this oracle, we could create the ML component from the architecture description with modern methods. I think this combination could effectively act as AGI over a wide range of situations, again with just modern methods. It would likely be insufficient for linguistic tasks.

I think that's what this article is getting at. The author is a researcher at Uber. Does anyone know of other articles written about this line of thinking?

Posts

- Many Common Problems are NP-Hard, and Why that Matters for AI (5 points · 5mo · 9 comments)
- Hyperbolic Discounting and Pascal’s Mugging (9 points · 2y · 0 comments)
- Contra Alexander on the Bitter Lesson and IQ (9 points · 2y · 1 comment)
- Upcoming Changes in Large Language Models (43 points · 2y · 8 comments)