Hello folks! I think this is an appropriate post for this forum but if I'm mistaken please just give the word.

I'm working on an interesting research direction for AGI, and am looking for collaborators. This is primarily capabilities research at this point, so some folks here may validly argue that it's negative EV. I think it's positive EV and can explain why, but my confidence is far from 100% on that. I'll address this in a section at the end. First, about the research:

I believe the field of Artificial Life (ALife) has great potential for creating AGI and is massively under-explored as a path to it compared with more conventional machine learning approaches. Why? Consider the following two data points, both pointing towards the same conclusion.

  1. Machine learning is moving towards simpler, more general techniques that also require much more compute. Complex, hand-designed features and algorithms (think GOFAI, Gabor filters, Canny edge detection, etc.) were supplanted by simpler, more general techniques such as deep neural networks trained end-to-end [1]. Now, techniques such as Differentiable Architecture Search are learning network architectures, while techniques such as Model-Agnostic Meta-Learning are learning parts of the learning process itself (in MAML's case, an initialization that adapts quickly to new tasks). This decades-long trend is leading towards models that are more and more general, baking in fewer and fewer assumptions. Question for the reader: what is the ultimate limit of this trend?
  2. We have exactly one existence proof of general intelligence, and it was generated by an extremely simple program. While we don't yet know the ultimate laws of physics, progress suggests that they are likely concise. Billions of different molecules turned out to be made of just 118 elements, which were in turn found to be made of just three subatomic particles. The most fundamental laws discovered so far, such as those from Einstein, Heisenberg, and Maxwell, can each be written out in about 100 characters (see the equations below). The complex behavior we observe in our universe almost certainly arises from extremely simple rules at the bottom of physics.
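To make the "about 100 characters" claim concrete, here is how those laws are usually written (rendered in LaTeX; the character counts are loose, and constants like c, ħ, and μ₀ smuggle in definitions of their own):

```latex
E = mc^2                                    % mass-energy equivalence
G_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}  % Einstein field equations
\Delta x \, \Delta p \ge \frac{\hbar}{2}    % Heisenberg uncertainty principle
% Maxwell's equations (SI units, differential form):
\nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}, \quad
\nabla \cdot \vec{B} = 0, \quad
\nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}, \quad
\nabla \times \vec{B} = \mu_0 \vec{J} + \mu_0 \varepsilon_0 \frac{\partial \vec{E}}{\partial t}
```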

These observations from progress in AI, as well as our one example of general intelligence evolving from physics, are two lines pointing towards a single conclusion: that there may exist one or more quite simple programs that, when run, create an AGI. Moreover, just as many hand-designed features became unnecessary when more general techniques came along, an extremely simple program might be the easiest way to create an AGI. Note that the program we call physics successfully generated an AGI without having any notion of learning, optimization, or even what an agent is.

To conclude, I believe there are physics-like programs that support self-replicators and, when run long enough, will evolve an AGI. Of course, for these to be useful we'd need one that takes less than the ~14 billion years that physics did. I have specific ideas of what these programs might look like and am currently working with a few top folks in the ALife world to push this research forward.

To be clear, I still think machine learning-like approaches might be more likely than ALife approaches to achieve AGI, but relative to the amount of investment ALife currently receives, I think it is massively undervalued.


What would an ALife approach of this kind look like?

Imagine a program that specifies a few simple particles and how they interact. You start with a universe of 10^11 particles (feasible on a single desktop computer) including a single self-replicator (built out of particles, of course). This self-replicator replicates, but soon random mutations produce a better self-replicator that out-competes the first one, and so on. If a system like this worked, one could potentially throw more and more memory and compute at it to get more and more complex organisms.
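To illustrate the dynamic (not the particle-level physics itself), here's a minimal sketch in Python. Everything in it is an assumption for illustration: replicators are objects rather than structures built out of particles, the "genome" is a single efficiency number, and the universe is just a shared resource pool. The only point is to show the loop of replication, mutation, and competition under finite resources:

```python
import random

MUTATION_RATE = 0.1     # chance that a copy's genome is perturbed
POOL_SIZE = 1_000       # resource units available each step
MAX_POPULATION = 5_000  # hard cap standing in for finite memory

class Replicator:
    def __init__(self, efficiency: float):
        # A one-number "genome": higher efficiency means cheaper copies.
        self.efficiency = efficiency

    def replicate(self) -> "Replicator":
        eff = self.efficiency
        if random.random() < MUTATION_RATE:
            eff = max(0.1, eff + random.gauss(0.0, 0.05))  # random mutation
        return Replicator(eff)

def step(population):
    resources = float(POOL_SIZE)
    next_gen = []
    random.shuffle(population)  # no replicator gets priority access
    for r in population:
        next_gen.append(r)            # the parent persists
        cost = 1.0 / r.efficiency     # efficient replicators copy cheaply
        if resources >= cost:         # copying happens only while resources allow
            resources -= cost
            next_gen.append(r.replicate())
    if len(next_gen) > MAX_POPULATION:  # finite world: random culling
        next_gen = random.sample(next_gen, MAX_POPULATION)
    return next_gen

population = [Replicator(efficiency=1.0)]  # a single seed self-replicator
for generation in range(200):
    population = step(population)
print(f"population={len(population)}, "
      f"best efficiency={max(r.efficiency for r in population):.2f}")
```

In the actual research program, the replicator, its mutations, and the selection pressure would all have to emerge from the particle rules rather than being hard-coded as they are here, and designing rules with that property is exactly the hard part.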


What is the best steelman argument against this approach to AGI?

While this technique could produce AGI, it's likely to take orders of magnitude more compute than more conventional machine learning techniques. Granted, physics did eventually create a self-replicator, and ultimately intelligence, but consider how long that took and how much compute went into it. Nature is an open-ended exploration process and did a TON of random shit in addition to producing sentient beings. We're better off taking inspiration from what physics did, not trying to emulate it.


Is this capabilities research positive EV?

I think yes, although I'm far from sure. If something along these lines is the fastest path to AGI, I think it needs to be in the right hands. My goal would be, some months or years from now, to get research results that make it clear we're on the right track to building AGI. I'd go to folks I trust such as Eliezer Yudkowsky/MIRI/OpenAI, and basically say "I think we're on track to build an AGI, can we do this together and make sure it's safe?" Of course, understanding that we may need to completely pause further capabilities research at some point if our safety team does not give us the OK to proceed. But if ALife approaches are in fact the easiest path to AGI, many existing safety efforts may be barking up the wrong tree.


About me: I studied Computer Science at Stanford and currently run a 40-person crypto company. I've thought pretty hard about the question of how we build an AGI for several years now, though far less than folks like Eliezer or Paul Christiano have. I'd be honored to hear their and other folks' thoughts on this research direction.

If this is something you'd be interested in talking about, feel free to comment or send me an email at rafaelcosman@alumni.stanford.edu.

Thanks,

Raf


[1] Clune, Jeff. "AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence." arXiv preprint arXiv:1905.10985 (2019).

Comments

How do you intend to avoid creating very many conscious and suffering people?

(and have you read Crystal Nights?)

Hey Zac, I think that's a valid concern. There are various "god powers" we could potentially use to alleviate suffering, but that's not a complete solution. I would claim, though, that even given the suffering our universe contains, we should be glad it exists (as opposed to not existing at all).

I suppose this is also related to the debate between negative and classical utilitarians!

If something along these lines is the fastest path to AGI, I think it needs to be in the right hands. My goal would be, some months or years from now, to get research results that make it clear we’re on the right track to building AGI. I’d go to folks I trust such as Eliezer Yudkowsky/MIRI/OpenAI, and basically say “I think we’re on track to build an AGI, can we do this together and make sure it’s safe?” Of course, understanding that we may need to completely pause further capabilities research at some point if our safety team does not give us the OK to proceed.

If you "completely pause further capabilities research", what will stop other AI labs from pursuing that research direction further? (And possibly hiring your now frustrated researchers who by this point have a realistic hope for getting immense fame, a Turing Award, etc.).

Valid concern. I would say (1) keep our research results very secret, and (2) hire people who are fairly aligned? But I agree that’s not a surefire solution at all.