This is a linkpost for a talk I gave this past summer for the ALIFE conference. If you haven't heard of it before, ALIFE (short for "artificial life") is a subfield of biology which... well, here are some of the session titles from day 1 of the conference to give the gist:

  • Cellular Automata, Self-Reproduction and Complexity
  • Evolving Robot Bodies and Brains in Unity
  • Self-Organizing Systems with Machine Learning
  • Untangling Cognition: How Information Theory can Demystify Brains

... so you can see how this sort of crowd might be interested in AI alignment.

Rory Greig and Simon McGregor definitely saw how such a crowd might be interested in AI alignment, so they organized an alignment workshop at the conference.

I gave this talk as part of that workshop. The stated goal of the talk was to "nerd-snipe ALIFE researchers into working on alignment-relevant questions of agency". It's pretty short (~20 minutes), and aims for a general energy of "hey here's some cool research hooks".

If you want to nerd-snipe technical researchers into thinking about alignment-relevant questions of agency, this talk is a short and relatively fun one to share.

Thank you to Rory and Simon for organizing, and thank you to Rory for getting the video posted publicly.

3 comments

Thanks again, John, for giving this talk! I really enjoyed it at the time and was pleasantly surprised by the positive engagement from the audience. I'm also pleased that it turned into a resource that can be re-shared.

You wouldn't happen to have a recording of the Q&A session after John's talk? I'd be interested to hear what interested the audience, what confused them, etc.

I was CTO of a company called Agentis from 2006 to 2008. Agentis was founded by Michael Georgeff, who published the seminal paper on BDI agents. Agents backed by LLMs will be amazing. Unfortunately, AI will never quite reach our level of creativity and ability to form new insights. But on compute and basic-level emotional states, it will be fairly remarkable. HMU if you want to discuss. steve@beselfevident.com