Research Engineer in AI Safety
Yes, it is indeed a hybrid event!
I have now added the following text to the website:
The conference is hybrid in-person / virtual. All sessions will have remote dial-in facilities, so authors are able to present virtually and do not need to attend in person.
This was in our draft copy for the website; I could have sworn it was on there, but somehow it got missed out. My apologies!
Update: The submissions deadline for this Special Session has been extended to 13th March.
Hey, one of the co-organisers of this special session here (I was planning to make a post about this on LW myself but OP beat me to it!).
Clearly I am biased, but I would highly recommend the ALIFE conference (even outside the context of this special session). I published a paper there myself at ALIFE 2021 and really enjoyed the experience.
It has a diverse, open-minded and enthusiastic set of attendees from a wide range of academic disciplines, and the topics are varied and interesting. Regarding being in touch with reality, this is harder to comment on, but the field does typically include a lot of practical and empirical research, such as computer simulations, as well as more theoretical and philosophical work.
We are arranging this special session because we think that Artificial Life as a field, and in particular attendees of this conference, may have a lot to contribute to AI safety, so we are excited about the potential overlap between these areas.
Please feel free to reach out to me directly if you have any questions.
I have been thinking about this for quite a while. In particular, this paper, which learns robust "agents" in Lenia, seems very relevant to themes in alignment research: Learning Sensorimotor Agency in Cellular Automata.
Continuous cellular automata have a few properties which in my view make them a potentially interesting testbed for agency research in AI alignment:
I am hoping to get the time at some point to explore some of these ideas using Lenia (I am working a full-time job, so it would have to be more of a side project). In particular, I would like to re-implement the sensorimotor agency paper and then see what avenues that opens. One direction would be trying to quantitatively measure abstraction within Lenia: for example, can we come up with a measure of abstraction that can automatically identify these "agents"? Another would be something along the lines of the information theory of individuality: optimizing globally for these measures (with gradient descent) to see whether that actually produces something we recognise as agents / individuals.
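For readers unfamiliar with Lenia, the basic dynamics such experiments would build on are quite compact: convolve the continuous state with a smooth kernel, pass the result through a Gaussian "growth" function, and apply a small clipped update. Here is a minimal NumPy sketch; the grid size, kernel radius, and growth parameters (mu, sigma) are illustrative choices of mine, not taken from any particular paper:

```python
import numpy as np

def lenia_step(A, K_fft, dt=0.1, mu=0.15, sigma=0.015):
    """One Lenia update: convolve, apply growth function, clip."""
    # Neighborhood "potential" via FFT convolution (toroidal world).
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * K_fft))
    # Gaussian growth mapping: peaks at U == mu, negative far from it.
    G = 2.0 * np.exp(-((U - mu) ** 2) / (2.0 * sigma ** 2)) - 1.0
    # Small incremental update, clipped to keep the state in [0, 1].
    return np.clip(A + dt * G, 0.0, 1.0)

# Bell-shaped ring kernel of radius R on an N x N torus.
N, R = 64, 13
y, x = np.ogrid[-N // 2 : N // 2, -N // 2 : N // 2]
r = np.sqrt(x**2 + y**2) / R
r_safe = np.clip(r, 1e-6, 1 - 1e-6)       # avoid division by zero
K = (r < 1) * np.exp(4.0 - 1.0 / (r_safe * (1.0 - r_safe)))
K /= K.sum()                              # normalise kernel mass to 1
K_fft = np.fft.fft2(np.fft.ifftshift(K))  # centre kernel at the origin

rng = np.random.default_rng(0)
A = rng.random((N, N))                    # random initial state
for _ in range(10):
    A = lenia_step(A, K_fft)
```

Everything interesting, including the "agents" in the paper above, emerges from searching over kernels, growth parameters, and initial states of rules like this one.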
I will admit that a lot of my motivation for this is just that I find continuous cellular automata fascinating and fun, rather than because I consider this the most promising direction for alignment research. But I do also think it could be a fruitful one.
This is a really interesting point that I hadn't thought of!
I'm not sure where I land on the conclusion though. My intuition is that two copies of the same mind emulation running simultaneously (assuming they are both deterministic and are therefore doing identical computations) would have more moral value than only a single copy, but I don't have a lot of confidence in that.