Joel Z. Leibo [1], Alexander Sasha Vezhnevets [1], William A. Cunningham [1, 2], Sébastien Krier [1], Manfred Diaz [3], Simon Osindero [1]
[1] Google DeepMind, [2] University of Toronto, [3] Mila Québec AI Institute
We published a more academically oriented version of this post on arXiv, available here.
Disclaimer: These are our own opinions; they do not represent the views of Google DeepMind as a whole or its broader community of safety researchers.
"We pragmatists think of moral progress as more like sewing together a very large, elaborate, polychrome quilt, than like getting a clearer vision of something true and deep." – Richard Rorty (2021), p. 141
Quite a lot of thinking in AI alignment, particularly within the rationalist tradition, implicitly or explicitly appears to rest upon something we might...
Just want to add some context. I'm not going to respond to the specific arguments here, but I want to clarify a few things:
- As I wrote in the original post, the sole purpose of the app (which I vibecoded in 10 minutes) was to illustrate the concept of comparative advantage: "This doesn't cover wages, income tax implications, or other important things - it's only to explain comparative advantage." It was a quick sketch to explain a single economic principle, not a comprehensive thesis on AGI.
- Nor did I claim this was sufficient or all there is, which the post implies in
...