Measuring out-of-context Schelling coordination capabilities in large language models.
Overview
This is my first foray into AI safety research, and it is primarily exploratory. I present these findings with all humility, make no strong claims, and hope they are of some benefit to others. I certainly learned a great deal in the process, and hope to learn more from any comments or criticism—all very welcome. A version of this article with some simple explanatory animations, less grainy graphs, and updates to the evaluation as new models are released can be found here. A GitHub link is coming soon, once I have tidied the repository and made the .eval files private or password-protected.
Can two instances of the same model, without...