cubefox
Is LLM Translation Without Rosetta Stone possible?
Suppose astronomers detect a binary radio signal, an alien message, from a star system many light years away. The message contains a large text dump (conveniently, about the size of GPT-4's training text data) composed in an alien language. Let's call it Alienese.[1] Unfortunately we don't understand Alienese. Until recently, it seemed...
Are Intelligence and Generality Orthogonal?
A common presupposition seems to be that intelligent systems can be classified on two axes:

* Intelligence (low to high)
* Generality (narrow to general)

For example, AlphaGo is presumably fairly intelligent, but quite narrow, while humans are both quite intelligent and quite general. A natural hypothesis would be that...

Yeah. Though strictly speaking, only something like self-play (mentioned by Karl Krueger below) is a model improving itself. The more classic example of RSI is a model acting as an AI researcher, working on better ML algorithms (optimizer, architecture, objective function, etc.), which doesn't directly improve the model (the automated ML researcher) itself, only successor models, which are then trained using the improved ML algorithm. The latter form of RSI is nonetheless more powerful than something like self-play, which is stuck with its likely suboptimal architecture, meaning that it will eventually plateau. An automated ML researcher can in principle improve up to technological maturity: the best ML algorithm possible.
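To make the plateau point concrete, here's a minimal toy sketch, nothing more: the numbers, caps, and functions below are all made up for illustration, with "skill" and "algorithm quality" reduced to scalars that show diminishing returns toward a cap. Self-play converges to the ceiling set by its fixed architecture, while the researcher loop hands each improvement to a successor model and approaches the cap of the best algorithm possible.

```python
# Toy illustration only: scalar "skill" with diminishing returns toward a cap.
ARCHITECTURE_CEILING = 0.7  # made-up cap imposed by a fixed, suboptimal architecture
ALGORITHM_CEILING = 1.0     # made-up cap for the best ML algorithm possible

def self_play_rsi(skill: float, generations: int) -> float:
    """The model improves itself directly, but its fixed architecture caps progress."""
    for _ in range(generations):
        skill += 0.1 * (ARCHITECTURE_CEILING - skill)  # diminishing returns toward the cap
    return skill

def automated_researcher_rsi(skill: float, generations: int) -> float:
    """The researcher improves the algorithm; only successors trained with it get better."""
    algorithm_quality = skill
    for _ in range(generations):
        algorithm_quality += 0.1 * (ALGORITHM_CEILING - algorithm_quality)  # better optimizer, architecture, etc.
        skill = algorithm_quality  # a successor trained with the improved algorithm takes over
    return skill

print(self_play_rsi(0.5, 50))             # ~0.70: plateaus at the architecture ceiling
print(automated_researcher_rsi(0.5, 50))  # ~1.00: approaches technological maturity
```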