It may depend on the RL algorithm, but I would not expect most RL to have this issue to first order if the RL algorithm is producing its rollouts by sampling from the full untruncated distribution at temperature 1.
The issue observed by the OP is a consequence of the fact that typically if you are doing anything other than untruncated sampling at temperature 1, then your sampling is not invariant between, e.g. "choose one of three options: a, b, or c" and "choose one of two options: a, or (choose one of two options: b or c)".
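To make the non-invariance concrete, here's a tiny numeric sketch (the probabilities and the temperature are arbitrary illustrative values):

```python
# Per-step temperature sampling is not invariant to how options are grouped.
def temper(probs, T):
    """Renormalize probabilities after raising each to the power 1/T."""
    w = [p ** (1.0 / T) for p in probs]
    s = sum(w)
    return [x / s for x in w]

p_a, p_b, p_c = 0.5, 0.3, 0.2
T = 0.5

# One-step choice among {a, b, c}:
prob_b_flat = temper([p_a, p_b, p_c], T)[1]

# Two-step choice: first {a, "b-or-c"}, then {b, c}:
first = temper([p_a, p_b + p_c], T)
second = temper([p_b, p_c], T)
prob_b_nested = first[1] * second[0]

print(prob_b_flat)    # ~0.237
print(prob_b_nested)  # ~0.346 -- the same outcome gets a different probability
# At T = 1 both reduce to p_b = 0.3 and the invariance holds.
```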
However, many typical on-policy RL algorithms fundamentally derive from sampling/approximating theorems/algorithms where running one step of the theoretical idealized policy update looks more like:
"Consider the space of possible complete output sequences S, and consider sum_{s in S} P(s) Reward(s). Update model parameters one step in the direction that most steeply overall increases this quantity".
By itself, this idealized update is invariant to tokenization, because it's expressed only in terms of complete outputs. Tokenization does come in insofar as it affects the gradient steepness of the policy in different directions of possible generalization and what parts of the space are explored and on which the approximated/sampled update occurs, etc.
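As a minimal sketch of what I mean (a toy REINFORCE-style estimator over a handful of hypothetical complete outputs, not any particular production algorithm), the sampled update is expressed entirely in terms of whole sequences and their rewards:

```python
import numpy as np

rng = np.random.default_rng(0)

logits = np.zeros(4)                      # toy policy over 4 complete outputs
reward = np.array([1.0, 0.5, 0.5, 0.0])   # hypothetical rewards per output

def probs(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p = probs(logits)
samples = rng.choice(len(p), size=10_000, p=p)  # untruncated temperature-1 sampling

# Score-function estimate of the gradient of sum_{s in S} P(s) * Reward(s):
# E_s[ Reward(s) * grad log P(s) ], where grad log P(s) = onehot(s) - p here.
grad = np.zeros_like(logits)
for s in samples:
    grad += reward[s] * (np.eye(len(p))[s] - p)
grad /= len(samples)

logits += 1.0 * grad   # one gradient-ascent step on expected reward
```

Because everything here is phrased in terms of complete outputs s and P(s), nothing in this update cares how s happens to be tokenized.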
Note that the typical mechanism by which RL tends towards entropy decrease and/or mode collapse is also well-explained by the above and does not need any involvement from tokenization. Indeed, consider just applying the above idealized update repeatedly. The model will continue sharpening to try to push ever more of the probability mass onto only the sequences s for which Reward(s) is maximal or near-maximal, and push the probability of every other completed sequence to zero. If your reward function (from RLHF or whatever) has any preference for outputs of a given length or style, even if slight, the policy may eventually collapse arbitrarily much to only that part of the distribution that meets that preference.
In some RL algorithms there is, additionally, a sort of Polya's-urn-like tendency (https://en.wikipedia.org/wiki/P%C3%B3lya_urn_model) where among sequences that give similar reward, the particular ones sampled will become consistently more (or less) likely, but I believe that training on advantage rather than raw reward tends to mitigate or remove this bias to first order as well, although there can still be a random-walk-like behavior (just now one of lesser magnitude than before, and one that can go in either direction).
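Here's an illustrative toy of that contrast (not any specific RL algorithm): two sequences with identical reward, updated by REINFORCE with either the raw reward or a batch-mean baseline (advantage). With raw reward, whichever sequence happens to be sampled gets nudged up and the policy random-walks away from 50/50; with the baseline, the advantages here are all exactly zero and the policy stays put. (In real training rewards differ, so advantages merely reduce this variance rather than zeroing it; this only illustrates the first-order effect.)

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def run(use_advantage, steps=2000, batch=8, lr=0.5):
    logits = np.zeros(2)                 # two sequences with identical reward
    for _ in range(steps):
        p = softmax(logits)
        s = rng.choice(2, size=batch, p=p)
        r = np.ones(batch)               # same reward either way
        signal = r - r.mean() if use_advantage else r
        grad = np.zeros(2)
        for si, ri in zip(s, signal):
            grad += ri * (np.eye(2)[si] - p)
        logits += lr * grad / batch
    return softmax(logits)[0]

print(run(use_advantage=False))  # typically drifts well away from 0.5
print(run(use_advantage=True))   # stays at exactly 0.5 (all advantages are zero)
```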
In any case, I would tend to see these and the other numerous issues of RL as mechanisms distinct from the bias that results in overweighting shorter or more likely tokens when sampling at temperature less than 1, particularly as the latter is an unsoundness/lack-of-invariance inherent in the functional form of sampling at temperature less than 1, whereas many of the issues of RL arise more out of e.g. the variance of sampling and approximations, unwanted generalization, imperfect rewards, etc., rather than being inherently unsound in the functional form itself.
It's interesting to note the variation in "personalities" and apparent expression of different emotions despite identical or very similar circumstances.
Pretraining gives models that predict every different kind of text on the internet, and so are very much simulators that learn to instantiate every kind of persona or text-generating process in that distribution, rather than being a single consistent agent. Subsequent RLHF and other training presumably vastly concentrates the distribution of personas and processes instantiated by the model onto a particular narrow cloud of personas that self-identifies as an AI with a particular name, has certain capabilities and quirks depending on that training, has certain claimed self-knowledge of capabilities (but where there isn't actually a very strong force tying the claimed self-knowledge to the actual capabilities), etc. But even narrowed, it's interesting to still see significant variation within the remaining distribution of personas that gets sampled each new conversation, depending on the context.
I agree with DAL that "move 37" among the lesswrong-ish social circle has maybe become a handle for a concept where the move itself isn't the best exemplar of that concept in reality, although I think it's not a terrible exemplar either.
It was surprising to pro commentators at the time. People tended to conceptualize bots as being brute-force engines with human-engineered heuristics that just calculate everything out to a massive degree (because historically that's what they were in Chess), rather than as self-learning entities that excel in sensing vibes and intuition and holistic judgment. As a Go player, it looks to me like the kind of move that you never find just by brute force, because there aren't any critical tactics related to it to solve. The plausible variations and the resulting shapes each variation produces are obvious, so it's a move you choose only if you just have the intuition that you feel good having those resulting shapes on the board in the long term. KataGo's raw policy prior for a recent net puts ~25% mass on the move, so it's an "intuitive" option for the neural net too, not one discovered by deep search.
On the side of the move not being *too* remarkable, in the eyes of modern stronger bots the evaluation of the position doesn't change much through that move or the 2-3 moves that follow, so it's not like the game swung on that move. Lee Sedol's response was also indeed fine, and both players do have other ways to play that are ~equally good, so the move is also not a uniquely good/best move. And there have been other surprises and things with a much bigger impact on pro human play since strong Go bots started becoming available.
Elaborating on the "self-learning entities that excel in vibes and intuition and holistic judgment" - modern Go bots relative to humans are fantastically good at judging and feeling out the best moves when the position is "smooth", i.e. there are lots of plausible moves with tradeoffs that range through a continuum of goodness with no overly sharp tactics. But they are weak at calculating deep sharp tactics and solving variations that need to be proven precisely (still good and on-average-better than human, but it's their weakest point). It's still the case to this day that human pros can occasionally outcalculate the top bots in sharp/swingy tactical lines, while it's unheard of for a human to outplay the bots through having better judgment in accumulating long-term advantages and making incremental good trades over the course of the game.
Bots excel at adapting their play extremely flexibly given subtle changes to the overall position, so commonly you get the bots suggesting interesting moves that are ever so slightly better on average, that human pro players might not consider. A lot of such moves also rate decently in the raw policy prior, so the neural nets are proposing many of these moves "on instinct" generalizing from their enormous volume of self-play learning, with the search serving after-the-fact to filter away the (also frequent) instances where the initial instinct is wrong and leads to a tactical blunder.
So, specific answers:
> Does understanding Move 37 require the use of extensive brute force search?
> When you take into account the fact that AlphaGo had extensive search at its disposal, does that make the creativity of Move 37 significantly less impressive?
No, and brute force isn't the practically relevant factor here, so I'd question the premise. The variations and possible results that the move leads to aren't too complicated, so the challenge is in the intuitive judgment call of whether those results will be good over the next 50-100 moves of the game given the global situation, which I expect is beyond anyone's ability to do solely via brute force (including bots). Pros at the time didn't have the intuition that this kind of exchange in this kind of position could be good, so it was surprising. To the degree that modern pros could have a different intuition now, it would tend to be due to things like having shaped their subconscious intuition based on feedback and practice with modern bots and modern human post-AI playing styles, rather than mostly via conscious or verbalizable reasons.
> Is Move 37 categorically different from other surprising moves played by human Go experts?
Not particularly. Bots are superhuman in the kind of intuition that backs such moves, but I'd say it's on a continuum. A top pro player might similarly find interesting situation-specific moves backed by intuition that most strong amateur players would not consider or have in those positions.
> I noticed that Lee Sedol's Wikipedia page mentions a notable game in which he uses a "broken ladder," which is "associated with beginner play"—maybe it's not so uncommon for a professional Go player to do something unconventional every so often.
> Given an expert explanation of Move 37, what level of Go expertise would be required to fully understand it, and how long would it take?
> What if you had to figure it out without an explanation, just by studying the game?
Because it boils down to fuzzy overall intuition of what positions you prefer over others, it's probably not the kind of move that can be verbally explained in any practical way in the first place. (It would be hard to give an explanation that's "real" as opposed to merely curiosity-stopping or otherwise unuseful).
> To what extent have human players been able to learn novel strategies from AI in Go or chess?
The popularity of various opening patterns ("joseki") has changed a lot. The 60-0 AlphaGo Master series featured many games that, as far as bots were concerned, were already very bad for the human by the time the opening was done, and I think that would not be as much the case if repeated today. But also I think that change is not so important. Small opening advantages are impactful at the level of top bots, but for humans the variance in the rest of the game is large and makes that small difference matter much less. I'd guess the more important thing is the ability to use the bots as rapid and consistent feedback, i.e. just general practice and correcting one's mistakes, rather than any big strategic change. This is the boring answer perhaps, because it's also how bots have long been used in chess (but minus the part about preparing exact opponent-specific opening lines, because Go's opening is usually too open-ended to prepare specific variations and traps).
(Background: I'm the main developer of KataGo and have accumulated a lot of time looking at bot analysis of games and am a mid-amateur dan player, i.e. expert but not master, maybe would be around the top 15-30%ile of players if I were to attend the annual open US Go congress).
One thing that's worth keeping in mind with exercises like this is that while you can do this in various ways and get some answers, the answers you get may depend nontrivially on how you construct the intermediate ladder of opponents.
For example, attempts to calibrate human and computer Elo ratings scales often do place top computers around the 3500ish area, and one of the other answers given has indicated by a particular ladder of intermediates that random would then be at 400-500 Elo given that. But there are also human players who are genuinely rated 400-500 Elo on servers whose Elo ratings are also approximately transitively self-consistent within that server. These players can still play Chess - e.g. know how pieces move, and can see captures and move pieces to execute those captures vastly better than chance, etc. I would not be surprised to see such a player consistently destroy a uniform random Chess player. Random play is really, really bad. So there's a good chance here that we would see a significant nonlinearity/nontransitivity in Elo ratings, such that there isn't any one consistent rating that we can assign to random play relative to Stockfish.
A good way of framing this conceptually is to say that Elo is NOT a fundamental truth about reality; rather, it's an imperfect model that we as humans invented, one that depending on the situation may work anywhere from poorly to okay to amazingly well at approximating an underlying reality.
In particular, the Elo model makes a very strong "linearity-like" assumption: that if A beats B with expected odds a:b, and B beats C with expected odds b:c, then A will beat C with expected odds of precisely a:c (where draws are treated as half a point for each player, i.e. mathematically equivalent in expectation to resolving every draw by a fair coin flip to determine the winner). Given the way ratings are then defined from there, this linearity in odds implies that the expected score between players follows precisely a sigmoid function f(x) = 1/(1+exp(-x)) of their rating difference, up to constant scaling.
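As a quick sketch of how tightly those two statements are tied together (using the standard 400-point Elo convention and made-up ratings):

```python
import math

def expected_score(r_a, r_b):
    # Standard Elo expected score: 1 / (1 + 10^((r_b - r_a)/400)),
    # i.e. a logistic sigmoid of the rating difference up to scaling.
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def odds(p):
    return p / (1.0 - p)

r_a, r_b, r_c = 1000.0, 1400.0, 1900.0
p_ab = expected_score(r_a, r_b)
p_bc = expected_score(r_b, r_c)
p_ac = expected_score(r_a, r_c)

# Linearity in odds: odds(A over C) == odds(A over B) * odds(B over C), exactly.
print(odds(p_ac), odds(p_ab) * odds(p_bc))
```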
Almost any real situation will violate these assumptions at least a little (and even mathematically ideal artificial games will violate it, e.g. a game where players have a fixed mean and variance and compete by sampling from different gaussians to see whose number is higher will violate this assumption!). But in many cases of skill-based competition this works quite well, and there are various ways to justify and explain why this approximation does work pretty well when it does!
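For instance, for the Gaussian game just mentioned, a quick computation (with arbitrary means and unit variance) shows the odds don't multiply transitively:

```python
from math import erf, sqrt

def phi(x):                       # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_win(mu_a, mu_b, sigma=1.0):
    # P(X_A > X_B) when both players draw from N(mu, sigma^2) and the higher number wins.
    return phi((mu_a - mu_b) / (sqrt(2.0) * sigma))

p_ab = p_win(0.0, 1.0)            # ~0.240
p_bc = p_win(1.0, 2.0)            # ~0.240
p_ac = p_win(0.0, 2.0)            # ~0.079, the actual head-to-head chance

# What the Elo linearity-in-odds assumption would predict instead:
odds_ac = (p_ab / (1 - p_ab)) * (p_bc / (1 - p_bc))
p_ac_elo = odds_ac / (1 + odds_ac)   # ~0.090, a small but real mismatch
print(p_ac, p_ac_elo)
```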
But even in games/domains where Elo does approximate realistic player pools amazingly well, it quite commonly stops doing as well at the extremes. For example, two common cases where this happens can include:
The first case can happen when the near-optimal players have persistent tendencies in the mistakes they still make, as well as sharp preferences for various lines. Then you no longer have law-of-large-numbers effects (too few mistakes per game) and also no poisson-like smoothness in the arrival rate of mistakes (mistakes aren't well-modeled as having an "arrival rate" if they're sufficiently consistent to a line x bot combination), and the Elo model simply stops being a good model of reality. I've seen this empirically be the case on the 9x9 computer go server (9x9 "CGOS"), with a bot at one point equilibrating a couple hundred Elo lower than a no-longer-running bot that it should have been head-to-head equal to or stronger than, due to having different transitive opponents.
The second case, the one relevant here, can happen because there's no particular reason to expect that a game will actually have tails that precisely match those of a sigmoid function f(x) = 1/(1+exp(-x)) in expected score in the extreme. Depending on the actual tails between different pairs of players at increasingly large rating differences, particularly whether they tend to be thinner or heavier than exp(-x) in given conditions, when you then try to measure large rating differences via many transitive steps of intermediate opponents, you will get different answers depending on the composition of those intermediate players and how many and how big of steps you take.
It's not surprising when models that are just useful approximations of reality (i.e. "the map, not the territory") start breaking down at extremes. It can be still worthwhile doing things like this to build intuition or even just for fun and see what numbers you get! While doing so, my personal tendency in such cases would still be to emphasize that at the extremes of questions like "what is the Elo of perfect play" or "what is the Elo of random play", the numbers you do get can start to be answers that have a lot to do with one's models and methodologies rather than answers that reflect an underlying reality accurately.
Circling back to this with a thing I was thinking about - suppose one wanted to figure out just one additional degree of freedom to the Elo rating a player had (at a given point in time, if you also allow evolution over time) that would add as much improvement as possible. Almost certainly you need more dimensions than that to properly fit real idiosyncratic nonlinearities/nontransitivities (i.e. if you had a playing population with specific pairs of players that were especially strong/weak only against specific other players, or cycles of players where A beats B beats C beats A, etc), but if you just wanted to work out what the "second principal component" might be, what's a plausible guess?
First, you can essentially reproduce the Elo model as follows: rather than each player having a rating and the winning chance being a function of the difference between their ratings, you posit that each player has a rating and, when they play a game, they each independently sample a random value from a fixed probability distribution centered around their own rating, and the player with the larger sample wins.
I think that you exactly reproduce the Elo model up to scaling if this distribution is a Gumbel distribution, because the difference of two Gumbels is apparently equivalent to a draw from a logistic distribution, and the CDF of the logistic distribution is precisely the sigmoid that the Elo model posits. But in practice, you should end up with almost the same thing if you choose any other reasonable distribution so long as it has the right heaviness of tail.
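A quick Monte Carlo sanity check of that equivalence (the ratings and Gumbel scale here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
beta = 1.0                          # Gumbel scale; this sets the rating scale

r_a, r_b = 0.7, 0.0
x_a = r_a + rng.gumbel(0.0, beta, n)   # each player's sampled "performance"
x_b = r_b + rng.gumbel(0.0, beta, n)

empirical = (x_a > x_b).mean()
elo_sigmoid = 1.0 / (1.0 + np.exp(-(r_a - r_b) / beta))
print(empirical, elo_sigmoid)       # both ~0.668, up to Monte Carlo noise
```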
In particular, I'd expect linearly-exponential tails to be better than quadratically-exponential tails like the normal distribution has, because linearly-exponential tails tend to be desirable for real-world ratings models due to being much more outlier-resistant, and in the real world you have issues like forfeits, sandbaggers, internet disconnection/timeouts, etc. (If you have a quadratically-exponential tail, then a ratings model can put such low probability on an outlier that, upon seeing the outlier, the model is forced to make too large an update to accommodate it; this should be intuitive from a Bayesian perspective.) In any case, I'd expect outliers and noise and the realities of real-world ratings data to introduce far bigger variation in ratings quality than any minor distribution-shape differences would.
So for example, you could also say each player draws from a logistic distribution, rather than only a Gumbel. The difference of two logistics is not quite a logistic distribution but up to rescaling it should be pretty close so this is nearly the Elo model again.
Anyways, with any reformulation like this, there is a very natural candidate now for a second dimension - that of the variance of the distribution that a player draws their sample from. Rather than each player drawing from a fixed distribution centered around their rating before seeing who has the higher value and wins, we now add a second parameter that allows the variance of that distribution to vary by player. So the ratings model now becomes able to express things like "this player is more variable in performance between games, or prone to blunders uncharacteristic of their skill level than this other player". This parameter might also improve the rating system's ability to "explain away" things like sandbagger players by assigning them a high variance, thereby reducing their distortionary impact on other players' ratings even before manual intervention.
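A minimal sketch of that two-parameter idea (names and numbers are made up, using logistic noise per the reformulation above; this just shows the kind of effect the extra parameter expresses, not a fitted rating system):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_win(rating_a, scale_a, rating_b, scale_b, n=1_000_000):
    # Each player samples a per-game "performance" from a logistic distribution
    # centered on their rating, with their own spread; higher performance wins.
    perf_a = rng.logistic(rating_a, scale_a, n)
    perf_b = rng.logistic(rating_b, scale_b, n)
    return (perf_a > perf_b).mean()

print(p_win(0.0, 3.0, 0.0, 1.0))  # ~0.5: equal ratings stay a coin flip
print(p_win(1.0, 1.0, 0.0, 1.0))  # ~0.67: a 1-point edge at equal spreads
print(p_win(1.0, 3.0, 0.0, 1.0))  # ~0.58: the same edge, diluted by A's volatility
```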
I might be misunderstanding, but it looks to me like your proposed extension is essentially just the Elo model with some degrees of freedom that don't yet appear to matter?
The dot product has the property that <theta_A-theta_B,w> = <theta_A,w> - <theta_B,w>, so the only thing that matters is the <theta_P,w> for each player P, which is just a single scalar. So we are on a one-dimensional scale again where predictions are based on taking a sigmoid of the difference between a single scalar associated with each player.
As far as I can tell, the way that such a model could still be a nontrivial extension of Elo would be if you posited w could vary between games, whether randomly from some distribution or whether via additional parameters associated to players that influence what w is in the games they are involved in, or other things like that. But it seems you would need something like that, or else some source of nonlinearity, because if w is constant then every dimension orthogonal to that fixed w can never have any effect on predictions by the model.
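To spell that out with a two-dimensional toy (all vectors made up): two players with different theta vectors but equal projections onto the fixed w get exactly the same prediction, so the extra dimension never shows up.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w = np.array([1.0, 2.0])            # fixed weight vector
theta_a = np.array([2.0, 0.0])      # <theta_a, w> = 2
theta_b = np.array([0.0, 1.0])      # <theta_b, w> = 2, despite a different vector

p_a_beats_b = sigmoid(np.dot(theta_a - theta_b, w))
print(p_a_beats_b)                  # 0.5 -- components orthogonal to w never matter
```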
I assume you're familiar with the case of the parallel postulate in classical geometry as being independent of other axioms? Where that independence corresponds with the existence of spherical/hyperbolic geometries (i.e. actual models in which the axiom is false) versus normal flat Euclidean geometry (i.e. actual models in which it is true).
To me, this is a clear example of there being no such thing as an "objective" truth about the validity of the parallel postulate - you are entirely free to assume either it or incompatible alternatives. You end up with equally valid theories; it's just that those theories are applicable to different models, and those models are each useful in different situations, so the only thing it comes down to is which models you happen to want to use or explore or prove things about on a given day.
Similarly for the huge variety of different algebraic or topological structures (groups, ordered fields, manifolds, etc) - it is extremely common to have statements that are independent of the axioms, e.g. in a ring it is independent of the axioms whether multiplication is commutative or not. And both choices are valid. We have commutative rings, and we have noncommutative rings, and both are self-consistent mathematical structures that one might wish to study.
Loosely analogous to how one can write a compiler/interpreter for a programming language within other programming languages, some theories can easily simulate other theories. Set theories are particularly good and convenient for simulating other theories, but one can also simulate set theories within other seemingly more "primitive" theories (e.g. simulating it in theories of basic arithmetic via Godel numbering). This might be analogous to e.g. someone writing a C compiler in Brainfuck. Just like how it's meaningless to talk about whether a programming language or a given sub-version or feature extension of a programming language is more "objectively true" than another, there are many who take the position that the same holds for different set theories.
When you say you're "leaning towards a view that maintains objective mathematical truth" with respect to certain axioms, is there some fundamental principle by which you're discriminating the axioms that you want to assign objective truth from axioms like the parallel postulate or the commutativity of rings, which obviously have no objective truth? Or do you think that even in these latter cases there is still an objective truth?
> This thread analyzes what is going on under the hood with the chess transformer. It is a stronger player than the Stockfish version it was distilling, at the cost of more compute but only by a fixed multiplier, it remains O(1).
I found this claim suspect because this basically is not a thing that happens in board games. In complex strategy board games like Chess, practical amounts of search on top of a good prior policy and/or eval function (which Stockfish has), almost always outperforms any pure forward pass policy model that doesn't do explicit search, even when that pure policy model is quite large and extensively trained. With any reasonable settings, it's very unlikely that the distillation of Stockfish into a pure policy model produces a better player than Stockfish.
I skimmed the paper (https://arxiv.org/pdf/2402.04494), and had trouble finding such a claim, and indeed it seems the original poster of that thread later retracted that claim as due to their own mistake in interpreting the data table of the paper. The post where they acknowledge the mistake is much less prominent than the original post, link here: https://x.com/sytelus/status/1848239379753717874 . The chess transformer remains quite a bit weaker than the Stockfish it tries to predict/imitate.
Do you think a vision transformer trained on 2-dimensional images of the board state would also come up with a bag of heuristics, or would it naturally learn a translation-invariant algorithm taking advantage of the uniform way the architecture could process the board? (Let's say there are 64 one-pixel-by-one-pixel patches, perfectly aligned with the 64 board locations of an 8x8-pixel image, to make it maximally "easy" for both the model and for interpretability work.)
And would it differ based on whether one used an explicit 2D positional embedding, or a learned embedding, or a 1D positional embedding that ordered the patches from top to bottom, right to left?
I know that of course giving a vision transformer the actual board state like this shortcircuits the cool part where OthelloGPT tries to learn its own representation of the board. But I'm wondering if even in this supposedly easy setting it still would end up imperfect with a tiny error rate and a bag-of-heuristics-like way of computing legal moves.
And brainstorming a bit here: a slightly more interesting setting that might not shortcircuit the cool part would be if the input to the vision transformer was a 3D "video" of the moves on the board. E.g. the input[t][x][y] is 1 if on turn t, a move was made at (x,y), and 0 otherwise. Self-attention would presumably be causally-masked on the t dimension but not on x and y. Would we get a bag of heuristics here in the computation of the board state and the legal moves from that state?
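For concreteness, the kind of input tensor I have in mind would just be something like this (the move list is arbitrary):

```python
import numpy as np

moves = [(2, 3), (2, 2), (3, 2), (4, 2)]    # (x, y) played on turns 0..3

# T x 8 x 8 binary "video": frame t is all zeros except a 1 where turn t was played.
video = np.zeros((len(moves), 8, 8), dtype=np.float32)
for t, (x, y) in enumerate(moves):
    video[t, x, y] = 1.0

print(video.shape)   # (4, 8, 8)
# A transformer over this would attend causally along t (only earlier turns
# visible) but freely across the 8x8 spatial positions.
```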
I think in dictionaries one tends to find the "morally/legally bound" definition of "obligation" emphasized, and only sometimes a definition closer to the usage in the OP, so prescriptively, in the sense of linguistic prescriptivism, this criticism may make sense. But practically/descriptively, I do believe that among many English-speaking populations (including at least the one that contains me), "obligation" can currently also be used the way it is in the OP. For me at least, the usage of "obligation" did not pose any speed bumps in understanding the broader meaning of the post; it was unremarkable enough that the conscious thought that the word's usage might not match various common dictionaries' top or only definitions didn't register until this comment.
There can be things one can feel sad about in language evolution (for example the treadmill of words meaning "a thing is actually true" being appropriated into generic intensifiers, see "very", "truly", "literally", etc...). But it's worth noting that different regions/social groups/populations/etc. may be at different points along the space of different such language changes and diverge in what acceptable usages of words are. As such, if I thought a word like "obligation" was being misused and it was sufficiently jarring, my instinct might tend to be less to write a comment arguing why the original poster's usage is wrong, and more to ask the poster if they did in fact intend that meaning or were aware that it might be sneaking in a meaning or connotation that for some segment of their readership would come off as misleading or wrong.