Comments

Also, the specific cycle attack doesn’t work against other engines I think? In the paper their adversary doesn’t transfer very well to LeelaZero, for example. So it’s more one particular AI having issues, than a fact about Go itself.

Sure, but refutations don't transfer to different openings either, right?  I feel like most game-winning insights are contingent in this sense, rather than being fundamental to the game.

EDIT: also, I think if you got arbitrary I/O access to a Magnus simulator, and then queried it millions of times in the course of doing AlphaZero-style training to derive an adversarial example, I’d say it’s pretty borderline whether it’s you beating him. Clearly there’s some level of engine skill where it’s no longer you playing!

This is a really interesting hypothetical, but I see it differently.

If the engine isn't feeding me moves over the board (which I certainly agree would be cheating), then it has to give me something I can memorize and use later.  But I can't memorize a whole dense game tree full of winning lines (and the AI can't calculate that anyway), so it has to give me something compressed (that is, abstract) that I can decompress and apply to a bunch of different board positions.  If a human trainer did that we'd call those compressed things "insights", "tactics", or "strategies", and I don't think making the trainer into a mostly-superhuman computer changes anything.  I had to learn all the tactics and insights, and I had to figure out how to apply them; what is chess, aside from that?

Also, I wouldn't expect that Carlsen has any flaws that a generally weak player, whether human or AI, could exploit the way the cyclic adversary exploits KataGo.  An adversarially-trained engine would find flaws, and win consistently, but if the pattern in its play were comprehensible at all it would be the kind of thing that you have to be a super-GM yourself to take advantage of.  Maybe your intuition is different here?  In the limit I'd definitely agree with you: if the adversarial AI spit out something like "Play 1. f3 2. Kf2 and he'll have a stroke and start playing randomly", then yeah, that can't really be called "beating Carlsen" anymore.  But the key point there, to me, isn't the strength of the trainer so much as the simplicity of the example; I'd have the same objection no matter how easy the exploit was to find.

Curious what you make of this.

Like I said, I feel like I hear it a lot, and in practice I don't think it's confusing because the games that get solved by theorists and the games that get "solved" by AIs are in such vastly different complexity regimes.  Like, if you heard that Arimaa had been solved, you'd immediately know which sense was meant, right?

Having said that, the voters clearly disagree and I'm not that attached to it, so I'm going to rename the post.  Can you think of a single adjective or short phrase that captures the quality that chess has, and Starcraft doesn't, WRT AI?  That's really what I want people to take away.

If I can't think of anything better I expect I'll go with "There are (probably) no superhuman Go AIs yet: ...".

I see where you're coming from, but I don't think the exploit search they did here is fundamentally different from other kinds of computer preparation. If I were going to play chess against Magnus Carlsen I'd definitely study his games with a computer, and if that computer found a stunning refutation to an opening he liked I'd definitely play it. Should we say, then, that the computer beat Carlsen, and not me? Or leave the computer aside: if I were prepping with a friend, and my friend found the winning line, should we say my friend beat Carlsen? What if I found the line in an opening book he hadn't read?

You win games of chess, or go, by knowing things your opponent doesn't. Where that knowledge comes from matters to the long term health of the game, but it isn't reflected in the match score, and I think it's the match score that matters most here.

To my embarrassment, I have not been paying much attention to adversarial RL at all. It is clearly past time to start!

In a game context you're right, of course.  But I often hear AI people casually say things like "chess is solved", meaning something like "we solved the problem of getting AIs to be superhumanly good at chess" (example).  For now I think we have to stop saying that about go, and instead talk about it more like how we talk about Starcraft.

Well, I'm hardly an expert; I've just read all the posts.  Marcello summed up my thinking pretty well.  I don't think I understand how you see it yet, though.  Is it that the adversary's exploit is evidence of a natural abstraction in Go that both AIs were more-or-less able to find, because it's expressible in the language of live groups and capturing races?

You can imagine the alternative, where the "exploit" is just the adversary making moves that seem reasonable but not optimal, but then KataGo doesn't respond well, and eventually the adversary wins without there ever being anything a human could point to and identify as a coherent strategy.

In the paper, David Wu hypothesized one other ingredient: the stones involved have to form a circle rather than a tree (that is, excluding races that involve the edge of the board).  I don't think I buy his proposed mechanism but it does seem to be true that the bait group has to be floating in order for the exploit to work.

Interesting point about the scaling hypothesis.  My initial take was that this was a slightly bad sign for natural abstractions: Go has a small set of fundamental abstractions, and this attack sure makes it look like KataGo didn't quite learn some of them (liberties and capturing races), even though it was trained on however many million games of self-play and has some customizations designed to make those specific things easier.  Then again, we care about Go exactly because it resisted traditional AI for so long, so maybe those abstractions aren't as natural in KataGo's space as they are in mine, and some other, more generally-useful architecture would be better behaved.

Definitely we're missing efficient lifelong learning, and it's not at all clear how to get there from current architectures.

The comments in that post are wrong; the exploit does not rely on a technicality or specific rule variant.  I explained how to do it in my post, cross-posted just now with Vanessa's post here.

Any time you get a data point about X, you get to update both on X and on the process that generated the data point.  If you get several data points in a row, then as your view of the data-generating process changes you have to re-evaluate all of the data it gave you earlier.  Examples (the first is worked through with toy numbers in the sketch after this list):

  • If somebody gives me a strong-sounding argument for X and several weak-sounding arguments for X, I'm usually less persuaded than if I just heard a strong-sounding argument for X.  The weak-sounding arguments are evidence that the person I'm talking to can't evaluate arguments well, so it's relatively more likely that the strong-sounding argument has a flaw that I just haven't spotted.
  • If somebody gives me a strong-sounding argument for X and several reasonable-but-not-as-strong arguments against X, I'm more persuaded than just by the strong argument for X.  This is because the arguments against X are evidence that the data-generating process isn't filtered (there's an old Zack_M_Davis post about this but I can't find it).  But this only works to the extent that the arguments against X seem like real arguments and not strawmen: weak-enough arguments against X make me less persuaded again, because they're evidence of a deceptive data-generating process.
  • If I know someone wants to persuade me of X, I mostly update less on their arguments than I would if they were indifferent, because I expect them to filter and misrepresent the data (but this one is tricky: sometimes the strong arguments are hard to find, and only the enthusiasts will bother).
  • If I hear many arguments for X that seem very similar I don't update very much after the first one, since I suspect that all the arguments are secretly correlated.
  • On social media the strongest evidence is often false, because false claims can be better optimized for virality.  If I hear lots of different data points of similar strength, I'll update more strongly on each individual data point, since a spread of similarly-strong claims is less likely to be the product of that kind of selection.
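
Here's a minimal numeric sketch of that first bullet (the model and all the likelihoods are made up, just to show the mechanism): once you also update on the data-generating process -- here, whether the arguer can evaluate arguments well -- extra pro-X arguments can lower the posterior on X.

```python
from itertools import product

# Toy model: X is either true or false, and the arguer is either a "good" or a
# "poor" evaluator of arguments.  Every number here is invented for illustration.
P_STRONG = {  # chance of producing a strong-sounding argument for X
    ("good", True): 0.8, ("good", False): 0.1,
    ("poor", True): 0.5, ("poor", False): 0.4,
}
P_WEAK = {"good": 0.1, "poor": 0.6}  # chance of each weak-sounding argument for X

def p_x_true(n_weak):
    """Posterior P(X) after one strong-sounding argument plus n_weak weak ones,
    starting from uniform priors over X and over the arguer's type."""
    joint = {}
    for arguer, x in product(("good", "poor"), (True, False)):
        joint[(arguer, x)] = 0.25 * P_STRONG[(arguer, x)] * P_WEAK[arguer] ** n_weak
    total = sum(joint.values())
    return sum(p for (arguer, x), p in joint.items() if x) / total

print(f"strong argument alone:       P(X) = {p_x_true(0):.2f}")  # ~0.72
print(f"strong plus three weak ones: P(X) = {p_x_true(3):.2f}")  # ~0.56
```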

None of this is cheap to compute; there are a bunch of subtle, clashing considerations.  So if we don't have a lot of time, should we use the sum, or the average, or what?  Equivalently: what prior should we have over data-generating processes?  Here's how I think about it:

Sum: Use this when you think your data points are independent, and not filtered in any particular way -- or if you think you can precisely account for conditional dependence, selection, and so on.  Ideal, but sometimes impractical and too expensive to use all the time.

Max: Useful when your main concern is noise.  Probably what I use the most in my ordinary life.  The idea is that most of the data I get doesn't pertain to X at all, and the data that is about X is both subject to large random distortions and probably secretly correlated in a way that I can't quantify very well.  Nevertheless, if X is true you should expect to see signs of it, here and there, and tracking the max leaves you open to that evidence without having to worry about double-updating.  As a bonus, it's very memory efficient: you only have to remember the strongest data favoring X and the strongest data disfavoring it, and can forget all the rest.

Average: What I use when I'm evaluating an attempt at persuasion from someone I don't know well.  Averaging is a lousy way to evaluate arguments but a pretty-good-for-how-cheap-it-is way to evaluate argument-generating processes.  Data points that aren't arguments probably shouldn't ever be averaged.

Min: I don't think this one has any legitimate use at all.  Lots of data points are only very weakly about X, even when X is true.
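
As a crude sketch of how the four rules differ (not something from the original discussion; the log-odds framing and all the numbers are invented), suppose each data point is summarized as a log-odds update in favor of X:

```python
import math

def posterior(prior, log_odds_update):
    """Combine a prior probability with an aggregate log-odds update."""
    logit = math.log(prior / (1 - prior)) + log_odds_update
    return 1 / (1 + math.exp(-logit))

# Invented evidence: positive values favor X, negative values disfavor it.
updates = [2.0, 0.3, 0.2, -0.5]

strongest_for = max((u for u in updates if u > 0), default=0.0)
strongest_against = min((u for u in updates if u < 0), default=0.0)

rules = {
    "sum": sum(updates),                       # independent, unfiltered data points
    "max": strongest_for + strongest_against,  # strongest point on each side, rest forgotten
    "average": sum(updates) / len(updates),    # grades the generating process, not the data
    "min": min(updates, key=abs),              # weakest signal; included only for contrast
}

for name, update in rules.items():
    print(f"{name:7s} -> P(X) = {posterior(0.5, update):.2f}")  # 0.88, 0.82, 0.62, 0.55
```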

All of these heuristics have cases where they abjectly fail, and none of them work well when your adversary is smarter than you are.

Answer by Taran, Feb 03, 2023

Strictly speaking asymptotic analysis is not very demanding: if you have a function f(n) that you can bound above in the limit as a function of n, you can do asymptotic analysis to it.  In practice I mostly see asymptotic analysis used to evaluate counterfactuals: you have some function or process that's well-behaved for n inputs, and you want to know if it will still be well-enough behaved for much larger n, without actually doing the experiment.  You're rendering ten characters on the screen in your video game -- could you get away with rendering 20, or would you run out of graphics card memory?  Your web site is serving 100 requests per second with low latency -- if you suddenly had 1000 requests per second instead, would latency still be low?  Would the site still be available at all?  Can we make a large language model with the exact same architecture as GPT-3, but a book-sized context window?  Asymptotic analysis lets you answer questions like that without having to do experiments or think very hard -- so long as you understand f correctly.
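
As a minimal sketch of that kind of counterfactual estimate (the extrapolate helper, the numbers, and the growth-rate assumptions are all invented for illustration):

```python
import math

def extrapolate(cost_at_n, n, n_target, growth):
    """Predict the cost at n_target, assuming cost scales like growth(n) for
    large n.  The estimate is only as good as that assumption."""
    return cost_at_n * growth(n_target) / growth(n)

# Rendering 10 characters uses, say, 300 MB of GPU memory.  If memory grows
# roughly linearly in character count, what would 20 characters cost?
print(extrapolate(300, 10, 20, growth=lambda n: n))                  # ~600 MB

# A web service handles 100 requests/sec at 20 ms per request.  If per-request
# work grows like n*log(n) in the request rate, what happens at 1000/sec?
print(extrapolate(20, 100, 1000, growth=lambda n: n * math.log(n)))  # ~300 ms
```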

When I'm reviewing software designs, I do this kind of analysis a lot. There, it's often useful to distinguish between average-case and worst-case analyses: when you're processing 100 million records in a big data analysis job you don't care that much about the variance in processing any individual record, but when you're rendering frames in a video game you work hard to make sure that every single frame gets done in less than 16 milliseconds, or whatever your budget is, even if that means your code is slower on average.
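
A toy illustration of why the two views can disagree (the frame times are made up):

```python
import random
import statistics

random.seed(0)
# Pretend 95% of frames render in 4 ms but 5% hit a slow path and take 40 ms.
frame_ms = random.choices([4, 40], weights=[95, 5], k=10_000)

print(f"average: {statistics.mean(frame_ms):.1f} ms")  # ~5.8 ms -- looks comfortably fast
print(f"worst:   {max(frame_ms)} ms")                  # 40 ms -- blows a 16 ms frame budget
```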

This makes it sound like a computer science thing, and for me it mostly is, but you can do the same thing to any scaling process. For example, in some cultures, when toasting before drinking, it's considered polite for each person to toast each other person, making eye contact with them, before anyone drinks for real.  If you're in a drinking party of n people, how many toasts should there be and how long should we expect them to take?  Well, clearly there are about n²/2 pairs of people, but you can do n/2 toasts in parallel, so with good coordination you should expect the toasts to take time proportional to n...

...except that usually these toasts are done in a circle, with everyone holding still, so there's an additional hard-to-model constraint around the shared toasting space in the circle.  That is to me a prototypical example of the way that asymptotic analysis can go wrong: our model of the toasting time as a function of the number of people was fine as far as it went, but it didn't capture all the relevant parts of the environment, so we got different scaling behavior than we expected.
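
For the back-of-the-envelope version, here's a minimal sketch (the toast_rounds helper is hypothetical, and it ignores the shared-space constraint above):

```python
import math

def toast_rounds(n):
    """Lower bound on toasting rounds for n people, assuming any n//2
    disjoint pairs can toast simultaneously."""
    pairs = n * (n - 1) // 2            # every pair toasts exactly once
    return math.ceil(pairs / (n // 2))  # rounds needed with perfect coordination

for n in (4, 8, 16, 32):
    print(n, toast_rounds(n))  # 3, 7, 15, 31: grows linearly in n
```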

(The other famous use of asymptotic analysis is in hardness proofs and complexity theory, e.g. in cryptography, but those aren't exactly "real world processes" even though cryptography is very real).
