An open mind is like a fortress with its gates unbarred and unguarded.
Thanks for pointing that out, I've added a note in the description.
There are countries where cooperative firms are doing fine. Most of Denmark's supermarket chains are owned by the cooperative Coop. Denmark's largest dairy producer, Arla, is a cooperative too. Both operate in a free market and are out-competing privately owned competitors.
Both also resort to many of the same dirty tricks that traditionally structured firms pull. Arla, for example, has done tremendous harm to the plant-based industry through aggressive lobbying. Structuring firms as cooperatives doesn't magically make them aligned.
Cicero, as it is redirecting its entire fleet: 'What did you call me?'
Yeah, my original claim is wrong. It's clear that KataGo is just playing sub-optimally out of distribution, rather than being punished for playing optimally under a different ruleset than the one it's being evaluated under.
Actually, this modification shouldn't matter. After looking into the definition of pass-alive, the dead stones in the adversarial attacks are clearly not pass-alive.
Under both unmodified and pass-alive-modified Tromp-Taylor rules, KataGo would lose here, and it's surprising that self-play left such a weakness.
The authors are definitely onto something, and my original claim that the attack only works due to KataGo being trained under a different ruleset is incorrect.
No, the KataGo paper explicitly states at the start of page 4:
"Self play games used Tromp-Taylor rules  modified to not require capturing stones within pass-aliveterritory"
Had KataGo been trained on unmodified Tromp-Taylor rules, the attack would not have worked. The attack only works because the authors are having KataGo play under a different ruleset than it was trained on.
If I have the details right, I am honestly very confused about what the authors are trying to prove with this paper. Given that their Twitter announcement claimed the rulesets were the same, my best guess is simply that it was an oversight on their part.
(EDIT: this modification doesn't matter, the authors are right, I am wrong. See my comment below)
As someone who plays a lot of Go, this result looks very suspicious to me. It looks like the primary reason this attack works is an artifact of the automatic scoring system used in the attack. I don't think this attack would be replicable in other games, or even against KataGo trained on a correct implementation.
In the example included on the website, KataGo (White) is passing because it correctly identifies the adversary's (Black) stones as dead, meaning the entire outside would be its territory. Playing any move in KataGo's position would gain no points (and would lose a point under Japanese scoring rules), so KataGo passes.
The game then ends, and the automatic scoring system designates the outside as undecided, granting White 0 points and giving Black the win.
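For concreteness, here's a minimal sketch of how Tromp-Taylor area scoring treats this situation. This is my own toy implementation, not the scoring code used in the attack: under area scoring, an empty region counts for a colour only if it touches stones of exactly that one colour, so leftover "dead" stones make the surrounding region neutral.

```python
from collections import deque

def tromp_taylor_score(board):
    """board: list of strings; '.' empty, 'b' black, 'w' white.
    Returns (black_points, white_points) under naive area scoring."""
    rows, cols = len(board), len(board[0])
    seen = set()
    score = {"b": 0, "w": 0}

    for r in range(rows):
        for c in range(cols):
            cell = board[r][c]
            if cell in score:
                score[cell] += 1          # stones on the board count as points
            elif (r, c) not in seen:
                # Flood-fill this empty region, recording bordering colours.
                region, borders = [], set()
                queue = deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < rows and 0 <= nx < cols:
                            n = board[ny][nx]
                            if n == '.' and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                queue.append((ny, nx))
                            elif n in score:
                                borders.add(n)
                if len(borders) == 1:
                    # Region touches only one colour: it is that colour's territory.
                    score[borders.pop()] += len(region)
                # If it touches both colours, the region is neutral: nobody scores.
    return score["b"], score["w"]
```

On a board where White surrounds everything, White gets the whole outside; but leave a single uncaptured Black stone in that region and the scorer sees it bordering both colours, so the entire region scores zero for White. That is the artifact being described: the scorer has no notion of "dead" stones, only of what physically remains on the board.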
If the match were played between two human players, they would have to agree whether the outside territory belongs to White or not. If Black were to claim their outside stones are alive, the game would continue until both players pass and agree on the status of all territory (see 'disputes' in the AGA ruleset).
But in the adversarial attack, the game ends after the pass and Black gets the win because the automatic scoring system decides the outcome. Yet the only reason KataGo passed is that it correctly inferred it was in a winning position with no way to increase its winning probability! Calling that a successful adversarial attack rings a bit hollow to me.
I wouldn't conclude anything from this attack, other than that Go is a game with a lot of edge-cases that need to be correctly handled.
EDIT: I just noticed the authors address this on the website, but I still think it significantly diminishes the 'impressiveness' of the adversarial attack. I don't know the exact ruleset KataGo is trained under, but unless it's exactly the same as the ruleset used to evaluate the adversarial attack, the attack only works because KataGo is playing to win a different game than the adversary.
Evaluating the RCT is a chance to train the evaluation muscle in a well-defined domain with feedback. I've generally found that the people who are best at evaluations in RCT-able domains are also better at evaluating hard-to-evaluate claims.
Often the difficult-to-evaluate domains have ways of getting feedback too, but if you're not in the habit of looking for it, you're less likely to find the creative ways to get data.
I think a much more common failure mode within this community is that we end up with wildly overconfident beliefs about hard-to-evaluate domains, because there aren't many feedback loops and we aren't in the habit of looking for them.
Does anyone know of any zero-trust investigations of nuclear risk done in the EA/Rationalist community? Open Phil has funded nuclear work, so they probably have an analysis somewhere that concluded it is a serious risk to civilization, but I haven't looked into these analyses.
If it makes it easier, I can add the questions to Manifold if you provide a list of questions and resolution criteria.