social system designer http://aboutmako.makopool.com
(I'm aware of most of these games)
I made it pretty clear in the article that it isn't about purely cooperative games. (Though I wonder if they'd be easier to adapt. Cooperative + complications seems closer to the character of a cohabitive game than competitive + non-zero-sum score goals do...)
Gloomhaven seems to be, and describes itself as, a cooperative game. What competitive elements are you referring to?
The third tier is worth talking about. I think these sorts of games might, if you played them enough, teach the same skills, but I think you'd have to play them for a long time. My expectation is that basically all of them end with a ranking? As you said: first, second, third. The ranking isn't scored (i.e., we aren't told that being second is half as good as being first), so there's not much clarity about how much players should value the rankings, which is one obstacle to learning. Rankings also keep the game zero-sum on net, and zero-sum dynamics between first and second, or between first and the alliance, hold the focus of your attention most of the time. The fewer or more limited the mutually beneficial deals are, the less social learning there will be. Zero-sum dynamics need to be discussed in cohabitive games, but the games will support more efficient learning if those dynamics are reduced.
And there really are a lot of people who think that the game humans are playing in the real world is zero-sum, that all real games are zero-sum. So I also suspect that these sorts of games might never teach the skill, because to teach it you have to show people a way out of that mindset, and all these games do is reinforce it.
competitive [...] not usually permanent alliances are critical to victory: Diplomacy, Twilight Imperium (all of them), Cosmic Encounter
This category is really interesting, because the alliances expire and have to be remade multiple times per game, and I've been meaning to play some games from it. But these games are also a lot more foggy: the agreements are of poor quality, and they invite only limited amounts of foresight and social creativity. In contrast, writing good legislation in the real world seems to require more social creativity than we can currently produce.
Imagining a pivotal act of generating very convincing arguments for, say, voting and parliamentary systems that would turn government into 1) a working democracy 2) that's capable of solving the problem. Citizens and congress read the arguments, get fired up, and the problem is solved through proper channels.
Yeah.
Well that's the usual reason to invoke it, I was more talking about the reason it lands as a believable or interesting explanation.
Notably, Terra Ignota managed to produce a McGuffin by making the canner device extremely illegal: even knowledge of its existence is a threat to the world's information infrastructure. So I'd guess that's the reason, IIRC, that they only made one.
I'm guessing they mean that the performance curve reaches much lower loss before it begins to trail off, while MLPs lose momentum much sooner. So even if MLPs are faster per unit of performance at small parameter counts and data scales, there's no way they will be at scale, to the extent that it's almost not worth comparing in terms of compute? (Which would be an inherently rough measure anyway because, as I touched on, the relative compute will change as soon as specialized spline hardware starts to be built. Due to hardware specialization for matmul|relu, the relative performance comparison today is probably absurdly unfair to any new architecture.)
Theoretically and empirically, KANs possess faster neural scaling laws than MLPs
What do they mean by this? Isn't that contradicted by their recommendation to use an ordinary architecture if you want fast training:
It seems like they mean faster per parameter, which is an unclear claim, given that each parameter or step here appears to represent more computation than a parameter/step in a matmul|relu network would (there's no mention of flops). Maybe you could buff that out with specialized hardware, but they don't discuss hardware.
One might worry that KANs are hopelessly expensive, since each MLP's weight parameter becomes KAN's spline function. Fortunately, KANs usually allow much smaller computation graphs than MLPs. For example, we show that for PDE solving, a 2-Layer width-10 KAN is 100 times more accurate than a 4-Layer width-100 MLP (10⁻⁷ vs 10⁻⁵ MSE) and 100 times more parameter efficient (10² vs 10⁴ parameters) [the exponents render flat in the PDF, as "102 vs 104", which read literally would only be about 1.01 times more parameter efficient].
I'm not sure this answers the question. What are the parameters, anyway? Are they just single floats? If they're not, the comparison is pretty misleading.
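To make the per-parameter question concrete, here's a back-of-envelope sketch of the two architectures the quote compares. The input/output widths, grid size, and spline order here are my assumptions, not values from the quote; I'm modeling each KAN edge as a learnable spline with roughly (grid + order) coefficients, which is the part that makes a "parameter" mean different things in the two architectures.

```python
# Hypothetical parameter-count model for the quoted comparison.
# Assumptions (not from the quote): 2 inputs, 1 output, each KAN edge
# carries a spline with (grid + order) coefficients, grid=5, order=3.

def mlp_params(widths):
    """Dense layers: one weight per edge, plus biases."""
    return sum(a * b + b for a, b in zip(widths, widths[1:]))

def kan_params(widths, grid=5, order=3):
    """One learnable spline per edge, ~(grid + order) coefficients each."""
    return sum(a * b * (grid + order) for a, b in zip(widths, widths[1:]))

mlp = mlp_params([2, 100, 100, 100, 1])  # "4-Layer width-100 MLP"
kan = kan_params([2, 10, 1])             # "2-Layer width-10 KAN"
print(mlp, kan)  # → 20601 240
```

Under these assumptions the MLP lands on the order of 10⁴ parameters and the KAN on the order of 10², roughly matching the quote's ~100× ratio. But note each KAN "parameter" buys a slice of a spline evaluation rather than one multiply-accumulate, which is why a parameter-count comparison without a flop count is hard to interpret.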
often means "train the model harder and include more CoT/code in its training data" or "finetune the model to use an external reasoning aide", and not "replace parts of the neural network with human-understandable algorithms".
The intention of this part of the paragraph wasn't totally clear, but you seem to be saying this wasn't great? From what I understand, all of these actually did make the model far more interpretable?
Chain of thought is a wonderful thing: it clears a space where the model will just earnestly confess its inner thoughts and plans in a way that isn't subject to training pressure, so in most ways it can't learn to be deceptive about them.
This is good! I would recommend it to a friend!
Some feedback.
But overall I think it addresses a certain audience I know much better than my version of this would have (I hastily wrote mine last year when I was summoned to speak at a conference, and so I never showed it to them; maybe one day I will show them yours).
:( that isn't what cooperation would look like. The gazelles can reject a deal that would lead to their extinction (they have better alternatives) and impose a deal that would benefit both species.
Cooperation isn't purely submissive compliance.