:( that isn't what cooperation would look like. The gazelles can reject a deal that would lead to their extinction (they have better alternatives) and impose a deal that would benefit both species.

Cooperation isn't purely submissive compliance.

(I'm aware of most of these games)

I made it pretty clear in the article that it isn't about purely cooperative games. (Though I wonder if they'd be easier to adapt. Cooperative + complications seems closer to the character of a cohabitive game than competitive + non-zero-sum score goals do...)

Gloomhaven seems to be, and describes itself as, a cooperative game. What competitive elements are you referring to?

The third tier is worth talking about. I think these sorts of games might, if you played them enough, teach the same skills, but I think you'd have to play them for a long time. My expectation is that basically all of them end with a ranking, as you said: first, second, third. The ranking isn't scored (ie, we aren't told that being second is half as good as being first), so there's not much clarity about how much players should value each position, which is one obstacle to learning. Rankings also keep the game zero sum on net, and the zero sum dynamics between first and second, or between first and the alliance, claim most of your attention. The fewer or more limited the mutually beneficial deals, the less social learning there will be. Zero sum dynamics need to be discussed in cohabitive games, but the games will support more efficient learning if those dynamics are reduced.
And there really are a lot of people who think that the game humans are playing in the real world is zero sum, that all real games are zero sum. So I also suspect that these sorts of games might never teach the skill, because to teach the skill you have to show people a way out of that mindset, and all these games do is reinforce it.
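To spell out the "zero sum on net" point, here's a toy sketch (the rank payoffs are invented for illustration, not taken from any particular game): when the only rewards are fixed payoffs for finishing first, second, or third, the total payoff is the same under every outcome, so any deal is pure redistribution.

```python
from itertools import permutations

# Hypothetical payoffs for 1st/2nd/3rd place; any fixed values behave the same.
rank_payoffs = {1: 1.0, 2: 0.5, 3: 0.0}

for finish_order in permutations(["A", "B", "C"]):
    payoffs = {player: rank_payoffs[rank]
               for rank, player in enumerate(finish_order, start=1)}
    print(payoffs, "sum =", sum(payoffs.values()))

# Every outcome sums to 1.5: the pie is fixed, so a deal that helps one
# player necessarily hurts another. Scored, non-constant totals are what
# open up genuinely mutually beneficial deals.
```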

competitive [...] not usually permanent alliances are critical to victory: Diplomacy, Twilight Imperium (all of them), Cosmic Encounter

This category is really interesting because the alliances expire and have to be remade multiple times per game, and I've been meaning to play some games from it. But these games are also a lot more foggy: the agreements are of poor quality, and they invite only limited amounts of foresight and social creativity. In contrast, writing good legislation in the real world seems to require more social creativity than we can currently produce.

Imagining a pivotal act of generating very convincing arguments for, like, voting and parliamentary systems that would turn government into 1) a working democracy 2) that's capable of solving the problem. Citizens and congress read the arguments, get fired up, and the problem is solved through proper channels.

Yeah.

Well, that's the usual reason to invoke it; I was more talking about the reason it lands as a believable or interesting explanation.

Notably, Terra Ignota managed to produce a McGuffin by making the canner device extremely illegal: even knowledge of its existence is a threat to the world's information infrastructure, so I'd guess that's the reason, iirc, that they only made one.

I'm guessing they mean that the KAN performance curve seems to reach much lower loss before it begins to trail off, while MLPs lose momentum much sooner. So even if MLPs are faster per unit of performance at small parameter counts and data scales, there's no way they will be at scale, to the extent that it's almost not worth comparing in terms of compute? (Which would be an inherently rough measure anyway because, as I touched on, the relative compute will change as soon as specialized spline hardware starts to be built. Given today's specialization for matmul|relu, the relative performance comparison is probably absurdly unfair to any new architecture.)
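A toy way to see how a "faster scaling law" can coexist with being slower at small scale (all the constants and exponents below are invented for illustration, not fitted to anything in the paper): if both architectures follow power laws, loss = a * N^(-alpha), then the one with the larger alpha eventually wins even if it starts out worse, and the gap keeps widening from there.

```python
# Hypothetical power-law fits: loss = a * N**(-alpha).
# A larger alpha is a "faster" scaling law.
a_mlp, alpha_mlp = 1.0, 0.30   # better at small N, shallower curve
a_kan, alpha_kan = 3.0, 0.60   # worse at small N, steeper curve

# Crossover where a_kan * N**-alpha_kan == a_mlp * N**-alpha_mlp:
n_star = (a_kan / a_mlp) ** (1 / (alpha_kan - alpha_mlp))
print(f"curves cross at N ~ {n_star:.0f}")

for n in [10, 100, 1_000, 10_000]:
    print(f"N={n:>6}: MLP loss {a_mlp * n ** -alpha_mlp:.3f}, "
          f"KAN loss {a_kan * n ** -alpha_kan:.3f}")

# Below the crossover the MLP has lower loss; above it the KAN does,
# which is compatible with "don't use KANs if you want fast (small-scale)
# training" and "KANs have faster scaling laws" both being true.
```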

Theoretically and empirically, KANs possess faster neural scaling laws than MLPs

What do they mean by this? Isn't it contradicted by their recommendation to use an ordinary architecture if you want fast training:

[A section from their diagram where they recommend against KANs if you want fast training]

It seems like they mean faster per parameter, which is an... unclear claim, given that each parameter or step here appears to represent more computation (there's no mention of flops) than a parameter or step in a matmul|relu network would? Maybe you could buff that out with specialized hardware, but they don't discuss hardware.

One might worry that KANs are hopelessly expensive, since each MLP's weight parameter becomes KAN's spline function. Fortunately, KANs usually allow much smaller computation graphs than MLPs. For example, we show that for PDE solving, a 2-Layer width-10 KAN is 100 times more accurate than a 4-Layer width-100 MLP (10^-7 vs 10^-5 MSE) and 100 times more parameter efficient (10^2 vs 10^4 parameters) [the exponents flatten to "102 vs 104" when copied from the pdf, which at face value would only be 1.01 times more parameter efficient].

I'm not sure this answers the question. What are the parameters, anyway? Are they just single floats? If they're not, that's pretty misleading.
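To make the flops question concrete, here's a rough back-of-envelope sketch. Each KAN parameter is, as far as I can tell, still a single float (a spline coefficient); the catch is that each edge carries about G + k of them (G grid intervals, spline order k, per the paper's parameterization), and the per-edge evaluation cost below is my own crude guess, not a number they report.

```python
def mlp_layer(n_in, n_out):
    """One dense layer: one float per connection, one multiply-add each."""
    params = n_in * n_out
    flops = 2 * n_in * n_out
    return params, flops

def kan_layer(n_in, n_out, G=5, k=3):
    """One KAN layer, roughly: each of the n_in*n_out edges carries a
    B-spline with ~(G + k) coefficient floats, and evaluating it costs
    on the order of k*(G + k) flops (a crude de Boor-style estimate)."""
    edges = n_in * n_out
    params = edges * (G + k)
    flops = edges * k * (G + k)
    return params, flops

for name, (p, f) in [("width-100 MLP layer", mlp_layer(100, 100)),
                     ("width-10 KAN layer", kan_layer(10, 10))]:
    print(f"{name}: {p} params, ~{f} flops, {f / p:.1f} flops/param")

# The KAN layer has far fewer parameters, but each connection costs an
# order of magnitude more compute, so "100x more parameter efficient"
# doesn't directly translate into a 100x compute advantage.
```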

often means "train the model harder and include more CoT/code in its training data" or "finetune the model to use an external reasoning aide", and not "replace parts of the neural network with human-understandable algorithms". 

The intention of this part of the paragraph wasn't totally clear, but you seem to be saying this wasn't great? From what I understand, these actually did all make the model far more interpretable?

Chain of thought is a wonderful thing; it clears a space where the model will just earnestly confess its inner thoughts and plans in a way that isn't subject to training pressure, and so, in most ways, the model can't learn to be deceptive about it.

This is good! I would recommend it to a friend!

Some feedback.

  • An individual human can be inhumane, but the aggregate of human values kind of visibly isn't, and in most ways couldn't be: human cultures reliably get more humane as transparency/reflection and coordination increase over time, but also, inevitably, aggregating a bunch of concave values produces a value system that treats all of the subjects of the aggregation pretty decently (a toy numeric sketch follows this list).
    A lot of the time, when people accuse us of conflating something, we equate those things because we have an argument that they're going to turn out to be equivalent.
    So emphasizing a difference between these two things could be really misleading, and possibly kinda harmful, given that it could undermine the implementation of the simplest/most arguably correct solutions to alignment (which are just aggregations of humans' values). This could be a whole conversation, but could we just not define humane values as being necessarily distinct from human values? How about this:
    • People are sometimes confused by 'human values', since the phrase seems to assume that all humans value the same things, when in fact many humans have values that conflict with the preferences of other humans. When we say 'humane values', we mean a value system that does a decent job of balancing and reconciling the preferences of every human (Humans, Every one).
  • [graph point for "systems programmer with mlp shirt"] would it be funny if there were another point, "systems programmer without mlp shirt", and it was Pareto-inferior?
  • "What if System 2 is System 1". This is a great insight, I think it is, and I think the main reason nerdy types often fail to notice how permeable and continuous the boundary is a kind of tragic habitual cognitive autoimmune disease, and I have a post brewing about this after I used a repaired relationship with the unconscious bulk to cure my astigmatism (I'm going to let it sit for a year just to confirm that the method actually worked and myopia really was averted)
  • Exponential growth is usually not slow, and even if it were slow, it wouldn't entail that "we'll get 'warning shots' & a chance to fight back": it only takes a small sustained advantage to utterly win a war (and though contemporary humans don't like to carry wars to completion, the 20th century should have been a clear lesson that such things are within our abilities at current tech levels). Even if progress in capabilities over time continued to be linear, impact as a function of capabilities is not going to be linear; it never has been.
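On the first bullet's claim about aggregating concave values, a minimal numeric sketch (the square-root utilities and the single shared resource are made-up assumptions, just to show the mechanism): when individual utilities have diminishing returns, maximizing their sum favors spreading resources out rather than sacrificing anyone.

```python
import math

# Hypothetical setup: split 1 unit of a resource between two people who
# both have the concave utility sqrt(x) (diminishing returns).
def total_utility(x):          # x goes to person A, the rest to person B
    return math.sqrt(x) + math.sqrt(1 - x)

best = max((i / 1000 for i in range(1001)), key=total_utility)
print("optimal share for A:", best)   # -> 0.5, the even split

# Because marginal utility shrinks as anyone's share grows, the aggregate
# is maximized by treating every subject of the aggregation decently,
# even though no individual utility function demands that.
```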

But overall I think it addresses a certain audience, whom I know, much better than the version of this that I hastily wrote last year (when I was summoned to speak at a conference) would have. (So I never showed them mine. Maybe one day I will show them yours.)
