Matthew Farrugia-Roberts
Matthew Farrugia-Roberts has not written any posts yet.

W. Ross Ashby's Law of Requisite Variety (1956) suggests fundamental limits to human control over more capable systems.
This law sounds super enticing and I want to understand it more. Could you spell out how the law suggests this?
I did a quick search of LessWrong and Wikipedia regarding this law.
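For what it's worth, a toy counting argument helped me see what the law claims. In Ashby's setup, a regulator with R distinct responses facing D distinct disturbances cannot force the outcomes into fewer than roughly D/R values: only variety in the regulator can absorb variety in the disturbances. A brute-force sketch (my own toy outcome table, not from Ashby) confirms the bound for small D and R:

```python
import itertools
import math

# A toy illustration (my own construction, not from Ashby's book):
# disturbances d in {0, ..., D-1}, regulator responses r in {0, ..., R-1},
# outcome table outcome(d, r) = (d + r) % D, chosen so that each fixed
# response maps distinct disturbances to distinct outcomes.

def best_outcome_variety(D, R):
    """Search every regulator strategy (a choice of response for each
    disturbance) and return the smallest achievable number of distinct
    outcomes."""
    best = D
    for strategy in itertools.product(range(R), repeat=D):
        outcomes = {(d + strategy[d]) % D for d in range(D)}
        best = min(best, len(outcomes))
    return best

# Ashby's bound: outcome variety cannot drop below D / R, i.e. a regulator
# can only absorb as much disturbance variety as it has response variety.
D, R = 6, 2
print(best_outcome_variety(D, R), math.ceil(D / R))  # both 3
```

The relevance to control of more capable systems, as I understand the suggestion, is that a less-varied controller (the human) cannot fully regulate a more-varied system, but I'd still like that spelled out.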
I like this analogy, but there are a couple of features that I think make it hard to think about:
1. The human wants to play, not just to win. You stipulated that "the human aims to win, and instructs their AI teammate to prioritise winning above all else". The dilemma then arises because the aim to win cuts against the human having agency and control. Your takeaway is "Even perfectly aligned systems, genuinely pursuing human goals, might naturally evolve to restrict human agency."
So in this analogy, it seems that "winning" stands for the human's true goals. But (as you acknowledge) it seems like the human doesn't just want to win, but actually...
There is a typo in the transcript. The name of the creator of singular learning theory is "Sumio Watanabe" rather than "Sumio Aranabe".
I think these are helpful clarifying questions and comments from Leon. I saw Liam's response. I can add to some of Liam's answers regarding the definitions of singular models and singularities.
1. Conditions of regularity: Identifiability vs. regular Fisher information matrix
...

Liam: A regular statistical model class is one which is identifiable (so p(x|w₁) = p(x|w₂) for all x implies w₁ = w₂), and has a positive definite Fisher information matrix I(w) for all w in the parameter space W.
Leon: The rest of the article seems to mainly focus on the case of the Fisher information matrix. In particular, you didn't show an example of a non-regular model where the Fisher information matrix is positive definite everywhere.
Is it correct to assume models which are merely non-regular because the map from ...
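To make the two regularity conditions concrete, here is a minimal sketch (a toy model of my own, not from the post) where both fail at once: in a unit-variance Gaussian with mean a·b, the parameter pair (a, b) is non-identifiable, and the Fisher information matrix is degenerate everywhere:

```python
import numpy as np

# A toy singular model (my own example, not from the post):
# p(x | a, b) = N(x; mean = a*b, variance = 1) is non-identifiable,
# since any (a, b) with the same product a*b gives the same distribution.

def fisher_information(a, b):
    # For a unit-variance Gaussian with mean m(w), the Fisher information
    # is I(w) = grad m(w) grad m(w)^T; here m(a, b) = a*b, so grad = (b, a).
    grad = np.array([b, a])
    return np.outer(grad, grad)

I = fisher_information(2.0, 3.0)
# A rank-1 outer product: det(I) vanishes, so I is not positive definite
# and the model fails the second regularity condition as well.
print(np.linalg.matrix_rank(I))  # 1
```

Note how the two failures are linked here: the direction tangent to the level set {a·b = const}, along which the distribution does not change, lies in the kernel of I(w), so this non-identifiability shows up directly as a degenerate Fisher matrix.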
Seems worth it 👾