Yes, I completely agree with the weaker formulation "irreducible using only THESE means", e.g. polynomials, MPTs, first-order logic, etc.
I think the answer to the question "are there irreducibly complex statistical models?" is yes.
I agree that there are some sources of irreducible complexity, like 'truly random' events.
To me, the field of cognition does not pattern-match to 'irreducibly complex', but more to 'We don't have good models. Yet, growth mindset'. So, unless you have some patterns that you can prove are irreducible, I will stick with my priors, I guess. The example you gave me,
For a very simple example, if you're trying to fit a continuous curve based on a finite number of data points, you can make the problem arbitrarily hard with functions that are nowhere differentiable.
falls squarely into the 'our models are bad' category, e.g. the Weierstrass function can be stated pretty compactly with analytic formulas.
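To illustrate that compactness: a partial sum of the Weierstrass function is a few lines of code. (The parameter choices below are my own, picked to satisfy Weierstrass's classical conditions 0 < a < 1, b an odd integer, ab > 1 + 3π/2.)

```python
import math

def weierstrass(x, a=0.5, b=13, n_terms=50):
    """Partial sum of the Weierstrass function W(x) = sum_n a^n * cos(b^n * pi * x).

    With a=0.5, b=13 the classical conditions hold, so the limit function is
    continuous everywhere but differentiable nowhere -- yet the defining
    formula is compact, which is the point being made above.
    """
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(n_terms))
```

So the 'nowhere differentiable' pathology that makes curve-fitting arbitrarily hard does not imply the generating process itself has a long description.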
But also, of course I can't prove the non-existence of such irreducible, important processes in the brain.
and my answer there is more like "I don't know, but I could believe so."
These are heuristic descriptions; these essays don’t make explicit how to test whether a model is interpretable or not. I think it probably has something to do with model size; is the model reducible to one with fewer parameters, or not?
If you use e.g. the Akaike Information Criterion for model evaluation, you get around the size problem in theory. Model size is then something you score explicitly.
Personally, I still have intuitive problems with this approach, though: many phenomenological theories in physics are easier to interpret than Quantum Mechanics, and seem intuitively less complex, but are more complex in a formal sense (and thus get a worse AIC score, even if they predict the same thing).
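To make the explicit size penalty concrete, here is the AIC formula as code. The two example models and their log-likelihoods are made-up numbers, chosen to show that when two models predict equally well, the larger one scores worse.

```python
def aic(k, log_likelihood):
    """Akaike Information Criterion: AIC = 2k - 2*ln(L_hat).

    k = number of free parameters, log_likelihood = maximized log-likelihood.
    Lower is better, so every extra parameter costs a flat penalty of 2.
    """
    return 2 * k - 2 * log_likelihood

# Hypothetical models with identical fit but different size:
small_model = aic(k=3, log_likelihood=-120.0)
large_model = aic(k=10, log_likelihood=-120.0)
```

Here the smaller model wins on AIC purely because of its parameter count, which is exactly the formal notion of complexity that can clash with intuitive interpretability.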
How much of human thought and behavior is “irreducible” in this way, resembling the huge black-box models of contemporary machine learning? Plausibly a lot
'Irreducible' is a pretty strong stance. I agree that many things will be hard for humans to describe in a way that other humans find satisfying. But do you think that an epistemically rational entity with unlimited computational power (something like a Solomonoff Inductor) would be unable to do that?
I think this is strongly connected to the Typical Mind Fallacy.
I did a quick inventory of distortions that I recognize often [I live in a leftist techy-academic bubble full of socially and sexually permissive people].
Other people's properties that I overestimate because of my bubble:
Knowledge (expecting short inferential distances)
Available amount of leisure
Susceptibility to arguments and evidence
Stuff that is (relatively) easy for people in my bubble, but seems to be hard outside:
Reading a novel
Discussing sexuality & relationship styles
Casual non-sexual touching (hugging, cuddling etc.)
And these are just properties where I deviate pretty strongly from mainstream society.
The rule follows: for things that are private and rarely discussed, there may be a good deal of unacknowledged diversity.
I agree. The Hamming-style question I now ask myself is:
'Which unacknowledged diversity is creating the most problems for me in social interactions?'
You can approximate OKC as a two-person game: Weird = Honest, Polished = Dishonest. U(W,W) = (+3,+3), U(W,P) = (0,+4), U(P,P) = (0,0); then you have the usual Prisoner's Dilemma payoff structure (motivation: being honest will generate more long-term utility).
This is a bad approximation, as OkCupid is a multi-player game, so it's more complicated than the classical 2-player Prisoner's Dilemma. That's where the tragedy of the commons comes in. In an environment where nearly everyone plays defect-bot, a lot of utility is destroyed. But tit-for-tat players have an advantage if they meet another TFT player.
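A minimal sketch of that dynamic, using the payoff numbers from the comment above (W = Weird/Honest as cooperate, P = Polished as defect; the strategy implementations are my own illustration, not anything from the article):

```python
# Payoffs from the comment: U(W,W)=(3,3), U(W,P)=(0,4), U(P,P)=(0,0).
PAYOFF = {('W', 'W'): (3, 3), ('W', 'P'): (0, 4),
          ('P', 'W'): (4, 0), ('P', 'P'): (0, 0)}

def play(strat_a, strat_b, rounds=10):
    """Iterated play; each strategy sees the opponent's previous move (None in round 1)."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = a, b
    return score_a, score_b

def tit_for_tat(opp_last):
    # Open honestly, then mirror the opponent's previous move.
    return 'W' if opp_last is None else opp_last

def defect_bot(opp_last):
    return 'P'
```

Running `play(tit_for_tat, tit_for_tat)` versus `play(tit_for_tat, defect_bot)` shows the claimed effect: two TFT players rack up the cooperative payoff every round, while a population of defect-bots grinds everyone down to (nearly) zero.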
That's how I interpreted the use of concepts in the article, did you understand it differently?