MrMind

When I was younger...

MrMind's Comments

The Bus Ticket Theory of Genius

Isn't "just the right kind of obsession" a natural ability? It's not that you can orient your 'obsessions' at will...

Examples of Categories

Two of my favorite categories show that they really are everywhere: the free category on any graph and the category of presheaves on the graph Γ.

The first: take any directed graph, unfocus your eyes, and instead of arrows consider paths. That is a category!
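A minimal Haskell sketch of the construction (the names and representation are mine, purely illustrative): objects are the vertices, a morphism is a list of composable edges, composition is concatenation, and the empty path is the identity.

```haskell
-- Free category on a directed graph: a morphism from a to b
-- is a path, i.e. the edges traversed in order from a to b.
data Path e v = Path { from :: v, edges :: [e], to :: v }

-- Identity on a vertex: the empty path.
idPath :: v -> Path e v
idPath x = Path x [] x

-- Composition is path concatenation, defined only when the
-- endpoint of the first path matches the start of the second.
compose :: Eq v => Path e v -> Path e v -> Maybe (Path e v)
compose (Path a es b) (Path b' fs c)
  | b == b'   = Just (Path a (es ++ fs) c)
  | otherwise = Nothing
```

Associativity and identity hold for free, courtesy of list concatenation, which is why the construction is called "free".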

The second: take any finite graph. Take sets and functions that realize this graph. This is a category; moreover, you can make it dagger compact, so you can do quantum mechanics with it. Take as your finite graph Γ, which is just two vertices with two parallel arrows between them. Sets and functions that realize this graph are... any graph! So, CT allows you to do quantum mechanics with graphs.
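To make that last point concrete, a hedged sketch (my own encoding): a realization of Γ assigns a set to each of the two vertices and a function to each of the two arrows, and that data is exactly a directed multigraph.

```haskell
-- A realization of Γ (two vertices E, V and two parallel arrows
-- s, t : E -> V) picks a set of edges, a set of vertices, and
-- source/target functions: exactly a directed multigraph.
data RealizedGamma e v = RealizedGamma
  { sourceOf :: e -> v
  , targetOf :: e -> v
  }
```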

Amazing!

What is category theory?

Lambda calculus is, though, the internal language of a very common kind of category (the cartesian closed ones), so, in a sense, category theory allows lambda calculus to do computations not only with functions, but also with sets, topological spaces, manifolds, etc.
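One concrete face of that correspondence, sketched in Haskell (treating Haskell's types loosely as a cartesian closed category; the function names are mine): the exponential law hom(A × B, C) ≅ hom(A, C^B) is exactly currying.

```haskell
-- In a cartesian closed category, hom(A * B, C) is naturally
-- isomorphic to hom(A, C^B); in Haskell, that's curry/uncurry.
curryHom :: ((a, b) -> c) -> (a -> (b -> c))
curryHom f = \a b -> f (a, b)

uncurryHom :: (a -> (b -> c)) -> ((a, b) -> c)
uncurryHom g = \(a, b) -> g a b
```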

Introduction to Introduction to Category Theory

While I share your enthusiasm for categories, I find the claim that CT is the correct framework from which to understand rationality suspicious. Around here, rationality is mainly equated with Bayesian probability, and the categorial grasp of probability, or even of measure theory, is less than impressive. The most interesting fact I've been able to dig up is that the Giry monad is the codensity monad of the inclusion of convex spaces into measurable spaces; hardly an illuminating fact (basically a convoluted way of saying that probabilities are the most general way of forming convex combinations of measures).
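For readers who want the definition unpacked (this is the standard general formula, not anything specific to this example): the codensity monad of a functor $G : \mathcal{B} \to \mathcal{A}$ is the right Kan extension of $G$ along itself, which pointwise is the end

$$T(a) \;=\; (\operatorname{Ran}_G G)(a) \;=\; \int_{b \in \mathcal{B}} G(b)^{\mathcal{A}(a,\,G b)},$$

so the claim above says the Giry monad arises this way when $G$ is the inclusion of convex spaces into measurable spaces.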

I've searched and searched for categorial answers or hints about the problem of extending probabilities to other kinds of logic (or even simply extending it to classical predicate logic), but so far I've had no luck.

Odds are not easier

The difference between the two is literally a single summation, so... yeah?
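To spell out that single summation (a toy illustration, with my own naming): converting a list of odds into probabilities only requires dividing by one sum.

```haskell
-- Odds 3 : 2 : 1 become probabilities 1/2, 1/3, 1/6; the only
-- extra work relative to raw odds is the summation in `total`.
toProbabilities :: [Double] -> [Double]
toProbabilities odds = map (/ total) odds
  where total = sum odds

-- ghci> toProbabilities [3, 2, 1]
-- [0.5,0.3333333333333333,0.16666666666666666]
```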

Occam's Razor: In need of sharpening?

I'd like to point out a source of confusion around Occam's Razor that I see you're falling for; dispelling it will make things clearer: "entities should not be multiplied beyond necessity!". This means that Occam's Razor helps decide between competing theories if and only if they have the same explanatory and predictive power. But in the history of science, it was almost never the case that competing theories had the same power. Maybe it happened a couple of times (epicycles, the Copenhagen interpretation), but in all other instances a theory was selected not because it was simpler, but because it was much more powerful.

Contrary to popular misconception, Occam's razor gets to be used very, very rarely.

We do have, anyway, a formalization of that principle in algorithmic information theory: Solomonoff induction. An agent that, to predict the outcome of a sequence, places the highest probabilities on the shortest compatible programs will eventually outperform every other class of predictors. The catch here is the word 'eventually': every measure of complexity carries a constant offset due to the choice of the reference universal Turing machine. Different reference machines will assign different complexities to the same short programs, but all the measures will converge after a finite amount of data.
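In symbols (the standard textbook definitions, not anything specific to this thread): the Solomonoff prior of a string $x$ relative to a universal prefix machine $U$ is

$$M_U(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|},$$

the sum ranging over all programs whose output starts with $x$; and the invariance theorem bounds the machine-dependence: for universal machines $U, V$ there is a constant $c_{U,V}$, independent of $x$, such that $K_U(x) \le K_V(x) + c_{U,V}$.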

This is also why I think that the problem of explaining thunder with "Thor vs. clouds" is such a poor example of Occam's razor: Solomonoff induction is a formalization of Occam's razor for theories, not explanations. Due to the aforementioned constant, you cannot have an absolutely simpler model of a finite sequence of events. There's no such thing; it will always depend on the complexity of the starting Turing machine. However, you can have eventually simpler models of infinite sequences of events (infinite-sequence predictors are equivalent to programs). In that case, the natural-causes program will prevail, because it will allow better control of the outcomes.

Philosophy as low-energy approximation

I arrived at the same conclusion when I tried to make sense of the Metaethics Sequence. My summary of Eliezer's writings is: "morality is a bunch of mental computations shared between most human beings". Morality thus grew out of our evolutionary history, and it should not be surprising that in extreme situations it might be incoherent or maladaptive.

Only if you believe that morality should be systematic, universal, and coherent can you say that extreme examples are uncovering something interesting about people's morality.

Otherwise, extreme situations are as interesting as saying that people cannot mentally factor long numbers.

What is our evidence that Bayesian Rationality makes people's lives significantly better?

First of all, the community around LW 2.0 can only loosely be called a movement: I don't think there's anyone who explicitly endorses *every* technique or theory that has appeared here. LW is not CFAR, nor the Alignment Forum, etc. So I would caution against enticing someone into LW by saying that the community supports this or that technique.

The main advantage of rationality, at its present stage, is defensive: if you're aspiring to be rational, you won't waste time attending religious gatherings that you despise; you won't waste money buying ineffective treatments (sugar pills, crystals, etc.); you won't waste resources following people who mistake fiction for fact. At the moment, rationality is just a very good filter for every product, piece of knowledge, and praxis that society presents to you (hint: 99% of it is crap).

On the other hand, what you can or should do with all the resources you're not wasting, is something rationality cannot answer in full today. Metaethics and akrasia are, after all, the greatest unsolved problems of our community.

There were notable attempts (e.g., Torture vs. Dust Specks, or the Basilisk), but nothing has emerged with the clarity and effectiveness of Bayesian reasoning. Effective Altruism and MIRI are perhaps the most famous examples of trying to solve the most pressing problems. A definitive framework, though, still eludes us.

1960: The Year The Singularity Was Cancelled

In Foerster's paper, he links the increase in productivity linearly with the increase in population. But Scott has also proposed that the rate of innovation is slowing down, because productivity increases only logarithmically with population. So maybe Foerster's model is still valid, and 1960 is just the year when we exhausted the almost-linear part of progress (the "low-hanging fruit").

Perhaps nowadays we combine the exponential growth of population with the logarithmic increase in productivity to get the roughly linear economic growth we see.
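A quick check of that arithmetic (my own toy formulation, not anything from the paper): if population grows exponentially, $N(t) = N_0 e^{rt}$, and output is logarithmic in population, $Y \propto \log N$, then

$$Y(t) \;\propto\; \log N_0 + rt,$$

which is indeed linear in time. Compare Foerster's original regime, where $\dot{N} \propto N^2$ gives the hyperbolic $N(t) \propto 1/(t_\ast - t)$, blowing up at a finite $t_\ast$.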

Why does category theory exist?
Answer by MrMind · May 07, 2019

Algebraic topology is the discipline that studies geometries by associating them with algebraic objects (usually, groups or vector spaces) and observing how changing the underlying space affects the related algebras. In 1941, two mathematicians working in that field sought to generalize a theorem that they discovered, and needed to show that their solution was still valid for a larger class of spaces, obtained by "natural" transformations. Natural, at that point, was a term lacking a precise definition, and only meant something like "avoiding arbitrary choices", in the same way a vector space is naturally isomorphic to its double dual, while it's isomorphic to its dual only through the choice of a basis.

The need to make precise the notion of naturality for algebraic topology led them to the definition of natural transformation, which in turn required the notion of functor which in turn required the notion of category.

This answers questions 1 and 2: category theory was born to give a precise definition of naturality, which was needed to generalize the "universal coefficient theorem" to a larger class of spaces.

This story is told in great detail in the opening pages of Riehl's wonderful "Category Theory in Context".

To answer question 3, though: even if category theory was rapidly expanding during the '50s and '60s, it was only with the work of Lawvere (whom I consider a genius on par with Gödel) in the '70s that it became a foundational discipline: guided by his intuitions, category theory became the unifying language for every branch of mathematics, from geometry to computation to logic to algebra. Basically, it showed how the variety of mathematical disciplines are just different ways of saying the same thing.
