When I was younger...
Beware of selection bias: even if veterans show higher productivity, it could simply be that the military selects for people who already have higher discipline.
The diagram at the beginning is very interesting. I'm curious about the arrow from relationships to results... care to explain? Does it refer to joint work or collaborations?
On the other hand, it's not surprising to me that AI alignment is a field that requires much more research and math than software-writing skill... the field is completely new and not yet well formalized, so your skill set is probably misaligned with the needs of the market.
> The first thing that you must accept in order to seek sense properly is the claim that minds actually make sense
This is somewhat weird to me. Since Kahneman & Tversky, we've known that System 2 is mostly good at rationalizing the actions already taken by System 1, so as to create a self-coherent narrative. So not only do minds in general fail to make sense; my mind in particular lacks any sense. I'm here just because my System 1 is well adjusted to this modern environment; I don't *need* to make any sense.
From this perspective, "making sense" appears to be a tiring and pointless exercise...
Isn't "just the right kind of obsession" a natural ability? It's not as though you can orient your obsessions at will...
Two of my favorite categories show that they really are everywhere: the free category on any graph, and the presheaves on the graph Γ.
The first: take any directed graph, unfocus your eyes, and instead of arrows consider paths (including the empty path at each vertex). That is a category!
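A minimal sketch of the free-category construction in Python (the names and encoding are my own, for illustration): morphisms are paths, i.e. lists of edges, composition is path concatenation, and the identity at a vertex is the empty path.

```python
# Free category on a directed graph: objects are vertices,
# morphisms are paths (lists of edges), composition is path
# concatenation, and the identity at a vertex is the empty path.

def compose(path2, path1):
    """Compose path1 : x -> y with path2 : y -> z (read as path2 ∘ path1)."""
    if path1 and path2:
        assert path1[-1][1] == path2[0][0], \
            "paths must be composable: target of path1 = source of path2"
    return path1 + path2

f = [("a", "b")]   # a single-edge path a -> b
g = [("b", "c")]   # a single-edge path b -> c
id_a = []          # identity at a vertex: the empty path

assert compose(g, f) == [("a", "b"), ("b", "c")]  # g ∘ f : a -> c
assert compose(f, id_a) == f                      # identity law holds
```

Associativity comes for free, since list concatenation is associative.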
The second: take any finite graph. Take the sets and functions that realize this graph. This is a category; moreover, you can make it dagger-compact, so you can do quantum mechanics with it. Take as your finite graph Γ, which is just two vertices with two parallel arrows between them. The sets and functions that realize this graph are... any graph! So CT allows you to do quantum mechanics with graphs.
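The claim that presheaves on Γ are exactly graphs can be made concrete: such a presheaf assigns a set to each of the two vertices of Γ (edges and vertices) and a function to each of the two arrows (source and target), and that data is precisely a directed multigraph. A small sketch, with example data of my own choosing:

```python
# A presheaf on Γ (two objects E, V; two parallel arrows s, t : E -> V)
# is: a set of edges, a set of vertices, and two functions sending each
# edge to its source and target. That data is exactly a directed multigraph.
vertices = {"a", "b", "c"}
edges = {"e1", "e2", "e3"}
source = {"e1": "a", "e2": "a", "e3": "b"}
target = {"e1": "b", "e2": "b", "e3": "c"}

# The functoriality conditions just say that source and target are
# total functions from the edge set to the vertex set:
assert all(source[e] in vertices and target[e] in vertices for e in edges)
```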
Lambda calculus, though, is the internal language of a very common kind of category (the simply typed lambda calculus corresponds to cartesian closed categories), so in a sense category theory allows lambda calculus to do computations not only with functions, but also with sets, topological spaces, manifolds, etc.
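To make the "internal language" idea slightly more tangible: a closed lambda term such as function composition, λf.λg.λx. f (g x), can be interpreted in any cartesian closed category; interpreting it in Set gives ordinary function composition, which this Python sketch mimics (currying plays the role of the closed structure):

```python
# The lambda term  λf. λg. λx. f (g x)  interpreted in Set:
# currying corresponds to the cartesian closed structure.
compose = lambda f: lambda g: lambda x: f(g(x))

# The same term applied to different concrete functions on integers:
inc = lambda n: n + 1
double = lambda n: 2 * n

assert compose(inc)(double)(3) == 7  # inc(double(3))
```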
While I share your enthusiasm toward categories, I find the claim that CT is the correct framework from which to understand rationality suspicious. Around here, rationality is mainly equated with Bayesian probability, and the categorial grasp of probability or even measure theory is less than impressive. The most interesting fact I've been able to dig up is that the Giry monad is the codensity monad of the inclusion of convex spaces into measurable spaces — hardly an illuminating fact (basically a convoluted way of saying that probabilities are the most general way of forming convex combinations out of measures).
I've searched and searched for categorial answers or hints about the problem of extending probability to other kinds of logic (or even simply to classical predicate logic), but so far I've had no luck.
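For what the monadic view of probability does buy you, the finitely-supported case is the easiest to see: the finite distribution monad (the discrete fragment of the Giry monad) has point masses as its unit and averaging as its bind, and the monad laws encode exactly "forming convex combinations". A minimal sketch, with names of my own choosing:

```python
# Finitely-supported probability distributions as dicts {outcome: prob}.
def unit(x):
    """Point mass (Dirac delta) at x — the monad's return."""
    return {x: 1.0}

def bind(dist, k):
    """Average the distributions k(x) with weights dist[x] — the monad's bind."""
    out = {}
    for x, p in dist.items():
        for y, q in k(x).items():
            out[y] = out.get(y, 0.0) + p * q
    return out

# Example: a fair coin, then a second flip biased toward the first outcome.
coin = {"H": 0.5, "T": 0.5}
sticky = lambda s: {"H": 0.9, "T": 0.1} if s == "H" else {"H": 0.1, "T": 0.9}
assert abs(bind(coin, sticky)["H"] - 0.5) < 1e-9  # marginal stays fair
```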
The difference between the two is literally a single summation, so... yeah?
I'd like to point out a source of confusion around Occam's Razor that I see you're falling for; dispelling it will make things clearer. The razor says: "entities must not be multiplied beyond necessity". This means that Occam's Razor helps decide between competing theories if and only if they have the same explanatory and predictive power. But in the history of science, it was almost never the case that competing theories had the same power. Maybe it happened a couple of times (epicycles, the Copenhagen interpretation), but in all other instances a theory was selected not because it was simpler, but because it was much more powerful.
Contrary to popular misconception, Occam's razor gets to be used very, very rarely.
We do have, anyway, a formalization of that principle in algorithmic information theory: Solomonoff induction. An agent that, to predict the continuation of a sequence, places the highest probabilities on the shortest compatible programs will eventually outperform every other class of predictors. The catch is the word "eventually": every complexity measure carries an additive constant that depends on the choice of reference universal Turing machine. Different references will assign different complexities to the same initial programs, but all the measures will converge after a finite amount.
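The 2^-length prior at the heart of this can be sketched in a toy form: weight every candidate (here just each binary string up to a length bound, read as a literal output, standing in for programs) by 2^-length, keep those consistent with the observed prefix, and predict the next bit by total weight. This only illustrates the prior, not real Solomonoff induction, which runs actual programs on a universal Turing machine — and indeed with literal strings as "programs" nothing is compressed, so every continuation comes out equally likely:

```python
from itertools import product

def predict_next(observed, max_len=12):
    """Weight each binary string up to max_len by 2**-len (a stand-in for
    programs), keep those extending the observed prefix, and return the
    posterior probability that the next bit is '1'."""
    w1 = w_total = 0.0
    for n in range(len(observed) + 1, max_len + 1):
        for bits in product("01", repeat=n):
            s = "".join(bits)
            if s.startswith(observed):
                w = 2.0 ** -n
                w_total += w
                if s[len(observed)] == "1":
                    w1 += w
    return w1 / w_total

# With this structureless program class, no pattern is ever learned:
assert abs(predict_next("0101") - 0.5) < 1e-9
```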
This is also why I think that explaining thunder with "Thor vs. clouds" is such a poor example of Occam's razor: Solomonoff induction is a formalization of Occam's razor for theories, not explanations. Due to the aforementioned constant, you cannot have an absolutely simpler model of a finite sequence of events. There's no such thing; it will always depend on the complexity of the reference Turing machine. However, you can have eventually simpler models of an infinite sequence of events (infinite-sequence predictors are equivalent to programs). In that case, the natural-phenomena program will prevail, because it will allow better control of the outcomes.
I arrived at the same conclusion when I tried to make sense of the Metaethics Sequence. My summary of Eliezer's writings is: "morality is a bunch of mental computations shared by most human beings". Morality thus grew out of our evolutionary history, and it should not be surprising that in extreme situations it might be incoherent or maladaptive.
Only if you believe that morality should be systematic, universal, and coherent can you say that extreme examples uncover something interesting about people's morality.
Otherwise, extreme situations are about as interesting as noting that people cannot mentally factor long numbers.