I would say that part of compassion, and of empathy, is recognising that those narratives are indeed valid, or at least that there is some valid reason people are as they are. Also, not everyone shares the moral value of optimising themselves or making themselves good at something. Disgust implies judgement, and judgement implies a lack of compassion.
Since you seem to be motivated to make yourself better, which I agree is a good motivation, why don't you challenge yourself to increase your compassion and humility?
Do you have compassion for yourself? What are you bad at that you are unable to make yourself good at? Do you feel disgust for yourself in those situations? Compassion begins with humility, which is something you might want to work on.
I’m not sure I agree that it is easy for humans to robustly understand proofs. I think it takes a lot of training to get humans to that point.
There's the argument that increasing access to information creates competition for attention, which drives language to be more concise and readable, e.g. https://www.nature.com/articles/s44271-024-00117-1
In a post-scarcity world you probably want a lot of personal freedom.
Fun read. So, so many possible covariates. The causal web is very complicated here. Birth order affects many other things that can in turn affect the chance you become a cardinal, and there are many factors that affect both a family's birth rate and the chance that its children become cardinals.
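To make that second kind of confounding concrete, here is a minimal toy simulation (my own sketch; the "religiosity" trait and all the numbers are made up, not from the post). A latent family trait drives both family size and each child's chance of becoming a cardinal, with no causal birth-order effect at all, yet first-borns still come out under-represented among cardinals:

```python
import random

random.seed(0)

N_FAMILIES = 200_000
children = []  # (birth_order, is_cardinal)

for _ in range(N_FAMILIES):
    religiosity = random.random()  # latent family trait
    # Assumption: more religious families tend to be larger (noisy link).
    size = 1 + int(religiosity * 8 * random.random())
    for birth_order in range(1, size + 1):
        # Becoming a cardinal depends ONLY on the family trait,
        # never on birth order.
        is_cardinal = random.random() < 0.01 * religiosity
        children.append((birth_order, is_cardinal))

first_born_rate_all = sum(bo == 1 for bo, _ in children) / len(children)
cardinal_orders = [bo for bo, c in children if c]
first_born_rate_cardinals = sum(bo == 1 for bo in cardinal_orders) / len(cardinal_orders)

print(f"first-borns among all children: {first_born_rate_all:.3f}")
print(f"first-borns among cardinals:   {first_born_rate_cardinals:.3f}")
```

Running this, the first-born share among cardinals lands noticeably below the population share, purely because the trait that produces cardinals also produces large families, and large families are mostly later-borns.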
I have a meta-view on this that you might think falls into the bucket of "feels intuitive based on the progress so far". To counter that: this isn't pure intuition. As a side note, I don't believe intuitions should be dismissed; they should be at least a part of our belief-updating process.
I can't tell you the fine details of what will happen, and I'm suspicious of anyone who claims they can, because a) this is a very complex system, and b) no-one really knows how LLMs work, how human cognition works, or what is required for an intelligence takeoff.
However, I can say that for the last decade or so, most predictions of AI progress have consistently been on longer timescales than what has actually happened. Things are happening quicker than the experts believed they would. Things are accelerating.
I also believe that there are many paths to AGI, and that given the amount of resources currently being put into the search, one of those paths will be found sooner rather than later.
The intelligence takeoff is already happening.
I agree with your general point about efficiency vs rationality, but I don’t see the direct connection to the article. Can you explain? It seems to me that a representation along correlated values is more efficient, but I don’t see how it is any less rational.
I would describe this as a human-AI system. You are doing at least some of the cognitive work through the scaffolding you put in place via prompt engineering etc., which doesn’t generalise to novel types of problems.
I feel like the entire framing here is the problem. You cannot see "The Thing" because you are looking at it from a perspective where The Thing isn't apparent.
What is The Thing? It is having a partnership that you are both committed to. At its best this partnership becomes an aspect of your self, and of your partner's. The frame to see this in is that the partnership is an entity in its own right, and is a part of the "I" that each partner identifies with. In this frame the question "what am I getting out of this relationship" is no longer focussed entirely on the individual "I" but also on the partnership "I". When you frame the question only in terms of what the individual "I" gets out of it, you are missing the point entirely and are unable to see the real value proposition.
If we dip our toe back into the individualistic frame: you could describe the value of relationships as lying in the partial dissolution of the self into the partnership, which not only feels amazing but also gives a deeper level of meaning to your life.