With my question I was referring to how humanity has less common sense than its people. I think the core issue lies in coordination and cooperation. Your suggestions only address this indirectly. I fear changes like those will be too slow, especially if we remain uncoordinated while implementing them.
So, how do we make humanity less stupid?
What if two or more people could come together and make something better? Something smarter, wiser, happier, more successful... Something coordinated, coherent, and agentic, cooperating to achieve both their goals.
If a skill like this could be discovered and learned, people would spontaneously start coming together into these macro-agents. They could live better lives this way.
And the giants would come together in turn, creating a unified humanity. Grown up and no longer fighting itself every step of the way.
It's my impression that a lot of the "promising new architectures" are indeed promising. IMO a lot of them could compete with transformers if you invest in them. It just isn't worth the risk while the transformer gold-mine is still open. Why do you disagree?
As far as I know, the cyclic weakness in KataGo (the top Go AI) was addressed fairly quickly. We don't know of a weird trick for beating the current version (although adversarial training might turn up another weakness). The AIs are superhuman at Go. The fact that humans could beat them by going out of distribution doesn't seem relevant to me.
Figgie may not be a good game, but it's certainly better than poker. What game would be better than Figgie?
The Alexander technique claims your attention consists of two layers: attention control is about choosing your focus within the space of awareness, while the Alexander technique is about controlling the awareness space itself.
https://expandingawareness.org/blog/what-is-the-alexander-technique
Becoming aware of, for example, tension in your muscles can help improve posture.
https://www.johnnichollsat.com/2011/02/27/explaining-the-at-1/
Reading this improved my self-control overnight. Strong upvote.
I've been mainly using it for improving posture and eating healthier.
Focusing your attention on stopping does wonders for breaking bad habits; I can tell stopping gets easier after just one or two iterations.
The "allegory of the dragon" chapter in Replacing Guilt, about the difference between the value and the price of a life, complements this chapter well.
It's common knowledge on LessWrong that "0 and 1 are not probabilities". This means that if you update using Bayes' law, your prior is 0 or 1 iff your posterior is 0 or 1. In other words, the only way to be absolutely certain is if you were already certain before considering any evidence. You obviously shouldn't be certain (i.e., unconvincible) without evidence, so you shouldn't ever be certain.
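A quick numerical sketch (plain Python, with made-up likelihoods) shows why 0 and 1 are fixed points of Bayes' rule: the prior multiplies every term, so a zero or certain prior can never move, no matter how strong the evidence.

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H) and the likelihoods
    P(E|H) and P(E|not H), via Bayes' rule."""
    numerator = likelihood_h * prior
    evidence = numerator + likelihood_not_h * (1 - prior)
    return numerator / evidence

# E is 99x more likely under H than under not-H (hypothetical numbers):
print(bayes_update(0.5, 0.99, 0.01))  # 0.99: a fair prior moves a lot
print(bayes_update(0.0, 0.99, 0.01))  # 0.0: a zero prior never moves
print(bayes_update(1.0, 0.99, 0.01))  # 1.0: a certain prior never moves
```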
But only this week, I thought of an important corollary: precise measurement is impossible.
If X is a continuous random variable, then P(X=x)=0 for any x. Because 0 and 1 are not probabilities, this means you'll never know the precise value of X. You can never measure something continuous precisely.
I think it's suspicious to get this conclusion from an "armchair" argument. I've deduced something important about the world without taking into account reality, without evidence. Which I said you shouldn't ever do! So where did I go wrong?