That used to be the French model, which imo punches way above its (abysmally) low funding.
Steelmanning is about finding the truth, ITT is about convincing someone in a debate. Different aims.
There's a big gap between "you have to complete the task in exactly this way" and "a mistake is a mistake, only the end result counts".
I routinely give full marks if the student made a small computation mistake but the reasoning is correct. My colleagues tend to be less lenient but follow the same principle. I always give full marks for correct reasoning even if it is not the method seen in class (though I quite insistently warn my students that they should not complain if they make mistakes using a different method).
I do exactly what you describe with my students, but sadly with extremely limited results.
"conveniently ignores the fact that the kids who didn't have a problem with the lecture were the ones who already knew all of that from some other source."
This is definitely not true in general, and is probably a rare case. N=1 of course, but I never had problems with maths lectures (or any other lectures), and I was never in the situation of knowing all of the maths before the lecture (I usually did know history and physics lessons in advance, though). And it's the same with my current students: even the best ones are clearly unfamiliar with the material I cover.
I think the lesswrong crowd has in general a very unusual experience with both school and maths, even compared to the average gifted maths student. Beware of the typical mind fallacy.
Upvoted because it's a good intro discussion to a problem that I am personally involved with (as a maths teacher). But my personal experience is that what makes a good maths curriculum is much more complicated than that. In particular, I'm pretty certain now that different students have such wildly different needs that any attempt at (universal) standardisation is doomed to fail (of course some curricula are still better than others...).
Speaking as a Catholic, this won't have much impact but mostly because the Catholic Church as a whole is already extremely wary about AI. It's good that it is explicitly written at the highest level though (note that what you feel is vague is just Vatican-speak).
However, there is still little understanding of how powerful new AI models will be. In particular, Catholics in general are skeptical about the possibility of AGI (mainly for philosophical/theological reasons). Their concerns will align more with AI ethics than with AI alignment, but they will be natural allies for any "pause" or "slow-down" movement.
The pope has advisors. Some may even be young!
The Catholic Church has a long intellectual tradition even if it's very different from the one on lesswrong, and it has always been wary of potential misuses of new technologies. So nothing really surprising here for those who are used to Vatican-speak.
I've noticed that I've recently begun to strong-upvote more, and I think it's a bad habit. How often would you say you upvote vs. strong-upvote?
In retrospect, Alpha0 was really the wake-up call for me, not because it was so strong at chess but because it looked so human playing chess.