"there is some threshold of general capability such that if someone is above this threshold, they can eventually solve any problem that an arbitrarily intelligent system could solve"
This is a very interesting assumption. Is there research or discussions on this?
"discovering that you're wrong about something should, in expectation, reduce your confidence in X"
This logic seems flawed. Suppose X is whether humans go extinct. You have an estimate of the distribution of X (for a Bernoulli process it would be some probability p). Take the joint distribution of X and the factors on which X depends (p is now a function of those factors). Your best estimate of p is the mean of the joint distribution, and the variance measures how uncertain you are about the factors. Discovering that you're wrong about something means becoming more uncertain about some of the factors. This would increase the variance of the joint distribution. I don't see any reason to expect the mean to move in any particular direction.
Or maybe I'm making a mistake. In any case, I'm not convinced.
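To make my intuition concrete, here's a minimal Monte Carlo sketch. Everything in it is a made-up assumption for illustration: a hypothetical risk p that depends linearly on a single underlying factor, with uncertainty about that factor modeled as Gaussian noise. Widening the noise (i.e., "discovering you were wrong" about the factor) inflates the variance but leaves the mean of p essentially unchanged:

```python
import random

def p_extinction(factor):
    # Hypothetical model: risk depends linearly on one underlying factor,
    # clipped to a valid probability. Purely illustrative.
    return min(1.0, max(0.0, 0.1 + 0.5 * factor))

def mean_p(factor_sd, n=100_000):
    # Estimate E[p] when the factor is uncertain: factor ~ N(0.5, factor_sd).
    random.seed(0)
    total = 0.0
    for _ in range(n):
        f = random.gauss(0.5, factor_sd)
        total += p_extinction(f)
    return total / n

narrow = mean_p(0.05)  # confident about the factor
wide = mean_p(0.20)    # more uncertain after learning we were wrong

# Under these assumptions the two means nearly coincide (~0.35):
# more uncertainty about the factor does not, by itself, push the
# expected risk up or down.
```

Of course, if p were strongly nonlinear in the factor, widening the uncertainty could shift the mean (a Jensen's-inequality effect), so the direction depends on the shape of the model, which is exactly why I don't see a general argument for the mean moving one way.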
When outside, I'm usually tracking my location and direction on a mental map. This doesn't seem like a big deal to me, but in my experience few people do it. On some occasions I can tell which way we need to go while others are confused.
Given that hardware advances are very likely to continue, delaying general AI would favor what Nick Bostrom calls a fast takeoff. This makes me uncertain as to whether delaying general AI is a good strategy.
I expected to read more about actively contributing to AI safety rather than about reactively adapting to whatever is happening.
"There doesn't seem to be anything a sufficiently motivated and resourced intelligent human is incapable of grasping given enough time"
- a human
If there is such a thing, what would a human observe?