Seems like the king could still understand with a decent explanation, particularly if he bothers to ask about the effects before using it.
He could understand with time, but by the end the rate of progress outstrips his rate of learning. The abstractions which have meaning for him become forever disconnected from what is known.
Sounds like you're assuming progress goes on forever. I'd think it would slow down just based on physical limitations. And your supposed ASI is terrible at explanations.
I doubt anything of import is going to work that way. You don't seem to make an argument for it. I see how the exponential suggests it, but most exponentials are really S-curves shaped by limiting factors.
I think it is far more reasonable to operate under the idea that progress might go on forever than the idea that there is a bottom, especially a bottom just barely past what we already know. Seems to me like what you are suggesting is the default position most people throughout history have incorrectly held.
Also, even if there is a bottom, understanding the concepts of 0 and 1 doesn't mean you automatically understand all the concepts a computer can encode in data. Asserting that "the concepts that can be generated from 0 and 1 will slow down because of physical limitations" makes no sense.
The bandwidth at which a single human can learn is very, very tightly constrained. The bandwidth at which an ASI could generate new meaningful data is incomparably larger. No matter how good the explanation is, there is a very obvious problem of scale.
How fast can you read with comprehension? Now how fast can you read with comprehension when you don't know the definitions of all the words? How fast can you read the definitions of the new words in order to move forward with learning the main concept? How many new words are in those definitions which require you to also read more definitions just to understand the parent definition? How much progress has been made on other new concepts while you spent all this time reading? How many definitions have changed before you even get back to the main concept you were learning?
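To make the scale problem concrete, here is a toy back-of-envelope sketch. Every number in it is an illustrative assumption, not a measurement: the reading speed, the ASI output rate, and the definition fan-out are all stand-ins chosen only to show the shape of the problem.

```python
# Toy model of the bandwidth gap. All figures are assumptions:
# ~250 words/minute is a rough human reading speed, and the ASI
# output rate is an arbitrary "vastly faster" placeholder.

HUMAN_WPM = 250        # assumed human reading speed (words/min)
ASI_WPM = 250_000_000  # assumed ASI output rate (placeholder)

def unread_backlog(minutes: float) -> float:
    """Words of new material still unread after `minutes` of non-stop
    study, if the ASI keeps producing while the human keeps reading."""
    return (ASI_WPM - HUMAN_WPM) * minutes

def definitions_to_chase(depth: int, fanout: int = 3) -> int:
    """Definitions to read if each new term introduces `fanout` more
    unfamiliar terms, `depth` levels deep (assumed geometric growth)."""
    return sum(fanout ** d for d in range(1, depth + 1))

# After one 8-hour day of reading, the backlog has grown, not shrunk:
print(f"{unread_backlog(8 * 60):,.0f} words behind after one day")
# And one new concept can mean chasing a whole tree of definitions:
print(f"{definitions_to_chase(4)} definitions to read, 4 levels deep")
```

As long as the generation rate exceeds the learning rate, the backlog only grows; no quality of explanation changes the sign of that difference.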
Worrying that an ASI will do bad stuff because we told it to, without bothering to understand the consequences, is pretty far down my list of things to worry about. I can understand "eliminates the world as we know it" without understanding the physics by which it does this. Summaries and simplifications are a thing. I'm gonna ask "so hey, what consequences would this have that I'd care about?" and the ASI, because it's super-smart, will answer in terms I can understand.
If it doesn't, I'll stick to asking it to do things I can understand. Like improving its ability to summarize and communicate.
You haven't addressed my point that a smart ASI will be good at summarizing and simplifying.
Maybe you're not concerned with practical dangers, just the possibility that humans won't always understand everything ASIs come up with. In which case, that's fine; I'm worried about everyone dying long before we get the opportunity to be limited by our understanding. Not being able to fully appreciate everything an ASI is coming up with might be a limitation, but it's far beyond the level of success we can imagine, so I'm putting it in the category of planning the victory party before working out a plan to win.
You haven't addressed my point that a smart ASI will be good at summarizing and simplifying.
The story itself is entirely about how this doesn't matter. I also very directly addressed this in more detail in my last reply.
The point I am presenting is more fundamental than all these various topics you are trying to bring in, which are not part of my story or my replies. My story is about something that happens somewhere along the path of all outcomes, good or bad. I am unsure why you are trying so hard to dismiss it without addressing my replies, so sure it only happens after success, and so sure there is no practical reason to think about it when trying to understand what is happening and what might happen.
The King has a bow and arrow.
He understands exactly how they work, having made them in his youth as part of his training.
The King has a stealth aircraft with a nuclear bomb.
He understands the abstraction, but teams of thousands of people created it. At the bottom of the hierarchy a specialist understands the details of each component, but all actual implementation gets abstracted and combined long before it gets to the King.
The King has a hypersonic drone swarm.
He understands the abstraction, but AI designed most of it. AI created new materials, generated the airframe, wrote the software, and controls the swarm in flight. The AI explains the project to the King in high level abstractions.
The King has a mass-phase inhibitor cell.
He kind of... maybe knows some of the words in the abstraction. The AI tells him he will need to spend about two hours to understand how to use it. It is based on concept eeaf7aa9d8b6ddca, which forces widespread antimatter annihilation that generates a bubble of ionizing radiation, causing muons within its radius to... wait, sorry, this is irrelevant now because...
The King has a 9fe8fedc4391c064.
It uses 3514a173ef0ecd73, which I am not sure how to explain to you without you first knowing about 67ca42582c291683, which is something like d08352eb9f7ebb4a combined with 455b7127b7bc6f74... ah, but wait, this was all far surpassed before I even got the first word out because...
e633a4300bef912c 813d8e888db839e1 e5d293bf0a36186b 9ca4aa4a70c6d960 f727e152a4eec00b
140416c4d32abecd ee8c9bc092e3eed6 6cd7c6e301ce9c11 6b9f79e03ef63a2a 437dd1c2214f27ef 0f9a4edca4a5f2fb 3ef767db8c4d5c2c ace06b7286e3dabc bac37e448fd38a8e fa4b16639f634e22 d3a79faa5d45dbf2 3032dcd9a8977e6c 469dc8d787d7530c e1507ddf6307a901 a303f7605204f4ca ddc53382db24a86b 468a285712e2cf75 6b8ae7dbb1ac7fcf