I don't think this scenario is even remotely realistic. If things really go this way (which is far from guaranteed), the government will massively expand the police and other security services (and it will have the money for that thanks to AI productivity gains). When a large percentage of the population are cops, riots aren't that big a problem.
I don't think it works this way, because jobs and housing are not constant. First, if the entire economy shrinks due to a smaller working population, there will be fewer well-paid jobs. Second, housing is expensive in the places where the jobs are, and as the population shrinks, jobs will likely concentrate in a smaller number of cities.
Imagine for a second what the reversed version of those authoritarian and dystopian efforts would even look like. Notice that it, too, belongs in the young adult dystopia section of the bookstore. Also notice that it has not happened, at least not in a long time.
The Romanian dictator Nicolae Ceaușescu did exactly that. He was overthrown and executed, and this may be one of the reasons why no other dictator has tried similar policies since.
My claim is different - that there is no defined threshold for significance, but on the spectrum from useless to world-changing, some technologies that looked very promising decades ago still sit closer to the lower end. So it is possible that in 2053, AI products will be about as important as MRI scanners and GMO crops are in 2023.
I think that is wrong. If, instead of dropping nukes on mostly wooden cities, they had used them against enemy troops (or ships, or even cities not built out of bamboo), the conclusion would have been that a nuke is a not-that-powerful, cost-inefficient weapon.
As for "significant impact" - what impact counts as "significant"? Here are some technologies which on my opinion had no significant impact so far:
It is entirely possible that AI ends up in the same bucket.
I think the defining feature of the "weak pivotal act" idea was that it should be safe precisely because of its weakness. So any pivotal act that depends on an aligned AGI (and would fail catastrophically if the AGI is not aligned) is not weak.
I assumed that the ice layer is supposed to be a few feet thick, and that the given figures are just an illustration that this amount of ice is trivial to make. If the plan really is to build an artificial glacier hundreds of feet thick, that creates a different set of problems, the first being that the described structure wouldn't do it. Depending on temperature and wind speed, the ice will either be carried away by the wind, form an ice hill that grows until it blocks the nozzles, or accumulate on the scaffolding until it collapses under its own weight.
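A back-of-envelope comparison of the two regimes. The thickness values below are my own illustrative assumptions, not figures from the proposal:

```python
# Back-of-envelope: mass of ice per square meter of surface,
# comparing a thin skating layer with a glacier-scale build-up.
# Thickness values are illustrative assumptions, not figures
# from the original proposal.

ICE_DENSITY = 917.0  # kg/m^3, density of solid ice

for thickness_m, label in [(1.0, "~3 ft skating layer"),
                           (100.0, "~330 ft artificial glacier")]:
    mass_per_m2 = ICE_DENSITY * thickness_m  # kg of ice per square meter
    print(f"{label}: {mass_per_m2:,.0f} kg/m^2")

# Output:
#   ~3 ft skating layer: 917 kg/m^2
#   ~330 ft artificial glacier: 91,700 kg/m^2
# A hundredfold difference in mass (and in the latent heat that must be
# removed to freeze it), which is why the two plans are really two
# different engineering problems.
```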
The problem with a heavy iceboat is that its weight has to be distributed evenly across numerous skates; otherwise, the more heavily loaded skates dig deeper and friction increases drastically. No such design has ever been built.
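A toy illustration of the runaway. Every number here is a hypothetical assumption (as noted, no such design exists to take figures from):

```python
# Toy model of the runaway failure mode for a heavy multi-skate
# iceboat. Every number below is a hypothetical assumption.

total_mass_kg = 500_000     # assume a ~500 t iceship
n_skates = 20               # assume 20 skates
contact_area_m2 = 0.05      # assume 0.05 m^2 of runner contact per skate
G = 9.81                    # m/s^2

even_load_kg = total_mass_kg / n_skates
even_pressure_mpa = even_load_kg * G / contact_area_m2 / 1e6

# Suppose uneven terrain shifts 3x the average load onto one skate:
worst_pressure_mpa = 3 * even_pressure_mpa

print(f"even loading:     {even_pressure_mpa:.1f} MPa per skate")
print(f"overloaded skate: {worst_pressure_mpa:.1f} MPa")

# -> 4.9 MPa vs 14.7 MPa. Once the overloaded skate's pressure exceeds
# what the ice surface can bear, it digs in; digging in raises its drag
# and shifts even more load onto it - the feedback loop described above.
```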
Your calculation of expenses relies on three assumptions: that this is an end-to-end route, that it takes 120 hours, and that one pilot is enough to drive an iceship (of this size and in these conditions). All three are wrong. As for refrigeration - a much larger fraction of cargo types doesn't tolerate freezing.
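To make the sensitivity to those assumptions concrete, here is a sketch. The wage, hours, and crew size below are placeholder values of mine, not numbers from the original calculation:

```python
# Sketch of how sensitive the crew-cost estimate is to the three
# assumptions being challenged. All figures are hypothetical
# placeholders, not numbers from the original calculation.

def crew_cost(hours, pilots_per_ship, hourly_wage=50.0):
    """Crew pay for one trip of one iceship."""
    return hours * pilots_per_ship * hourly_wage

baseline = crew_cost(hours=120, pilots_per_ship=1)

# If the route is not end-to-end (extra legs), takes longer, and
# needs a watch rotation of several pilots rather than one:
revised = crew_cost(hours=200, pilots_per_ship=3)

print(f"baseline: ${baseline:,.0f} per trip")
print(f"revised:  ${revised:,.0f} per trip "
      f"({revised/baseline:.1f}x the baseline)")
# -> the estimate moves by 5x from the same model, just by
#    relaxing the three contested assumptions.
```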
I have the opposite impression. "Alignment" is usually interpreted as "do whatever the person who gave the order expected", and what the author calls "strong alignment" is an aligned AGI ordered to implement CEV.
All of these are simply wrong.
First, ice accumulated over terrain would not be flat. Look up what Alaskan glaciers look like - there is no way an ice ship can move on that. It is not necessary to pave the road; the problem is that producing an even surface requires moving a huge amount of ground.
Second, yes, no recorded iceboat has carried more than a few tons of cargo, crew included.
Third, crew pay is around 5% of the operating cost of a container ship. Even if it takes 50 iceships to replace one (due to higher speed and a shorter route), crew pay alone would more than double the total operating cost (see the arithmetic sketch below). As for capital cost, many small ships are more expensive than one big ship.
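Worked out explicitly, with one container ship's operating cost normalized to 1.0. The assumption that each iceship's crew pay is comparable to the container ship's is mine, made only to reproduce the stated conclusion:

```python
# The arithmetic behind the "more than double" claim, with the
# operating cost of one container ship normalized to 1.0.
# ASSUMPTION (mine, not stated above): each iceship's crew pay is
# comparable to the whole container ship's crew pay.

container_ship_cost = 1.0
crew_share = 0.05        # crew pay ~5% of a container ship's operating cost
n_iceships = 50          # iceships needed to replace one container ship

total_iceship_crew_pay = n_iceships * crew_share * container_ship_cost
print(f"crew pay across {n_iceships} iceships: "
      f"{total_iceship_crew_pay:.2f}x one container ship's operating cost")
# -> 2.50x: under this assumption, crew pay alone already exceeds
#    double the original total operating cost.
```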
"The Milgram experiments demonstrated..." - didn't those fall victim to the replication crisis? I have read somewhere that the outcomes of those experiments vary wildly across different groups.