I see three issues with your argument, two that don't change anything meaningful and one that does.
The two minor points:
You presume a world without a preference for human-made works. I posit that such a world cannot exist anywhere humans also exist. We are a species that pays a premium for art made by elephants and other animals, that shows off photos of our children and their accomplishments to people we know don't care, and that once bought pet rocks. The drive to value that which is valueless is, for whatever reason, deeply embedded in us, and it is not going anywhere. More importantly, acknowledging this does not weaken your point; it complicates the math a little, but the outcomes are all the same. Denying it, however, makes you appear fundamentally off base about human psychology, and that does weaken your persuasiveness.
Second, you posit that the AI rents the machine at exactly the cost of its output, making zero profit. That needs an explanation. Presuming the AI has any goal other than "use all current funds to make potatoes but don't grow the amount produced over time," it will want some level of profit to achieve that goal. Even a pure potato-output maximizer wants to save up for more machines and be prepared for inflation, market changes, etc. If it really runs at zero profit, loses the ability to rent the machine the first time the market swings, and thus goes bankrupt, it's not a very smart superintelligence. I assume it will predict swings and keep the bare minimum needed for its goals, so razor-thin margins that look crazy to us could be generous to it. That's fine, but zero needs justification. 49,500 is functionally the same as 50,000 for your core argument, and resolves this.
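To make this concrete, here's a toy sketch with entirely made-up numbers (the 50,000 / 49,500 figures from above, plus an assumed human productivity): whether the AI bids its full output value for the machine or holds back a 1% reserve, the human is outbid either way, so the thin margin changes nothing about your core result.

```python
# All numbers are hypothetical, for illustration only.
# A machine produces output worth 50,000/day under AI operation
# and (assumed) 45,000/day under human operation. The machine's
# owner rents it to whoever bids the most.

ai_output = 50_000      # value of the machine's output when the AI runs it
human_output = 45_000   # assumed value when a human runs it

def max_rent_bid(output, required_margin):
    """Highest rent an operator can bid while keeping its required margin."""
    return output * (1 - required_margin)

# Compare a zero-profit AI with one keeping a razor-thin 1% reserve.
for margin in (0.0, 0.01):
    ai_bid = max_rent_bid(ai_output, margin)
    human_bid = max_rent_bid(human_output, 0.0)  # human bids everything
    winner = "AI" if ai_bid > human_bid else "human"
    print(f"AI margin {margin:.0%}: AI bids {ai_bid:,.0f}, "
          f"human bids at most {human_bid:,.0f} -> {winner} gets the machine")
```

In both cases the AI's bid (50,000 or 49,500) beats the human's best possible bid, so a nonzero margin is consistent with the rest of the argument.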
The issue that does seem to matter is "eke out a living on 5 potatoes a day." The Amish are doing better than that. So are the Mennonites, the Inuit, the Sentinelese, etc. People can and will carve out enclaves where life works: special economic zones where AI doesn't exist. Maybe that looks like North Korea, maybe it looks like Pennsylvania, and maybe it looks like a patchwork of everything in between. Also, energy and logistics are hard problems. We could not implement full robotics today, even if the tech were 100% ready, because most of the world doesn't have access to reliable electricity, and even the developed world has no spare capacity. You seem to need additional bullets covering things like: robotics and energy production are solved such that no part of the economy is constrained by either; enclaves like the Amish are not included in this assessment; etc. Your scenario only addresses humans who try to compete with the AI, not those who walk away, go off the grid, and make their own economy. Such people already exist; why do you assume they will stop existing? Maybe this is two issues as well.
This is a linkpost to a blogpost I've written about wages under superintelligence, responding to recent discussion among economists.
TLDR: Under stylized assumptions, I argue that, if there is a superintelligence that generates more output per unit of capital than humans do across all tasks, human wages could decline relative to today, because humans will be priced out of capital markets. At that point, human workers will be reduced to the wage we can get with our bare hands: we won’t be able to afford complementary capital. This result holds even if there is rapid capital accumulation from AI production. To avoid horrible outcomes for labor, we would need redistribution or other political reforms. I also discuss situations where my argument doesn’t go through.
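A minimal numeric sketch of the pricing-out mechanism described in the TLDR, with entirely hypothetical numbers: if competition bids the capital rental rate up to what the AI can extract per unit of capital, a human's net return from renting that capital is negative, and bare-handed work becomes the best remaining option.

```python
# All numbers are hypothetical, chosen only to illustrate the mechanism.

ai_product_per_unit_capital = 100     # AI output per unit of capital
human_product_per_unit_capital = 60   # human output using the same capital
bare_hands_wage = 5                   # what a human produces with no capital

# Competition among AI users bids rent up to the AI's product.
rental_rate = ai_product_per_unit_capital

# A human who rents capital earns its product minus the rent.
human_net_with_capital = human_product_per_unit_capital - rental_rate

# The human takes whichever option pays more.
human_wage = max(human_net_with_capital, bare_hands_wage)

print(f"human net return from renting capital: {human_net_with_capital}")
print(f"human wage: {human_wage}")  # falls back to bare-hands output
```

Here the human's net from using capital is negative, so the wage collapses to the bare-hands level, regardless of how much capital the AI economy accumulates.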