I did not know this! And it's quite an update for me regarding Mochizuki's credibility on the matter.
This seems like nonsense. If there were any way to formalize what Mochizuki claims, he could and should do so - it would be perhaps the greatest intellectual upset in history. On the other hand, he's most likely just wrong about something and his proof wouldn't go through, so there's no point in trying to settle this with a proof assistant.
I keep wondering whether there's something going on here where - by definition - we can rationally understand how high-value a utopia would be, but since we can't really tell for sure where things will end up, we may be assigning way too high an intuitive probability to it.
It feels like your implicit framing is that one should welcome technological progress and be active about adapting to it, but I'm missing the perspective that one should (sometimes) be active about shaping the course of progress itself.
Also, people who are seriously discussing the matter usually aren't talking about whether a technology is good or bad as a whole, but about the way it's currently being realized in the world, so that framing seems like a strawman.
If they changed their mind not immediately after the election, and signaled credibly that they did so for concrete reasons after having looked into/engaged with the issue, then this'd probably be fine in some cases? (Ideally, if they're right, they can convince you to change your mind too)
OK, so here we should give Eliezer some credit for having thought this through and for making a big deal about this precise thing (although partly through his recent fanfic, which he keeps alluding to, so maybe that doesn't really count).
I tend to find him a bit annoying on these things because surely people wouldn't seriously pin substantial hopes on these kinds of ideas, but what do I know.
My personal take is that this is an area people at large are already roughly appropriately worried about. It can also lead you into politically polarized territory, which people may reasonably prefer to avoid unless there's a good reason.
I'm not super happy with my phrasing, and Ben's "glory" mentioned in a reply indeed seems to capture it better.
The point you make about theoretical research agrees with what I'm pointing at - whether you perceive a problem as interesting or not is often related to the social context and potential payoff.
What I'm specifically suggesting is that if you took this factor out of ML, it wouldn't be much more interesting than many other fields with a similar balance of empirical and theoretical components.
What you're pointing at applies if AI merely makes most work obsolete without significantly disturbing the social order otherwise, but you're not considering replacement/displacement scenarios, which are also historically common. It would clearly be bad from my perspective if, e.g., either:
1) Controllable strong AI gets used to take over the world and, in time, replace the human population with the dictator's offspring.
2) Humans get displaced by AIs.
In either case, the surviving parties may well look back on the current state of affairs and consider their world much improved, but it's likely we wouldn't on reflection.
As stated, your dig against pick-up artists doesn't seem to amount to much more than "these guys feel icky", which most likely just reflects pick-up being low status. (Separately, there's also a bunch of toxic behavior associated with the pick-up mindset that one could rightfully criticize.)