Centralization of power (as is likely to result from many possible government interventions) is bad
Suppose that you expected AI research to rapidly reach the point of being able to build Einstein/Von Neumann level intelligence and thereafter rapidly stagnate. In this world, would you be able to see why centralization is bad?
It seems like you're not doing a very good Ideological Turing Test if you can't answer that question in detail.
The number-7 supercomputer is built using natively developed Chinese chips, which further demonstrates the quality/quantity tradeoff.
Also, saying "sanctions will bite in the future" is only persuasive if you have long timelines (and expect sanctions to hold up over those timelines). If you think AGI is imminent, or you think sanctions will weaken over time, future sanctions matter less.
China does not have access to the computational resources[1] (compute, here specifically data centre-grade GPUs) needed for large-scale training runs of large language models.
While it's true that Chinese semiconductor fabs are a decade behind TSMC (and will probably remain so for some time), that doesn't seem to have stopped China from building 162 of the world's top-500 supercomputers.
There are two inputs to building a large supercomputer: quality and quantity, and China seems more than willing to make up in quantity what they lack in quality.
The CCP is not interested in reaching AGI by scaling LLMs.
For a country that is "not interested" in scaling LLMs, they sure do seem to do a lot of research into large language models.
It's also worth noting that China currently has the best open-source text-to-video model, has trained a state-of-the-art text-to-image model, was the first to introduce AI in a mass consumer product, and is miles ahead of the West in terms of facial recognition.
I suspect that "China is not racing for AGI" will end up in the same historical bin as "Russia has no plans to invade Ukraine", a claim that requires us to believe the Chinese stated preferences while completely ignoring their revealed ones.
I do agree that if the US and China were both racing, the US would handily win the race given current conditions. But if the US stops racing, there's absolutely no reason to think the Chinese response would be anything other than "thanks, we'll go ahead without you".
--edit--
If a Chinese developer ever releases an LLM that is so powerful it inevitably oversteps censorship rules at some point, the Chinese government will block it and crack down on the company that released it.
This is a bit of a weird take to have if you are worried about AGI Doom. If your belief is "people will inevitably notice that powerful systems are misaligned and refuse to deploy them", why are you worried about Doom in the first place?
Is the claim that China, thanks to its all-powerful bureaucracy, is somehow less prone to alignment failure and sharp left turns than reckless American corporations? If so, I suggest you consider the possibility that Xi Jinping isn't all that smart.
- As I'd mentioned, we often apply non-LPE-based environment-solving to constrain the space of heuristics over which we search, as in the tic-tac-toe and math examples. Indeed, it seems that scientific research would be impossible without that.
- LPE-based learning does not work in domains where failure is lethal, by definition. However, we have some success navigating them anyway.
I think this is a strawman of LPE. People who point out you need real world experience don't say that you need 0 theory, but that you have to have some contact with reality, even in deadly domains.
Outside of a handful of domains like computer science and pure mathematics, contact with reality is necessary because the laws of physics dictate that we can only know things up to a limited precision. Moreover, it is the experience of experts in a wide variety of domains that "try the thing out and see what happens" is a ridiculously effective heuristic.
Even in mathematics, the one domain where LPE should in principle be unnecessary, trying things out is one of the main ways that mathematicians gain intuitions for what new results are/aren't likely to hold.
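As a concrete instance of "trying things out" in mathematics (my example, not one from the post being discussed): Euler's polynomial n² + n + 41 spits out primes for n = 0 through 39, and pattern-matching on the first few dozen values would tempt you to conjecture it always does. A thirty-second numerical experiment finds where the pattern breaks:

```python
def is_prime(k: int) -> bool:
    """Trial-division primality check; fine for small k."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def first_composite(poly):
    """Return the first n at which poly(n) is composite."""
    n = 0
    while is_prime(poly(n)):
        n += 1
    return n

# Euler's polynomial: prime for n = 0..39, then fails.
print(first_composite(lambda n: n * n + n + 41))  # -> 40, since f(40) = 41^2
```

Pure contemplation of the first handful of values gives you the wrong conjecture; running the experiment gives you the right one.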
I also note that your post doesn't give a single example of a major engineering/technology breakthrough that was done without LPE (in a domain that interacts with physical reality).
It is, in fact, possible to make strong predictions about OOD events like AGI Ruin — if you've studied the problem exhaustively enough to infer its structure despite lacking the hands-on experience.
This is literally the one specific thing LPE advocates think you need to learn from experience about, and you're just asserting it as true?
To summarize:
Domains where "pure thought" is enough:
Domains where LPE is necessary:
You are correct. Free trade in general produces winners and losers, and while on average people become better off, there is no guarantee that any given individual will become richer absent some form of redistribution.
In practice, humans can learn new skills and shift jobs, so we mostly ignore the redistribution part; but in the absolute worst case there should be some kind of UBI to compensate the losers of competition with AGI (perhaps paid out of the "future commons" tax).
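The winners/losers point fits in a toy model (all numbers below are my own, purely illustrative): automation makes the pie much bigger while shrinking one person's slice, and a transfer out of the surplus can leave everyone at least as well off.

```python
# Toy model of competition with AGI. Illustrative numbers only.
wage_before = 10.0     # worker's income before AGI competes for the job
wage_after = 2.0       # worker's income after
agi_output = 98.0      # output captured by the AGI's owner

total_before = wage_before                # 10.0
total_after = wage_after + agi_output     # 100.0 -> society is richer overall
worker_loses = wage_after < wage_before   # ...but the worker is poorer

# A UBI funded by a tax on AGI output can undo the loss, because the
# surplus (90.0) dwarfs the worker's loss (8.0).
ubi = wage_before - wage_after            # minimal transfer: 8.0
owner_after_tax = agi_output - ubi        # 90.0, still a huge gain

print(total_after, worker_loses, owner_after_tax)
```

The point is only that "average welfare rises" and "this particular person is worse off" are compatible, and that the gap is closable by redistribution precisely because the total surplus grew.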
you should expect to update in the direction of the truth as the evidence comes in
I think this was addressed later on, but this is not at all true. With the waterfall example, every mile that passes without a waterfall, you update downwards; but if there's a waterfall at the very end, you've been updating against the truth the whole time.
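The waterfall case is easy to make precise (the 100-mile river and the 50/50 prior below are my own illustrative numbers): put half the prior mass on "no waterfall" and spread the rest uniformly over the miles. Every uneventful mile lowers P(waterfall ahead), yet if the waterfall sits at the final mile, every one of those updates was away from the truth.

```python
# Prior: 0.5 on "no waterfall", 0.5 spread uniformly over miles 1..100.
MILES = 100
PRIOR_WATERFALL = 0.5

def p_waterfall_ahead(k: int) -> float:
    """Posterior P(waterfall somewhere ahead) after k clear miles."""
    remaining_mass = PRIOR_WATERFALL * (MILES - k) / MILES
    no_waterfall_mass = 1.0 - PRIOR_WATERFALL
    return remaining_mass / (remaining_mass + no_waterfall_mass)

posteriors = [p_waterfall_ahead(k) for k in range(MILES)]

# The posterior falls every single mile...
assert all(a > b for a, b in zip(posteriors, posteriors[1:]))
# ...so if the waterfall is actually at mile 100, you spend 99 miles
# updating away from the truth before it hits you.
print(posteriors[0], posteriors[99])  # 0.5 down to under 0.01
```

Each individual update is correct Bayesian behavior; it's only in the world where the waterfall is at the end that the whole sequence points the wrong way.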
Another case: suppose you're trying to predict the endpoint of a Gaussian random walk. Each step, you update in whatever direction the walk took, but roughly half of these steps are "away" from the truth.
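A quick simulation bears this out (the step count and seed are my own choices; the forecast-equals-current-position rule is just the martingale property of a random walk): about half of all updates move the forecast away from the eventual endpoint.

```python
import random

random.seed(0)

# Gaussian random walk. The best forecast of the final position at any
# time is the current position, so each step *is* the update. Count how
# many updates move the forecast away from the truth (the endpoint).
STEPS = 10_000
walk = [0.0]
for _ in range(STEPS):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))

final = walk[-1]
away = sum(
    1 for prev, cur in zip(walk, walk[1:])
    if abs(cur - final) > abs(prev - final)
)

print(away / STEPS)  # close to 0.5: about half the updates were "wrong"
```

Every update here is rational, yet you cannot expect each individual update to be toward the truth.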
you probably shouldn’t be able to predict that this pattern will happen to you in the future.
Again addressed later on, but one can easily come up with stories in which one predictably updates either "in favor of" or "against" AI doom.
Suppose you think there's a 1% chance of AI doom every year, and AI Doom will arrive by 2050 or never. Then you predictably update downwards every year (unless Doom occurs).
Suppose on the other hand that you expect AI to stagnate at some level below AGI, but that if AGI is developed then Doom occurs with 100% certainty. Then each year that AI fails to stagnate, you update upwards (until AI actually stagnates).
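Both stories are simple Bayes (the hazard rate, the 0.2 annual stagnation probability, and the 50/50 prior below are my own illustrative choices, not from the original comment):

```python
# Scenario A: doom arrives with a 1%/yr hazard, by 2050 or never.
def p_doom_a(year: int) -> float:
    """P(doom eventually | no doom up to `year`)."""
    years_left = 2050 - year
    return 1.0 - 0.99 ** years_left

a = [p_doom_a(y) for y in range(2024, 2051)]
assert all(x > y for x, y in zip(a, a[1:]))   # predictably updates DOWN

# Scenario B: prior 0.5 that AI stagnates below AGI (stagnating with
# prob 0.2 each year), prior 0.5 that it never stagnates and AGI
# arrives, in which case doom is certain.
def p_doom_b(years_of_progress: int) -> float:
    """P(doom | AI kept progressing for this many years)."""
    like_stagnate = 0.8 ** years_of_progress  # P(no stagnation yet | stagnator world)
    return 0.5 / (0.5 + 0.5 * like_stagnate)

b = [p_doom_b(t) for t in range(27)]
assert all(x < y for x, y in zip(b, b[1:]))   # predictably updates UP
```

In both scenarios every yearly update is the Bayesian-correct one; which direction is "predictable" depends entirely on which model you start with.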
I think maybe you misunderstand the word "crux". A crux is a point on which you and another person disagree. If you're saying you can't understand why Libertarians think centralization is bad, that IS a crux, and trying to understand it would be a potentially useful exercise.