If anyone wants to have a voice chat with me about a topic that I'm interested in (see my recent post/comment history to get a sense), please contact me via PM.
Quantum theory and simulation arguments both suggest that there are many copies of myself in the multiverse. From a first-person subjective anticipation perspective, experiencing death as nothingness seems impossible, so it seems like I should either anticipate my subjective experience continuing as one of the surviving copies, or conclude that the whole concept of subjective anticipation is confused. From a third-person / God's-eye view, death can be thought of as some of the copies being destroyed, or as a reduction in my "measure", but I don't seem to fear this, just as I didn't jump for joy upon learning that I have a huge number of copies in the first place. The situation seems too abstract, remote, or foreign to trigger my fear (or joy) response.
If it became common to demand and check proofs of (human) work, there would be a strong incentive to use AI to generate such proofs, which does not seem very hard to do.
What motive does a centralized dominant power have to allow any progress?
A culture/ideology that says the ruler is supposed to be benevolent and try to improve their subjects' lives. This of course was not literally followed, but it would make it hard to fully suppress things that could clearly make people's lives better, like many kinds of technological progress. And historically, AFAIK few if any of the Chinese emperors tried to directly suppress technological innovation; they just didn't encourage it the way the West did, through things like patent laws and scientific institutions.
The entire world would likely look more like North Korea.
Yes, directionally it would look more like North Korea, but I think the controls would not have to be as total or harsh, because there is less of a threat that outside ideas could rush in and overturn the existing culture/ideology the moment you let your guard down.
We can do adversarial training against other AIs, but ancestral humans didn't have to contend with animals whose goal was to trick them into not reproducing by any means necessary
We did have to contend with memes that tried to hijack our minds to spread themselves horizontally (as opposed to vertically, by having more kids), but unfortunately (or fortunately) such "adversarial training" wasn't powerful enough to instill a robust desire to maximize reproductive fitness. Our adversarial training for AI could also be very limited compared to the adversaries or natural distributional shifts the AI will face in the future.
Our fear of death is therefore much more robust than our desire to maximize reproductive fitness
My fear of death has been much reduced after learning about ideas like quantum immortality and simulation arguments, so it doesn't seem that much more robust. Its apparent robustness in others looks like an accidental effect of most people not paying attention to such ideas or not being able to fully understand them, which does not seem to have a relevant analogy for AI safety.
I think extensive use of LLMs should be flagged at the beginning of a post, but "uses an LLM in any part of its production process whatsoever" would probably result in the majority of posts being flagged and make the flag useless for filtering. For example, I routinely use LLMs to check my posts for errors (that the LLM can detect), and I imagine most other people do so as well (or should, if they don't already).
Unfortunately this kind of self-flagging/reporting is ultimately not going to work as a way of protecting individuals or society against AI-powered manipulation, and I doubt there will be a technical solution (e.g., an AI content detector or other kind of defense) either, short of solving metaphilosophy. I'm not sure it will do more good than harm even in the short run, because it can give a false sense of security and punish the honest / reward the dishonest, but I still lean towards trying to establish "extensive use of LLMs should be flagged at the beginning of a post" as a norm.
It's based on the idea that Keju created a long-term selective pressure for intelligence.
(The following was written by an AI (Gemini 2.5 Pro), but I think it correctly captures my position.)
You're right to point out that I'm using a highly stylized and simplified model of "Chinese civilization." The reality, with its dynastic cycles, periods of division, and foreign rule, was far messier and more brutal than my short comment could convey.
My point, however, isn't about a specific, unbroken political entity. It's about a civilizational attractor state. The remarkable thing about the system described in "Romance of the Three Kingdoms" is not that it fell apart, but that it repeatedly put itself back together into a centralized, bureaucratic, agrarian empire, whereas post-Roman Europe fragmented permanently. Even foreign conquerors like the Manchus were largely assimilated by this system, adopting its institutions and governing philosophy (the "sinicization" thesis).
Regarding the Keju, the argument isn't for intentional eugenics, but for a de facto one. The mechanism is simple: if (1) success in the exams correlates with heritable intelligence, and (2) success confers immense wealth and reproductive opportunity (e.g., supporting multiple wives and children who survive to adulthood), then over a millennium you have created a powerful, systematic selective pressure for those traits.
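To get a rough sense of the magnitudes involved, here is a back-of-the-envelope sketch using the standard breeder's equation (response per generation = heritability × selection differential, R = h²S). The specific numbers below, heritability, how selective exam success is, and how large its reproductive advantage is, are purely illustrative assumptions, not historical estimates.

```python
# Back-of-the-envelope sketch of cumulative selection on a heritable trait,
# per the breeder's equation R = h^2 * S. All numbers are illustrative
# assumptions, not historical estimates.
from statistics import NormalDist

H2 = 0.4              # assumed narrow-sense heritability of the trait
TOP_FRACTION = 0.02   # assumed fraction of each generation that "passes" the exams
EXTRA_SHARE = 0.05    # assumed extra share of the next generation's parents drawn from that group
GENERATIONS = 33      # ~1000 years at roughly 30 years per generation

std_normal = NormalDist()

# Mean trait value (in SDs) of the top TOP_FRACTION of a standard normal distribution.
cutoff = std_normal.inv_cdf(1 - TOP_FRACTION)
mean_of_selected = std_normal.pdf(cutoff) / TOP_FRACTION

# Selection differential: the parental mean shifts because a small extra share
# of parents comes from the exam-passing group instead of the general population.
S = EXTRA_SHARE * mean_of_selected

# Response per generation and cumulative shift over the whole period.
R = H2 * S
total_shift_sd = R * GENERATIONS

print(f"selection differential per generation: {S:.3f} SD")
print(f"response per generation: {R:.3f} SD")
print(f"cumulative shift after {GENERATIONS} generations: {total_shift_sd:.2f} SD")
```

With these toy numbers the cumulative shift comes out to about 1.6 standard deviations over a millennium: slow on a human timescale, but substantial on a civilizational one.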
The core of the thought experiment remains: is a civilization that structurally, even if unintentionally, prioritizes stability and slow biological enhancement over rapid, disruptive technological innovation better positioned to handle long-term existential risks?
Maybe Chinese civilization was (unintentionally) on the right path: discourage or at least don't encourage technological innovation but don't stop it completely, run a de facto eugenics program (Keju, or Imperial Examination System) to slowly improve human intelligence, and centralize control over governance and culture to prevent drift from these policies. If the West hadn't jumped the gun with its Industrial Revolution, by the time China got to AI, human intelligence would be a lot higher and we might be in a much better position to solve alignment.
This was inspired by @dsj's complaint about centralization, using the example of it being impossible for a centralized power or authority to deal with the Industrial Revolution in a positive way. The contrarian in my mind piped up with "Maybe the problem isn't with centralization, but with the Industrial Revolution!" If the world had more centralization, such that the Industrial Revolution never started in an uncontrolled way, perhaps it would have been better off in the long run.
One unknown is what the trajectory of philosophical progress would look like in this centralized world, compared to a more decentralized world like ours. The West seems to have better philosophy than China, but it's not universal (e.g., analytic vs. Continental philosophy). (Actually "not universal" is a big understatement given how little attention most people pay to good philosophy, aside from a few exceptional bubbles like LW.) Presumably in the centralized world there would be a strong incentive to stifle philosophical progress (similar to China historically), for the sake of stability, but what happens when average human IQ reaches 150 or 200?
If people started trying earnestly to convert wealth/income into more kids, we'd come under Malthusian constraints again, and before that there would be a lot of backsliding in living standards and downward social mobility for most people, which would trigger a lot of cultural upheaval and potential backlash (e.g., calls for more welfare/redistribution and attempts to turn culture back against "eugenics"/"social Darwinism", which would probably succeed just like they succeeded before). It seems ethically pretty fraught to try to push the world in that direction, to say the least, and it has a lot of other downsides, so I think at this point a much better plan for increasing human intelligence is to make genetic enhancements available that parents can voluntarily choose for their kids, government-subsidized if necessary to make them affordable for everyone, which avoids most of these problems.