Some books you might like to read:
All of these books tackle, to varying degrees, the things you are describing from a holistic perspective. Hope this helps.
It loads past conversations (or parts of them) into context, so it could change behaviour.
A lesson from the book System Effects: Complexity in Political and Social Life by Robert Jervis, and also from the book The Trading Game: A Confession by Gary Stevenson.
When people talk about planning for the future, there is often a thought chain like this:
But of course the moment you start working at making X happen you have already destroyed the premise. There are no longer two equal worlds held in expectation, one with X and one with no X. There is now the world without X (in the past), and the world where you are trying to make X happen (the present). And very often the path to attaining X creates a world much less preferable for you than the world before you started, long before you reach X itself.
For example:
However, the price can be much more volatile than you expect, especially if you are taking out big positions in a relatively illiquid market. Thus you may find that three months in, your paper losses are so large that you reach your pain threshold and back out of the trade for fear that your original prediction was wrong. At the end of the five months, you may have predicted the price correctly, but all you did was lose a large sum of money in the interim.
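A minimal sketch of this dynamic, with entirely invented numbers for drift, volatility, and the pain threshold: it simulates many price paths that genuinely drift upward as predicted, and counts how many of them still hit a 15% drawdown somewhere along the way.

```python
import numpy as np

rng = np.random.default_rng(0)

days = 5 * 21              # ~5 months of trading days
n_paths = 10_000
drift, vol = 0.0015, 0.02  # invented daily drift and volatility
pain_threshold = 0.85      # capitulate if the position falls to 85% of entry

# Simulate many price paths that genuinely drift upward, as predicted.
returns = rng.normal(drift, vol, size=(n_paths, days))
paths = np.cumprod(1 + returns, axis=1)        # price relative to entry

ended_up = paths[:, -1] > 1.0                  # the prediction was right at month 5
hit_stop = paths.min(axis=1) < pain_threshold  # ...but the drawdown came first

print(f"prediction correct at the end: {ended_up.mean():.1%} of paths")
print(f"correct, yet stopped out on the way: {(ended_up & hit_stop).mean():.1%}")
```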
For another example:
Of course, in the process of trying to raise awareness of this issue, you might first create a world where a small subset of the population (mostly policy and AI people) are suddenly very clued-in to the possibility of the race dynamics. These people are also in a very good position to create, maintain, and capitalize on those dynamics (whether consciously or not), including using them to raise large amounts of cash. Now suddenly the risk of race dynamics is much larger than before, and the world is in a more precarious state.
There isn't really a foolproof way to get around this problem. However, one tactic might be to take your theory of change and, instead of comparing the world state before and after the plan, look at the world state at each step along the path to change, consciously weighing up the changes and tradeoffs as you go. If one of those steps looks like it would break a moral, social, or pain-related threshold, maybe reconsider that theory of change.
Addendum: I think this is also why systems/ecosystems/plans which rely on establishing positive or negative feedback loops are so powerful. They've set things up so that each stage incrementally moves towards the goal, so that even if there are setbacks you have room to fall back instead of breaching a pain threshold.
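As a toy illustration of why the feedback version is more robust (all parameters invented): an open-loop plan commits to fixed increments and lets shocks accumulate unchecked, while a plan that corrects towards the remaining gap absorbs the very same shocks step by step.

```python
import numpy as np

rng = np.random.default_rng(1)
goal, steps = 100.0, 50
shocks = rng.normal(0, 5, size=steps)  # identical setbacks hit both plans

# Open-loop plan: commit to fixed increments; shocks accumulate unchecked.
open_loop = np.cumsum(goal / steps + shocks)

# Feedback plan: each step closes a fraction of the *remaining* gap,
# so every setback is automatically compensated at the next step.
feedback, state = [], 0.0
for shock in shocks:
    state += 0.15 * (goal - state) + shock
    feedback.append(state)

print(f"open-loop finishes at {open_loop[-1]:.1f} (goal {goal})")
print(f"feedback  finishes at {feedback[-1]:.1f} (goal {goal})")
```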
[I think this comment is too aggressive and I don't really want to shoulder an argument right now]
With apologies to @Garrett Baker.
From the Mechanize blogpost:
Nuclear weapons are orders of magnitude more powerful than conventional alternatives, which helps explain why many countries developed and continued to stockpile them despite international efforts to limit nuclear proliferation.
Yet the number of nuclear weapons in the world has decreased from its peak during the Cold War. Furthermore, we've somehow stopped ourselves from using them, which suggests that some amount of steering is possible.
With regards to the blogpost as a whole, humanity fits their picture most when it is uncoordinated and trapped in isolated clades, each of which is in a Molochian red queen's race with the others, requiring people to rapidly upgrade to new tech if only to keep pace with their opponents in commerce or war. But this really isn't the only way we can organise ourselves. Many societies made do fairly well for long periods in isolation without "rising up the tech tree" (e.g. Japan after the Sengoku jidai).
And even if it is inevitable... You can stop a car going at 60 mph by slowly hitting the brakes or by ramming it into a wall. Even if stopping is "inevitable", it does not follow that the wall and the gentle deceleration are identically preferable for the humans inside.
I think a working continual learning implementation would mess with convergence-based results, which assume a relatively fixed training data distribution and vary only the starting seeds. This is mostly because a continual learning system is constantly drifting "off the base distribution" and incorporating new data. In other words, the car model has seen data from places and distributions the attacker's base model never will.
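A toy sketch of the intuition, with a linear model standing in for the car's network (the drift schedule, learning rate, and dimensions are all invented): the attacker keeps a frozen copy of the shipped checkpoint, the deployed copy keeps updating on slowly drifting local data, and the two steadily diverge.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 16
true_w = np.ones(dim)           # the signal the deployed model keeps fitting

shipped = rng.normal(size=dim)  # the checkpoint both parties start from
deployed = shipped.copy()       # keeps learning online in the car
attacker = shipped.copy()       # the attacker's frozen base model

lr = 0.01
for t in range(200):
    # Fresh local data whose distribution slowly drifts over time
    # (new roads, seasons, sensor wear...).
    x = rng.normal(size=(32, dim)) + 0.01 * t
    y = x @ true_w
    deployed -= lr * x.T @ (x @ deployed - y) / len(x)  # least-squares step

print(f"parameter distance after 200 online steps: "
      f"{np.linalg.norm(deployed - attacker):.2f}")
```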
Addendum for the future: Concepts like agents, agency, and choice only make sense at the systemic macroscale. If you had total atomic knowledge (complete knowledge of every single particle interaction in a human - which, as we discussed, basically requires complete knowledge of every single particle interaction in the universe), the determinists are right. There is no choice. It's only neurons firing and chemicals bonding. But we operate at a higher level, with noise and uncertainty. Then preferences and policies make sense as things to talk about.
I've heard it quite a few times when discussing emergence and complexity topics.
Which period of "Chinese civilisation" are you referring to? I think it would be hard to point to any isolated "Chinese civilisation" just minding its own business and keeping a firm grip on a unified cultural and ethnic population. Over 3,500+ years of written history, the territory occupied by China today saw multiple periods of unity and division, sometimes splitting up into 10 or more states, often with multiple empires and dynasties coexisting in varying degrees of war and peace, with very loosely ruled areas in between. (This is IMO a central theme of Chinese history: the first line of the Romance of the Three Kingdoms reads "Of matters under heaven, we can say that what is long united must divide, what is long divided must unite". At various points the "Chinese Empire" looked more like the Holy Roman Empire, e.g. during the late Zhou dynasty leading into the Spring and Autumn period.)
The "chinese lands" were taken over by the Mongols and the Manchu during the Yuan and Qing dynasties (the latter one being the last dynasty before the 20th century), and at various points the borders of the Chinese empire would grow and shrink to encompass what we today recognise as Korea, Japan, South East Asia, Tibet... There are 56 recognised ethnic groups in China today. The importance and purpose of the Keju system also changed throughout the periods it was in use, and I have no idea where you got the eugenics thing from. I also think you would have a hard time building a case for any intentional or centralised control of scientific research beyond that of the European states at the time, mostly because the idea of scientific research is itself a very modern one (is alchemical research science?). As far as I can understand it you're taking the "vibe" of a strong, unified, centralised state that people recognise today in the People's Republic of China and then stretching it backwards to create some kind of artificial historical throughline.
My best argument as to why coarse-graining and "going up a layer" when describing complex systems are necessary:
Often we hear a reductionist case against ideas like emergence which goes something like this: "If we could simply track all the particles in e.g. a human body, we'd be able to predict what they did perfectly with no need for larger-scale simplified models of organs, cells, minds, personalities etc.". However, this kind of total knowledge is actually impossible given the bounds of the computational power available to us.
First of all, when we attempt to track billions of particle interactions we very quickly end up with a chaotic system, where tiny errors in measuring and setting up initial states quickly compound into massive prediction errors. (A metaphor I like: you are "using up" the decimal points of your measurement. In a three-body system the first timestep depends mostly on the non-decimal portions of the starting velocity measurements; a few timesteps later, changing .15 to .16 makes a big difference; and by the 10,000th timestep the difference between a starting velocity of .15983849549 and .15983849548 is noticeable.) This is the classic problem with weather prediction.
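A quick demonstration, using the logistic map as a stand-in for the three-body system (same phenomenon, one line of maths): two starting measurements that agree to ten decimal places produce completely different trajectories within a few dozen steps.

```python
# The logistic map x -> 4x(1-x) is a one-line chaotic system. Start two
# trajectories from measurements that agree to ten decimal places:
a, b = 0.15983849549, 0.15983849548
for step in range(1, 101):
    a, b = 4 * a * (1 - a), 4 * b * (1 - b)
    if abs(a - b) > 0.1:
        print(f"trajectories diverged after {step} steps: a={a:.4f}, b={b:.4f}")
        break
```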
Second of all, tracking "every particle" means that the set of particles you need to track explodes outward from the system you're trying to monitor: first to the neighbouring particles the system interacts with, then to the neighbours of the neighbours, and so on. In the human case, you need to track every particle in the body, but also every particle the body touches or ingests (it could be a virus), and then the particles that those particles touch... This continues until you reach the point where "to understand the baking process of an apple pie you must first track the position of every particle in the universe".
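A back-of-the-envelope sketch of the blow-up, assuming particles sit on a 3D lattice and each interaction step adds one shell of neighbours (the naive neighbours-of-neighbours count grows even faster, exponentially):

```python
# Particles on a 3D lattice: the cube of points reachable within k
# interaction steps. Even this generous, overlap-heavy estimate blows up.
for k in (1, 10, 100, 1_000, 10_000):
    in_scope = (2 * k + 1) ** 3   # lattice points within k hops of the origin
    print(f"{k:6d} interaction steps -> ~{in_scope:.1e} particles to track")
```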
The emergence/systems solution to both problems is essentially to go up a level. Instead of tracking particles, you track cells, organs, individual humans, systems, etc. At each level (following Erik Hoel's Causal Emergence framework) you trade microscale precision for predictive power, i.e. the size of the system you can predict for a given amount of computational power. Often this means collapsing large amounts of microscale interaction into random noise: a slot machine could in theory be deterministically predicted by tracking every element in the randomiser mechanism/chip, but in practice it's easier to model as a machine with an output distribution set by the operating company. Similarly, we trade Feynman diagrams for Brownian motion and Langevin dynamics.
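A minimal illustration of the trade, with invented payout numbers: the microscale model of the slot machine is exactly predictable but useless without the machine's hidden internal state, while the macroscale model throws that state away and keeps only the output distribution.

```python
import random

# Microscale model: exactly predictable, but only if you know the machine's
# hidden internal state (here, a PRNG seed standing in for the mechanism).
machine_state = random.Random(12345)   # hypothetical internal state
def micro_spin(bet=1.0):
    return 10.0 * bet if machine_state.random() < 0.02 else 0.0

# Macroscale model: discard the internal state and keep only the output
# distribution (an invented 2% chance of a 10x payout).
def macro_expected_payout(bet=1.0):
    return 0.02 * 10.0 * bet

spins = [micro_spin() for _ in range(100_000)]
print(f"microscale empirical mean payout: {sum(spins) / len(spins):.3f}")
print(f"macroscale model's prediction:    {macro_expected_payout():.3f}")
```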