testingthewaters
testingthewaters's Shortform
testingthewaters · 15d

My best argument as to why coarse-graining and "going up a layer" when describing complex systems are necessary:

Often we hear a reductionist case against ideas like emergence which goes something like this: "If we could simply track all the particles in e.g. a human body, we'd be able to predict what they did perfectly with no need for larger-scale simplified models of organs, cells, minds, personalities etc.". However, this kind of total knowledge is actually impossible given the bounds of the computational power available to us.

  • First, when we attempt to track billions of particle interactions we very quickly end up with a chaotic system, in which tiny errors in measurement and in setting up the initial state compound into massive prediction errors. (A metaphor I like is that you're "using up" the decimal places in your measurements: in a three-body system the first timestep depends mostly on the non-decimal portions of the starting velocity measurements. A few timesteps in, changing .15 to .16 makes a big difference, and by the 10,000th timestep the difference between a starting velocity of .15983849549 and .15983849548 is noticeable.) This is the classic problem with weather prediction.

  • Second, tracking "every particle" means that the set of particles you need to track explodes beyond the system you're trying to monitor into that system's interactions with neighbouring particles, then the neighbours of neighbours, and so on. In the human case, you need to track every particle in the body, but also every particle the body touches or ingests (it could be a virus), and then the particles that those particles touch... This continues until you reach the point where "to understand the baking process of an apple pie you must first track the position of every particle in the universe".
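A toy illustration of the first point, "using up" the decimal places (my own sketch, using the logistic map as a stand-in chaotic system rather than a true three-body simulation): two trajectories that agree to ten decimal places end up completely unrelated.

```python
# Sensitive dependence on initial conditions in the logistic map,
# x_{n+1} = r * x_n * (1 - x_n) with r = 4 (the fully chaotic regime).

def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.15983849549)
b = logistic_trajectory(0.15983849548)  # differs only in the 11th decimal place

diffs = [abs(x - y) for x, y in zip(a, b)]
print(f"difference at step 5: {diffs[5]:.2e}")  # still tiny
print(f"largest difference:   {max(diffs):.2f}")  # order 1: trajectories unrelated
```

The gap roughly doubles each step, so within a few dozen iterations the eleventh decimal place of the starting value dominates the prediction.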

The emergence/systems solution to both problems is essentially to go up a level. Instead of tracking particles, you track cells, organs, individual humans, systems, etc. At each level (following Erik Hoel's Causal Emergence framework) you trade microscale precision for predictive power, i.e. the size of the system you can predict for a given amount of computational power. Often this means collapsing large numbers of microscale interactions into random noise: a slot machine could in theory be deterministically predicted by tracking every element in the randomiser mechanism/chip, but in practice it's easier to model as a machine with an output distribution set by the operating company. Similarly, we trade Feynman diagrams for Brownian motion and Langevin dynamics.
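The slot-machine point can be sketched in code (a toy of my own construction, with made-up constants, not any real machine): a fully deterministic microscale mechanism sits next to a one-number macroscale model that predicts its long-run behaviour just as well.

```python
# Microscale: a deterministic "slot machine" driven by a linear congruential
# generator. Given the exact internal state, every spin is predictable.
def spin(state):
    state = (1664525 * state + 1013904223) % 2**32  # LCG step
    payout = ((state >> 16) % 100) < 5              # pays out on ~5% of spins
    return state, payout

# Macroscale: throw away the mechanism, keep only the output distribution.
MACRO_PAYOUT_RATE = 0.05

state, wins, spins = 12345, 0, 100_000
for _ in range(spins):
    state, payout = spin(state)
    wins += payout

print(f"microscale payout rate: {wins / spins:.3f}")  # close to 0.05
print(f"macroscale model:       {MACRO_PAYOUT_RATE:.3f}")
```

Without access to the internal state, the single-parameter macroscale model is the better trade: it predicts long-run behaviour at a tiny fraction of the modelling cost.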

leogao's Shortform
testingthewaters · 7h

Some books you might like to read:

  • Seeing Like a State by James C Scott (I've read most of it, I liked it)
  • Bullshit Jobs, The Dawn of Everything (co-written with David Wengrow), and most other books by David Graeber (I've read and liked long extracts of his work)
  • The End: Hitler's Germany 1944–45 by Sir Ian Kershaw (I've read all of it and found it very valuable as a complete picture of a society melting down)
  • Open Letters by Václav Havel (I've read a lot of it, I like it a lot. He was a famous dissident under the communist regime and later the last president of Czechoslovakia and the first president of the Czech Republic, and his writing sketches out both what he found soul-destroying about that system and what he thought were the principles of good societies)
  • System Effects: Complexity in Political and Social Life by Robert Jervis (I'm reading this now, very good case studies about non-obvious phenomena in international relations)
  • Broken Code: Inside Facebook and the fight to expose its toxic secrets by Jeff Horwitz (Very good book about how social media platforms like Facebook shape and are shaped by modern civilisation)

All of these books to various degrees tackle the things you are describing from a holistic perspective. Hope this helps.

eggsyntax's Shortform
testingthewaters · 3d

It loads past conversations (or parts of them) into context, so it could change behaviour.

testingthewaters's Shortform
testingthewaters · 9d

A lesson from the book System Effects: Complexity in Political and Social Life by Robert Jervis, and also from the book The Trading Game: A Confession by Gary Stevenson.

When people talk about planning for the future, there is often a thought chain like this:

  • All other things being equal, a world with thing/organisation/project X is preferable compared to a world without thing/organisation/project X
  • Therefore, I should try to make X happen
  • I will form a theory of change and start to work at making X happen

But of course the moment you start working at making X happen you have already destroyed the premise. There are no longer two equal worlds held in expectation, one with X and one with no X. There is now the world without X (in the past), and the world where you are trying to make X happen (the present). And very often the path to attaining X creates a world much less preferable for you than the world before you started, long before you reach X itself.

For example:

  • I can see a lucrative trade opportunity where by the end of five months, the price for some commodity will settle at a new, higher point which I can forecast clearly. All other things being equal, if I take this trade I will make a lot of money.
  • Therefore, I should try and make this trade.
  • I will take out a large position, and double down if in the interim the price moves in the "wrong" direction.

However, the price can be much more volatile than you expect, especially if you are taking out big positions in a relatively illiquid market. Thus you may find that three months in, your paper losses are so large that you hit your pain threshold and back out of the trade for fear that your original prediction was wrong. At the end of the five months the price may settle exactly where you predicted, but all you did was lose a large sum of money in the interim.

For another example:

  • All other things being equal, a world with an awareness of potential race dynamics around AGI is preferable compared to a world without such an awareness.
  • Therefore, I should try to raise awareness of race dynamics.
  • I will write a piece about race dynamics and make my arguments very persuasive, to increase the world's awareness of this issue.

Of course, in the process of trying to raise awareness of this issue, you might first create a world where a small subset of the population (mostly policy and AI people) is suddenly very clued-in to the possibility of race dynamics. These people are also in a very good position to create, maintain, and capitalise on those dynamics (whether consciously or not), including using them to raise large amounts of cash. Now the risk of race dynamics is suddenly much larger than before, and the world is in a more precarious state.

There isn't really a foolproof way to get around this problem. However, one tactic might be to look at your theory of change, and instead of comparing the world state before and after the plan, look at the world state along each step of the path to change, and consciously weigh up the changes and tradeoffs at each step. If one of those steps looks like it would break a moral, social, or pain-related threshold, maybe reconsider that theory of change.

Addendum: I think this is also why systems/ecosystems/plans that rely on establishing positive or negative feedback loops are so powerful. They are set up so that each stage incrementally moves towards the goal; even if there are setbacks, you have room to fall back instead of breaching a pain threshold.

Jan_Kulveit's Shortform
testingthewaters · 11d

[I think this comment is too aggressive and I don't really want to shoulder an argument right now]

With apologies to @Garrett Baker .

[This comment is no longer endorsed by its author]
Jan_Kulveit's Shortform
testingthewaters · 12d

From the Mechanize blogpost:

"Nuclear weapons are orders of magnitude more powerful than conventional alternatives, which helps explain why many countries developed and continued to stockpile them despite international efforts to limit nuclear proliferation."

Yet the number of nuclear weapons in the world has decreased from its peak during the Cold War. Furthermore, we've somehow stopped ourselves from using them, which suggests that some amount of steering is possible.

With regards to the blogpost as a whole, humanity fits their picture best when it is uncoordinated and trapped in isolated clades, each locked in a Molochian Red Queen's race with the others, requiring people to rapidly adopt new tech if only to keep pace with their opponents in commerce or war. But this really isn't the only way we can organise ourselves. Many societies made do fairly well for long periods in isolation without "rising up the tech tree" (e.g. Japan after the Sengoku period).

And even if it is inevitable... You can stop a car going at 60 mph by gently applying the brakes or by ramming it into a wall. Even if stopping is "inevitable", it does not follow that the wall and the gentle deceleration are equally preferable for the humans inside.

Subliminal Learning, the Lottery-Ticket Hypothesis, and Mode Connectivity
testingthewaters · 13d

I think a working continual learning implementation would mess with convergence-based results, which assume a relatively fixed training data distribution and only vary the starting seeds. This is mostly because a continual learning system is constantly drifting "off the base distribution" and incorporating new data. In other words, the car model has seen data from places and distributions the attacker's base model never will.

testingthewaters's Shortform
testingthewaters · 14d

Addendum for the future: concepts like agents, agency, and choice only make sense at the systemic macroscale. If you had total atomic knowledge (complete knowledge of every single particle interaction in a human, which, as discussed above, basically requires complete knowledge of every single particle interaction in the universe), the determinists are right: there is no choice, only neurons firing and chemicals bonding. But we operate at a higher level, with noise and uncertainty, and at that level preferences and policies make sense as things to talk about.

testingthewaters's Shortform
testingthewaters · 15d

I've heard it quite a few times when discussing emergence and complexity topics.

Wei Dai's Shortform
testingthewaters · 16d

Which period of "Chinese civilisation" are you referring to? I think it would be hard to point to any isolated "Chinese civilisation" just minding its own business and keeping a firm grip on a unified cultural and ethnic population. Over 3500+ years of written history, the territory occupied by China today went through multiple periods of unity and division, sometimes splitting into 10 or more states, often with multiple empires and dynasties coexisting in various states of war and peace, and with very loosely ruled areas in between. (This is IMO a central theme of Chinese history: the first line of the Romance of the Three Kingdoms reads "Of matters under heaven, we can say that what is long united must divide, what is long divided must unite". At various points the "Chinese Empire" looked more like the Holy Roman Empire, e.g. during the late Zhou dynasty leading into the Spring and Autumn period.)

The "Chinese lands" were taken over by the Mongols and the Manchu during the Yuan and Qing dynasties (the latter being the last dynasty before the 20th century), and at various points the borders of the Chinese empire grew and shrank, at times encompassing or bordering what we today recognise as Korea, Japan, South East Asia, Tibet... There are 56 recognised ethnic groups in China today. The importance and purpose of the Keju examination system also changed throughout the periods it was in use, and I have no idea where you got the eugenics thing from. I also think you would have a hard time building a case for any intentional or centralised control of scientific research beyond that of the European states at the time, mostly because the idea of scientific research is itself a very modern one (is alchemical research science?). As far as I can tell, you're taking the "vibe" of a strong, unified, centralised state that people recognise today in the People's Republic of China and stretching it backwards to create some kind of artificial historical throughline.

Posts
  • On the Function of Faith in A Probably-Simulated Universe (−8 points, 2mo, 12 comments)
  • Do model evaluations fall prey to the Good(er) Regulator Theorem? (6 points, 2mo, 1 comment)
  • I am worried about near-term non-LLM AI developments (252 points, 3mo, 56 comments)
  • A Letter to His Highness Louis XV, the King of France (2 points, 6mo, 0 comments)
  • The Fork in the Road (14 points, 7mo, 12 comments)
  • testingthewaters's Shortform (6 points, 8mo, 26 comments)
  • A concise definition of what it means to win (4 points, 9mo, 1 comment)
  • The Monster in Our Heads (35 points, 9mo, 4 comments)
  • Some Comments on Recent AI Safety Developments (13 points, 1y, 1 comment)
  • Changing the Mind of an LLM (2 points, 1y, 0 comments)