LESSWRONG

testingthewaters

Comments
testingthewaters's Shortform
testingthewaters · 1mo · 116

My best argument as to why coarse-graining and "going up a layer" when describing complex systems are necessary:

Often we hear a reductionist case against ideas like emergence which goes something like this: "If we could simply track all the particles in e.g. a human body, we'd be able to predict what they did perfectly with no need for larger-scale simplified models of organs, cells, minds, personalities etc.". However, this kind of total knowledge is actually impossible given the bounds of the computational power available to us.

  • First of all, when we attempt to track billions of particle interactions we very quickly end up with a chaotic system, in which tiny errors in measurement and in setting up initial states compound into massive prediction errors. (A metaphor I like is that you're "using up" the decimal places in your measurement: in a three-body system the first timestep depends mostly on the non-decimal portions of the starting velocity measurements, a few timesteps in changing .15 to .16 makes a big difference, and by the 10000th timestep the difference between a starting velocity of .15983849549 and .15983849548 is noticeable.) This is the classic problem with weather prediction.

  • Second of all, tracking "every particle" means that the set of particles you need to track explodes beyond the system you're trying to monitor: it grows to include the system's interactions with neighbouring particles, then the neighbours of neighbours, and so on. In the human case, you need to track every particle in the body, but also every particle the body touches or ingests (it could be a virus), and then the particles those particles touch... This continues until you reach the point where "to understand the baking process of an apple pie you must first track the position of every particle in the universe".
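The "using up decimal places" point in the first bullet can be illustrated with a toy chaotic system. The sketch below uses the logistic map rather than the three-body problem (the three-body system is harder to simulate in a few lines, but behaves analogously): two trajectories whose starting values agree to ten decimal places decorrelate within a few dozen steps.

```python
# Toy illustration of sensitive dependence on initial conditions,
# using the logistic map x -> 4x(1-x), which is chaotic at r = 4.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting values that agree to 10 decimal places, echoing the
# .15983849549 vs .15983849548 example in the text.
a = logistic_trajectory(0.15983849549, 100)
b = logistic_trajectory(0.15983849548, 100)

# Gap between the two trajectories at each timestep.
divergence = [abs(x - y) for x, y in zip(a, b)]

print(f"initial gap:  {divergence[0]:.1e}")   # ~1e-11
print(f"max gap seen: {max(divergence):.3f}") # order-1: the trajectories decorrelate
```

The initial gap of roughly 1e-11 grows by a factor of up to 4 per step, so by the end of 100 steps the two trajectories are effectively unrelated; no realistic measurement precision survives that amplification.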

The emergence/systems solution to both problems is essentially to go up a level. Instead of tracking particles, you track cells, organs, individual humans, systems, etc. At each level (following Erik Hoel's causal emergence framework) you trade microscale precision for predictive power, i.e. the size of the system you can predict with a given amount of computational power. Often this means collapsing large numbers of microscale interactions into random noise: a slot machine could in theory be deterministically predicted by tracking every element in the randomiser mechanism/chip, but in practice it's easier to model as a machine with an output distribution set by the operating company. Similarly, we trade Feynman diagrams for Brownian motion and Langevin dynamics.
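The slot-machine example can be sketched directly: rather than simulating the randomiser mechanism element by element, the coarse-grained model is just a payout distribution, and macro-level quantities like expected payout fall out of it cheaply. The payout table below is invented purely for illustration.

```python
import random

# Coarse-grained model of a slot machine: ignore the deterministic
# randomiser mechanism entirely and treat the machine as a fixed
# output distribution (payout -> probability). These numbers are
# made up for illustration.
payout_dist = {0: 0.90, 5: 0.08, 100: 0.02}

# Macro-level quantities are cheap to compute from the coarse model:
expected_payout = sum(p * prob for p, prob in payout_dist.items())
print(f"expected payout per spin: ${expected_payout:.2f}")  # -> $2.40

def spin(rng=random):
    """Sample one play from the coarse-grained model."""
    return rng.choices(list(payout_dist), weights=list(payout_dist.values()))[0]
```

The microscale detail is deliberately discarded; what remains is exactly the information the operating company (or a gambler) needs to predict the machine's aggregate behaviour.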

Mo Putera's Shortform
testingthewaters · 3d · 20

Yeah, the next level of the question is something like: "we can prove something to a small circle of experts; now how do we communicate the reasoning and the implications to policymakers, interested parties, and the public in general?"

Mo Putera's Shortform
testingthewaters · 4d · 20

To be honest, this makes me quite worried. Suppose that someone working with mathematical methods proves something of dire importance to society (let's say he comes up with a definitive formula for the probability of disaster in a given year, or the minimum conditions for AI takeoff). How will this be communicated to other mathematicians, much less the public?

Book Review: How Minds Change
testingthewaters · 5d · 20

Great review and post; it leaves me with a lot more hope for positive, non-coercive, non-guilting/brow-beating change in beliefs. I read the book before reading your review and agree with your summary, and I would go so far as to thank you for raising/summarising points made in the book that I didn't pick up during my own read-through. At this point I have a pretty firm belief that (as they say in Inception) positive motivation is stronger than negative motivation, at least for long-term, intentional activities like cultivating an open attitude to facts and reason in oneself.

Jemist's Shortform
testingthewaters · 7d · 40

See also this paper on plasticity as dual to empowerment: https://arxiv.org/pdf/2505.10361v2

Leaving Open Philanthropy, going to Anthropic
testingthewaters · 9d* · 1610

Um, I really like a lot of your writing. But I think the parts of your post that are in bold paint a very different picture to the parts that aren't in bold.

Mo Putera's Shortform
testingthewaters · 12d* · 40

That would be a pleasant fantasy for people who cannot abide the notion that history depends on small little changes or that people can really be different from other people.

I think both of those are true, but it does not follow that history is made of individuals solving individual math problems and pushing out papers that get stacked into an intellectual Tower of Babel. History, as far as I can see, is made of systems or ensembles of people moving around in different configurations.

Yudkowsky couldn't do what he did without E. T. Jaynes, who in turn relied on the progenitors of probability and rationality, including Thomas Bayes and William of Ockham. But he was also influenced "sideways" by the people he learned from and defined himself against: the people on SL4, the people he called idiots, the venture capitalists he once idolised for their competence, Peter Thiel, Demis Hassabis, and his family. They shaped (at the very least) his emotional worldview, which in turn shapes how he takes in information and integrates it at a deep and fundamental level. The same is true for any human who lives in a society. When I write anything I can feel the hands of writers past and present shaping my action space. They shape both what I write about and how I choose to write.

So yes, if he were gone everything would be different. But it would also be the same: people would love and fight and struggle and cooperate. The sameness of trends manifests at a higher level of coarse-graining, the level where the systemic forces and the long dreams and Molochian demons live. And none of this diminishes what he did, does, will do, or could have done. It's just the way things are, because we can't run randomised controlled trials on society.

testingthewaters's Shortform
testingthewaters · 13d · 20

No worries at all, I know I've had my fair share of bitter moments around AI as well. I hope you have a nice rest of your day :)

testingthewaters's Shortform
testingthewaters · 13d · 20

For what it's worth, I'm not white and I come primarily from an AI ethics background; my formal training is in the humanities. I do think it's sad that people only fret about bias the moment it affects them, however, and I would rather the issue be taken seriously from the start.

testingthewaters's Shortform
testingthewaters · 13d · 152

There appears to be a distaste for, or disregard of, AI ethics research (here mostly referring to bias and discrimination work) on LW. Generally the idea is that such research misses the point, or is not focused on the correct kind of misalignment (i.e. the existential kind). I think AI ethics research is important (beyond its real-world implications) for the same reason RL reward hacking in video game settings is: in both cases we are showing that models learn unintended priorities, behaviours, and tendencies from the training process. Actually understanding how these tendencies form during training will be important for improving our understanding of SL and RL more generally.

Posts

-8 · On the Function of Faith in A Probably-Simulated Universe · 3mo · 12
6 · Do model evaluations fall prey to the Good(er) Regulator Theorem? · 3mo · 1
255 · I am worried about near-term non-LLM AI developments · 3mo · 56
2 · A Letter to His Highness Louis XV, the King of France · 7mo · 0
14 · The Fork in the Road · 8mo · 12
0 · testingthewaters's Shortform · 9mo · 34
4 · A concise definition of what it means to win · 10mo · 1
35 · The Monster in Our Heads · 10mo · 4
13 · Some Comments on Recent AI Safety Developments · 1y · 1
2 · Changing the Mind of an LLM · 1y · 0