Charlie Steiner

LW1.0 username Manfred. Day job is condensed matter physics, hobby is thinking I know how to assign anthropic probabilities.

Sequences

Philosophy Corner

Comments

Comparing Utilities

One thing I'd also ask about is: what about ecology / iterated games? I'm not very sure at all whether there are relevant iterated games here, so I'm curious what you think.

How about an ecology where there are both people and communities - the communities have different aggregation rules, and the people can join different communities. There's some set of options that are chosen by the communities, but it's the people who actually care about what option gets chosen and choose how to move between communities based on what happens with the options - the communities just choose their aggregation rule to get lots of people to join them.

How can we set up this game so that interesting behavior emerges? Well, people shouldn't just seek out the community that most closely matches their own preferences, because then everyone would fracture into communities of size 1. Instead, there must be some benefit to being in a community. I have two ideas about this: one is that the people could care to some extent about what happens in all communities, so they will join a community if they think they can shift its preferences on the important things while conceding the unimportant things. Another is that there could be some crude advantage to being in a community - something like a scaling term (monotonically increasing with community size) on how effective it is at satisfying its people's preferences.
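For concreteness, here's a minimal toy simulation of what I mean - all the specific numbers, the one-dimensional "option," and the particular aggregation rules are stand-ins I made up just to have something runnable, not a claim about the right model:

```python
# Toy ecology: people with fixed preferences choose among communities whose
# aggregation rules (mean, median, RMS here - arbitrary choices) decide a
# one-dimensional "option". People care a little about every community's
# option and get a bonus that grows with the size of their own community.
import numpy as np

rng = np.random.default_rng(0)

N_PEOPLE, N_COMMUNITIES, ROUNDS = 60, 3, 30
SIZE_BONUS = 0.02    # crude advantage per extra member of your own community
OUTSIDE_CARE = 0.2   # how much you care about communities you aren't in

preferences = rng.uniform(0, 1, N_PEOPLE)           # each person's ideal option
membership = rng.integers(0, N_COMMUNITIES, N_PEOPLE)
rules = [np.mean, np.median, lambda x: np.mean(x**2) ** 0.5]

def community_options(membership):
    """Each community aggregates its members' preferences into one option."""
    opts = np.zeros(N_COMMUNITIES)
    for c in range(N_COMMUNITIES):
        members = preferences[membership == c]
        opts[c] = rules[c](members) if len(members) else 0.5
    return opts

def utility(person, community, membership):
    """Utility of `person` if they joined `community`, with others held fixed."""
    trial = membership.copy()
    trial[person] = community
    opts = community_options(trial)
    sizes = np.bincount(trial, minlength=N_COMMUNITIES)
    weights = np.full(N_COMMUNITIES, OUTSIDE_CARE)   # care about every option...
    weights[community] = 1.0                         # ...but most about your own
    misfit = np.sum(weights * np.abs(opts - preferences[person]))
    return -misfit + SIZE_BONUS * sizes[community]

for _ in range(ROUNDS):
    for person in rng.permutation(N_PEOPLE):
        scores = [utility(person, c, membership) for c in range(N_COMMUNITIES)]
        membership[person] = int(np.argmax(scores))

print("final community sizes:", np.bincount(membership, minlength=N_COMMUNITIES))
print("final options:", np.round(community_options(membership), 3))
```

Depending on how SIZE_BONUS and OUTSIDE_CARE are set, you either get fragmentation into tiny like-minded communities or consolidation into a few big ones, which is the tradeoff I'm trying to point at.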

What happens if you drink acetone?

I'm curious about the comparison to drinking isopropyl alcohol (rubbing alcohol) instead, which is gradually metabolized into acetone (the actual psychoactive ingredient) inside the body. If you drink the same amount, the gradual route seems safer, but I'm not sure whether it actually has a bigger margin between the active dose and the LD50 (or between the active dose and severe gastrointestinal inflammation).

Egan's Theorem?

Right, it's a little tricky to specify exactly what this "relationship" is. Is the notion that you should be able to compress the approximate model, given an oracle for the code of the best one (i.e. that they share pieces)? Most Turing machines don't compress well, so it's easy to find counterexamples (the most straightforward class is where the approximate model is already extremely simple).

Anyhow, like I said, hard to capture the spirit of the problem. But when I *do* try to formalize the problem, it tends to not have the property, which is definitely driving my intuition.

Egan's Theorem?

If by "account for that" you mean not be in direct conflict with earlier sense data, then sure. All tautologies about the data will continue to be true. Suppose some data can be predicted by classical mechanics with 75% accuracy. This is a tautology given the data itself, and no future theory will somehow make classical mechanics stop giving 75% accurate predictions for that past data.

Maybe that's all you meant?

I'd sort of interpreted you as asking questions about properties of the theory. E.g. "this data is really well explained by the classical mechanics of point particles, therefore any future theory should have a particularly simple relationship to the point particle ontology." It seems like there shouldn't be a guaranteed relationship that's much simpler than reconstructing the data and recomputing the inferred point particles.

I spent a little while trying to phrase this in terms of Turing machines but I don't think I quite managed to capture the spirit.

Egan's Theorem?

The answer to the question you actually asked is no, there is no ironclad guarantee of properties continuing, nor any guarantee that there will be a simple mapping between theories. With some effort you can construct some perverse Turing machines with bad behavior.

But the answer to the more general question is yes, simple properties can be expected (in a probabilistic sense) to generalize even if the model is incomplete. This is basically Minimum Message Length prediction, which you can put on a theoretical footing via the Solomonoff prior (it's somewhere in Li and Vitanyi - chapter 5?).
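To spell out the criterion I have in mind (a standard two-part-code formulation, not a quote from the book): MML prediction picks the hypothesis

$$\hat{H} \;=\; \arg\min_{H}\,\big[\,L(H) + L(D \mid H)\,\big],$$

where $L(H)$ is the length of the code for the hypothesis and $L(D \mid H)$ is the length of the data encoded with the hypothesis's help. With a prior $P(H) \propto 2^{-L(H)}$, minimizing total message length is the same as maximizing the posterior, which is where the connection to the Solomonoff prior comes from.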

Sunday September 6, 12pm (PT) — Casual hanging out with the LessWrong community

Looks like nobody showed up - must be because gathertown is actually sufficiently stable for use now.

Li and Vitanyi's bad scholarship

Well, yes, it's not a perfect summary. I have no idea why they'd say Popper was working on Bayesianism - unless maybe "the problem" in that clause was the problem of induction, and something got lost in an edit.

But sometimes nitpicks aren't that important. Like, for example, it's spelled Vitanyi. But this isn't really a crushing refutation of your post (though it is a very convenient illustration). You shouldn't sweat this too much, because their textbook really is worth reading if you want to learn about algorithmic information theory.

Sunday September 6, 12pm (PT) — Casual hanging out with the LessWrong community

Actually, is it okay if I'm in charge of the Zoom call? I would like to set up one with different rooms and cohostify people, so it's not everyone locked in together.

Introduction To The Infra-Bayesianism Sequence

Could you defend worst-case reasoning a little more? Worst cases can be arbitrarily different from the average case - so maybe having worst-case guarantees can be reassuring, but actually choosing policies by explicit reference to the worst case seems suspicious. (In the human context, the worst case might be that I have a stroke in the next few seconds and die, but I'm not in the business of picking policies by how they do in that case.)

You might say "we don't have an average case," but if there are possible hypotheses outside your considered space you don't have the worst case either - the problem of estimating a property of a non-realizable hypothesis space is simplified, but not gone.
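To make the first point concrete, here's a throwaway example (all the payoffs and probabilities are invented purely to make the point) of how a maximin rule and an expected-utility rule can recommend different policies:

```python
# Toy illustration: worst-case (maximin) and expected-utility choices can come
# apart. Hypotheses, probabilities, and payoffs are all made up.
hypotheses = {"ordinary world": 0.999, "stroke in the next few seconds": 0.001}

# payoffs[policy][hypothesis]
payoffs = {
    "go about your day":       {"ordinary world": 100.0, "stroke in the next few seconds": -1000.0},
    "lie down and do nothing": {"ordinary world":   0.0, "stroke in the next few seconds":  -999.0},
}

def worst_case(policy):
    return min(payoffs[policy].values())

def expected(policy):
    return sum(p * payoffs[policy][h] for h, p in hypotheses.items())

for rule, score in [("maximin", worst_case), ("expected utility", expected)]:
    best = max(payoffs, key=score)
    print(f"{rule:>16}: pick '{best}'  (scores: " +
          ", ".join(f"{pol}={score(pol):.1f}" for pol in payoffs) + ")")
```

Here maximin picks "lie down and do nothing" because its worst case is slightly less bad, while expected utility picks "go about your day" - and by tuning the numbers you can make the gap between the two recommendations as large as you like.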

Anyhow, still looking forward to working my way through this series :)
