Thanks for writing this! I tried to find materials about monetary policy suitable for someone with zero prior knowledge, and this is the clearest introduction I have found.

It's good to know that I'm not going crazy in thinking that everyone else sees some obvious reason why statistical mechanics works while I don't, but it's a bit disappointing, I have to say.

Thanks for the link to the reference; the introduction was great and I'll dig deeper into it. If you know of ways to find more work in this area (keywords, authors, specific university departments), I'd be grateful if you could share them!

Thanks, the point about observers eventually agreeing makes sense. To make entropy truly observer-independent, we'd need a notion of how precise measurements can be in principle. Maybe it's less of a problem in quantum mechanics?

The phrase "in equilibrium" seems to be doing a lot of work here. This would make sense to me if there were general theorems saying that systems evolve towards equilibrium - presumably there are?

My understanding is that by "interpretability" you mean "we can attach human-meaningful values to internal nodes of the model"[1].

My guess is that logical/probabilistic models are regarded as more interpretable than DNNs mostly for two reasons:

  • They tend to have a small number of inputs, and the inputs are heavily engineered features (so the inputs themselves are already meaningful to people).
  • Internal nodes combine features in quite simple ways, particularly when the number of inputs is small (the meaning of the inputs cannot be distorted/diluted too much in the internal nodes, if you will) - see the toy sketch below.
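
As a toy sketch of both points (made-up data, with sklearn used just as one concrete example):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Four hand-engineered, human-meaningful features (hypothetical toy data):
FEATURES = ["has_whiskers", "has_pointy_ears", "barks", "weight_kg"]
X = [
    [1, 1, 0, 4], [1, 1, 0, 5], [1, 0, 0, 3],     # cats
    [0, 0, 1, 20], [1, 0, 1, 15], [0, 1, 1, 30],  # dogs
]
y = ["cat", "cat", "cat", "dog", "dog", "dog"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every internal node is a single threshold on an already-meaningful
# feature, so the whole model prints as a short list of rules.
print(export_text(tree, feature_names=FEATURES))
```

On data like this the fitted tree typically collapses to one or two threshold rules (e.g. "barks <= 0.5 -> cat"), which is exactly the sense in which such models stay legible.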

I think what you are saying is: let's assume that the inputs are not engineered and have no high-level meaning to people (e.g. raw pixel data), but the output does - e.g. it detects cats in pictures. The question is: can we find parts of the model which correspond to human-understandable categories (e.g. an ear detector)?

In this case, I agree this seems equally hard regardless of the model, holding complexity constant. I just wouldn't call the hardness of this specific task "uninterpretability".

[1] Is that definition standard? I'm not a fan; I'd go with something closer to "interpretable model" = "model humans can reason about, other than by running black-box experiments on the model".

That seems intuitively right for unexamined or superficially examined lies; my point was mostly that if the liar is pressed hard enough, they're going to get outcomputed, since they have a much harder problem to solve - constructing a self-consistent counterfactual world versus merely verifying self-consistency.

Interestingly, a large quantity of unexamined lies changes the balance - it's cheap for liars to add a new lie to the existing ones but hard for an honest person to determine what is true, so the computational burden shifts away from the liars. (We need to assume that getting caught in a lie is a low-consequence event, and probably a bunch of other things I'm forgetting, but I hope the point makes sense.)

I've heard someone refer to this as the Bullshit Asymmetry problem: refuting low-effort lies (a.k.a. bullshit) is harder than generating them.

I have never seen a good explanation of why statistical mechanics produces good experimental predictions (in the classical regime). Let me try to explain why I find the fact that it predicts experimental outcomes so well weird and unintuitive.

Statistical mechanics makes sense to me as a mathematical theory - I can follow (uh, mostly) the derivations of model properties, given the assumptions, etc. It's the assumptions relating the theory to reality that bother me.

These are usually of the form "if we have a system with given macroscopic constraints, we'll assume the microstate distribution which maximises entropy". I understand the need for an assumption of this form in a theory which tries to predict the behavior of systems without full knowledge of their state. Still, there are some weird things about it:

  • The maximum entropy assumption gives us a probability distribution over microstates which has a sensible interpretation for a finite number of states (it's just the uniform distribution), but in the continuous case it makes no sense to me - you need to assume extra structure on your space (a reference measure, I think?) and I don't see a natural choice (see the calculation after this list).
  • The probability distribution over microstates is observer-dependent, and that makes sense - someone else may know much more about the system than I do. But reality doesn't feel observer-dependent: if you take a box filled with gas and measure the kinetic energies of individual particles, you'll get the distribution predicted by the maximum entropy assumption. There must be a general argument for why real systems tend to behave that way, surely? (The toy simulation below gestures at one.)
  • The definition of temperature depends on entropy, which depends on the observer. What do thermometers measure, then? Is it correct to say they measure the quantity we define via entropy? When are the two equivalent?
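
For the first bullet, here's the standard Lagrange-multiplier calculation, plus the usual patch for the continuous case, as I understand it (my sketch, not from any particular textbook):

```latex
% Discrete case: maximize S = -\sum_i p_i \log p_i subject to \sum_i p_i = 1.
\mathcal{L} = -\sum_{i=1}^{n} p_i \log p_i + \lambda\Big(\sum_{i=1}^{n} p_i - 1\Big),
\qquad
\frac{\partial\mathcal{L}}{\partial p_i} = -\log p_i - 1 + \lambda = 0
\;\Rightarrow\; p_i = e^{\lambda-1} = \frac{1}{n}.

% Continuous case: differential entropy -\int p(x)\log p(x)\,dx is not
% invariant under reparametrization, so the well-defined object is relative
% entropy with respect to a reference measure m:
S[p] = -\int p(x)\,\log\frac{p(x)}{m(x)}\,dx .
```

So the extra structure is a reference measure rather than a metric, and in classical statistical mechanics the standard choice is the Liouville measure dq dp on phase space - singled out by being invariant under Hamiltonian time evolution, which is about as natural a choice as one could hope for.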

I'm super confused about this and struggling to make progress; most textbooks I've seen either don't tackle these issues or give some hand-wavy explanation of why there's nothing to worry about.
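
On the second bullet, the closest thing I have to reassurance is that even a crude toy dynamics reproduces the maximum-entropy prediction. A minimal sketch (my own toy model - random pairwise energy exchange between particles, not a real gas):

```python
import math
import random

# Toy "gas": N particles exchange energy in random pairwise collisions.
# Each collision conserves the pair's total energy and splits it uniformly
# at random.  The stationary energy distribution is exponential (Boltzmann),
# i.e. the maximum-entropy distribution at fixed mean energy, even though
# nothing in the dynamics "knows" about entropy.
random.seed(1)
N, STEPS = 10_000, 1_000_000
energy = [1.0] * N                      # start far from equilibrium

for _ in range(STEPS):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    total = energy[i] + energy[j]
    split = random.random()
    energy[i], energy[j] = split * total, (1 - split) * total

# Compare the empirical tail P(E > e) with the maximum-entropy prediction
# exp(-e / <E>); the mean energy <E> = 1 is conserved by construction.
mean = sum(energy) / N
for e in (0.5, 1.0, 2.0, 4.0):
    empirical = sum(E > e for E in energy) / N
    print(f"P(E > {e}) empirical = {empirical:.3f}, maxent = {math.exp(-e / mean):.3f}")
```

Energy conservation is the only physics in there, yet the tail matches exp(-e/<E>) closely - I suspect some version of "repeated mixing drives you to maxent" is the general argument I'm after.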

This is kind of trivial, but for some reason it seems profound to me: the world (or reality, or whatever you want to call it) is self-consistent.

If someone is telling the truth, it's computationally cheap for them - they're just reporting events. If someone is lying, each probing question requires them to infer the consequences of their made-up events. And there are a lot of them. What's worse, all it takes for the lie to fall apart is a single inconsistency!

There's a point somewhere about memory being imperfect, etc., but the liar also has to know when to say "I don't remember" in a way that is consistent with what they said previously, and so on. I think the main point still stands, whatever the point is.

The hardness of lying seems connected to the impossibility of counterfactual worlds - you cannot take the state of the world at one point in time, modify it arbitrarily, and press play: the state from then on will be inconsistent with the historical states.
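
A toy version of the verification/construction gap (my sketch; facts as booleans, consistency as 3-SAT-style clauses, all names made up): checking a story is one linear pass over the constraints, while inventing a consistent alternative is a search problem.

```python
import itertools
import random

# A "world" assigns True/False to N facts; consistency is a set of
# 3-SAT-style clauses that the real world satisfies by construction.
random.seed(0)
N, N_CLAUSES = 14, 40
truth = [random.choice([False, True]) for _ in range(N)]

def satisfied(assignment, clause):
    return any(assignment[v] == want for v, want in clause)

clauses = []
while len(clauses) < N_CLAUSES:
    clause = [(v, random.choice([False, True])) for v in random.sample(range(N), 3)]
    if satisfied(truth, clause):        # keep only clauses true in reality
        clauses.append(clause)

def verify(story):
    # Honest listener: one linear pass over the constraints - cheap.
    return all(satisfied(story, c) for c in clauses)

def construct_lie():
    # Liar: wants to flip fact 0, but must exhibit SOME globally
    # consistent world - in the worst case a search over 2**N candidates.
    for bits in itertools.product([False, True], repeat=N):
        if bits[0] != truth[0] and verify(bits):
            return list(bits)
    return None                         # no consistent lie exists

print("true story verifies:", verify(truth))                  # O(clauses)
print("consistent lie found:", construct_lie() is not None)   # O(2**N * clauses)
```

The honest answer costs one lookup per question; the liar pays for a global search, and a single violated clause sinks the whole story.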

Your summary of the memory reconsolidation process in the context of writing reminded me of the How to learn soft skills post - soft-skills books tend to describe their main ideas through many stories which hopefully resonate with the reader (2nd point) and then try to update them by presenting the ideas through whatever lens that particular book wants you to apply (3rd point).