I think that apocalypse insurance isn't as satisfactory as you imply, and I'd like to explain why below.
First: what's a hardline libertarian? I'll say that a hardline libertarian is in favour of people doing stuff with markets, and there being courts that enforce laws that say you can't harm people in some small, pre-defined, clear-cut set of ways. So in this world you're not allowed to punch people but you are allowed to dress in ways other people don't like.
Why would you be a hardline libertarian? If you're me, the answer is that (a) markets and freedom etc are pretty good, (b) you need ground rules to make them good, and (c) government power tends to creep and expand in ill-advised ways, which is why you've got to somehow rein it in to only do a small set of clearly good things.
If you're a hardline libertarian for these reasons, you're kind of unsatisfied with this proposal, because it's sort of subjective - you're punishing people not because they've caused harm, but because you think they're going to cause harm. So how do you assess the damages? Without further details, it sounds like this is going to involve giving a bunch of discretion to a lawmaker to determine how to punish people - discretion that could easily be abused to punish a variety of activities that should thrive in a free society.
There's probably some version that works, if you have a way of figuring out which activities cause how much expected harm that's legibly rational in a way that's broadly agreeable. But that seems pretty far-off and hard. And in the interim, applying some hack that you think works doesn't seem very libertarian.
If two variables, I and O, both make one value of E more likely than the other, that means the probability of I conditional on some value of E is different from the unconditional probability of I, because I explains some of that value of E.
That's correct
but if you also know O, then this explains some of that value of E as well, and so P(I|E=x, O) should be different.
This is generically true but doesn't have to be true for every value of x.
Here's one way to see why the graph in the post is right: look at all other causal graphs, and you will see that they either fail to imply that I and O are independent (as our graph does), or imply independences or conditional independences that don't exist in the data.
I think they do correspond to the causal graph in the way described in the post. Your script simulates something more specific than the causal graph: data can fit the causal graph without being something your script could have generated.
It is a little odd that I is independent of O given not-E, since from that graph you wouldn't expect conditional independence, but I is not independent of O given E, so that's OK and consistent with the graph.
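To make that concrete, here's a minimal sketch with made-up numbers (not the ones from the post): I and O are independent fair coins, E depends on both, and both lower the chance of E. Conditioning on E = no leaves I and O independent, while conditioning on E = yes induces a dependence.

```python
# Toy example (my own numbers, not the post's): I -> E <- O with I, O
# marginally independent. P(E=no | I, O) factors as g(I) * h(O), so I and O
# stay independent given E = no, but not given E = yes.
import itertools

p_I = {0: 0.5, 1: 0.5}                      # P(I): internet use
p_O = {0: 0.5, 1: 0.5}                      # P(O): overweight
p_E_no = {(0, 0): 0.2, (0, 1): 0.4,         # P(E=no | I, O); both I and O
          (1, 0): 0.4, (1, 1): 0.8}         # make exercise less likely

joint = {}
for i, o, e in itertools.product([0, 1], repeat=3):
    p_e = p_E_no[(i, o)] if e == 0 else 1 - p_E_no[(i, o)]
    joint[(i, o, e)] = p_I[i] * p_O[o] * p_e

def p_I1_given(e, o=None):
    """P(I=1 | E=e), or P(I=1 | E=e, O=o) if o is given."""
    num = sum(p for (i, oo, ee), p in joint.items()
              if i == 1 and ee == e and (o is None or oo == o))
    den = sum(p for (i, oo, ee), p in joint.items()
              if ee == e and (o is None or oo == o))
    return num / den

for e in (0, 1):
    print(e, p_I1_given(e), p_I1_given(e, o=0), p_I1_given(e, o=1))
# e=0 (no exercise): all three agree, so I is independent of O given not-E.
# e=1 (exercise): they differ, so I and O are dependent given E.
```

This is also an instance of the "not every value of x" point above: P(I|E=x, O) differs from P(I|E=x) when x = yes but not when x = no.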
The data is generated in a way that doesn't show a correspondence to this graph if you follow a procedure identical to the one described in the post.
When I calculate the conditional probability tables you get from the table of numbers in the post and multiply them out to get the joint distribution, my answers basically match the table of numbers in the post.
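In case it's useful, here's the shape of that check, with placeholder numbers rather than the post's actual table: read off P(I), P(O), and P(E | I, O) from the joint table, multiply them back together, and compare cell by cell.

```python
# Sketch of the check described above (placeholder numbers, not the post's
# table): the graph I -> E <- O factorises the joint as P(I) * P(O) * P(E|I,O).
# Rebuilding the joint from those pieces and comparing it cell by cell to the
# original tells you whether the table is consistent with the graph.
import itertools

# joint[(i, o, e)] = P(I=i, O=o, E=e); made-up values that sum to 1.
joint = {
    (0, 0, 0): 0.05, (0, 0, 1): 0.20,
    (0, 1, 0): 0.10, (0, 1, 1): 0.15,
    (1, 0, 0): 0.10, (1, 0, 1): 0.15,
    (1, 1, 0): 0.20, (1, 1, 1): 0.05,
}

def marginal(index, value):
    return sum(p for key, p in joint.items() if key[index] == value)

def p_e_given_io(i, o, e):
    denom = sum(p for (ii, oo, _), p in joint.items() if (ii, oo) == (i, o))
    return joint[(i, o, e)] / denom

rebuilt = {
    (i, o, e): marginal(0, i) * marginal(1, o) * p_e_given_io(i, o, e)
    for i, o, e in itertools.product([0, 1], repeat=3)
}
print(all(abs(joint[k] - rebuilt[k]) < 1e-9 for k in joint))  # True here
```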
Eliezer says the data shows that Overweight and Internet both make exercise less likely.
I don't think he actually says that? He just says they both causally affect exercise:
Which says that weight and Internet use exert causal effects on exercise, but exercise doesn't causally affect either.
[EDIT: I'm totally wrong, he does say "we now realize that being overweight and spending time on the Internet both cause you to exercise less"]
Regarding your second point:
That would imply that P(O|I & not E) should be less than P(O|not E); and that P(I|O & not E) should be less than P(I|not E).
FWIW, I don't take the sentence "overweight and internet both make exercise less likely" to imply that - just to imply that p(E | O) < p(E | not O) and p(E | I) < p(E | not I). The interaction terms could be complicated.
He was probably kinda sleep deprived and rushed, which could explain inessential words being added.
Does this not essentially amount to just assuming that the inductive bias of neural networks in fact matches the prior that we (as humans) have about the world?
No? It amounts to assuming that smaller neural networks are a better match for the actual data generating process of the world.
One argument sketch using SLT that NNs are biased towards low-complexity solutions: suppose reality is generated by a width-3 network, and you're modelling it with a width-4 network. Then, along with the generic symmetries, optimal solutions also have continuous symmetries where you can switch which neuron is turned off.
Roughly, say neurons 3 and 4 have the same input weight vectors (so their activations are the same), but neuron 4's output weight vector is all zeros. Then you can continuously scale up the output vector of neuron 4 while simultaneously scaling down the output vector of neuron 3, leaving the network computing the same function. Also, when neuron 4 has all-zero input and output weights, you can arbitrarily change either its inputs or its outputs, but not both.
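Here's a little numpy sketch of that symmetry (my own toy setup, nothing from the post): a width-4 tanh network whose third and fourth neurons share input weights computes the same function however you split their combined output weight, so the optimum contains a whole continuous line of parameter settings.

```python
# Toy illustration (my own setup): when neurons 3 and 4 share input weights,
# only the sum of their output weights matters, so trading weight between them
# is a continuous symmetry of the network function.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))            # a few random 2-d inputs

W_in = rng.normal(size=(2, 4))         # width-4 hidden layer
W_in[:, 3] = W_in[:, 2]                # neurons 3 and 4 get identical inputs
b = rng.normal(size=4)
b[3] = b[2]
v = rng.normal(size=4)                 # output weights

def f(x, split):
    """Output with the combined weight v[2] + v[3] split as (1 - split, split)."""
    w = v.copy()
    total = v[2] + v[3]
    w[2], w[3] = (1 - split) * total, split * total
    return np.tanh(x @ W_in + b) @ w

print(np.allclose(f(x, 0.0), f(x, 0.5)))   # True
print(np.allclose(f(x, 0.0), f(x, 1.0)))   # True
```

At split = 0, neuron 4's output weight is zero, which is also the configuration where you can change its input weights freely without changing the function.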
Anyway, this means that when the data is generated by a slimmer-than-the-model neural net, optimal nets will have a good (low) RLCT, but when it's generated by a neural net of the model's full width, optimal nets will have a worse (higher) RLCT. So nets can learn simple data, and it's easier for them to learn simple data than complex data - assuming thin neural nets count as simple.
This is basically a justification of something like your point 1, but AFAICT it's closer to a proof in the SLT setting than in your setting.
The post gives one example of how it can be true: the probabilities are compatible with the causal graph; I is independent of O given E = no, but I is not independent of O given E = yes.
Have you tried this exercise?