Dweomite


The general point that you need to update on the evidence that failed to materialize is covered in the Sequences, and it's exactly where I expected you to go based on your introductory section.

When you "simulate random universes," what distribution are you randomizing over?

Seems like the simulations only help if you somehow already know the true probability distribution from which the actual universe was selected.

I think there's a subtle but important difference between saying that time travel can be represented by a DAG, and saying that you can compute legal time travel timelines using a DAG.

There's one possible story you can tell about time turners where the future "actually" affects the past, which is conceptually simple but non-causal.

There's also a second possible story you can tell about time turners where some process implementing the universe "imagines" a bunch of possible futures and then prunes the ones that aren't consistent with the time turner rules.  This computation is causal, and from the inside it's indistinguishable from the first story.

But if reality is like the second story, it seems very strange to me that the rules used for imagining and pruning just happen to implement the first story.  Why does it keep only the possible futures that look like time travel, if no actual time travel is occurring?

The first story is parsimonious in a way that the second story is not, because it supposes that the rules governing which timelines are allowed to exist are a result of how the timelines are implemented, rather than being an arbitrary restriction applied to a vastly-more-powerful architecture that could in principle have much more permissive rules.

So I think the first story can be criticized for being non-causal, and the second can be criticized for being non-parsimonious, and it's important to keep them in separate mental buckets so that you don't accidentally commit an equivocation fallacy, using the second story to defend against the first criticism and the first story to defend against the second.

Aside from the amount of fan-in, another difference that seems important to me is that a "normal" simulation is guaranteed to have exactly one continuation.  If you do the thing where you simulate a bunch of possible futures and then prune the contradictory ones then there's no intrinsic reason you couldn't end up with multiple self-consistent futures--or with zero!
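A toy way to see the zero-or-multiple-futures point (my own illustrative sketch, not anything from the original discussion): treat the universe's rule as a function from a candidate timeline to the timeline it actually produces, and call a candidate self-consistent when it's a fixed point of that rule.

```python
def self_consistent(candidates, rule):
    """Keep only candidate timelines that reproduce themselves under the rule."""
    return [t for t in candidates if rule(t) == t]

candidates = [0, 1, 2, 3]

# A rule that maps every timeline to itself: all four candidates survive.
many = self_consistent(candidates, lambda t: t)

# A rule with no fixed point (grandfather-paradox style): nothing survives.
none = self_consistent(candidates, lambda t: (t + 1) % 4)
```

Depending on the rule, the pruning step can hand you many self-consistent futures or none at all; nothing about the architecture guarantees exactly one.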

Bruce Schneier has posted something like a retraction on his blog, saying he focused on the comparisons to pandemics and nuclear war and not on the word "extinction".

The new Support icon looks less like a trash can than the one pictured in the OP, but still looks kinda like a trash can to me. Making it taller/narrower might help, or making the top/bottom pieces look more different from the body. Or maybe a 3D view that lets you see that the top is solid rather than hollow.

(Disclaimer: I am not an artist.)

Would it be possible to fix that by making the hover tooltip appear to the right when pointing at the right column and to the left when pointing at the left column?

I'm not sure whether this is helpful, but this reminds me of error-correcting codes (ECCs), a way of transmitting information through a noisy channel that trades bandwidth for reliability by encoding the intended message redundantly.

An explanation that I found helpful when learning about them was that you can think of a packet of N bits as specifying one corner of an N-dimensional hypercube, and an ECC as saying that you'll only intentionally transmit certain corners and not others.  If you select a subset of corners such that no allowed corner is adjacent to any other, then a 1-bit error will always land you on a disallowed corner, so the receiver will know an error occurred.  If all allowed corners are some distance from all other allowed corners, then you can guess the most likely intended corner based on distance from the corner received.
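As a concrete sketch of the hypercube picture (my own toy example, not from the original comment): take the 3-cube and allow only even-parity corners. No two allowed corners are adjacent, so any single bit flip lands on a disallowed corner and is detectable.

```python
from itertools import product

def hamming(a, b):
    """Number of coordinates where two corners of the hypercube differ."""
    return sum(x != y for x, y in zip(a, b))

# Allowed corners of the 3-cube: even-parity words (a toy 1-error-detecting code).
allowed = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0]

# The minimum distance between distinct allowed corners is 2,
# so no allowed corner is adjacent to any other.
min_dist = min(hamming(a, b) for a in allowed for b in allowed if a != b)
```

With minimum distance 2 you can detect a 1-bit error but not correct it; correction requires allowed corners to be spaced at least distance 3 apart.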

An XOR of all bits is maximally sensitive to noise, because every corner with a value of "1" is surrounded by corners with a value of "0" and vice-versa.  The corners corresponding to a given answer are maximally dispersed, so every single-bit error changes the result.
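A quick check of that claim (illustrative only): flipping any single bit of a word always flips the XOR of all its bits.

```python
from functools import reduce
from operator import xor

word = (1, 0, 1, 1, 0)
parity = reduce(xor, word)

# Flip each bit in turn; the XOR of the word changes every single time.
always_changes = all(
    reduce(xor, word[:i] + (1 - word[i],) + word[i + 1:]) != parity
    for i in range(len(word))
)
```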

The inverse of that strategy is to designate exactly-opposite corners as "0" and "1", and then map all the remaining corners by which of those they're closer to.  In other words, slice the hypercube in half, and then assign the same value to all corners in a given half.  (The "majority of bits" function does exactly this.)
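The majority function's robustness, in the same toy style (assuming an odd word length so ties can't occur): encoding a single bit as five copies tolerates up to two flipped bits, because the received word stays in the sender's half of the hypercube.

```python
def majority(bits):
    """Decode by which half of the hypercube the received word lies in."""
    return int(sum(bits) * 2 > len(bits))

sent = (1, 1, 1, 1, 1)       # the all-ones corner encodes "1"
received = (1, 0, 1, 0, 1)   # two bits corrupted in transit
decoded = majority(received)  # still lands in the "1" half
```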

I don't think I can personally convert that into an answer to the stated problem, but maybe it will give someone else a thread to pull on?

The cost I'd be concerned about is making the example significantly more complicated.

I'm also not sure the unintuitiveness is actually bad in this case.  I think there's value in understanding examples where your intuitions don't work, and I wouldn't want someone to walk away with the mistaken impression that the folk theorems only predict intuitive things.

The given example involves punishing behavior that is predicted to lower utility for all players, given the current strategies of all players.  Does that sound bad in any way at all?
