interstice


Comments

IMO the awkwardness of introducing an unnecessary new element to the theory's ontology, plus the fact that this new element interacts non-locally with the wave function, means that MWI is still simpler/more philosophically satisfying overall. I'm also not convinced that pilot-wave theory actually makes embedded agency any simpler.

On the linked page, I see "[anonymous]" for all values of the user field.

Noting that the author deleted a critical comment which was somewhat rude but IMO made some reasonable points. That's fair enough, but in conjunction with the way the site handles deletions this strikes me as bad, since there's no way of (a) seeing which user posted the deleted comment (this might be a bug?) or (b) examining the text of deleted comments. Together this means you can't distinguish a post that attracted no criticism (an important signal) from one whose critical comments were deleted, and you can't examine deleted criticisms.

Yes. But I don't see why that's a problem? Which preimage a given x would be assigned to is random. The hope is that repeated trials would give the same preimage frequently enough for it to be a meaningful partition of the input space. How well this works depends on the details of the ECC, but I suspect it would work reasonably well in many cases. You could also just apply the decoder directly to the string x, but I thought that might be a bit more unnatural, since in reality Bob will never see the full string.

Here's a possible construction, building on Dweomite's suggestion to use error-correcting codes. We treat the selection of the random variables that Bob sees as a binary erasure channel with some probability of erasure. Choose an error-correcting code for the BEC. Now, Alice constructs a function in the following way: for each input x, randomly choose a subset of some fixed size, and apply the decoder for our error-correcting code to the subset; we obtain some string[1]. Alice chooses a function f which is constant on pre-images of a given string under this mapping. She could either choose such an f uniformly at random, or possibly treat the selection of inputs as another binary erasure channel (on sequences of 1s and 0s indexed by the string) and use another error-correcting code.


  1. You could also repeat this procedure several times and take a majority vote. ↩︎
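A minimal sketch of the construction, using a toy repetition code in place of a real BEC code (the code choice, lengths, and observation probability here are all hypothetical, not from the comment): decoding a randomly-observed subset of a codeword's coordinates usually recovers the same message, so the induced partition of inputs is stable across trials.

```python
import random

random.seed(0)  # deterministic for the demo

# Hypothetical parameters: a 3-bit message, each bit repeated 3 times.
N_REPS = 3
MSG_LEN = 3
OBS_FRAC = 0.6  # probability Bob observes any given coordinate

def decode(observed):
    """Majority-vote decoder over the observed coordinates.
    `observed` is a dict {coordinate index: bit}."""
    msg = []
    for i in range(MSG_LEN):
        bits = [observed[j] for j in range(i * N_REPS, (i + 1) * N_REPS)
                if j in observed]
        if not bits:
            msg.append(random.choice([0, 1]))  # no information: guess
        elif sum(bits) * 2 > len(bits):
            msg.append(1)
        elif sum(bits) * 2 < len(bits):
            msg.append(0)
        else:
            msg.append(random.choice([0, 1]))  # tie: guess
    return tuple(msg)

def f(x):
    """Alice's function: observe a random subset of x's coordinates and
    decode, so f is constant on pre-images of the decoder's output."""
    subset = {j: x[j] for j in range(MSG_LEN * N_REPS)
              if random.random() < OBS_FRAC}
    return decode(subset)

codeword = (0, 0, 0, 1, 1, 1, 0, 0, 0)  # encodes message (0, 1, 0)
votes = [f(codeword) for _ in range(200)]
most_common = max(set(votes), key=votes.count)
print(most_common)
```

Repeated trials disagree only when an entire repetition block goes unobserved, so a majority vote over trials (as in the footnote) pins down the preimage with high probability.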

This is the same as Eliezer's definition, no? It only keeps information about the order induced by the utility function.
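A quick illustration of the order-only point (with hypothetical outcome values): under a definition that keeps only the induced order, any strictly increasing transform of a utility function represents the same preferences.

```python
# Two utility functions related by a strictly increasing transform
# induce the same ranking over outcomes (values are made up).
outcomes = ["apple", "banana", "cherry"]
u = {"apple": 1.0, "banana": 3.0, "cherry": 2.0}
v = {o: u[o] ** 3 + 5 for o in outcomes}  # strictly increasing transform of u

rank_u = sorted(outcomes, key=u.get)  # worst to best under u
rank_v = sorted(outcomes, key=v.get)  # worst to best under v
print(rank_u == rank_v)  # the induced orders coincide
```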

> The point is that a random Turing Machine’s output is technically uncomputable

What do you mean? The output of any Turing machine is computable by definition. Do you mean solving the halting problem for a random Turing machine? Or a random oracle?

> computers that can legitimately claim to go beyond Turing Machines with known physics aren’t useful computers due to the No Free Lunch theorems

Non sequitur: the no-free-lunch theorems don't have anything to do with the physical realizability of hypercomputers.

In the car case I think it's obvious that car usage is not causally upstream of suicidality. If the inventor of the car had died in a car accident, I do think that would be a relevant data point about the safety of cars, albeit not one that needs to be brought up every time. And in the real world, we do pretty universally talk about car crashes and how to avoid them when we're teaching people to drive. From that perspective, romeosteven's comment is probably better, and mine just got more upvotes because of the lurid details. (Although tail risks are important, and I think there's a way in which the author's personality can get imprinted in a text, which makes the anecdote slightly more relevant than in the car case.)

More the latter. Or, more precisely: doing this technique too much or too hard could be dangerous.
