Links for Nov 2020

My favorite link this time around was the baseball antitrust one, although the qntm series has been really good.

The Exploitability/Explainability Frontier

Assume you're at the frontier of being able to do research in that area and have abilities similar to others in that reference class. The total amount of effort most of those people will put in is the same, but it will be split across these two factors differently. The system being unexploitable corresponds to the sum of the two difficulties being constant.

There can be examples where both sides are difficult; those lie off the frontier.

Re politics: there are some issues that are genuinely difficult, some that are value judgments, and some that are fairly simple, in the sense that a week of serious research is enough to be pretty confident of the direction policy should be moved in.

The Exploitability/Explainability Frontier

My point is that it's rare and therefore difficult to discover.

The kinds that are less rare are easier to discover but harder to convince others of, or at least harder to convince people that they matter.

I was drawing off this example, by the way: https://econjwatch.org/articles/recalculating-gravity-a-correction-of-bergstrands-1985-frictionless-case

A 35-year-old model had a simple typo in it that got repeated in papers that built on it. It's very easy to convince people that this is the case, but very difficult to discover such errors: most papers don't contain them, so you need to replicate a lot of correct papers to find the one that's wrong.

If it's difficult to show that the typo actually matters, that's part of the difficulty of discovering it. My point is you should expect the sum of the difficulty in explaining and the difficulty in discovery to be roughly constant.

Why is there a "clogged drainpipe" effect in idea generation?
Answer by ike, Nov 20, 2020

Your mind tracks the idea so as not to forget it. This reduces the effective working memory space, which makes it harder to think.

The Presumptuous Philosopher, self-locating information, and Solomonoff induction

I've written a post that argues that Solomonoff Induction actually is a thirder, not a halfer, and sketches an explanation. 


Down with Solomonoff Induction, up with the Presumptuous Philosopher

I've written a post that argues that Solomonoff Induction actually is a thirder, not a halfer, and sketches an explanation. 


Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes

I've been trying to understand, but your model appears underspecified and I haven't been able to get clarification. I'll try again. 

treat perspectives as fundamental axioms

Have you laid out the axioms anywhere? None of the posts I've seen go into enough detail for me to be able to independently apply your model. 

like saying I assumed Beauty knows she’s not the clone while I clearly stated the opposite

This is not clear at all. In this comment you wrote 

the first-person perspective is primitively given simply means you instinctively know which person you are, because you are experiencing everything from its viewpoint.

In the earlier comment:

from the first-person perspective it is primevally clear the other copy is not me.

I don't know how these should be interpreted other than implying that you know you're not a clone (if you're not). If there's another interpretation, please clarify. It also seems obviously false, because "I don't know which person I am among several subjectively indistinguishable persons" is basically tautological. 

 If MWI does not require perspective-independent reality. Then what is the universal wave function describing?

It's a model that's useful for prediction. As I said in that post, this is my formulation of MWI; I prefer formulations that don't postulate reality, because I find the concept incoherent. 

But when I followed-up your statement that some CI can be considered a special version of MWI and explained why I think that is not possible, I get no feedback from you...

That was a separate thread, where I was responding to someone who apparently had a broader conception of CI. They never explained what assumptions go into that version; I was merely responding to their point that CI doesn't say much. If you disagree with their conception of CI then my comment doesn't apply. 

Your position that SIA is the “natural choice” and paradox free is a very strong claim.

It seems natural to me, and none of the paradoxes I've seen are convincing. 

what is the framework

Start with a standard universal prior, plus the assumption that if an entity "exists" in both worlds A and B, where world A "exists" with probability P(A) and world B with probability P(B), then the relative probability of me "being" that entity in world A, compared to world B, is P(A)/P(B). I can then condition on all facts I know about myself, which collapses this to only the entities I "can" be given that knowledge. 

Per my metaphysics, the words in quotes are not ontological claims but just a description of how the universal prior works - in the end, it spits out probabilities and that's what gets used. 
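To make that weighting rule concrete, here's a minimal Python sketch of it applied to the standard Sleeping Beauty setup. The entity labels, the 1/2 world priors, and the trivial conditioning predicate are my own illustrative assumptions, not part of the framework as stated; the point is just that each candidate entity inherits its world's prior probability, and conditioning renormalizes over the entities I could be.

```python
# Illustrative sketch (assumptions: Sleeping Beauty setup, world priors of 1/2):
# each (world, entity) pair gets weight equal to that world's probability,
# and conditioning keeps only the entities consistent with what I know.

from fractions import Fraction

# Heads-world has one awakening; tails-world has two. Each world: prior 1/2.
entities = {
    ("heads", "monday"): Fraction(1, 2),
    ("tails", "monday"): Fraction(1, 2),
    ("tails", "tuesday"): Fraction(1, 2),
}

def condition(weights, predicate):
    """Keep only entities consistent with my knowledge, then renormalize."""
    kept = {e: w for e, w in weights.items() if predicate(e)}
    total = sum(kept.values())
    return {e: w / total for e, w in kept.items()}

# Upon awakening, Beauty can't distinguish the three entities, so the
# predicate keeps all of them and each gets posterior weight 1/3.
posterior = condition(entities, lambda e: True)
p_heads = sum(w for (coin, _), w in posterior.items() if coin == "heads")
print(p_heads)  # 1/3
```

This reproduces the "thirder" answer mentioned above: conditioning on waking up leaves three equally weighted entities, only one of which is in the heads world.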

If you don’t know what my theory would predict, then give me some scenarios or thought experiments and make me answer them.

I would like to understand in what scenarios your theory refuses to assign probabilities. My framework will assign a probability to any observation, but you've acknowledged that there are some questions your theory will refuse to answer, even though there's a simple observation that can be done to answer the question. This is highly counter-intuitive to me.

Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes

I'm trying to understand your critiques, but I haven't seen any that present an issue for my model of SIA, MWI, or anything else. Either you're critiquing something other than what I mean by SIA etc., or you're explaining them badly, or I'm not understanding the critiques correctly. I don't think it should take ten posts to explain your issues with them, but even so I've read through your posts and couldn't figure it out. 

It might help if you explained what you take SIA and MWI to mean. When you gave a list of assumptions you believed to be entailed by MWI, I said I didn't agree with that. Something similar may be going on with SIA. A fully worked out example showing what SIA and what your proposed alternative say for various scenarios would also help. What statements does PBR say are meaningful? When is a probability meaningful? 

Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes

From my point of view, you keep making new posts building on your theory/critique of standard anthropic thinking without really responding to the issues. I've tried to get clarifications and failed. 

In the above post, I explained the problem of SSA and SIA: they assume a specific imaginary selection process, and then base their answers on that, whereas the first-person perspective is primitively given.

I have no idea what this means. 

Re paradoxes, you appear to not understand how SIA would apply to those cases using the framework I laid out. I asked you why those paradoxes apply and you didn't answer. If there are particular SIA advocates that believe the paradoxes apply, you haven't pointed at any of them. 

In another post, I argued that the MWI requires the basic assumption of a perspective-independent objective reality. Your entire response is “I deny that MWI requires that. In fact, all three of your postulates are incoherent, and I believe in a form of MWI that doesn't require any of them.” No explanations.

You gave no explanation for why MWI would imply those statements; why am I expected to spend more time proving a negative than you spent arguing for the positive? You asserted MWI implies those postulates, and I asserted otherwise. I've written two posts here arguing for a form of verificationism in which those postulates end up incoherent. 


Instead of adding more and more posts to your theory, I think you should zero in on one or two points of disagreement and defend those. Your scenarios and your perspective-based theory are poorly defined, and I can't tell what the theory says in any given case. 
