Wei Dai


Another piece of evidence against the EMH: coal commodity spot and futures prices have been rising for several months, with coal stock prices naturally following. But today one analyst raised his price targets on several met coal stocks based on higher expected coal prices, and almost every US coal stock rose another 3-10% (see BTU, CEIX, ARCH, AMR, HCC). There was no new private information released and no change in fundamentals compared to yesterday (futures markets are essentially flat). It's just a pure change in valuation.

Even more damningly, CEIX is up 3.5% (was up 5% intraday) even though this analyst did not upgrade it: it mines thermal coal, and the upgrades were based on higher met coal prices.

Thanks, I've set a reminder to attend your talk. In case I miss it, can you please record it and post a link here?

But, the gist of your post seems to be: "Since coming up with UDT, we ran into these problems, made no progress, and are apparently at a dead end. Therefore, UDT might have been the wrong turn entirely."

This is a bit stronger than how I would phrase it, but basically yes.

On the other hand, my view is: "Since coming up with those problems, we made a lot of progress on agent theory within the LTA."

I tend to be pretty skeptical of new ideas. (This backfired spectacularly once, when I didn't pay much attention to Satoshi when he contacted me about Bitcoin, but I think it has generally served me well.) My experience with philosophical questions is that even when some approach looks a stone's throw away from a final solution to some problem, a bunch of new problems pop up and show that we're still quite far away. With an approach that is still as early as yours, I just think there's quite a good chance it doesn't work out in the end, or gets stuck somewhere on a hard problem. (Also, some people who have dug into the details don't seem as optimistic that it is the right approach.) So I'm reluctant to decrease my probability of "UDT was a wrong turn" too much based on it.

The rest of your discussion about 2TDT-1CDT seems plausible to me, although of course it depends on whether the math works out, on doing something about monotonicity, and also on a solution to the problem of how to choose one's IBH prior. (If the solution were something like "it's subjective/arbitrary," that would be pretty unsatisfying from my perspective.)

Do you think part of it might be that even people with graduate philosophy educations are too prone to being wedded to their own ideas, or don't like to poke holes in them as much as they should? Because part of what contributes to my wanting to go more meta is being dissatisfied with my own object-level solutions and finding more and more open problems that I don't know how to solve. I haven't read much academic philosophy literature, but I did read some of the anthropic reasoning and decision theory literature earlier, and the impression I got is that most of the authors weren't trying that hard to poke holes in their own ideas.

I don't understand your ideas in detail (am interested but don't have the time/ability/inclination to dig into the mathematical details), but from the informal writeups/reviews/critiques I've seen of your overall approach, as well as my sense from reading this comment of how far away you are from a full solution to the problems I listed in the OP, I'm still comfortable sticking with "most are wide open". :)

On the object level, maybe we can just focus on Problem 4 for now. What do you think actually happens in a 2IBH-1CDT game? Presumably CDT still plays D, and what do the IBH agents do? And how does that imply that the puzzle is resolved?

As a reminder, the puzzle I see is that this problem shows that a CDT agent doesn't necessarily want to become more UDT-like, and for seemingly good reason, so on what basis can we say that UDT is a clear advancement in decision theory? If CDT agents similarly don't want to become more IBH-like, isn't there the same puzzle? (Or do they?) This seems different from the playing chicken with a rock example, because a rock is not a decision theory so that example doesn't seem to offer the same puzzle.
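To make the puzzle concrete, here is a minimal sketch of the expected-payoff calculation, under assumptions the comments above don't fix: standard Prisoner's Dilemma payoffs (T=5 > R=3 > P=1 > S=0), the three agents paired uniformly at random for a one-shot anonymous PD, and the two TDT agents' choices treated as perfectly correlated:

```python
# Sketch of the 2TDT-1CDT puzzle: three agents (two TDT, one CDT) are
# randomly paired for a one-shot Prisoner's Dilemma. The payoff numbers
# and uniform pairing are illustrative assumptions, not from the comment.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tdt_expected(action):
    """Expected payoff for a TDT agent choosing `action`.

    Its opponent is the other TDT agent or the CDT agent with equal
    probability. The other TDT agent runs the same decision procedure,
    so its choice mirrors `action`; the CDT agent always defects.
    """
    vs_tdt = PAYOFF[(action, action)]   # mirrored choice
    vs_cdt = PAYOFF[(action, 'D')]      # CDT defects
    return 0.5 * vs_tdt + 0.5 * vs_cdt

# TDT cooperates, since that beats defecting in expectation (1.5 > 1.0)...
assert tdt_expected('C') > tdt_expected('D')

# ...but the CDT agent, whose opponent is always a cooperating TDT agent,
# defects and collects the temptation payoff of 5 -- more than TDT's 1.5.
cdt_payoff = PAYOFF[('D', 'C')]
print(tdt_expected('C'), cdt_payoff)
```

The point of the numbers is just that the lone CDT agent ends up with a strictly higher expected payoff than either TDT agent, which is why a CDT agent has no apparent incentive to self-modify into TDT here.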

ETA: Oh, I think you're saying that the CDT agent could turn into an IBH agent but with a different prior from the other IBH agents, one that ends up allowing it to still play D while the other two still play C, so it's not made worse off by switching to IBH. Can you walk through this in more detail? How does the CDT agent choose what prior to use when switching to IBH, and how do the different priors actually imply a CCD outcome in the end?

I think I kind of get what you're saying, but it doesn't seem right to model TDT as caring about all other TDT agents, as they would exploit other TDT agents if they could do so without negative consequences to themselves, e.g., if a TDT AI were in a one-shot game where it unilaterally decides whether or not to attack and take over another TDT AI.

Maybe you could argue that the TDT agent would refrain from doing this because of considerations like its decision to attack being correlated with other AIs' decisions to potentially attack it in other situations/universes, but that's still not the same as caring about other TDT agents. I mean, the chain of reasoning/computation you would go through in the two cases seems very different.

Also it's not clear to me what implications your idea has even if it was correct, like what does it suggest about what the right decision theory is?

BTW do you have any thoughts on Vanessa Kosoy's decision theory ideas?

I'm not aware of good reasons to think that it's wrong; it's more that I'm just not sure it's the right approach. I mean, we can say that it's a matter of preferences and declare the problem solved, but unless we can also show either that we should be anti-realist about these preferences, or what the right preferences are, the problem isn't really solved. Until we do have a definitive full solution, it seems hard to be confident that any particular approach is the right one.

It seems plausible that treating anthropic reasoning as a matter of preferences makes it harder to fully solve the problem. I wrote "In general, Updateless Decision Theory converts anthropic reasoning problems into ethical problems." in the linked post, but we don't have a great track record of solving ethical problems...

Even items 1, 3, 4, and 6 are covered by your research agenda? If so, can you quickly sketch what you expect the solutions to look like?

The general hope is that slight differences in source code (or even large differences, as long as they're all using UDT or something close to it) wouldn't be enough to make one UDT agent defect against another (i.e., the logical correlation between their decisions would be high enough). Otherwise "UDT agents cooperate with each other in one-shot PD" would either be false or not have much practical import, since why would all UDT agents have the exact same source code?
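The hope that correlation needn't be perfect can be sketched with a toy model, under assumptions that are mine rather than the comment's: standard PD payoffs (T=5, R=3, P=1, S=0) and a single number c for the logical correlation, meaning the other agent mirrors my choice with probability c and does the opposite otherwise:

```python
# Toy model: when does a UDT-like agent cooperate with an imperfect copy?
# The payoff numbers and the single-parameter correlation model are
# illustrative assumptions, not something the comment specifies.
R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment

def expected(action, c):
    """Expected payoff of `action` when the opponent mirrors it with probability c."""
    if action == 'C':
        return c * R + (1 - c) * S   # mirrored C, else the opponent defects
    return c * P + (1 - c) * T       # mirrored D, else the opponent cooperates

def cooperates(c):
    """Cooperation is chosen whenever it has the higher expected payoff."""
    return expected('C', c) > expected('D', c)

# Cooperation wins not just at perfect correlation (identical source code)
# but at any correlation above the threshold (T - S) / (T - S + R - P) = 5/7.
assert cooperates(1.0)
assert cooperates(0.8)
assert not cooperates(0.5)
```

In this toy model the requirement is only that the correlation clear a payoff-dependent threshold, which is one way of cashing out "high enough logical correlation" for agents whose source code merely resembles each other.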
