In order to do this, the agent needs to be able to reason approximately about the results of its own computations, which is where logical uncertainty comes in.
Why does being updateless require thinking through all possibilities in advance? Couldn't you make a general commitment to follow UDT, and then wait until you actually face the decision problem to figure out which specific action UDT recommends?
Well, it's been 8 years; how close are ML researchers to a "proto-AGI" with the capabilities listed? (Embarrassingly, I have no idea what the answer is.)
Apparently an LW user did a series of interviews with AI researchers in 2011, some of which included a similar question. I know most LW users have probably seen this, but I only found it today and thought it was worth flagging here.
What are the competing explanations for high time preference?
A better way to phrase my confusion: How do we know the current time preference is higher than what we would see in a society that was genuinely at peace?
The competing explanations I was thinking of were along the lines of "we instinctively prefer having stuff now to having stuff later."
Yeah, I was implicitly assuming that initiating a successor agent would force Omega to update its predictions about the new agent (and put the $1m in the box). As you say, that's actually not very relevant, because it's a property of a specific decision problem rather than CDT or son-of-CDT.
(I apologize in advance if this is too far afield of the intended purpose of this post)
How does the claim that "group agents require membranes" interact with the widespread support within the EA/LW community for dramatically reducing or eliminating restrictions on immigration ("open borders" for short)? I can think of several possibilities, but I'm not sure which is true:
Context: I have an intuition that reducing or eliminating immigration restrictions reduces global coordination, and this post helped me crystallize it (if nations have less group agency, it's harder for them to coordinate).
Would trying to become less confused about commitment races before building a superintelligent AI count as a metaphilosophical approach or a decision theoretic one (or neither)? I'm not sure I understand the dividing line between the two.
If you're interested in anything in particular, I'll be happy to answer.
I very much appreciate the offer! I can't think of anything specific, though; the comments of yours that I find most valuable tend to be "unknown unknowns" that suggest a hypothesis I wouldn't previously have been able to articulate.
Have you written anything like "cousin_it's life advice"? I often find your comments extremely insightful in a way that combines the best of LW ideas with wisdom from other areas, and would love to read more.