Since I currently have the slack to do so, I'm going to try getting into a balanced biphasic schedule to start with. If I actually manage to pull it off I'll make another post about it.
If we consider the TM to be "infinitely more valuable" than the rest of our life, as I suggested might make sense in the post, then we would accept whenever $U + L + \log_2(1/p) < T$. We will never accept if $U + L \geq T$, i.e. if accepting does not decrease the description length of the TM.
Right. I think that if we assign each structure measure $2^{-\ell}$, where $\ell$ is its shortest description length, and assume that a win probability $p$ increases the description length of the physically instantiated TM by $\log_2(1/p)$ bits (because the probability is implemented through reality branching, which means more bits are needed to specify the location of the TM, or something like that), then this actually has a numerical solution depending on what the description lengths end up being and how much we value this TM compared to the rest of our life.
Say $U$ is the description length of our universe, $L$ is the length of the description of the TM's location in our universe when the lottery is accepted, $R$ is the description length of the location of "the rest of our life" from that point when the lottery is accepted, $T$ is the length of the next shortest description of the TM that doesn't rely on embedding in our universe, $v_{TM}$ is how much we value the TM, and $v_{life}$ is how much we value the rest of our life. Then we should accept the lottery for any $p > \frac{v_{TM} \cdot 2^{-T}}{v_{TM} \cdot 2^{-(U+L)} - v_{life} \cdot 2^{-(U+R)}}$ (assuming the denominator is positive), if I did that right.
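For concreteness, the threshold can be sketched numerically. All the bit counts and value weights below are hypothetical, chosen only to illustrate the shape of the trade-off:

```python
# Sketch of the acceptance threshold, assuming measure 2^-(description
# length) and that a win probability p costs log2(1/p) extra bits via
# reality branching. All numbers are made up for illustration.

def acceptance_threshold(U, L, R, T, v_tm, v_life):
    """Smallest win probability p at which accepting the lottery has
    higher expected value than declining, or None if no p in (0, 1]
    suffices."""
    # Marginal value of a unit of win probability: gaining the embedded
    # TM's measure, minus losing the measure of the rest of our life.
    gain = v_tm * 2.0 ** -(U + L) - v_life * 2.0 ** -(U + R)
    if gain <= 0:
        return None  # the TM term can never outweigh the risk to our life
    p = (v_tm * 2.0 ** -T) / gain
    return p if p <= 1 else None

# Hypothetical values: universe 400 bits, TM location 50 bits, "rest of
# our life" location 40 bits, standalone TM description 460 bits.
print(acceptance_threshold(U=400, L=50, R=40, T=460, v_tm=1e6, v_life=1.0))
```

With these made-up numbers the threshold comes out well below 1, so the lottery would be accepted for any reasonable win probability; shrink $v_{TM}$ and the denominator goes negative, recovering the "never accept" case.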
I see. When I wrote
such a TM embedded in our physical universe at some point in the future (supposing such a thing is possible)
I implicitly meant that the embedded TM was unbounded, because in the thought experiment our physics turned out to support such a thing.
physicality of the initial states of a TM doesn't make its states from sufficiently distant future any more physically computed
I'm not sure what you mean by this.
Let's suppose the description length of our universe + bits needed to specify the location of the TM was shorter than any other way you might wish to describe such a TM. So with the lottery, you are in some sense choosing whether this TM gets a shorter or longer description.
Suppose I further specify the "win condition" to be that you are, through some strange sequence of events, able to be uploaded in such a TM embedded in our physical universe at some point in the future (supposing such a thing is possible), and that if you do not accept the lottery then no such TM will ever come to be embedded in our universe. The point being that accepting the lottery increases the measure of the TM. What's your answer then?
Sure, having just a little bit more general optimization power lets you search slightly deeper into abstract structures, opening up tons of options. Among human professions, this may be especially apparent in mathematics. But that doesn't make it any less scary?
Like, I could have said something similar about the best vs. average programmers/"hackers" instead; there's a similarly huge range of variation there too. Perhaps that would have been a better analogy, since the very best hackers have some more obviously scary capabilities (e.g. ability to find security vulnerabilities).
It's certainly plausible that something like this pumps in quite a bit of variation on top of the genetics, but I don't think it detracts much from the core argument: if you push just a little harder on a general optimizer, you get a lot more capabilities out.
Specialization on different topics likely explains much more of the variation than algorithmic tweaks do.
That the very best mathematicians are generally less specialized than their more average peers suggests otherwise.
From section 3.1.2:
These credences feel borderline contradictory to me. Together they imply you believe that, conditional on no laws being passed that would make it illegal in any place he'd consider moving to, Jurgen Schmidhuber in particular has a >50% chance of building dangerously advanced AI within 20 years or so. Since you also believe the EU has a 90% chance of passing such a law before the creation of dangerously advanced AI, this implies you believe the EU has a >80% chance of outlawing the creation of dangerously advanced AI within 20 years or so. In fact, if we assume a uniform distribution over when JS builds dangerously advanced AI (such that the cumulative probability reaches 50% at 20 years from now), we would have to be nearly certain the EU would pass such a law within 10 years, should we get that far without JS succeeding. From where does such high confidence stem?
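The ">80%" step can be checked with a small sketch. The uniform success-time distribution and the two-point law-timing distribution below are my own illustrative assumptions, chosen to make the bound as favorable as possible for a late law:

```python
# Under the stated assumptions, JS's counterfactual (no-law) success time
# is uniform on [0, 40] years, so P(success within 20y) = 0.5, and overall
# P(law passes before success) >= 0.9. Even the law-timing distribution
# most favorable to passing late (all "early" mass at t=0, the rest just
# after year 20) needs >= 80% of its mass within 20 years.

def best_case_p_law_first(q):
    """P(law precedes JS's success) when the law arrives at t=0 with
    probability q and just after year 20 otherwise, with JS's success
    time uniform on [0, 40] years."""
    p_js_after_20 = 0.5  # P(JS succeeds after year 20) under the uniform
    return q * 1.0 + (1 - q) * p_js_after_20

# Smallest within-20-years mass q achieving the required 0.9:
q = min(q / 1000 for q in range(1001) if best_case_p_law_first(q / 1000) >= 0.9)
print(q)
```

Any law-timing distribution with later early mass does strictly worse, so 80% within 20 years is a lower bound, not just an example.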
(Meta: I'm also not convinced it's generally a good policy to be "naming names" of AGI researchers who are relatively unconcerned about the risks in serious discussions about AGI x-risk, since this could provoke a defensive response, "doubling down", etc.)