All of Abhimanyu Pallavi Sudhir's Comments + Replies

I think that the philosophical questions you're describing actually evaporate and turn out to be meaningless once you think enough about them, because they have a very anthropic flavour.

I don't think that's exactly true. But why do you think that follows from what I wrote?

1 · mruwnik · 2mo
I find that if I keep recursing deep enough, after a while I get to a point where I try to work out why I believe that I can believe that logic works. At which point I bounce off a wall, seeing as I'm trying to logically come up with a reason for it. Solipsism is similar - how do you know that you're not a brain in a vat? Or in general Descartes' demon. From my (admittedly most likely confused) understanding, this would be another example of self-reference, albeit in a roundabout way.
2 · the gears to ascenscion · 2mo
HUH. iiiiinteresting...

It's really not; that's the point I made about semantics.

Eh, that's kind of right; my original comment there was dumb.

You overstate your case. The universe contains a finite amount of incompressible information, which is strictly less than the information contained in . That self-reference applies to the universe is obvious, because the universe contains computer programs.

The point is that the universe is certainly a computer program, and that incompleteness applies to all computer programs (to all things with only finite incompressible information). In any case, I explained Gödel with an explicitly empirical example, so I'm not sure what your point is.

0 · shminux · 2mo
That's about as much of an argument as saying that the universe is contained in the decimal expansion of Pi, therefore Pi has all the information one needs.

I agree, and one could think of this in terms of markets: a market cannot capture all information about the world, because it is part of the world.

But I disagree that this is fundamentally unrelated -- here too the issue is that it would need to represent states of the world corresponding to what belief it expresses. Ultimately mathematics is supposed to represent the real world.

0 · shminux · 2mo
Well, I think a better way to put it is that mathematics is sometimes a part of some models of the world. The relationship is world -> inputs -> models <-> math. Whether the part of mathematics that deals with self-reference and soundness and completeness of formal systems corresponds to an accurate and useful model of the world is not at all obvious. So, yeah, some parts of mathematics lossily represent some parts of the world. But it is a pretty weak statement.

No, it doesn't. There is no 1/4 chance of anything once you've found yourself in Room A1.

You do acknowledge that the payout for the agent in room B (if it exists) from your actions is the same as the payout for you from your own actions, which if the coin came up tails is $3, yes?

I don't understand what you are saying. If you find yourself in Room A1, you simply eliminate the last two possibilities, so the total payout of Tails becomes 6.

If you find yourself in Room A1, you do find yourself in a world where you are allowed to bet. It doesn't make sense to consider the counterfactual, because you have already gotten new information.

That's not important at all. The agents in rooms A1 and A2 themselves would do better to choose tails than to choose heads. They really are being harmed by the information.

1 · Dagon · 2y
It's totally important. The knowledge that you get paid for guessing T in the cases you're never asked the question is extremely relevant here. It changes the EV from 1/3 * 3 = 1 to 1/3 * 3 + 1/4 * 3 = 1.75.

I see, that is indeed the same principle (and also simpler; we don't need to worry about whether we "control" symmetric situations).

2 · Charlie Steiner · 2y
Yeah I'm still not sure how to think about this sort of thing short of going full UDT and saying something like "well, imagine this whole situation was a game - what would be the globally winning strategy?"

I don't think this is right. A superrational agent exploits the symmetry between A1 and A2, correct? So it must reason that an identical agent in A2 will reason the same way as it does, and if it bets heads, so will the other agent. That's the point of bringing up EDT.

1 · player_03 · 2y
Oh right, I see where you're coming from. When I said "you can't control their vote" I was missing the point, because as far as superrational agents are concerned, they do control each other's votes. And in that case, it sure seems like they'll go for the $2, earning less money overall.

It occurs to me that if team 4 didn't exist, but teams 1-3 were still equally likely, then "heads" actually would be the better option. If everyone guesses "heads," two teams are right, and they take home $4. If everyone guesses "tails," team 3 takes home $3 and that's it. On average, this maximizes winnings.

Except this isn't the same situation at all. With group 4 eliminated from the get-go, the remaining teams can do even better than $4 or $3. Teammates in room A2 know for a fact that the coin landed heads, and they automatically earn $1. Teammates in room A1 are no longer responsible for their teammates' decisions, so they go for the $3. Thus teams 1 and 2 both take home $1 while team 3 takes home $3, for a total of $5.

Maybe that's the difference. Even if you know for a fact that you aren't on team 4, you also aren't in a world where team 4 was eliminated from the start. The team still needs to factor into your calculations... somehow. Maybe it means your teammate isn't really making the same decision you are? But it's perfectly symmetrical information. Maybe you don't get to eliminate team 4 unless your teammate does? But the proof is right in front of you. Maybe the information isn't symmetrical because your teammate could be in room B?

I don't know. I feel like there's an answer in here somewhere, but I've spent several hours on this post and I have other things to do today.

Wait, but can't the AI also choose to adopt the strategy "build another computer with a larger largest computable number"?

4 · Donald Hobson · 2y
If the computer has a finite amount of memory and can't build more, this puts a 2^n bound on how long it can wait. If it can build more, it will. The point is that it needs to pick some long-running computation that it can be fairly sure halts eventually. This gets into details about exactly how the AI is handling logical uncertainty.
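One way to see where a bound like this comes from, assuming a deterministic machine with n bits of memory (my reading of the bound, not spelled out above):

$$\#\{\text{memory configurations}\} \le 2^{n},$$

so any run longer than $2^n$ steps must revisit a configuration and then loops forever; a computation that does halt can therefore take at most $2^n$ steps.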

I don't understand the significance of using a TM -- is this any different from just applying some probability distribution over the set of actions?

3 · Donald Hobson · 2y
Any number that the AI puts out must be computable, and I was reminded of an entry in a largest-computable-number contest, which was "the runtime of the longest-running Turing machine that ZFC + large cardinality axiom can prove halts (with the proof being at most 3^^^3 symbols long)". This is an almost optimal answer, in that it is well defined if Con(ZFC + large cardinality axiom), and it beats any answer you can give that relies only on ZFC + large cardinality axiom. An AI asked to output the largest number it can is playing a game of "name the largest computable number".
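A rough formalization of that contest entry, in my own notation (LC standing for the large cardinality axiom):

$$N \;=\; \max\bigl\{\, \mathrm{runtime}(M) \;:\; \mathrm{ZFC} + \mathrm{LC} \text{ proves ``$M$ halts'' with a proof of at most } 3\uparrow\uparrow\uparrow 3 \text{ symbols} \,\bigr\},$$

which picks out a specific finite number provided every machine the theory proves to halt really does halt, since there are only finitely many proofs of bounded length.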

Suppose the function U(t) is increasing fast enough: e.g. if the probability of reaching t is exp(-t), then let U(t) be exp(2t), or whatever.
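Spelling out the arithmetic in that example: an agent that plans to stop at time t gets expected utility

$$e^{-t} \cdot e^{2t} = e^{t},$$

which grows without bound in t, so every stopping time is dominated by a later one and no finite choice is optimal.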

I don't think the question can be dismissed that easily.

It does not require infinities. E.g. you can just reparameterize the problem to the interval (0, 1); see the edited question. You just require an infinite set.
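One such reparameterization, as a concrete illustration (my own example, not necessarily the one in the edited question):

$$s = \frac{t}{1+t} \in (0, 1) \qquad \text{for } t \in (0, \infty),$$

which is order-preserving, so the "always better to wait a bit longer" structure carries over unchanged to a bounded parameter.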

1 · Dagon · 2y
The answer remains the same - as far as we know, the universe is finite and quantized. At any t, there is a probability of reaching t+epsilon, making the standard expected utility calculation (probability X reward) useful.

Infinite t does not necessarily deliver infinite utility.

Perhaps it would be simpler if I instead let t be in (0, 1], and U(t) = {t if t < 1; 0 if t = 1}.

It's the same problem, with 1 replacing infinity. I have edited the question with this example instead.

(It's not a particularly weird utility function -- consider, e.g., an agent that needs to expend a resource such that the utility from expending the resource at time t is some fast-growing function f(t), but never expending the resource gives zero utility. In any case, an adversarial agent can always create this situation.)
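A quick way to see that this version has no optimal choice:

$$\sup_{t \in (0,1)} U(t) = 1 \text{ is not attained, while } U(1) = 0,$$

so every t < 1 is beaten by some larger t' < 1, and t = 1 is worse than all of them; there is simply no maximizer.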

3 · Andrew Kao · 2y
I see what you mean now, thanks for clarifying. I'm not personally aware of any "best" or "correct" solutions, and I would be quite surprised if there were one (mathematically, at least, we know there's no single maximizer). But I think concretely speaking, you can restrict the choice set of t to a compact set such as (0, 1 - \epsilon] and develop the appropriate bounds for the analysis you're interested in. Maybe not the most satisfying answer, but I guess that's Analysis in a nutshell.

I see. So the answer is that Gödel's statement is indeed true in all models of second-order PA, but unprovable nonetheless, since Gödel's completeness theorem does not hold for second-order logic?

4 · Kutta · 3y
Yes. To expand a bit, in fact the straightforward way to show that second-order arithmetic isn't complete in the first sense is by using the Gödel sentence G. G says via an encoding that G is not provable in second-order arithmetic. Since the only model (up to isomorphism) is the model with the standard natural numbers, an internal statement which talks about encoded proofs is interpreted in the semantics as a statement which talks about actual proof objects of second-order arithmetic. This is in contrast to first-order arithmetic where we can interpret an internal statement about encoded proofs as ranging over nonstandard numbers as well, and such numbers do not encode actual proof objects. Therefore, when we interpret second-order G, we always get the semantic statement "G is not provable in second-order arithmetic". From this and the soundness of the proof system, it follows that G is not provable. Hence, G holds in all models (recall that there is just one model up to isomorphism), but is not provable.
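A compact restatement of that argument (my paraphrase), writing Prov for the encoded provability predicate of second-order arithmetic:

$$G \leftrightarrow \neg\mathrm{Prov}(\ulcorner G \urcorner), \qquad \mathbb{N} \models G \iff G \text{ is not provable},$$

where the second equivalence uses categoricity: the standard model is the only one, so the encoded proofs are exactly the real proof objects. If G were provable, soundness would give $\mathbb{N} \models G$, i.e. that G is not provable, a contradiction; hence G is unprovable, so $\mathbb{N} \models G$, and therefore G holds in every model.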

This seems to be relevant to calculations of climate change externalities, where the research is almost always based on the direct costs of climate change if no one modified their behaviour, rather than the cost of building a sea wall, or planting trees.

Disagree. Daria considers the colour of the sky an important issue because it is socially important, not because it is of actual cognitive importance. Ferris recognizes that it doesn't truly change much about his beliefs, since their society doesn't have any actual scientific theories predicting the colour of the sky (if they did, the alliances would not be on uncorrelated issues like taxes and marriage), and instead bothers with things he finds genuinely more important.

One can absolutely construct a utility function for the robot. It's a "shooting-blue maximizer". Just because the apparent utility function is wrong doesn't mean there isn't a utility function.

I'm not sure your interpretation of logical positivism is what the positivists actually say. They don't argue against having a mental model that is metaphysical; they point out that this mental model is simply a "gauge", and that anything physical is invariant under changes of this gauge.

Interesting. Did they promise to do so beforehand?

In any case, I'm not surprised the Soviets did something like this, but I guess the point is really "Why isn't this more widespread?" And also: "Why does this not happen with goals other than staying in power?" E.g. why has no one tried to pass a bill that says "Roko condition AND we implement such-and-such policy"? Because otherwise it seems that the stuff the Soviets did was motivated by something other than Roko's basilisk.

0 · avturchin · 3y
It was not promised, but anyone who had read the history of previous revolutions, like the French one, could have guessed.

But that's not Roko's basilisk. Whether or not you individually vote for the candidate does not affect you as long as the candidate wins.

3 · avturchin · 3y
In early Soviet history they actually checked whether a person had supported the winning party by looking at what they did 10-20 years ago. If a person had been a member of the wrong party in 1917, they could be prosecuted in the 1930s.

The "Dutch books" example is not restricted to improper priors. I don't have time to transform this into the language of your problem, but the basically similar two-envelopes problem can arise from the prior distribution:

f(x) = (1/4) * (3/4)^n where x = 2^n (n >= 0); f(x) = 0 if x cannot be written in this form

Considering this as a prior on the amount of money in an envelope, the expectation of the envelope you didn't choose is always 8/7 of the envelope you did choose.
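A quick numerical check of the 8/7 claim, under my reading that the two envelopes hold x and 2x with x drawn from the prior above and you open one uniformly at random (a sketch of one natural setup, not necessarily the exact one intended):

```python
from fractions import Fraction

# Prior from the comment: P(x = 2^n) = (1/4) * (3/4)^n for n >= 0.
def q(n):
    return Fraction(1, 4) * Fraction(3, 4) ** n

def ratio(m):
    """E[other envelope] / (your envelope), given your envelope holds 2^m."""
    yours = Fraction(2) ** m
    if m == 0:
        return Fraction(2)  # you must hold the smaller amount, so the other is 2x
    # Pair was (2^(m-1), 2^m) and you drew the larger envelope:
    w_down = q(m - 1) * Fraction(1, 2)
    # Pair was (2^m, 2^(m+1)) and you drew the smaller envelope:
    w_up = q(m) * Fraction(1, 2)
    expected_other = (w_down * (yours / 2) + w_up * (2 * yours)) / (w_down + w_up)
    return expected_other / yours

for m in range(6):
    print(m, ratio(m))  # 2 for m = 0, then 8/7 for every m >= 1
```

For the smallest possible observation the other envelope is worth twice yours; for every larger observed amount the conditional expectation comes out to exactly 8/7 of what you hold.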

There is no actual mathematical contradiction with this sort of thing -- wit...