All of Abhimanyu Pallavi Sudhir's Comments + Replies

The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe

Aren't you just talking about implied priors? AFAIK no one has calculated the implied prior of a neural network.
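For what it's worth, here is a minimal sketch (my construction, not anything from the post) of what "calculating the implied prior" could mean operationally: sample networks at random initialization and inspect the induced distribution over functions. The architecture, initialization scale, and sample count below are arbitrary illustrative assumptions.

    import numpy as np

    def init_mlp(rng, sizes):
        # Random Gaussian initialization, scaled by 1/sqrt(fan-in).
        return [(rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n)), np.zeros(n))
                for m, n in zip(sizes[:-1], sizes[1:])]

    def forward(params, x):
        # Evaluate the MLP on a batch of inputs x with shape (batch, in_dim).
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x

    rng = np.random.default_rng(0)
    xs = np.linspace(-3, 3, 50).reshape(-1, 1)

    # Each freshly initialized network is one draw from the "implied prior"
    # over functions; the empirical statistics below approximate that prior.
    samples = np.stack([forward(init_mlp(rng, [1, 64, 64, 1]), xs).ravel()
                        for _ in range(1000)])
    print("prior mean at a few inputs:", samples.mean(axis=0)[:5])
    print("prior std  at a few inputs:", samples.std(axis=0)[:5])

Of course this only gives the prior over function values at chosen inputs rather than a closed-form prior, which is presumably part of why no one has "calculated" it in any exact sense.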

A way to beat superrational/EDT agents?

No, it doesn't. There is no 1/4 chance of anything once you've found yourself in Room A1.

You do acknowledge that the payout to the agent in Room B (if it exists) from your actions is the same as the payout to you from your own actions, which, if the coin came up tails, is $3, yes?

A way to beat superrational/EDT agents?

I don't understand what you are saying. If you find yourself in Room A1, you simply eliminate the last two possibilities, so the total payout of Tails becomes 6.

If you find yourself in Room A1, you do find yourself in a world where you are allowed to bet. It doesn't make sense to consider the counterfactual, because you have already received new information.

A way to beat superrational/EDT agents?

That's not important at all. The agents in rooms A1 and A2 themselves would do better to choose tails than to choose heads. They really are being harmed by the information.

Dagon (8mo): It's totally important. The knowledge that you get paid for guessing T in the cases you're never asked the question is extremely relevant here. It changes the EV from 1/3 * 3 = 1 to 1/3 * 3 + 1/4 * 3 = 1.75.
A way to beat superrational/EDT agents?

I see, that is indeed the same principle (and also simpler: we don't need to worry about whether we "control" symmetric situations).

Charlie Steiner (8mo): Yeah, I'm still not sure how to think about this sort of thing short of going full UDT and saying something like "well, imagine this whole situation was a game - what would be the globally winning strategy?"
A way to beat superrational/EDT agents?

I don't think this is right. A superrational agent exploits the symmetry between A1 and A2, correct? So it must reason that an identical agent in A2 will reason the same way as it does, and if it bets heads, so will the other agent. That's the point of bringing up EDT.

player_03 (8mo): Oh right, I see where you're coming from. When I said "you can't control their vote" I was missing the point, because as far as superrational agents are concerned, they do control each other's votes. And in that case, it sure seems like they'll go for the $2, earning less money overall.

It occurs to me that if team 4 didn't exist, but teams 1-3 were still equally likely, then "heads" actually would be the better option. If everyone guesses "heads," two teams are right, and they take home $4. If everyone guesses "tails," team 3 takes home $3 and that's it. On average, this maximizes winnings.

Except this isn't the same situation at all. With group 4 eliminated from the get-go, the remaining teams can do even better than $4 or $3. Teammates in room A2 know for a fact that the coin landed heads, and they automatically earn $1. Teammates in room A1 are no longer responsible for their teammates' decisions, so they go for the $3. Thus teams 1 and 2 both take home $1 while team 3 takes home $3, for a total of $5.

Maybe that's the difference. Even if you know for a fact that you aren't on team 4, you also aren't in a world where team 4 was eliminated from the start. The team still needs to factor into your calculations... somehow. Maybe it means your teammate isn't really making the same decision you are? But it's perfectly symmetrical information. Maybe you don't get to eliminate team 4 unless your teammate does? But the proof is right in front of you. Maybe the information isn't symmetrical because your teammate could be in room B? I don't know. I feel like there's an answer in here somewhere, but I've spent several hours on this post and I have other things to do today.
Utility functions without a maximum

Wait, but can't the AI also choose to adopt the strategy "build another computer with a larger largest computable number"?

Donald Hobson (8mo): If the computer has a finite amount of memory and can't build more, this puts a 2^n bound on how long it can wait. If it can build more, it will. The point is that it needs to pick some long-running computation that it can be fairly sure halts eventually. This gets into details about exactly how the AI is handling logical uncertainty.
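A one-line justification of that 2^n figure (my gloss, not part of the reply): a deterministic machine whose entire state fits in n bits has at most 2^n distinct configurations, so

    length of any halting run <= number of configurations <= 2^n,

since a longer run would revisit some configuration and therefore loop forever.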
Utility functions without a maximum

I don't understand the significance of using a TM -- is this any different from just applying some probability distribution over the set of actions?

Donald Hobson (8mo): Any number that the AI puts out must be computable, and I was reminded of an entry in a largest-computable-number contest, which was "the runtime of the longest-running Turing machine that ZFC + a large cardinality axiom can prove halts (with the proof being at most 3^^^3 symbols long)". This is an almost optimal answer, in that it is well defined if Con(ZFC + large cardinality axiom), and it beats any answer that you can give that relies only on ZFC + large cardinality axiom. An AI asked to output the largest number it can is playing a game of name-the-largest-computable-number.
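For concreteness, that contest entry can be written as (my notation; T = ZFC + the large cardinality axiom, L = 3^^^3):

    N = max { steps(M) : M a Turing machine, T proves "M halts", with a proof of at most L symbols }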
Utility functions without a maximum

Suppose the function U(t) increases fast enough: e.g., if the probability of reaching t is exp(-t), then let U(t) be exp(2t), or whatever.

I don't think the question can be dismissed that easily.
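Spelling the example out (my arithmetic, not in the original comment): with P(reach t) = exp(-t) and U(t) = exp(2t), the expected utility of the plan "wait until time t" is

    exp(-t) * exp(2t) = exp(t),

which is strictly increasing in t, so every plan is dominated by a later one and no plan maximizes expected utility.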

Utility functions without a maximum

It does not require infinities. E.g. you can just reparameterize the problem to the interval (0, 1); see the edited question. You just require an infinite set.

Dagon (8mo): The answer remains the same - as far as we know, the universe is finite and quantized. At any t, there is a probability of reaching t+epsilon, making the standard expected utility calculation (probability X reward) useful.
Utility functions without a maximum

Infinite t does not necessarily deliver infinite utility.

Perhaps it would be simpler if I instead let t be in (0, 1], and U(t) = {t if t < 1; 0 if t = 1}.

It's the same problem, with 1 replacing infinity. I have edited the question with this example instead.

(It's not a particularly weird utility function -- consider, e.g. if the agent needs to expend a resource such that the utility from expending the resource at time t is some fast-growing function f(t). But never expending the resource gives zero utility. In any case, an adversarial agent can always create this situation.)
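Written out (my restatement of the edited example): take t in (0, 1] with U(t) = t for t < 1 and U(1) = 0. Then

    sup U = 1, but U(t) < U((1+t)/2) for every t < 1, and U(1) = 0,

so the supremum is never attained and no choice of t is optimal. This is the same pathology as the unbounded-time version, with 1 playing the role of infinity.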

Andrew Kao (8mo): I see what you mean now, thanks for clarifying. I'm not personally aware of any "best" or "correct" solutions, and I would be quite surprised if there were one (mathematically, at least, we know there's no single maximizer). But I think concretely speaking, you can restrict the choice set of t to a compact set of size (0, 1 - \epsilon] and develop the appropriate bounds for the analysis you're interested in. Maybe not the most satisfying answer, but I guess that's Analysis in a nutshell.
Godel in second-order logic?

I see. So the answer is that it is indeed true that Gödel's statement is true in all models of second-order PA, but unprovable nonetheless, since Gödel's completeness theorem isn't true for second-order logic?

Kutta (8mo): Yes. To expand a bit, in fact the straightforward way to show that second-order arithmetic isn't complete in the first sense is by using the Gödel sentence G. G says via an encoding that G is not provable in second-order arithmetic. Since the only model (up to isomorphism) is the model with the standard natural numbers, an internal statement which talks about encoded proofs is interpreted in the semantics as a statement which talks about actual proof objects of second-order arithmetic.

This is in contrast to first-order arithmetic, where we can interpret an internal statement about encoded proofs as ranging over nonstandard numbers as well, and such numbers do not encode actual proof objects. Therefore, when we interpret second-order G, we always get the semantic statement "G is not provable in second-order arithmetic". From this and the soundness of the proof system, it follows that G is not provable. Hence, G holds in all models (recall that there is just one model up to isomorphism), but is not provable.
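A compact restatement of that argument (my summary):

1. Second-order PA is categorical: every model is isomorphic to the standard naturals N.
2. By construction, G is equivalent to "G is not provable in second-order PA", i.e. to not-Prov(⌜G⌝).
3. If G were provable, then N ⊨ Prov(⌜G⌝), since the standard number coding the proof witnesses it; but soundness would also give N ⊨ G, i.e. N ⊨ not-Prov(⌜G⌝), a contradiction.
4. So G is not provable; hence N ⊨ not-Prov(⌜G⌝), i.e. N ⊨ G. G is true in the (unique) model yet unprovable, which is exactly the failure of completeness.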
Six economics misconceptions of mine which I've resolved over the last few years

This seems to be relevant to calculations of climate change externalities, where the research is almost always based on the direct costs of climate change if no one modified their behaviour, rather than the cost of building a sea wall or planting trees.

A Fable of Science and Politics

Disagree. Daria considers the colour of the sky an important issue because it is socially important, not because it is of actual cognitive importance. Ferris recognizes that it doesn't truly change much about his beliefs, since their society doesn't have any actual scientific theories predicting the colour of the sky (if they did, the alliances would not be on uncorrelated issues like taxes and marriage), and concerns himself with things he finds genuinely more important.

The Blue-Minimizing Robot

One can absolutely construct a utility function for the robot: it's a "shooting-blue maximizer". Just because the apparent utility function is wrong doesn't mean there isn't a utility function.
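To make that explicit (my formalization, not from the post): one utility function the robot's behaviour is consistent with maximizing is

    U(history) = number of time steps at which the robot fires its laser at a blue-appearing object,

even though the robot can fail badly at the "minimize blue in the world" goal we naively read off its behaviour.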

No Logical Positivist I

I'm not sure your interpretation of logical positivism is what the positivists actually say. They don't argue against having a mental model that is metaphysical, they point out that this mental model is simply a "gauge", and that anything physical is invariant under changes of this gauge.

Political Roko's basilisk

Interesting. Did they promise to do so beforehand?

In any case, I'm not surprised the Soviets did something like this, but I guess the point is really "Why isn't this more widespread?" And also: "why does this not happen with goals other than staying in power?" E.g. why has no one tried to pass a bill that says "Roko condition AND we implement this-and-this policy". Because otherwise it seems that the stuff the Soviets did was motivated by something other than Roko's basilisk.

avturchin (1y): It was not promised, but anyone who had read the history of previous revolutions, like the French one, could guess.
Political Roko's basilisk

But that's not Roko's basilisk. Whether or not you individually vote for the candidate does not affect you as long as the candidate wins.

avturchin (1y): In early Soviet history they actually checked whether a person had actually supported the winning party by looking at what he did 10-20 years ago. If the person was a member of the wrong party in 1917, he could be prosecuted in the 1930s.
Against improper priors

The "Dutch books" example is not restricted to improper priors. I don't have time to transform this into the language of your problem, but the basically similar two-envelopes problem can arise from the prior distribution:

f(x) = (1/4) * (3/4)^n if x = 2^n for some integer n >= 0, and f(x) = 0 otherwise.

Considering this as a prior on the amount of money in an envelope, the expectation of the envelope you didn't choose is always 8/7 of the envelope you did choose.
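To check the 8/7 figure (my derivation, assuming the prior is over the smaller of the two amounts, so the pair is (2^n, 2^(n+1)) with probability (1/4)*(3/4)^n and you pick one envelope at random): if your envelope contains x = 2^n with n >= 1, then

    P(other = 2x) : P(other = x/2) = (3/4)^n : (3/4)^(n-1) = 3 : 4,
    E[other | x] = (3/7)*2x + (4/7)*(x/2) = 8x/7.

(If x = 1, the other envelope contains 2 with certainty, so the expectation is larger still.)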

There is no actual mathematical contradiction with this sort of thing -- wit...