David Joshua Sartor

Posts

No posts to display.

Comments
Reasons against donating to Lightcone Infrastructure
David Joshua Sartor · 2d

I changed my mind; at least in the case of my sharing information with you, if you were perfectly trustworthy you'd totally just defer to my beliefs about not making me worse off as a result. But, as you said, plausibly even in this easy case being perfect is way too hobbling for humans 'cause of infohazards.

Reasons against donating to Lightcone Infrastructure
David Joshua Sartor · 3d

Oliver said, "The promise that Mikhail asked me to make was, as far as I understood it, to 'not use any of the information in the conversation in any kind of adversarial way towards the people who the information is about'."

Oliver understood you to be asking him not to use the information to hurt anyone involved, which is way more restrictive, and in fact impossible for a human to do perfectly.
Unless he meant something more specific by "any kind of adversarial way", in which case the promise wouldn't get you what you want.

If you meant the reasonable thing, and said it clearly, I agree Oliver's misunderstanding is surprising and probably symptomatic of not reading planecrash.

Reasons against donating to Lightcone Infrastructure
David Joshua Sartor · 4d

I agree that promise is overly restrictive.
'Don't make my helping you have been a bad idea for me' is a more reasonable version, but I assume you're already doing that in your expectation, and it makes sense for different people to take the other's expectation into account to different degrees for this purpose.

Shortform
David Joshua Sartor · 6d

I agree none of this is relevant to anything, I was just looking for intrinsically interesting thoughts about optimal chess.

I thought at least CDT could be approximated pretty well with a bounded variant; causal reasoning is a normal thing to do. FDT is harder, but some humans seem to find it a useful perspective, so presumably you can have algorithms that are meaningfully closer to or further from it, and that's a useful proxy for something.
Actually, never mind; I have no experience with the formalisms.

I guess "choose the move that maximises your expected value" is technically compatible with FDT; you're right.
It seems like the obvious way to describe what CDT does, and a really unnatural way to describe what FDT does, so I got confused.

Shortform
David Joshua Sartor · 8d

Your description of EVGOO is incorrect; you describe a Causal Decision Theory algorithm, but (assuming the opponent also knows your strategy 'cause otherwise you're cheating) what you want is LDT.
(Assuming they only see each other's policy for that game, so that an agent acting as, e.g., CDT is indistinguishable from real CDT, LDT is optimal even against such fantastically pathological opponents as "Minimax if my opponent looks like it's following the algorithm that you, the reader, are hoping is optimal; otherwise resign" (or, if they can see each other's policy for the whole universe of agents you're testing, then LDT at least gets the maximum aggregate score).)
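For concreteness, here is a minimal sketch of the naive "maximise expected value against a fixed model of the opponent" rule being contrasted with LDT above. It is my own illustration, not anything from the thread: the game (rock-paper-scissors), the payoff table, and the opponent_policy distribution are all made-up assumptions.

```python
# A minimal sketch (not from the thread) of "pick the move with the highest
# expected value against a fixed model of the opponent's policy".
# All payoffs and probabilities below are made-up assumptions.

# payoff[my_move][their_move] = my score
PAYOFF = {
    "rock":     {"rock": 0, "paper": -1, "scissors": 1},
    "paper":    {"rock": 1, "paper": 0, "scissors": -1},
    "scissors": {"rock": -1, "paper": 1, "scissors": 0},
}

def best_response(opponent_policy):
    """Return the move with the highest expected payoff against a fixed
    probability distribution over the opponent's moves."""
    def expected_value(my_move):
        return sum(prob * PAYOFF[my_move][their_move]
                   for their_move, prob in opponent_policy.items())
    return max(PAYOFF, key=expected_value)

# Example: against an opponent modelled as playing rock 60% of the time,
# the expected-value maximiser picks paper.
print(best_response({"rock": 0.6, "paper": 0.2, "scissors": 0.2}))  # -> paper
```

The point the comment makes is that this treats the opponent's policy as fixed; once the opponent can also condition on your strategy, "best response to a fixed model" no longer captures the situation, which is where LDT-style reasoning comes in.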

Framing Practicum: Comparative Advantage
David Joshua Sartor · 16d

There are two ways this sort of “trade” can’t be made:

  • One site is already maximally specialized. For instance, if Zion is already fully specialized in growing apples, then there are no further banana or coconut groves to replace with apple trees.
  • The two sites trade off in exactly the same ratios. For instance, Xenia and Zion both trade off apples:bananas at a ratio of 1:0.5, so we can’t achieve a pareto gain with a little more specialization in those two fruits between those two sites.

If Zion's fully specialized in growing apples, it can still replace apples with other things.

Note that “multiple goals” might really mean “multiple sub-goals” - e.g. Fruit Co might ultimately want to maximize profit, but producing more apples is a subgoal, producing more bananas is another subgoal, etc.

In that case I think utility is basically linear in each fruit, which makes it a bad example 'cause you don't need comparative advantage for that. (At least for me, an example of using a concept doesn't much help remembering it unless it's helpful to me for that example.)
IIUC it's only useful for subgoals when utility's sublinear in them.
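To make the trade-off-ratio point from the excerpt concrete, here is a small numeric sketch. It is my own illustration: the site names are reused from the excerpt, but the land amounts and per-unit yields are made-up assumptions, not numbers from the post.

```python
# A toy illustration of comparative advantage; all numbers are assumptions.
# Each site has 10 units of land. Yields per unit of land differ, so the
# sites' apple:banana trade-off ratios differ (2:1 vs 1:2).
SITES = {
    "Xenia": {"land": 10, "apples_per_unit": 2, "bananas_per_unit": 1},
    "Zion":  {"land": 10, "apples_per_unit": 1, "bananas_per_unit": 2},
}

def totals(apple_fraction):
    """Total (apples, bananas) given each site's fraction of land on apples."""
    apples = sum(s["land"] * apple_fraction[name] * s["apples_per_unit"]
                 for name, s in SITES.items())
    bananas = sum(s["land"] * (1 - apple_fraction[name]) * s["bananas_per_unit"]
                  for name, s in SITES.items())
    return apples, bananas

print(totals({"Xenia": 0.5, "Zion": 0.5}))  # unspecialised: (15.0, 15.0)
print(totals({"Xenia": 0.9, "Zion": 0.1}))  # more specialised: (19.0, 19.0)
```

With different ratios, nudging each site toward its comparative advantage yields more of both fruits; if the ratios were identical, any reallocation that gained apples would lose proportionally as many bananas, which is the second bullet's point.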

The IABIED statement is not literally true
David Joshua Sartor · 16d

"the Palestinians get control of Palestine, or the Israelis maintain control of Israel"

I think in these cases opposing ASIs work together to maintain the existence of the disputed land and/or people, and use RNG to decide who gets control.
Of course zero-sum conflicts do exist, but IIUC only in cases where goals are exactly opposed (at least between just two ASIs).

Education on My Homeworld
David Joshua Sartor · 18d

A median earthling on another world would make lots and lots of errors in imagining Earth, and I think it would make sense to be biased toward errors that make things more legible.

Like how, in planecrash, it's said that smarter-than-average dath ilani doing the exercise still have their medianworld share dath ilan's high end of intelligence, since "what kinds of innovations might superhuman geniuses make?" is not at all the point of the exercise. (Of course Eliezer ignored that rule...)

Nook Nature
David Joshua Sartor · 18d

Actually, varying size seems a very good way to represent centrality, since it shows that a node surrounded by central nodes will also be central.
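The comment doesn't name a particular centrality measure, but eigenvector centrality is one standard measure with exactly the property described: a node's score is high when its neighbours' scores are high. Here is a minimal power-iteration sketch on a made-up graph; the node names and edges are illustrative assumptions.

```python
import math

# A small made-up undirected graph, as adjacency lists.
GRAPH = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}

# Power iteration: repeatedly set each node's score to the sum of its
# neighbours' scores, then renormalise. This converges to eigenvector
# centrality, so a node surrounded by high-scoring nodes ends up high-scoring.
score = {node: 1.0 for node in GRAPH}
for _ in range(100):
    new = {node: sum(score[nbr] for nbr in GRAPH[node]) for node in GRAPH}
    norm = math.sqrt(sum(v * v for v in new.values()))
    score = {node: v / norm for node, v in new.items()}

print(sorted(score.items(), key=lambda kv: -kv[1]))  # "b" and "c" rank highest
```

Mapping each node's score to its drawn size would then make that "central because my neighbours are central" property visible.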

Education on My Homeworld
David Joshua Sartor · 23d

The home of humanity is its ancestral environment, modern Earth only our residence.

It makes sense that in an exercise that asks what would produce you, your process should be biased toward the things you're adapted for, even when you're not especially.

Wikitag Contributions

Twelfth Virtue (the) · 5 months ago · (+8/-7)
Overconfidence · a year ago · (+36)
Mind projection fallacy · 3 years ago · (-4)
Correspondence Bias · 3 years ago · (-4)
Gratitude · 3 years ago · (+9/-9)
Gratitude · 3 years ago · (-4)
Reality Is Normal · 3 years ago · (-4)