JBlack

Answer by JBlack, Nov 30, 2023

(1) There is no real relevance to probability here. A future-me will wake up on planet A. A future-me will also wake up on planet B.

(2) Again, a future-me will wake up on planets A, C, and D.

(3) This depends upon my expectation of how useful money is to future-mes on the various planets. If I pay X then A-me nets 100-X dollars more, and B-me loses X dollars. $50 is neutral across the sum of future-mes, but B-me is likely to regret that slightly more than A-me will benefit from it. Though really, why is someone selling me a ticket that will pay out $100 for less than $100? I'd have a lot of credence that this is a scam. How does the economy work if you can legitimately copy money freely anyway?
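A minimal sketch of that arithmetic, assuming both future selves are weighted equally and, purely to illustrate why B-me's loss can outweigh A-me's gain, a logarithmic utility of wealth with a hypothetical starting balance of $1,000 (neither assumption comes from the comment itself):

```python
import math

def totals(ticket_price, payout=100.0, base_wealth=1000.0):
    """Aggregate outcome across the two future selves if I pay `ticket_price`.

    A-me receives the payout minus the price; B-me just loses the price.
    Returns (total dollar change, total log-utility change).
    """
    a_change = payout - ticket_price      # A-me nets 100 - X dollars
    b_change = -ticket_price              # B-me simply loses X dollars
    dollar_total = a_change + b_change    # = 100 - 2X, zero at X = 50

    # Illustrative diminishing marginal utility: log of final wealth.
    utility_total = (math.log(base_wealth + a_change) - math.log(base_wealth)) \
                  + (math.log(base_wealth + b_change) - math.log(base_wealth))
    return dollar_total, utility_total

print(totals(50.0))  # (0.0, slightly negative): dollar-neutral, but B-me's loss weighs more
print(totals(40.0))  # positive in dollars; the utility sign depends on the assumed curve
```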

(4) As with (3), only with 3 future mes instead of 2.

The main difference is that, when determining what action to take, EDT quantifies over actions, while FDT quantifies over strategies that choose actions. In the end, both tell you which of the available actions you should take given an epistemic state. So yes, that is what FDT is, and it is different from EDT.

FDT does not require precommitment as an available action, since the decision theory itself tells you what action you should take given your epistemic state. FDT tells you "if you're in this game and you see $1, you should leave it", no precommitment or self-modification required. You either comply with the FDT recommendation at any given time, or not.

There is no need to mess about with "well, an EDT agent in this epistemic situation should take the $1, but if they self-modified then they are no longer capable of following the EDT recommendation, which is good because on average they end up better off", or any of that business with commitment races, or whatever.
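A rough way to see the structural difference is the following toy sketch (not from the comment itself; the `expected_utility_*` functions stand in for whatever world model the agent uses):

```python
from itertools import product

OBSERVATIONS = ["see $1", "see $100"]
ACTIONS = ["take", "leave"]

def edt_choice(observation, expected_utility_given_action):
    """EDT-style: quantify over actions, conditioning on the current observation."""
    return max(ACTIONS, key=lambda a: expected_utility_given_action(observation, a))

def fdt_choice(observation, expected_utility_given_policy):
    """FDT-style: quantify over whole policies (maps from observation to action),
    score each policy by how agents running it fare, then act on the best one."""
    policies = [dict(zip(OBSERVATIONS, assignment))
                for assignment in product(ACTIONS, repeat=len(OBSERVATIONS))]
    best_policy = max(policies, key=expected_utility_given_policy)
    return best_policy[observation]
```

The only point is that the object being optimized differs: an action given the current observation, versus a function from observations to actions.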

Yes, utility of money is currently fairly well bounded. Liability insurance is a proxy for imposing risks on people, and like most proxies comes apart in extreme cases.

However: would you accept a 16% risk of death within 10 years in exchange for an increased chance of living 1000+ years? Assume that your quality of life for those 1000+ years would be in the upper few percentiles of current healthy life. How much increased chance of achieving that would you need to accept that risk?

That seems closer to a direct trade of the risks and possible rewards involved, though it still misses something. One problem is that it still treats the cost of risk to humanity as being simply the linear sum of the risks acceptable to each individual currently in it, and I don't think that's quite right.

An agent is presented with a transparent box, which contains either $1 or $100. They have the option to open the box and take the money, or to leave it. Perfect predictor Omega previously set up the box according to the following rule: if they predicted that the agent would take the $1, they put in $1 with 99% probability and otherwise $100; if they predicted that the agent would leave the $1, they put in $100 with 99% probability and otherwise $1.

From the EDT point of view, there are two separate decision problems here, one for each amount that the agent sees in the box. The world model implicit in P(O_j|A) can't depend upon how or why Omega put various amounts of money in the box, because the agent has already ruled out being in a world in which Omega put a different amount of money in the box.

Obviously it answers "take the money" for each. Over all universes then, 99% of EDT agents get $1 and 1% get $100, for an average performance of $1.99.

From the FDT point of view there are not two separate decision problems here, but optimization of a strategy mapping a 1-bit input (amount of money seen) to a 1-bit output (take or leave). The optimal function is to always leave $1 and always take $100. Then over all universes, 99% of FDT agents get $100 and 1% get nothing for an average performance of $99.
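A small check of those averages (a sketch assuming the setup exactly as described above):

```python
def average_payoff(policy, accuracy=0.99):
    """Average payoff over all universes for agents running `policy`,
    where policy maps the observed contents ($1 or $100) to 'take' or 'leave'.
    Omega predicts the agent's response to seeing $1 and stocks the box accordingly."""
    if policy[1] == "take":
        # Predicted to take the $1: box holds $1 with 99% probability, else $100.
        p_box_has_1 = accuracy
    else:
        # Predicted to leave the $1: box holds $100 with 99% probability, else $1.
        p_box_has_1 = 1 - accuracy
    payoff_if_1 = 1 if policy[1] == "take" else 0
    payoff_if_100 = 100 if policy[100] == "take" else 0
    return p_box_has_1 * payoff_if_1 + (1 - p_box_has_1) * payoff_if_100

print(average_payoff({1: "take", 100: "take"}))   # EDT: take whatever is seen -> 1.99
print(average_payoff({1: "leave", 100: "take"}))  # FDT: leave $1, take $100   -> 99.0
```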

A few confusions I had when reading the central definition of "lie" used in this post:

When I say that someone lies, I mean that they have communicated anti-information: information that predictably makes people become more wrong about the world or about what the locutor believes. This includes lies of commission, lies of omission, misdirection, and more.

Predictable by whom, under what circumstances? This makes quite a large difference to the meaning.

Certainly not by the speaker at the time, or it would be impossible to lie inadvertently (which is also a highly non-central use of the word "lie", just in case you weren't aware of that).

Certainly not by the listeners, because if they could predict it then they would be able to discount the communication and therefore not become more wrong.

Is it some hypothetical person who knows the true state of the world? I guess that would fit, but it can't be applied in practice, and it would be very strange to say that something is "predictable" when nobody in the world could predict it.

Maybe just the speaker, but after receiving additional information? Then it becomes conditional on what information they receive. Maybe just the fact that there exists information that they could receive, that would allow them to predict it? But that's even worse, because it may depend upon information private to the listener or possibly not known to anyone.

Maybe it's predictable to the speaker in the presence of information that they already know, but don't necessarily realize that they know? Or maybe a "jury of their peers" in the sense that the additional information required is generally known or expected to be known? That makes it rather subjective, though, which isn't ideal.

So no, I'm still not really clear exactly what this definition means: its important highlighted term, "predictably", lacks a referent for who is doing the predicting.

Interesting idea, but definitely didn't live up to the title. My expectation for "epistemic monopoly" would be something like a business preventing everyone else from thinking or learning about the world.

Answer by JBlack, Nov 12, 2023

Continuing the pattern of distribution of "better"-ness, 1/100,000 are 65,536 times better than the median, and 1/1,000,000 are 4,294,967,296 times better than the median. If you have more than 10,000,000 soldiers then you likely have one that is roughly 10^19 times better.
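For reference, a sketch of the pattern being extrapolated, assuming each extra factor of ten in rarity squares the multiplier (the exact rule comes from the post being replied to, so treat this purely as an illustration):

```python
# Assumed pattern: the multiplier squares with each additional factor of ten of rarity,
# i.e. the 1-in-10^k soldier is 2**(2**(k-1)) times better than the median.
for k in range(1, 8):
    multiplier = 2 ** (2 ** (k - 1))
    print(f"1 in {10**k:>10,}: {multiplier:,} times better ({multiplier:.2e})")
```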

So the elites are the only ones that are meaningful for fighting your war. If the base rate of kidney donation is nonzero, they also immediately donate both their kidneys and die due to being 10^19 times more likely to donate kidneys. So the optimal strategy is to ensure that the base rate of kidney donation is zero.

I am good at making correct moral decisions.
I am good at communicating.
I am good at tolerance, and patience, and humility.
I am good at deciding beneficial national policies and priorities.
I am good at driving.
I am good at my job.
I am good at coming up with lists of examples.
