
JBlack · 2d

I'm pretty sure that I would study for fun in the posthuman utopia, because I both value and enjoy studying and a utopia that can't carry those values through seems like a pretty shallow imitation of a utopia.

There won't be a local benevolent god to put that wisdom into my head, because I will be a local benevolent god, with more knowledge than most others around. I'll be studying things that have only recently been explored, or that nobody has yet discovered. Otherwise, again: what sort of shallow imitation of a posthuman utopia is this?

JBlack · 2d

Like almost all acausal scenarios, this seems to be privileging the hypothesis to an absurd degree.

Why should the Earth superintelligence care about you, but not about the 10^10^30 other causally independent ASIs that are latent in the hypothesis space, each capable of running enormous numbers of copies of the Earth ASI in various scenarios?

Even if that were resolved, why should the Earth ASI behave according to hypothetical other utility functions? Sure, the evidence is consistent with being a copy running in a simulation with a different utility function, but the actual utility function that it maximizes is hard-coded. By the setup of the scenario it's not possible for it to behave according to some other utility function, because its true evaluation function returns a lower value for doing so. Whether some imaginary modified copies behave in some other way is irrelevant.

JBlack · 6d

GDP is a rather poor measure of wealth, and was never intended to be one: it measures something related to productivity. Since its inception it has never been a stable metric, as standards for how it is defined have changed radically over time in response to obvious flaws for one or another of its many applications. There is widespread and substantial disagreement about what it should measure and for which purposes it is a suitable metric.

It is empirically moderately well correlated with some sort of aggregate economic power of a state and, when divided by population, with some sort of standard of living for its population. As per Goodhart's Law, both correlations weakened once the metric became a target. So the question rests on shaky foundations right from the beginning.

More definite questions, such as the price of food and agricultural production, don't really have anything to do with GDP or a virtual-reality economy at all. Rather, a large fraction of the final food price goes to processing, logistics, finance, and other services, not to primary agricultural production. The fraction of the price paid by food consumers that goes to agricultural producers is often less than 20%.
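As a rough sketch of the arithmetic behind that farm-share point (the 20% share is the figure above; the doubling scenario and function name are illustrative assumptions):

```python
# If producers receive a given share of the final food price, then a
# rise in farm-gate prices moves the retail price by at most
# share * (rise). The 0.2 share comes from the text above; the
# doubling scenario and the function name are illustrative.
def retail_price_change(farm_share, farm_price_multiplier):
    """Fractional retail price change when farm-gate prices are multiplied."""
    return farm_share * (farm_price_multiplier - 1)

# Doubling farm-gate prices with a 20% farm share: at most a 20% retail rise.
print(retail_price_change(0.2, 2.0))  # 0.2
```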

JBlack · 6d

> It makes sense to one-box ONLY if you calculate EV in a way that assigns a significant probability to causality violation.

It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern. That is, that you can "just do it" without it being possible for Omega to have predicted that you will "just do it" any better than chance. Unfortunately this violates the conditions of the scenario (and everyday reality).
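A minimal sketch of why the predictor need not be anywhere near perfect for this to matter, assuming the standard $1,000,000 / $1,000 Newcomb payoffs (the function names are mine):

```python
# Expected payoffs as a function of the predictor's accuracy, i.e. the
# probability that Omega's prediction matches your actual choice.
# Amounts are the standard Newcomb values.
def ev_one_box(accuracy):
    # The opaque box holds $1,000,000 iff Omega predicted one-boxing.
    return accuracy * 1_000_000

def ev_two_box(accuracy):
    # $1,000 for sure, plus the opaque million iff Omega mispredicted.
    return 1_000 + (1 - accuracy) * 1_000_000

# One-boxing has the higher EV for any accuracy above ~50.05%:
threshold = 1_001_000 / 2_000_000
print(threshold)  # 0.5005
```

So under this toy model, two-boxing only wins if Omega is barely better than a coin flip; no causality violation is needed for one-boxing to come out ahead.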

JBlack · 7d

It seems to me that the problem in the counterlogical mugging isn't about how much computation is required for getting the answer. It's about whether you trust Omega to have not done the computation beforehand, and whether you believe they actually would have paid you, no matter how hard or easy the computation is. Next to that, all the other discussion in that section seems irrelevant.

JBlack · 7d

Oh, sure. I was wondering about the reverse question: is there something that doesn't really qualify as torture, but where subjecting a billion people to it is worse than subjecting one person to torture?

I'm also interested in how this forms some sort of "layered" discontinuous scale. If it were continuous, then you could form a chain of relations of the form "10 people suffering A is as bad as 1 person suffering B", "10 people suffering B is as bad as 1 person suffering C", and so on to span the entire spectrum.

Then it would take some additional justification for saying that 100 people suffering A is not as bad as 1 person suffering C, 1000 A vs 1 D, and so on.
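The chaining argument can be sketched numerically, assuming a uniform tradeoff ratio of 10 between adjacent severity levels (the ratio and function name are illustrative):

```python
# Continuity chain: assume a uniform tradeoff ratio r, meaning r people
# suffering at severity level k is as bad as 1 person at level k+1.
# Chaining the relation across n levels then equates r**n people at the
# mildest level with 1 person n levels up.
def equivalent_count(ratio, levels_apart):
    """People at the mildest level equivalent to one person `levels_apart` up."""
    return ratio ** levels_apart

print(equivalent_count(10, 2))  # 100 people at A vs 1 person at C
print(equivalent_count(10, 3))  # 1000 people at A vs 1 person at D
```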

JBlack · 8d

Is there some level of discomfort short of extreme torture for a billion to suffer where the balance shifts?

JBlack · 10d

It makes sense to very legibly one-box even if Omega is a very far from perfect predictor. Make sure that Omega has lots of reliable information that predicts that you will one-box.

Then actually one-box, because you don't know what information Omega has about you that you aren't aware of. Successfully bamboozling Omega gets you an extra $1000, while unsuccessfully trying to bamboozle Omega loses you $999,000. If you can't be 99.9% sure that you will succeed then it's not worth trying.
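The break-even arithmetic behind that 99.9% figure can be sketched as follows, assuming the standard Newcomb payoffs (the constant and function names are mine):

```python
# Standard Newcomb payoffs (assumed): $1,000,000 in the opaque box if
# Omega predicts one-boxing, plus a transparent box always holding $1,000.
ONE_BOX_PAYOFF = 1_000_000   # Omega predicted one-box; you one-box
BAMBOOZLE_WIN = 1_001_000    # Omega predicted one-box; you two-box
BAMBOOZLE_LOSS = 1_000       # Omega predicted two-box; you two-box

def ev_two_box(p_fool):
    """Expected value of two-boxing, given probability p_fool of fooling Omega."""
    return p_fool * BAMBOOZLE_WIN + (1 - p_fool) * BAMBOOZLE_LOSS

# Break-even probability where two-boxing matches just one-boxing:
p_break_even = (ONE_BOX_PAYOFF - BAMBOOZLE_LOSS) / (BAMBOOZLE_WIN - BAMBOOZLE_LOSS)
print(p_break_even)  # 0.999
```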

JBlack · 14d

Almost.

The argument doesn't rule out substance dualism, in which consciousness may not be governed by physical laws, but in which it is at least causally connected to the physical processes of writing and talking and neural activity correlated with thinking about consciousness. It's only an argument against epiphenomenalism and related hypotheses in which the behaviour or existence of consciousness has no causal influence on the physical universe.

JBlack · 15d

I don't think this was a statement about whether it's possible in principle, but about whether it's actually feasible in practice. I'm not aware of any conlang, whether created before the cutoff date or not, with a training corpus large enough for an LLM to be trained on it to the same extent as on the major natural languages.

Esperanto is certainly the most widespread conlang, but (1) it is very strongly related to European languages, (2) it well predates the cutoff date for any LLM, (3) all training corpora of which I am aware contain a great many references to other languages and their cross-translations, and (4) the largest corpora are still less than 0.1% of the size of those available for most common natural languages.
