JBlack

What loop? They are all various viewpoints on the nature of reality, not steps you have to go through in some order or anything. (1) is a more useful viewpoint than the rest, and you can adopt that one for 99%+ of everything you think about and only care about the rest as basically ideas to toy with rather than live by.

I don't know about you (assuming you even exist in any sense other than my perception of words on a screen), but to me a model in which an external reality exists beyond what I can perceive is amazingly useful for essentially everything. Even if it might not be actually true, it explains my perceptions to a degree that would seem incredible if it were not at least partly true. Even most of the apparent exceptions in (2) are well explained by it once your physical model includes much of how perception works.

So while (4) holds, it's to such a powerful degree that (2) to (6) are essentially identical to (1).

JBlack

Probabilities are measures on a sigma-algebra of subsets of some set, obeying the usual mathematical axioms for measures together with the requirement that the measure for the whole set is 1.

Applying this structure to credence reasoning, the elements of the sample space correspond to relevant states of the universe, the elements of the sigma-algebra correspond to relevant propositions about those states, and the measure (usually called credence for this application) corresponds to a degree of rational belief in the associated propositions. This is a standard probability space structure.

In the Sleeping Beauty problem, the participant is obviously uncertain about both what the coin flip was and which day it is. The questions about the coin flip and day are entangled by design, so a sample space that smears whole timelines into one element is inadequate to represent the structure of the uncertainty.

For example, one of the relevant states of the universe may be "the Sleeping Beauty experiment is going on in which the coin flip was Heads and it is Monday morning and Sleeping Beauty is awake and has just been asked her credence for Heads and not answered yet". One of the measurable propositions (i.e. proposition for which Sleeping Beauty may have some rational credence) may be "it is Monday" which includes multiple states of the universe including the previous example.

Within the space of relevant states of the Sleeping Beauty experiment, the proposition "it is Monday xor it is Tuesday" always holds: there are no relevant states where it is neither Monday nor Tuesday, and no relevant states in which it is both Monday and Tuesday. So P(Monday xor Tuesday) = 1, regardless of what values P(Monday) or P(Tuesday) might take.
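The point above can be sketched concretely. This is a minimal illustration, not part of the original comment: it assumes the three usual awakening-states of the Sleeping Beauty problem as the sample space, and uses the standard "halfer" and "thirder" credence assignments purely as examples of valid measures.

```python
# Hypothetical sample space of relevant awakening-states: (coin, day).
# By the experiment's design there is no (Heads, Tuesday) awakening.
states = [("Heads", "Monday"), ("Tails", "Monday"), ("Tails", "Tuesday")]

def prob(measure, predicate):
    """Measure of the proposition {s : predicate(s)} -- a sum over states."""
    return sum(p for s, p in measure.items() if predicate(s))

# Two illustrative credence assignments; each is a valid probability
# measure (non-negative, summing to 1 over the whole sample space).
halfer = {("Heads", "Monday"): 1/2,
          ("Tails", "Monday"): 1/4,
          ("Tails", "Tuesday"): 1/4}
thirder = {s: 1/3 for s in states}

for measure in (halfer, thirder):
    p_xor = prob(measure, lambda s: (s[1] == "Monday") != (s[1] == "Tuesday"))
    # P(Monday xor Tuesday) = 1 regardless of P(Monday) or P(Tuesday),
    # because every state in the space has exactly one of the two days.
    assert abs(p_xor - 1.0) < 1e-9
```

The key structural fact is that "it is Monday" is just a measurable subset of states, so its probability can differ between measures while P(Monday xor Tuesday) = 1 holds for any measure on this space.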

JBlack

No, introducing the concept of an "indexical sample space" does not capture the thirder position, nor its language. You do not need to introduce a new type of space, with new definitions and axioms. The notion of credence (as defined in the Sleeping Beauty problem) already uses standard mathematical probability space definitions and axioms.

JBlack

At the same time, current models seem very unlikely to be x-risky (e.g. they're still very bad at passing dangerous capabilities evals), which is another reason to think pausing now would be premature.

The relevant criterion is not whether the current models are likely to be x-risky (it's obviously far too late if they are!), but whether the next generation of models has more than an insignificant chance of being x-risky, together with all the future frameworks those models are likely to be embedded into.

Given that the next generations are planned to involve at least one order of magnitude more computing power in training (and are already in progress!) and that returns on scaling don't seem to be slowing, I think the total chance of x-risk from those is not insignificant.

JBlack

It definitely should not move by anything like a Brownian motion process. At the very least it should be bursty and updates should be expected to be very non-uniform in magnitude.

In practice, you should not consciously update very often since almost all updates will be of insignificant magnitude on near-irrelevant information. I expect that much of the credence weight turns on unknown unknowns, which can't really be updated on at all until something turns them into (at least) known unknowns.

But sure, if you were a superintelligence with practically unbounded rationality then you might in principle update very frequently.
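The contrast between Brownian and bursty updating can be simulated. This is a toy sketch under assumed parameters (a 1% chance per step of highly informative evidence, negligible information otherwise); the jump sizes and rates are illustrative, not derived from anything.

```python
import random

random.seed(0)

def bursty_credence_path(p0=0.5, steps=1000):
    """A credence trajectory where update magnitudes are very non-uniform:
    most steps carry near-irrelevant information, a rare few carry a lot."""
    p = p0
    path = [p]
    for _ in range(steps):
        if random.random() < 0.01:
            # Rare, highly informative event: a large, sign-symmetric update.
            jump = random.choice([-1, 1]) * random.uniform(0.1, 0.3)
        else:
            # Near-irrelevant information: an insignificant update.
            jump = random.choice([-1, 1]) * 1e-4
        p = min(max(p + jump, 0.0), 1.0)  # keep the credence in [0, 1]
        path.append(p)
    return path

path = bursty_credence_path()
jumps = [abs(b - a) for a, b in zip(path, path[1:])]
# The magnitude distribution is bimodal: a handful of large jumps among
# hundreds of negligible ones -- unlike Brownian motion, whose increments
# over equal intervals are identically distributed.
```

Under these assumptions, consciously tracking every step is pointless: almost all the movement in the trajectory comes from the rare large updates.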

JBlack

No, I don't think it would be "what the fuck" surprising if an emulation of a human brain was not conscious. I am inclined to expect that it would be conscious, but we know far too little about consciousness for it to radically upset my world-view about it.

Each of the transformation steps described in the post somewhat reduces my expectation that the result would be conscious. Not to zero, but each step introduces the possibility that something important may be lost, which could eliminate, reduce, or significantly transform any subjective experience the result may have. It seems quite plausible that even if the emulated human starting point were fully conscious in every sense in which we use the term for biological humans, the final result may be something we would or should say is either not conscious in any meaningful sense, or at least sufficiently different that "as conscious as human emulations" no longer applies.

I do agree with the weak conclusion as stated in the title - they could be as conscious as human emulations - but I think the argument in the body of the post is trying to prove more than that, and doesn't really get there.

JBlack

Ordinary numerals in English are already big-endian: that is, the digits with the largest ("big") positional value come first in reading order. The term (with this meaning) is most commonly applied to the computer representation of numbers, having been borrowed from the book Gulliver's Travels, in which part of the setting involves bitter societal conflict over which end of an egg one should break in order to start eating it.
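The parallel between digit order and byte order can be shown in a few lines. This is an illustrative aside using Python's standard `struct` module, where `>` and `<` select big- and little-endian byte order.

```python
import struct

# The decimal numeral 259 is written big-endian: "2", the digit with the
# biggest positional value (hundreds), comes first in reading order.
assert str(259)[0] == "2"

# The same convention names byte orders in computing. 259 = 0x0103:
big_endian = struct.pack(">H", 259)     # most significant byte first
little_endian = struct.pack("<H", 259)  # least significant byte first
assert big_endian == b"\x01\x03"
assert little_endian == b"\x03\x01"
```

So a big-endian machine stores multi-byte integers in the same order English writes their digits, while a little-endian machine reverses it.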

JBlack

I'm pretty sure that I would study for fun in the posthuman utopia, because I both value and enjoy studying and a utopia that can't carry those values through seems like a pretty shallow imitation of a utopia.

There won't be a local benevolent god to put that wisdom into my head, because I will be a local benevolent god with more knowledge than most others around. I'll be studying things that have only recently been explored, or that nobody has yet discovered. Otherwise again, what sort of shallow imitation of a posthuman utopia is this?

JBlack

Like almost all acausal scenarios, this seems to be privileging the hypothesis to an absurd degree.

Why should the Earth superintelligence care about you, but not about the 10^10^30 other causally independent ASIs that are latent in the hypothesis space, each capable of running enormous numbers of copies of the Earth ASI in various scenarios?

Even if that were resolved, why should the Earth ASI behave according to hypothetical other utility functions? Sure, the evidence is consistent with its being a copy running in a simulation with a different utility function, but the actual utility function it maximizes is hard-coded. By the setup of the scenario it's not possible for it to behave according to some other utility function, because its true evaluation function returns a lower value for doing that. Whether some imaginary modified copies behave in some other way is irrelevant.

JBlack

GDP is a rather poor measure of wealth, and was never intended to measure wealth but rather something related to productivity. Since its inception it has never been a stable metric: standards for how it is defined have changed radically over time in response to obvious flaws for any of its many applications. There is widespread and substantial disagreement about what it should measure and for which purposes it is a suitable metric.

It is empirically moderately well correlated with some sort of aggregate economic power of a state, and (when divided by population) with some sort of standard of living of its population. As per Goodhart's Law, both correlations weakened once the metric became a target. So the question rests on shaky foundations right from the beginning.

As for more definite questions such as the price of food and agricultural production, those don't really have much to do with GDP or a virtual-reality economy at all. A large fraction of the final price of food goes to processing, logistics, finance, and other services, not to primary agricultural production. The fraction of the price paid by food consumers that reaches agricultural producers is often less than 20%.
