
JBlack 20

> More specifically, this is the experiment where awakenings on Monday and Tuesday are mutually exclusive during one trial, such as No-Coin-Toss or Single-Awakening.

No, I was specifically referring to the Sleeping Beauty experiment. Re-read my comment. Or not. At this point you appear, from my point of view, to be deliberately missing the point, and not even commenting on the parts where I make it. There is no need to reply to this comment, as I probably won't participate in this discussion any further.

JBlack 20

> Not just any set.

Almost any set: only the empty set is excluded. The identities of the elements themselves are irrelevant to the mathematical structure. Any further restrictions are not part of the mathematical definition of a probability space, but of whatever particular application you may have in mind.

> If elementary event {A} has P(A) = 0, then we can simply not include outcome A in the sample space, for simplicity's sake.

In some cases this is reasonable, but in others it is impossible. For example, when defining a continuous probability distribution, every individual point has measure zero, so you can't eliminate all sample-space elements of measure zero or you will be left with the empty set.
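A minimal sketch of the point, assuming a uniform distribution on [0, 1] (my choice of example, not from the comment): every interval has positive probability equal to its length, yet any single point, as the limit of shrinking intervals, has probability zero.

```python
from fractions import Fraction

def interval_prob(a, b):
    """P(a <= X <= b) for X ~ Uniform(0, 1): the length of the overlap with [0, 1]."""
    return max(Fraction(0), min(Fraction(1), b) - max(Fraction(0), a))

# Probability of the single point x = 1/2 is the limit of shrinking intervals around it:
x = Fraction(1, 2)
widths = [Fraction(1, 10**k) for k in range(1, 6)]
probs = [interval_prob(x - w / 2, x + w / 2) for w in widths]
print(probs)  # each probability equals the interval width, shrinking toward 0
```

Every point behaves this way, so deleting all measure-zero elements would delete the entire sample space.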

> There is a potential source of confusion in the "credence" category. Either you mean it as a synonym for probability, and then it follows all the properties of probability, including the fact that it can only measure formally defined events from the event space, which have a stable truth value during an iteration of the probability experiment.

It is a synonym for probability in the sense that it is a mathematical probability: that is, a measure over a sigma-algebra for which the axioms of a probability space are satisfied. I use a different term here to denote the application of the mathematical concept to a particular real-world purpose. Besides which, the Sleeping Beauty problem explicitly uses the word.

I also don't quite know what you mean by the phrase "stable truth value". As defined, a universe state either satisfies or does not satisfy a proposition. If you're referring to propositions that may vary over space or time, then when modelling a given situation you have two choices: either restrict the universe states in your set to locations or time regions over which all selected propositions have a definite truth value, or restrict the propositions to those that have a definite truth value over the selected universe states. Either way works.

> Semantic statement "Today is Monday" is not a well-defined event in the Sleeping Beauty problem.

Of course it is. I described the structure under which it is, and you can verify that it does in fact satisfy the axioms of a probability space. As you're looking for a crux, this is probably it.

Universe states can be distinguished by time information, and in problems like this where time is part of the world-model, they should be. The mathematical structure of a probability space has nothing to do with it, as the mathematical formalism doesn't care what the elements of a sample space are.

Otherwise you can't model even a coin-free variant of the Sleeping Beauty problem in which Beauty is always awoken twice. If the problem asks "what should be Beauty's credence that it is Monday?", then you can't even model the question without distinguishing universe states by time.

JBlack 20

What loop? They are all various viewpoints on the nature of reality, not steps you have to go through in some order. (1) is a more useful viewpoint than the rest: you can adopt it for 99%+ of everything you think about, and treat the rest as ideas to toy with rather than live by.

I don't know about you (assuming you even exist in any sense other than my perception of words on a screen), but to me a model in which an external reality exists beyond what I can perceive is amazingly useful for essentially everything. Even if it might not be actually true, it explains my perceptions to a degree that would seem incredible if it were not at least partly true. Even most of the apparent exceptions in (2) are well explained by it once your physical model includes much of how perception works.

So while (4) holds, it's to such a powerful degree that (2) to (6) are essentially identical to (1).

JBlack 21

Probabilities are measures on a sigma-algebra of subsets of some set, obeying the usual mathematical axioms for measures together with the requirement that the measure for the whole set is 1.

Applying this structure to credence reasoning, the elements of the sample space correspond to relevant states of the universe, the elements of the sigma-algebra correspond to relevant propositions about those states, and the measure (usually called credence for this application) corresponds to a degree of rational belief in the associated propositions. This is a standard probability space structure.
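The structure described above can be sketched concretely for a finite case. The state labels and credence weights below are hypothetical illustrations of mine, not values the comment commits to; the point is only that states, the power-set sigma-algebra, and an additive measure summing to 1 form a standard probability space.

```python
from itertools import chain, combinations

# Hypothetical universe states and credence weights, for illustration only.
states = ["H_Mon", "T_Mon", "T_Tue"]
credence = {"H_Mon": 1/3, "T_Mon": 1/3, "T_Tue": 1/3}

def powerset(xs):
    """All subsets of xs: the sigma-algebra over a finite sample space."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def P(event):
    """Measure of an event: the summed weights of the states it contains."""
    return sum(credence[s] for s in event)

sigma_algebra = powerset(states)
assert abs(P(frozenset(states)) - 1) < 1e-12          # P(whole space) = 1
a, b = frozenset({"H_Mon"}), frozenset({"T_Mon", "T_Tue"})
assert abs(P(a | b) - (P(a) + P(b))) < 1e-12          # additivity on disjoint events
```

Propositions such as "it is Monday" are then just events: subsets of the states in which the proposition holds.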

In the Sleeping Beauty problem, the participant is obviously uncertain about both what the coin flip was and which day it is. The questions about the coin flip and day are entangled by design, so a sample space that smears whole timelines into one element is inadequate to represent the structure of the uncertainty.

For example, one of the relevant states of the universe may be "the Sleeping Beauty experiment is going on in which the coin flip was Heads and it is Monday morning and Sleeping Beauty is awake and has just been asked her credence for Heads and not answered yet". One of the measurable propositions (i.e. a proposition for which Sleeping Beauty may have some rational credence) may be "it is Monday", which includes multiple states of the universe, including the previous example.

Within the space of relevant states of the Sleeping Beauty experiment, the proposition "it is Monday xor it is Tuesday" always holds: there are no relevant states where it is neither Monday nor Tuesday, and no relevant states in which it is both Monday and Tuesday. So P(Monday xor Tuesday) = 1, regardless of what values P(Monday) or P(Tuesday) might take.
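The xor claim can be checked mechanically. Using hypothetical state labels of mine (the comment doesn't fix any), "Monday xor Tuesday" is the symmetric difference of the two events, which here covers every state, so its measure is 1 under any assignment of weights:

```python
# Hypothetical state labels: coin result + day, as in the example state above.
states = ["H_Mon", "T_Mon", "T_Tue"]
monday = {s for s in states if s.endswith("Mon")}
tuesday = {s for s in states if s.endswith("Tue")}
# xor = in exactly one of the two events: the symmetric difference.
xor_event = (monday | tuesday) - (monday & tuesday)

def p(event, weights):
    return sum(weights[s] for s in event)

# Two illustrative weightings (halfer-style and thirder-style) both give 1:
halfer = {"H_Mon": 1/2, "T_Mon": 1/4, "T_Tue": 1/4}
thirder = {"H_Mon": 1/3, "T_Mon": 1/3, "T_Tue": 1/3}
for weights in (halfer, thirder):
    assert abs(p(xor_event, weights) - 1) < 1e-12
```

Since no state is both Monday and Tuesday and none is neither, the xor event is the whole sample space, independent of P(Monday) and P(Tuesday) individually.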

JBlack 40

No, introducing the concept of "indexical sample space" does not capture the thirder position, nor language. You do not need to introduce a new type of space, with new definitions and axioms. The notion of credence (as defined in the Sleeping Beauty problem) already uses standard mathematical probability space definitions and axioms.

JBlack 51

> At the same time, current models seem very unlikely to be x-risky (e.g. they're still very bad at passing dangerous capabilities evals), which is another reason to think pausing now would be premature.

The relevant criterion is not whether the current models are likely to be x-risky (it's obviously far too late if they are!), but whether the next generation of models, together with all the future frameworks they're likely to be embedded in, has more than an insignificant chance of being x-risky.

Given that the next generations are planned to involve at least one order of magnitude more computing power in training (and are already in progress!) and that returns on scaling don't seem to be slowing, I think the total chance of x-risk from those is not insignificant.

JBlack 2-2

It definitely should not move by anything like a Brownian motion process. At the very least it should be bursty and updates should be expected to be very non-uniform in magnitude.

In practice, you should not consciously update very often since almost all updates will be of insignificant magnitude on near-irrelevant information. I expect that much of the credence weight turns on unknown unknowns, which can't really be updated on at all until something turns them into (at least) known unknowns.

But sure, if you were a superintelligence with practically unbounded rationality then you might in principle update very frequently.

JBlack 31

No, I don't think it would be "what the fuck" surprising if an emulation of a human brain was not conscious. I am inclined to expect that it would be conscious, but we know far too little about consciousness for it to radically upset my world-view about it.

Each of the transformation steps described in the post somewhat reduces my expectation that the result would be conscious. Not to zero, but each step introduces the possibility that something important is lost, something that may eliminate, reduce, or significantly transform any subjective experience the result may have. It seems quite plausible that even if the emulated human starting point was fully conscious in every sense in which we use the term for biological humans, the final result may be something we would or should say is either not conscious in any meaningful sense, or at least sufficiently different that "as conscious as human emulations" no longer applies.

I do agree with the weak conclusion as stated in the title: they could be as conscious as human emulations. But I think the argument in the body of the post is trying to prove more than that, and doesn't really get there.

JBlack 20

Ordinary numerals in English are already big-endian: that is, the digits with largest ("big") positional value are first in reading order. The term (with this meaning) is most commonly applied to computer representation of numbers, having been borrowed from the book Gulliver's Travels in which part of the setting involves bitter societal conflict about which end of an egg one should break in order to start eating it.
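A small sketch of the parallel (my illustration, not from the comment): English numerals put the biggest place value first, and Python exposes the same choice for byte order. 258 is 0x0102, so its two-byte representations differ only in which end comes first.

```python
n = 258  # written "258": the big end (the hundreds digit) comes first in English

big = n.to_bytes(2, byteorder="big")        # most significant byte first
little = n.to_bytes(2, byteorder="little")  # least significant byte first
print(big, little)     # b'\x01\x02' vs b'\x02\x01'

# Same value either way, as long as reader and writer agree on the convention:
assert int.from_bytes(big, "big") == int.from_bytes(little, "little") == n
```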

JBlack 20

I'm pretty sure that I would study for fun in the posthuman utopia, because I both value and enjoy studying, and a utopia that can't carry those values through seems like a pretty shallow imitation of a utopia.

There won't be a local benevolent god to put that wisdom into my head, because I will be a local benevolent god with more knowledge than most others around. I'll be studying things that have only recently been explored, or that nobody has yet discovered. Otherwise again, what sort of shallow imitation of a posthuman utopia is this?
