Gurkenglas

I operate by Crocker's rules.

I won't deliberately, derisively spread something just because you tried to point out an infohazard.

Comments

SIA > SSA, part 4: In defense of the presumptuous philosopher

That said, I can think of an experiment that at least doesn't throw a compilation error. On a quantum computer, simulate different people in different timelines. If onlookers paying different amounts of attention to different timelines pumps amplitude into those timelines (and I see no reason it should), this is straightforwardly measurable. People aren't ontologically basic, so the same pumping should already warp the lesser calculations we do today.
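To make "straightforwardly measurable" concrete, here is a minimal classical sketch of the test, with a made-up two-branch model and made-up attention weights rather than an actual quantum experiment: sample outcomes under the Born rule and compare against an "attention-pumped" alternative prediction.

```python
# Toy sketch: would onlooker attention show up as a deviation from the Born rule?
# (Assumed two-branch model with made-up numbers, purely for illustration.)
import numpy as np

rng = np.random.default_rng(0)

amplitudes = np.array([0.6, 0.8j])             # two simulated timelines
born = np.abs(amplitudes) ** 2                 # Born rule: [0.36, 0.64]
born /= born.sum()                             # normalize away float error

attention = np.array([0.9, 0.1])               # hypothetical onlooker attention
pumped = born * attention
pumped /= pumped.sum()                         # the "amplitude pumping" hypothesis

samples = rng.choice(2, size=100_000, p=born)  # what physics actually gives us
observed = np.bincount(samples, minlength=2) / len(samples)

print("observed frequencies:", observed)
print("Born prediction:     ", born)
print("pumped prediction:   ", pumped)
# If the observed frequencies track the Born prediction and not the pumped one,
# attention is not pumping amplitude.
```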

SIA > SSA, part 4: In defense of the presumptuous philosopher

When people look for excuses to believe phi, they will be more likely to find excuses to believe phi than excuses to believe not-phi.

The thing that's important in a prediction model is that it is accurate. Therefore, no interventions. The protagonist is allowed to believe that he is the protagonist because that's what he believed in real life.

I'm not talking about the precision of our instruments, I say that you're measuring entirely the wrong thing. Suppose a box contains a cat and air. What has more probability, the cat or the air? That question makes no sense. Probabilities are about different ways the contents of the box might be configured.

SIA > SSA, part 4: In defense of the presumptuous philosopher

I didn't talk about the quality of the excuse; I was remarking on why you were looking for it, and people are more likely to find what they're looking for than the opposite. But on the object level: Do you mean that onlookers would intervene within their simulation to keep the protagonist safe? That sounds like a rather bad strategy for modeling important events correctly. But you probably mean that this particular fact about the future might be learnable by direct observation in the past... I regret to inform you that quantum physics does not place different amplitude on different objects on the same planet; amplitude is a property of world states. The proposed soul quantity has nothing to do with physics, it is a high-level fact about minds and what they're modelling. It is a quantity of a kind with "how much do I care about each human?" and "how likely do I think each future to be?".

SIA > SSA, part 4: In defense of the presumptuous philosopher

Sounds like you had already written the bottom line, wanting to prove your status to others, before you found the excuse that it's important for people to know not to mess with you.

We should not give more status to those who determine the fate of the world, because then it will be determined not by those who seek to optimize it but by those who seek status.

To measure soul you must pick some anthropic definition. If it's "who is most desperately-for-precision modeled by onlookers?", to measure it you acquire omnipotence, model all the onlookers, and check who they look at.

Covid 12/2: But Aside From That

That doesn't sound like less than 20 percent probability! Just in case, we could close all borders just ahead of the billions-of-infections peak for a month to check whether the omnicronomicon opens; then at least each country might only get swept by its own variant.

SIA > SSA, part 4: In defense of the presumptuous philosopher

Throw a fair coin a billion times. If you find yourself in the timeline where it comes up heads every time, that's evidence that it has more soul than the others.
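To put a number on "evidence", here is a back-of-envelope Bayes factor in my own framing, with a made-up weight and a small flip count so the arithmetic doesn't underflow:

```python
# Compare "all timelines weighted equally" against "the all-heads timeline
# carries extra anthropic weight w", given that you find yourself in it.
# (Toy numbers: n_flips and w are assumptions for illustration.)
from math import log2

n_flips = 30                   # a billion flips would underflow floats, so keep it small
n_timelines = 2 ** n_flips
w = 1000.0                     # hypothetical extra "soul" on the all-heads timeline

p_equal = 1 / n_timelines                  # P(I'm in the all-heads timeline | equal weights)
p_weighted = w / (w + n_timelines - 1)     # same, if that timeline is weighted w times as much

print(f"log2 Bayes factor for 'more soul': {log2(p_weighted) - log2(p_equal):.1f} bits")
# Roughly log2(w) bits of evidence, so finding yourself in the all-heads
# timeline does favor the hypothesis that it carries more weight.
```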

Does the Structure of an algorithm matter for AI Risk and/or consciousness?

Yes, architecture matters. We don't know how likely each architecture is to produce a rogue agent, but we have subjective expectations, and what a coincidence it would be if they were the same in each case. For example, if an architecture easily solves a given task, it needs to be scaled up less, and then I expect less opportunity for a mesa-optimizer to arise. Mixture of Experts is riskier if the chance that an expert of a given size will go rogue is astronomically close to neither 0 nor 1; it is less risky if that chance scales too fast with size. Of course, it's a shaky assumption that splitting a large network into experts will keep them from jointly forming a rogue agent. My object-level arguments might be wrong, but our risk mitigation strategies should not disregard the question of architecture entirely.
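To make the two regimes concrete, a toy calculation under assumed functional forms for how rogue-probability scales with size (the 5%, the exponent, and k=8 are all made up, and it treats the experts as independent, ignoring the split-network caveat above):

```python
# Risk of "at least one rogue" when one large model is split into k experts,
# under two assumed scalings of rogue-probability with parameter count.
def p_any_rogue(p_expert: float, k: int) -> float:
    """Probability that at least one of k independent experts goes rogue."""
    return 1 - (1 - p_expert) ** k

k = 8
big, expert = 1.0, 1.0 / k    # relative sizes

# Regime 1: rogue chance barely depends on size here ("close to neither 0 nor 1").
p_flat = lambda size: 0.05
print("flat scaling:  big model", p_flat(big),
      "  MoE", round(p_any_rogue(p_flat(expert), k), 3))     # MoE is riskier (~0.34)

# Regime 2: rogue chance falls off very steeply below some size.
p_steep = lambda size: 0.05 * size ** 6
print("steep scaling: big model", p_steep(big),
      "  MoE", f"{p_any_rogue(p_steep(expert), k):.1e}")     # MoE is safer (~1.5e-6)
```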

Biology-Inspired AGI Timelines: The Trick That Never Works

My answer to the exercise, thought through before reading the remainder of the post but written down after seeing others do the same:

There is more than one direction of higher entropy to take, not necessarily towards OpenPhil's distribution. Also, entropy is relative to a measure, and the default formula privileges the Lebesgue measure. Instead of calculating entropy from the probabilities for the buckets 2022, 2023, 2024, ..., why not calculate it for the buckets 3000-infinity, 2100-3000, 2025-2099, 2023-2024, ...?
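A small illustration of the bucket-dependence, using a toy forecast of my own rather than OpenPhil's numbers (I merge everything after 2100 into one bucket because the toy forecast has almost no mass there):

```python
# Entropy of the same toy AGI-timeline forecast under two bucketings.
# (The forecast itself is made up, purely to show that entropy depends on the buckets.)
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

years = np.arange(2023, 2123)                      # 2023..2122
per_year = 0.03 * 0.97 ** np.arange(len(years))    # geometrically decaying forecast
tail = 1 - per_year.sum()                          # mass on "after 2122"

fine = np.append(per_year, tail)                   # one bucket per year, plus the tail
coarse = [per_year[:2].sum(),                      # 2023-2024
          per_year[2:77].sum(),                    # 2025-2099
          per_year[77:].sum() + tail]              # 2100 onward

print("entropy, yearly buckets:", round(entropy_bits(fine), 2), "bits")
print("entropy, coarse buckets:", round(entropy_bits(coarse), 2), "bits")
# The numbers differ, and the distribution that maximizes one bucketing's
# entropy is far from maximizing the other's: entropy is relative to a measure.
```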

Covid 12/2: But Aside From That

If variants are spawned as a function of total cases, shouldn't we worry that the billions of infections this winter will spawn a dozen new variants that all sweep the globe at the same time and possibly kill everyone?

SIA > SSA, part 4: In defense of the presumptuous philosopher

Then I would not put them in an Earth environment, and would not saddle them with akrasia. This bias sounds more like an effect of focusing, for purposes of prediction, on the humans that end up determining the fate of the cosmos.
