All of Peter Gerdes's Comments + Replies

Challenge: know everything that the best go bot knows about go

This is an interesting direction to explore, but as written I don't have any idea what you mean by understanding the go bot, and I fear that figuring that out would itself require answering more than you want to ask.

For instance, what if I just memorize the source code? I could slowly apply each step on paper, and since the adversarial training process has no training data or human expert input, if I know the rules of go I could, Chinese-room style, fully replicate the best go bot using my knowledge given enough time.

But if that doesn't count and you don't just mean be better th... (read more)

Conceptual engineering: the revolution in philosophy you've never heard of

At a conceptual level I'm completely on board. At a practical level I fear a disaster. Right now you at least need to find a word which you can claim to be analyzing, and that requirement encourages a certain degree of contact and disagreement, even though a hard subject like philosophy should really have five specific rebuttal papers (the kind journals won't publish) for each positive proposal rather than the reverse, as is the case now.

The problem with conceptual engineering for philosophy is that philosophers aren't really going to start going out and doin... (read more)

Comment on decision theory

I'd argue that this argument doesn't work, because the places where CDT, EDT, or some new system diverge from one another lie outside the set of situations in which decision theory is a useful way to think about the problems. It is always possible simply to take the outside perspective and merely describe facts of the form: under such-and-such situations, algorithm A performs better than algorithm B.

What makes decision theory useful is that it implicitly accommodates the very common (for humans) situation in which the world doesn't depend in noticeable ways

... (read more)
Boltzmann brain decision theory

Seems like phrasing it in terms of decision theory only makes the situation more confusing. Why not just state the results in terms of: assuming there are a large number of copies of some algorithm A, there is more utility if A has such-and-such properties?

This works more generally. Instead of burying ourselves in the confusions of decision theory we can simply state results about what kind of outcomes various algorithms give rise to under various conditions.

> assuming there are a large number of copies of some algorithm A then there is more utility if A has such and such properties.

This is only relevant if it results in a change in algorithm A. E.g., a causal decision theory agent can know that if it were a UDT agent, it would have more money in the Newcomb problem, but it won't change itself because of this (if Omega decided before the agent existed).
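The outside-view framing discussed above can be made concrete with a toy tally of what each algorithm actually walks away with in Newcomb's problem. This is only a sketch: the payoff amounts and a perfectly accurate Omega are assumptions, not anything specified in the comments.

```python
# Outside-view comparison of two decision algorithms on Newcomb's
# problem: tally what each algorithm ends up with, without taking any
# stance on which "choice" was rational from the inside.
# Assumptions: $1,000 in the transparent box, $1,000,000 placed in the
# opaque box iff Omega predicts one-boxing, and a perfect predictor.

def payoff(one_boxes: bool) -> int:
    opaque = 1_000_000 if one_boxes else 0   # Omega's prediction matches the choice
    transparent = 1_000
    return opaque if one_boxes else opaque + transparent

print("one-boxer gets:", payoff(True))    # 1000000
print("two-boxer gets:", payoff(False))   # 1000
```

Stated purely as a fact about outcomes: under these conditions the one-boxing algorithm ends up with more money, which is exactly the kind of description-level claim the comment suggests making instead of a decision-theoretic one.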
Are you in a Boltzmann simulation?

I think we need to be careful here about what constitutes a computation which might give rise to an experience. For instance, suppose a chunk of brain pops into existence but with all momentum vectors flipped (for non-nuclear processes we can assume temporal symmetry), so the brain is running in reverse.

Seems right to say that could just as easily give rise to the experience of being a thinking human brain. After all, we think the arrow of time is determined by the direction of increasing entropy, not by some weird fact that only computations which proceed in one

... (read more)
I tend to see this as an issue of decision theory, not probability theory. So if causality doesn't work in a way we can understand, the situation is irrelevant (note that some backwards-running brains will still follow an understandable causality from within themselves, so some backwards-running brains are decision-theory relevant).
Are you in a Boltzmann simulation?

You are making some unjustified assumptions about the way computations can be embedded in a physical process. In particular we shouldn't presume that the only way to instantiate a computation giving rise to an experience is via the forward evolution of time. See comment below.

Are you in a Boltzmann simulation?

That won't fix the issue. Just redo the analysis at whatever size is able to do merely a few seconds of brain simulation.

It probably depends on how the mass and time duration of the fluctuation trade off against each other. For quantum fluctuations which return to nothingness this relation is defined by the uncertainty principle, and for any fluctuation with significant mass its time of existence would be a minuscule fraction of a second, which would be enough only for one static observer-moment. But if we imagine a computer very efficient at calculation, which could perform many calculations in the time allowed for its existence by the uncertainty principle, it should dominate by number of observer-moments.
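The "minuscule fraction of a second" claim can be sanity-checked with a rough back-of-the-envelope estimate, assuming the energy-time uncertainty relation Δt ≈ ħ/(2ΔE) with ΔE taken as the rest energy mc² of a brain-sized mass of about 1.4 kg (both the relation's applicability here and the mass figure are assumptions for illustration only).

```python
# Rough estimate of the uncertainty-principle lifetime of a
# brain-mass quantum fluctuation. Illustrative numbers only.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
M_BRAIN = 1.4           # assumed brain mass, kg

rest_energy = M_BRAIN * C ** 2        # Delta-E = m * c^2, joules
lifetime = HBAR / (2 * rest_energy)   # Delta-t ~ hbar / (2 * Delta-E), seconds

print(f"rest energy: {rest_energy:.3e} J")
print(f"allowed lifetime: {lifetime:.3e} s")
```

The result comes out around 10^-52 seconds, vastly shorter than the timescale of any neural computation, which is consistent with the comment's point that a massive fluctuation gets essentially one static observer-moment.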
Bayesian Probability is for things that are Space-like Separated from You

Of course, no actual individual or program is a pure Bayesian; pure Bayesian updating presumes logical omniscience, after all. Rather, when we talk about Bayesian reasoning we idealize individuals as abstract agents whose choices (potentially none) have a certain probabilistic effect on the world, i.e., basically we idealize the situation as a one-person game.

You basically raise the question of what happens in Newcomb-like cases, where we allow the agent's internal deliberative state to affect outcomes independent of the explicit choices made. But whole m... (read more)

Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem.

While I agree with your conclusion, in some sense you are using the wrong notion of probability. The people who feel there is a right answer to the Sleeping Beauty case aren't talking about the kind of formally defined count over situations in some formal model. If that's the only notion of probability, then you can't even talk about the probabilities of different physical theories being true.

The people who think there is a sleeping beauty paradox believe there is something like the rational credence one should have in a proposition given y... (read more)
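The "formally defined count over situations" notion referred to above can be made concrete with a small simulation: counted per experiment, the heads frequency comes out near 1/2, while counted per awakening it comes out near 1/3. This is only a sketch of the counting point, assuming the standard protocol (one awakening on heads, two on tails); it deliberately says nothing about which count corresponds to rational credence.

```python
import random

# Monte Carlo sketch of the Sleeping Beauty setup: fair coin,
# heads -> one awakening, tails -> two awakenings.
random.seed(0)
TRIALS = 100_000

heads_experiments = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(TRIALS):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    total_awakenings += awakenings
    if heads:
        heads_experiments += 1
        heads_awakenings += 1   # every heads awakening is a heads experiment

per_experiment = heads_experiments / TRIALS          # ~ 1/2
per_awakening = heads_awakenings / total_awakenings  # ~ 1/3
print("heads frequency per experiment:", per_experiment)
print("heads frequency per awakening:", per_awakening)
```

Both frequencies are well-defined facts about the model; the dispute the comment points at is about which, if either, deserves the name "Beauty's credence".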

Expected Pain Parameters

Also, I think there is a fair bit of tension between your suggestion that we should take advice from others about how much things should hurt and the idea that we should use the degree of pain we feel as a way to identify abusive or harmful communities and relationships. The more we allow advice from those communities to determine whether we listen to those pain signals, the less useful the signals are to us.

What I mean is more like "if someone is suggesting that you do something painful, they should present you with a model of why and how that pain is okay". This doesn't rule out misappropriation - I'm sure cult leaders and certain brands of interpersonal abusers do it handily, especially if they're weaponizing guilt - but it's at least robust against generic, opaque commands to "suck it up", and if you go in with that expectation you'll have an opportunity to notice something is wrong if someone tells you that you shouldn't be in pain and your pain is invalid (they don't have a model that describes the thing you are in fact feeling, so they don't have a good model of the situation as a whole).
Expected Pain Parameters

It's hard to identify and convey degrees of pain, which I suspect was part of your problem communicating with the nurses. Ultimately, with physical pain we can often muddle through with something like the 10-point pain scale doctors use, as most of us have experienced roughly equivalent truly intense pain and there isn't that much interpersonal variation in how unpleasant we find different kinds of physical pain.

However, with emotional pain it's almost impossible to convey how uncomfortable something should be to other people. For instance, w... (read more)

What makes you think that different people experience the same kind of pain the same way? One thing that produces a kind of pain is to go swimming in a river in spring. Depending on how your body reacts when you touch the water, that experience can be either healthy or unhealthy, and I don't think that's a simple matter of degrees of pain.

When Taber Shadburne teaches Radical Honesty he often makes the point that an essential part of Radical Honesty is to become a connoisseur of pain who can tell different kinds of emotional pain apart the way the proverbial wine connoisseur can tell apart the taste of different wines. A wine connoisseur usually doesn't simply judge a wine on a one-dimensional scale, and the one-dimensional nature implied by "how much apologizing will hurt" isn't a useful way to think about the emotions either.

Taber also used a Yoga metaphor where experienced Yoga practitioners need to be able to tell apart different kinds of pain. The pain of stretching a muscle is an essential part of Yoga and welcome, while the pain from overturning an ankle is bad and has to be avoided. I talked about this with one LW'ler who does Yoga, and the person said that they are able to tell seven different kinds of pain apart in their Yoga practice.
It's not so much "how much will this hurt" as it is "how much should this hurt". In other words, "how much does it have to hurt before I reconsider". In the running case, for example, you can't know before their run if they'll experience mild muscle soreness or if they'll step on a nail. You want them to know that if it feels like they've stepped on the nail, this isn't what you're talking about, and they shouldn't try to run through that. There is a distinction between "this is how intense the sensations might be" and "this is the thing they signify, and how bad it is".

A lot of the subjective experience of "pain" has to do with the meaning attached to it, and the reaction to that meaning. In jiu jitsu, for example, beginners are often not taught heel hooks in part because the sensation of a knee ligament about to rupture doesn't always stand out as a big deal, and so people will sometimes hurt themselves because they don't notice the warning signs. At the same time, you can get people screaming in pain once their foot is turned the wrong way, because all of a sudden the meaning has changed and they no longer feel "okay". Other people can have the same thing happen to them and just kinda look at it like "oops, I screwed that up" because they simply aren't overwhelmed by the idea that their ligaments just tore and their limb isn't pointing the right way anymore.

When you're talking to someone who is in pain (or needs to do something which will be painful), there are two things you want to communicate. One is that it's okay, despite whatever the bad thing is that happened, and the other is what the bad thing is. When you can do those two things, their entire experience can change dramatically. The same principles apply to emotional concerns. For example, if someone is going to feel embarrassed by something to a degree which seems appropriate and okay, then all you are going to need to communicate is "Yes, this is g
Meta-Honesty: Firming Up Honesty Around Its Edge-Cases

If you actually follow the advice about glomarization, it is no longer improbable that you will be interrogated by someone who has read the rationalist literature on the subject and thought through the consequences. Investigators do their homework, and being committed enough to glomarize frequently enough to do the intended work is a feature that will stick out like a sore thumb when your associates are interviewed, immediately sending the investigator out to read the literature.

Now, maybe most investigators aren't anywhere near this thorough, but if you are facing an investigator who doesn't even bother looking into your normal behavior, your glomarization is irrelevant anyway.