Sniffnoy

I'm Harry Altman. I do strange sorts of math.

Sniffnoy's Comments

Bayesian examination

Yeah, proper scoring rules (and in particular both the quadratic/Brier and the logarithmic examples) have been discussed here a bunch, I think that's worth acknowledging in the post...
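For reference, here is a minimal sketch of the two proper scoring rules mentioned (the function names and the lower-is-better convention are my own choices for illustration):

```python
import math

def brier_score(probs, outcome):
    """Quadratic (Brier) score: sum of squared differences between the
    forecast distribution and the indicator of the realized outcome.
    Lower is better under this convention."""
    return sum((p - (1 if i == outcome else 0)) ** 2
               for i, p in enumerate(probs))

def log_score(probs, outcome):
    """Logarithmic score: negative log of the probability assigned to
    the realized outcome. Also lower-is-better here."""
    return -math.log(probs[outcome])

# A forecast of 80% on outcome 0, which then occurs:
print(brier_score([0.8, 0.2], 0))  # 0.2^2 + 0.2^2 = 0.08
print(log_score([0.8, 0.2], 0))    # -ln(0.8) ~ 0.223
```

Both are proper in the sense that an agent minimizes its expected score by reporting its true credences, which is what makes them suitable for the kind of examination the post describes.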

Misconceptions about continuous takeoff

> It is sometimes argued that even if this advantage is modest, the growth curves will be exponential, and therefore a slight advantage right now will compound to become a large advantage over a long enough period of time. However, this argument by itself is not an argument against a continuous takeoff.

I'm not sure this is an accurate characterization of the point; my understanding is that the concern largely comes from the possibility that the growth will be faster than exponential, rather than merely exponential.

Goal-thinking vs desire-thinking

I mean, are you actually disagreeing with me here? I think you're just describing an intermediate position.

Goal-thinking vs desire-thinking

OK. I think I didn't think through my reply sufficiently. Something seemed off with what you were saying, but I failed to think through what and made a reply that didn't really make sense instead. But thinking things through a bit more now I think I can lay out my actual objection a bit more clearly.

I definitely think that if you're taking the point of view that suicide is preferable to suffering, you're not applying what I'm calling goal-thinking. (Remember here that the description I laid out above is not intended as some sort of intensional definition, just my attempt to explicate this distinction I've noticed.) I don't think goal-thinking would treat nonexistence as some sort of neutral point, as many do.

I think the best way of explaining this is maybe that goal-thinking -- or at least the extreme version, which nobody actually uses -- is to simply not consider happiness or suffering or whatever as separate objects worth considering at all, that can be good or bad, or that should be acted on directly; but purely as indicators of whether one is achieving one's goals -- intermediates to be eliminated. On this view, suffering isn't some separate thing to be gotten rid of by whatever means, but simply the internal experience of not achieving one's goals, and the only proper response to it is to go out and achieve them. You see?

And if we continue in this direction, one can also apply this to others; so you wouldn't have "not have other people suffer horribly" as a goal in the first place. You would always phrase things in terms of others' goals, and whether those are being thwarted, rather than in terms of their experiences.

Again, none of what I'm saying here necessarily follows from what I wrote in the OP, but as I said, that was never intended as an intensional definition. I think the distinction I'm drawing makes sense regardless of whether I described it sufficiently clearly initially.

Goal-thinking vs desire-thinking

This is perhaps an intermediate example, but I do think that once you're talking about internal experiences to be avoided, it's definitely not all the way at the goal-thinking end.

Goal-thinking vs desire-thinking

Hm, I suppose that's true. But I think the overall point still stands? It's illustrating a type of thinking that doesn't make sense to one thinking in terms of concrete, unmodifiable goals in the external world.

Coherent decisions imply consistent utilities

So this post is basically just collecting together a bunch of things you previously wrote in the Sequences, but I guess it's useful to have them collected together.

I must, however, object to one part. The proper non-circular foundation you want for probability and utility is not the complete class theorem, but rather Savage's theorem, which I previously wrote about on this website. It's not short, but I don't think it's too inaccessible.

Note, in particular, that Savage's theorem does not start with any assumption baked in that R is the correct system of numbers to use for probabilities[0], instead deriving that as a conclusion. The complete class theorem, by contrast, has real numbers in the assumptions.

In fact -- and it's possible I'm misunderstanding -- it's not even clear to me that the complete class theorem does what you claim it does at all. It seems to assume probability at the outset, and therefore cannot provide a grounding for probability -- unlike Savage's theorem, which can. Again, it's possible I'm misunderstanding, but that sure seems to be the case.

Now this has come up here before (I'm basically in this comment just restating things I've previously written) and your reply when I previously pointed out some of these issues was, frankly, nonsensical (your reply, my reply), in which you claimed that the statement that one's preferences form a partial preorder is a stronger assumption than "one prefers more apples to less apples", when, in fact, the exact reverse is the case.

(To restate it for those who don't want to click through: If one is talking solely about one's preferences over number of apples, then the statement that more is better immediately yields a total preorder. And if one is talking about preferences not just over number of apples but in general, then... well, it's not clear how what you're saying applies directly; and taken less literally, it just in general seems to me that the complete class theorem is making some very strong assumptions, much stronger than that of merely a total preorder (e.g., real numbers!).)
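To make the apples point concrete, here is a minimal sketch (the function name and the finite range of counts are illustrative) checking that "more apples is at least as good as fewer" directly yields a total preorder -- every pair of bundles is comparable, and the relation is transitive:

```python
def prefers(a, b):
    """Weak preference over apple counts: a is at least as good as b
    exactly when a has at least as many apples."""
    return a >= b

counts = range(5)  # a small sample of apple counts to check over

# Totality: any two bundles are comparable one way or the other.
assert all(prefers(a, b) or prefers(b, a)
           for a in counts for b in counts)

# Transitivity: preferring a to b and b to c implies preferring a to c.
assert all(prefers(a, c)
           for a in counts for b in counts for c in counts
           if prefers(a, b) and prefers(b, c))
```

So "more apples is better" gives you totality and transitivity for free, whereas merely assuming a partial preorder would not -- which is the direction of strength at issue.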

In short, the use of the complete class theorem here in place of Savage's theorem would appear to be an error, and I think you should correct it.

[0]Yes, it includes an Archimedean assumption, which you could argue is the same thing as baking in R; but I'd say it's not, because this Archimedean assumption is a direct statement about the agent's preferences, whereas it's not immediately clear what picking R as your number system means as a statement about the agent's preferences.

Noticing Frame Differences

Thirding what the others said, but I wanted to also add that rather than actual game theory, what you may be looking for here may instead be the anthropological notion of limited good?

The Forces of Blandness and the Disagreeable Majority

Sorry, but: The thing at the top says this was crossposted from Otium, but I see no such post there. Was this meant to go up there as well? Because it seems to be missing.
