I'm Harry Altman. I do strange sorts of math.
We could also point to sleepwalkers of various sorts: even when they're executing complex actions (like murdering someone), I've never seen any accounts that mention deeply felt emotions. (WP emphasizes their dullness and apathetic affect.)
Nitpick: Sleepwalking proper apparently happens during non-REM sleep; acting out a dream during REM sleep is different and has its own name. Although it seems like sleepwalkers may also be dreaming somehow even though they aren't in REM sleep? I don't know -- this is definitely not my area -- and arguably none of this is relevant to the original point; but I thought I should point it out.
Ha! OK, that is indeed nasty. Yeah, I guess CASes can solve this kind of problem these days, can't they? Well -- I say "these days" as if this hasn't been the case for, like, my entire life; I've just never gotten used to making routine use of them...
One annoying thing in reading Chapter 3: it states that for l=2,4,8, the optimal scoring rules can be written in terms of elementary functions, but you only actually give the full formula for the case l=8 (for l=2 you give it only on half the interval). What are the formulas for the other cases?
(But also, this is really cool, thanks for posting this!)
I think some cases of what you're describing as derivation-time penalties may really be can-you-derive-that-at-all penalties. E.g., with MWI and no Born rule assumed, it doesn't seem that there is any way to derive it. I would still expect a "correct" interpretation of QM to be essentially MWI-like, but I still think it's correct to penalize MWI-without-Born-assumption, not for the complexity of deriving the Born rule, but for the fact that deriving it doesn't seem to be possible at all. Similarly with attempts to eliminate time, or its distinction from space, from physics; it seems like it simply shouldn't be possible in such a case to get something like Lorentz invariance.
Why do babies need so much sleep then?
Given that at the moment we don't really understand why people need to sleep at all, I don't think this is a strong argument for any particular claimed function.
Oh, that's a good citation, thanks. I've used that rough argument in the past, knowing I'd copied it from someone, but I had no recollection of what specifically or that it had been made more formal. Now I know!
My comment above was largely just intended as "how come nobody listens when I say it?" grumbling. :P
I should note that this is more or less the same thing that Alex Mennen and I have been pointing out for quite some time, even if the exact framework is a little different. You can't both have unbounded utilities and insist that expected utility works for infinite gambles.
IMO the correct thing to abandon is unbounded utilities, but whatever assumption you choose to abandon, the basic argument is an old one due to Fisher, and I've discussed it in previous posts! (Even if the framework is a little different here, this seems essentially similar.)
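To sketch the sort of argument I mean (this is the usual St. Petersburg-style version of the point; I'm not claiming it's Fisher's exact formulation): if utility is unbounded, you can pick outcomes $a_1, a_2, \ldots$ with $U(a_n) \ge 2^n$ and form the gamble $G$ that yields $a_n$ with probability $2^{-n}$, so that

$$E[U(G)] = \sum_{n=1}^{\infty} 2^{-n} U(a_n) \ge \sum_{n=1}^{\infty} 1 = \infty.$$

Now take any fixed outcome $b$. Expected utility says $b \prec G$ (finite vs. infinite), so by independence the mixture $\tfrac{1}{2}b + \tfrac{1}{2}G$ should be strictly worse than $G$; but both have infinite expected utility, so expected utility calls them equally good. Something has to give.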
I'm glad to see other people are finally taking the issue seriously, at least...
Yeah, that sounds about right to me. I'm not saying that you should assume such people are harmless or anything! Just that, like, you might want to try giving them a kick first -- "hey, constant vigilance, remember?" :P -- and see how they respond before giving up and treating them as hostile.
This seems exactly backwards: if someone makes uncorrelated errors, they are probably unintentional mistakes. If someone makes correlated errors, they are better explained as part of a strategy.
I mean, there is a word for correlated errors, and that word is "bias"; so you seem to be essentially claiming that people are unbiased? I'm guessing that's probably not what you're trying to claim, but that is what I am concluding? Regardless, I'm saying people are biased towards this mistake.
Or really, what I'm saying is that it's the same sort of phenomenon that Eliezer discusses here. So it could indeed be construed as a strategy as you say; but it would not be a strategy on the part of the conscious agent, but rather a strategy on the part of the "corrupted hardware" itself. Or something like that -- sorry, that's not a great way of putting it, but I don't really have a better one, and I hope that conveys what I'm getting at.
Like, I think you're assuming too much awareness/agency of people. A person who makes correlated errors, and is aware of what they are doing, is executing a deliberate strategy. But lots of people who make correlated errors are just biased, or the errors are part of a built-in strategy they're executing not deliberately but by default, without thinking about it, one that requires effort not to execute.
We should expect someone calling themself a rationalist to be better, obviously, but, IDK, sometimes things go bad?
I can imagine, after reading the sequences, continuing to have this bias in my own thoughts, but I don't see how I could have been so confused as to refer to it in conversation as a valid principle of epistemology.
I mean, people don't necessarily fully internalize everything they read, and in some people the "hold on, what am I doing?" reflex can be weak? <shrug>
I mean I certainly don't want to rule out deliberate malice like you're talking about, but neither do I think this one snippet is enough to strongly conclude it.
I've run this several times at OBNYC, and it's gone pretty well. Generally we didn't bother with scoring. One issue with scoring is needing to come up with what counts as close enough for numerical questions. Although we tried to do that anyway, because we wanted to score individual questions even if we weren't keeping score overall. For many things you can use "order of magnitude and first digit", but that doesn't work well for everything. Dates we generally did plus or minus 10 years. But it may need to vary a bit depending on just what the question is. Maybe plus or minus some fixed percentage for many of them? (10%? 20%?) We did plus or minus an inch for a question about Conan O'Brien's height.
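For concreteness, here's a rough sketch in Python of the kind of tolerance check I mean; the function and the particular cutoffs are just illustrative, not anything we formally wrote down:

```python
import math

def close_enough(guess, answer, kind="magnitude"):
    """Illustrative scoring tolerances of the sort described above."""
    if kind == "magnitude":
        # "Order of magnitude and first digit": same power of ten, same leading digit.
        def leading(x):
            exp = math.floor(math.log10(abs(x)))
            return exp, int(abs(x) / 10 ** exp)
        return leading(guess) == leading(answer)
    if kind == "date":
        return abs(guess - answer) <= 10                 # plus or minus 10 years
    if kind == "percent":
        return abs(guess - answer) <= 0.2 * abs(answer)  # e.g. within 20%
    if kind == "inches":
        return abs(guess - answer) <= 1                  # plus or minus an inch
    raise ValueError(f"unknown kind: {kind}")
```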
One modification that got suggested at the most recent one was that on a 1, you look up the answer and lie; the point being that saying "we looked it up" then becomes less informative. We never actually rolled a 1 after making this change, however. Perhaps one should add lookups on 5 as well if you're doing this, to really make it uninformative? (So that the truth:lie ratio is 2:1 regardless of whether you're doing a lookup or not.)
(At earlier ones we had for a while a "no talking about the die roll" rule that would make this unnecessary, but people didn't like that.)
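To check the arithmetic on that 2:1 claim, here's a tiny Python sketch. Note that the specific die mapping (lie on 1 or 2, answer honestly on 3-6, with a lookup on 6 in the base rules) is my reconstruction for illustration, not something spelled out above:

```python
faces = range(1, 7)
# Assumed mapping, for illustration only: you lie on a 1 or 2 and answer
# honestly otherwise; lookups happen on 6 (base rule) plus 1 and 5 (the
# suggested modification).
is_lie = {f: f in (1, 2) for f in faces}
is_lookup = {f: f in (1, 5, 6) for f in faces}

for lookup in (True, False):
    group = [f for f in faces if is_lookup[f] == lookup]
    truths = sum(not is_lie[f] for f in group)
    lies = sum(is_lie[f] for f in group)
    print(f"lookup={lookup}: truth:lie = {truths}:{lies}")
# Prints 2:1 for both groups, so hearing "we looked it up" tells you nothing
# extra about whether the stated answer is honest.
```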
Having a good source of questions has been a little bit of a problem. The provided list isn't that great -- we've used questions from our copy of Wits & Wagers, or lists online, or just made some up. Make sure you have some sort of question source!