I'm confused by the study you cited. It seems to say that 14 females self-reported as flirting and that "18% (n = 2)" of their partners correctly believed they were flirting, but 2/14 = 14% and 3/14 = 21%; getting 18% of 14 would require about 2.5 correct answers. Maybe someone answered "I don't know" and that was counted as half-correct? If so, that wasn't mentioned in the procedure section.
It also says that 11 males self-reported as flirting, and lists accuracy as "36% (n = 5)", but 5/11 would be 45%; an accuracy of 36% corresponds to 4/11.
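(For anyone who wants to double-check the rounding possibilities, here's a quick sanity check of the arithmetic above; plain Python, nothing from the paper itself. The half-credit loop just tests my "I don't know counted as half-correct" guess.)

```python
# Which counts out of 14 (females) and 11 (males) round to the reported percentages?
for total, reported in [(14, 18), (11, 36)]:
    for correct in range(total + 1):
        pct = 100 * correct / total
        if round(pct) == reported:
            print(f"{correct}/{total} = {pct:.1f}% rounds to {reported}%")
    # Allow half-credit answers (x.5 correct), to test the "I don't know" guess.
    for half in range(1, 2 * total, 2):  # odd numerators only, i.e. x.5 values
        pct = 100 * (half / 2) / total
        if round(pct) == reported:
            print(f"{half / 2}/{total} = {pct:.1f}% (with half-credit) rounds to {reported}%")
```

The only hits are 2.5/14 for the 18% figure and 4/11 for the 36% figure, which matches the hand calculation above.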
I don't think I trust this paper's numbers.
If we were to take the numbers at face value, though, the paper is effectively saying that female flirting is invisible. 18% correctly believed the girls were flirting when they were, but 17% believed they were flirting even when they weren't; with only 14 girls flirting, a 1-percentage-point difference is within rounding error. So the paper is saying that actual female flirting has zero effect on whether her partner perceives her as flirting.
Agree that other players having tools, social connections, and intelligence in general all make it much harder to judge when you have the advantage. But I don't see how this answers the question of "why create underdog bias instead of just increasing the threshold required to attack?"
Strong disagree on the ancient world being zero-sum. A lion eating an antelope harms the antelope far more than it helps the lion. Thog murdering Mog to steal Mog's meal harms Mog far more than it helps Thog. I think very little in nature is zero-sum.
Seems weird to posit that evolution performed a hack to undermine an instinct that was, itself, evolved. If getting into conflicts that you think you can win is actually bad, why did that instinct evolve in the first place? And if it's not bad, why did evolution need to undermine it in such a general-purpose way?
I can imagine a story along the lines of "it's good to get into conflicts when you have a large advantage but not when you have a small advantage", but is that really so hard to program directly that it's better to deliberately screw up your model of advantage just so that the rule can be simplified to "attack when you have any advantage"? Accurate assessment seems pretty valuable, and evolution seems to have created behaviors much more complicated than "attack when you have a large advantage".
I agree that humans aren't very good at reasoning about how other players will react and how this should affect their own strategy, but I don't think that explains why they would have evolved one particular strategy that falls short of that ideal rather than a different strategy that also falls short of it.
(Also, I don't think Risk is a very good example of this. It's a zero-sum game, so it's mostly showing relative ability, not absolute ability. The game is also far removed from the ancestral environment and sends you a lot of fake signals (the strategies appropriate to the story the game is telling are mostly not appropriate to the abstract rules the game actually runs on), so it seems unsurprising to me that humans would tend to be bad at predicting the behavior of other humans in this context. The rules are simple, but that's not the kind of simplicity that would make me expect humans-without-relevant-experience to make good predictions about how things will play out.)
A combination of the ideas in "binary search through spacetime" and "also look at your data":
If you know a previous time when the code worked, rather than starting your binary search at the halfway point between then and now, it is sometimes useful to begin by going ALL the way back to when it previously worked, and verifying that it does, in fact, work at that point.
This tests a couple of things: whether the code really did work at the point you remember it working, and whether your restore process actually takes you back to that state.
If the bug still happens after you've restored to the "known working point", then you'll want to figure out why that is before continuing your binary search.
I don't always do this step. It depends how confident I am about when it worked, how confident I am in my restore process, and how mysterious the bug seems. Sometimes I skip this step initially, but then go back and do it if diagnosing the bug proves harder than expected.
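To make the workflow concrete, here's a minimal sketch in Python (the revision list and the `restore` and `test_passes` callbacks are placeholders for whatever version-control and test setup you actually use):

```python
def find_first_bad(revisions, restore, test_passes):
    """Binary search for the first bad revision, after first verifying
    that the assumed-good endpoint really does work once restored."""
    lo, hi = 0, len(revisions) - 1  # lo: assumed good, hi: known bad

    # Step 0: go ALL the way back and confirm the "known working point".
    restore(revisions[lo])
    if not test_passes():
        raise RuntimeError(
            "Bug reproduces even at the assumed-good revision; either it "
            "worked at a different point than you remember, or the restore "
            "process is suspect. Figure that out before bisecting."
        )

    # Standard binary search between known-good and known-bad.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        restore(revisions[mid])
        if test_passes():
            lo = mid  # still works here; the bug was introduced later
        else:
            hi = mid  # already broken here; the bug was introduced earlier
    return revisions[hi]  # first revision at which the test fails
```

This is roughly what `git bisect` does between a `good` and a `bad` commit, except that git takes your word for the endpoints; testing the good endpoint yourself is the extra step being suggested here.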
Guess we're done, then.
Are you really unable to anticipate that this is very close to what I would have said, if you had asked me why I didn't respond to those things? The only reason that wouldn't be my exact answer is that I'd first point out that I did respond to those things, by pointing out that your arguments were based on a misunderstanding of my model! This doesn't seem like a hard one to get right, if you were extending half the charity to me that you extend yourself, you know? (should I be angry with you for this, by the way?)
You complain that I failed to anticipate that you would give the same response as me, but then immediately give a diametrically opposed response! I agreed that I didn't respond to the example you highlighted, and said this was because I didn't pick up on your implied argument. You claim that you did respond to the examples I highlighted. The accusations are symmetrical, but the defenses are very much not.
I did notice that the accusations were symmetrical, and because of that I very carefully checked (before posting) whether the excuse I was giving myself could also be extended to you, and I concluded definitively that it couldn't. My examples made direct explicit comparisons between my model and (my model of) your model, and pointed out concrete ways that the output of my model was better; it seems hugely implausible you failed to understand that I was claiming to score Bayes points against your model. Your example did not mention my model at all! (It contrasts two background assumptions, where humans are either always nice or not, and examines how your model, and only your model, interacts with each of those assumptions. I note that "humans are always nice" is not a position that anyone in this thread has ever defended, to my knowledge.)
And yes, I did also consider the meta-level possibility that my attempt to distinguish between what was said explicitly and what wasn't is so biased as to make its results useless. I assign a small but non-zero probability to that. But even if that's true, that doesn't seem like a reason to continue the argument; it seems like proof that I'm so hopeless that I should just cut my losses.
I considered including a note in my previous reply explaining that I'd checked if you could use my excuse and found you couldn't, but I was concerned that would feel like rubbing it in, and the fact that you can't use my excuse isn't actually important unless you try to use it, and I guessed that you wouldn't try. (Whether that guess was correct is still a bit unclear to me--you offer an explanation that seems directly contradictory to my excuse, but you also assert that you're saying the same thing as me.)
If you are saying that I should have guessed the exact defense you would give, even if it was different from mine, then I don't see how I was supposed to guess that.
If you are saying that I should have guessed you would offer some defense, even if I didn't know the details, then I considered that moderately likely but I don't know what you think I should have done about it.
If I had guessed that you would offer some defense that I would accept then I could have updated to the position I expected to hold in the future, but I did not guess that you'd have a defense I would accept; and, in fact, you don't have one. Which brings us to...
(re-quoted for ease of reference)
I did respond to those things, by pointing out that your arguments were based on a misunderstanding of my model!
I have carefully re-read the entire reply that you made after the comment containing the two examples I accused you of failing to respond to.
Those two examples are not mentioned anywhere in it. Nor is there a general statement about "my examples" as a group. It has 3 distinct passages, each of which seems to be a narrow reply to a specific thing that I said, and none of which involve these 2 examples.
Nor does it include a claim that I've misapplied your model, either generally or related to those particular examples. It does include a claim that I've misunderstood one specific part of your model that was completely irrelevant to those two examples (you deny my claim that the relevant predictions are coming from a part of the person that can't be interrogated, after flagging that you don't expect me to follow that passage due to inferential distance).
Your later replies did make general claims about me not understanding your model several times. I could make up a story where you ignored these two examples temporarily and then later tried to address them (without referencing them or saying that that was what you were doing), but that story seems neither reasonable nor likely.
Possibly you meant to write something about them, but it got lost in an editing pass?
Or (more worryingly) perhaps you responded to my claim that you had ignored them not by trying to find actions you took specifically in response to those examples, but instead by searching your memory of everything you've said for things that could be interpreted as a reply, and then reported what you found without checking it?
In any case: You did not make the response you claimed that you made, in any way that I can detect.
Communication is tricky!
Sometimes each party does something that could have worked if the other party had acted differently; the two approaches just didn't fit together, so the problem can potentially be addressed by either party. Other times, there's one side that could do something to prevent the problem, but the other side basically can't do anything on their own. Sometimes fixing the issue requires a coordinated solution with actions from both parties. And in some sad situations, it's not clear the issue can be fixed at all.
It seems to me that these two incidents both fall clearly into the category of "fixable from your side only". Let's recap:
(1) When you talked about your no-anger fight, you had an argument against my model, but you didn't state it explicitly; you relied on me to infer it. That inference turned out to be intractable, because you had a misunderstanding about my position that I was unaware of. (You hadn't mentioned it, I had no model that had flagged that specific misunderstanding as being especially likely, and searching over all possible misunderstandings is infeasible.)
There's an obvious, simple, easy, direct fix from your side: state your arguments explicitly. Or at least be explicit that you're making an argument and that you expect credit for it. (I read that passage as descriptive, not persuasive.)
I see no good options from my side. I couldn't address it directly because I didn't know what you'd tried to do. Maybe I could have originally explained my position in a way that avoided your misunderstanding, but it's not obvious what strategy would have accomplished that. I could have challenged your general absence of evidence sooner--I was thinking it earlier, but I deferred that option because it risked degrading the conversation, and it's not clear to me that was a bad call. (Even if I had said it immediately, that would presumably just accelerate what actually happened.)
If you have an actionable suggestion for how I could have unilaterally prevented this problem, please share.
(2) In the two examples I complained you didn't respond to, you allege that you did respond, but I didn't notice and still can't find any such response.
My best guess at the solution here is "you need to actually write it, instead of just imagining that you wrote it." The difficulty of implementing that could range from easy to very hard, depending on the actual sequence of events that led to this outcome. But whatever the difficulty, it's hard to imagine it could be easier to implement from my side than yours--you have a whole lot of relevant access to your writing process that I lack.
Even assuming this is a problem with me not recognizing it rather than it not existing, there are still obvious things you could do on your end to improve the odds (signposting, organization, being more explicit, quoting/linking the response when later discussing it). Conversely, I don't see what strategy I could have used other than "read more carefully," but I already carefully re-read the entire reply specifically looking for it, and still can't find it.
I understand it's possible to be in a situation where both sides have equal quality but both perceive themselves as better. But it's also possible to be in a situation where one side is actually better and the other side falsely claims it's symmetrical. If I allowed a mere assertion of symmetry from the other guy to stop me from ever believing the second option, I'd get severely exploited. The only way I have a chance at avoiding both errors is by carefully examining the actual circumstances and weighing the evidence case-by-case.
My best judgment here is that the evidence weighs pretty heavily towards the problems being fixable from your side and not fixable from my side. This seems very asymmetrical to me. I think I've been as careful as I reasonably could have been, and have invested a frankly unreasonable amount of time into triple-checking this.
Before I respond to your other points, let me pause and ask if I have convinced you that our situation is actually pretty asymmetrical, at least in regards to these examples? If not, I'm disinclined to invest more time.
I don't think that's fair. For one, your model said you need anger in order to retaliate, and I gave an example of how I didn't need anger in order to retaliate.
I didn't respond to this because I didn't see it as posing any difficulty for my model, and didn't realize that you did.
I don't think you need anger in order to retaliate. I think anger means that the part of you that generates emotions (roughly, Kahneman's system 1) wants to retaliate. Your system 2 can disagree with your system 1 and retaliate when you're not angry.
Also, your story didn't sound to me like you were actually retaliating. It sounded to me like you were defending yourself, i.e. taking actions that reduced the other guy's capability of harming you. Retaliation (on my model) is when you harm someone else in an effort to change their decisions (not their capabilities), or the decisions of observers.
So I'm quite willing to believe the story happened as you described it, but this was 2 steps removed from posing any problem to my model, and you didn't previously explain how you believed it posed a problem.
I also note that you said "for one" (in the quote above) but then there was no number two in your list.
If you wait to see signs that the person is being forced to choose between changing their own mind or ignoring data, then you have a much more solid base.
I do see a bunch of signs of that, actually:
So "Yes, I'm talking about our models of how the world should work", and also that is necessarily the same as our models of how the world does work -- even if we also have meta models which identify the predictable errors in our object level models and try to contain them.
This seems like it's just a simple direct contradiction. You're saying that model X and model Y are literally the same thing, but also that we keep track of the differences between them. There couldn't be any differences to track if they were actually the same thing.
I also note that you claimed these are "necessarily" the same, but provided no reasoning or evidence to back that up; it's just a flat assertion.
At the same time, I'm curious if you've thought about how it looks from my perspective. You've written intelligent and thoughtful responses which I appreciate, but are you under the impression that anything you've written provides counter-evidence? Do you picture me thinking "Yes, that's what I'm saying" before you argue against what you think I'm saying?
There are some parts of your model that I think I probably roughly understand, such as the fact that you think there's some model inside a person making predictions (but it's not the same as the predictions they profess in conversation) and that errors in these predictions are a necessary precondition to feeling negative emotions. I think I can describe these parts in a way you would endorse.
There are some parts of your model that I think I probably don't understand, like where is that model actually located and how does it work.
There are some parts of your model that I think are incoherent bullshit, like where you think "should" and "is" models are the same thing but also we have a meta-model that tracks the differences between them, or where you think telling me to pay attention to my own feelings of surprise makes any sense as a response to my request for measurements.
I don't think I've written anything that directly falsifies your model as a whole--which I think is mostly because you haven't made it legible enough.
But I do think I've pointed out:
I don't think I require a better understanding of your model than I currently have in order for these points to be justified.
Well, if you were to walk outside and get rained on, would you experience surprise? If you walked outside and didn't get rained on, would you feel surprised? The answers here tell you what you're predicting.
I feel like I have experienced a lot of negative emotions in my life that were not particularly correlated with a feeling of surprise. In fact, I can recall feeling anger about things where I literally wrote down a prediction that the thing would happen, before it happened.
Conversely, I can recall many pleasant surprises, which involved a lot of prediction error but no negative emotions.
So if this is what you are relying on to confirm your theory, it seems pretty disconfirmed by my life experience. And I'm reasonably certain that approximately everyone has similar observations from their own lives.
I thought this was understood, and the only way I was taking your theory even mildly seriously was on the assumption that you meant something different from ordinary surprise.
No, I wouldn't expect the 8-year-old to be doing "I expect it to not get dark", but rather something more like "I expect to be able to see a lack of monsters at all times"
I find it quite plausible they would have a preference for seeing a lack of monsters. I do not find it remotely plausible that they would have a prediction of continuously being able to see a lack of monsters. That is substantially more stupid than the already-very-stupid example of not expecting it to get dark.
Are you maybe trying to refer to our models of how the world "should" work, rather than our models of how it does work? I'm not sure exactly what I think "should" is, but I definitely don't think it's the same as a prediction about what actually will happen. But I could maybe believe that disagreements between "should" and "is" models play a role in explaining (some) negative emotions.
If you want more direct proof that I'm talking about real things, the best example would be the transcript where I helped someone greatly reduce his suffering from chronic pain through forum PMs
I am not searching through everything you've ever written to try to find something that matches a vague description.
I feel like we've been talking for quite a while, and you are making extraordinary claims, and you have not presented ANY noteworthy evidence favoring your model over my current one, and I am going to write you off very soon if I don't see something persuasive. Please write or directly link some strong evidence.
(This is a very old post, but I think I have an interesting thing to say that hasn't been said yet.)
In most games with skill trees, I think the skill tree is actually serving multiple ludic goals, and its design ought to be understood as a compromise between those goals. Some common goals include:

#1: The excitement of gaining new abilities as you progress.
#2: An optimization puzzle: choosing upgrades that combine into an effective build.
#3: Customization: letting you shape the character to fit your preferred playstyle.
When phrased that way, it seems obvious to me that goals #2 and #3 require revealing some information to the player. A puzzle is not a puzzle if you can't even see the pieces. You can't usefully customize a system if the controls aren't labeled. There's no value in offering a choice between opaque boxes.
But if goal #1 were the only goal, then I think Eliezer is completely correct.
And in fact, I think game systems that are only trying to do #1 usually do keep the upgrades hidden until you get them--with perhaps some vague hints, such as legends of a hero who could do X, or obstacles that a future upgrade will solve. For example, Zelda and Metroid games typically work like this; you just open a treasure chest and get a new ability. Ori and the Blind Forest even does both; it has a skill tree visible from the start of the game, but also gives you surprise upgrades at various milestones (although a few of the surprises are undermined if you read the skill tree carefully).
Also note that these surprise upgrades don't come with a choice; you just get what the game gives you. Because these particular game systems are focused just on goal #1, which doesn't require choice.
(Though there is also a trope where a game will give you a brief preview of many future abilities at the start of the game, then take them away. I see this as a sort of "teaser", like a movie trailer or book blurb, which helps players decide which game to play and how long to stick with it. I think it does probably make the game less fun...if you assume the player was going to play it all the way to the end regardless. But it helps the player decide whether to do that. So again, this is a compromise with another goal. I also avoid reading blurbs for books that I have already decided to read!)
I have gradually come to the opinion that Eliezer's observation is pretty important, and is under-valued in current game design. I like optimization puzzles a lot, but when I spend a lot of time doing detailed planning of the abilities that I'm going to have in some far-future time, I think that does actually make them less exciting when I get them. I suspect many games could benefit from keeping more upgrades hidden (in a carefully-planned way that doesn't screw up other sources of fun).
There's a recent-ish trend of "roguelike" games where leveling up gives you a choice of upgrades, but the options are randomized each time you play. From a certain angle, I think this could be viewed as an attempt to create a new compromise between goals #1 and #2: you can't plan a whole build in advance because your future options are unknowable, and you don't need to make your current choice based on your future plans because it's not a tree; your current choice doesn't change your future options (much), but you can still make (statistically) better and worse optimization choices. Though I don't really think that's the main thing going on in this style of progression system (I think it is primarily a cost-conscious effort to increase replayability), and I can think of many examples that either aren't trying to create that #1/#2 compromise or are (IMO) severely failing at it.
The paper actually includes a second experiment where they had observers watch a video recording of a conversation and say whether they thought the person on the video was flirting. Results in table 4, page 15; copied below, but there doesn't seem to be a way to format them as a table in a LessWrong comment:
Observer | Target | Flirting condition | Accuracy (n)
Female   | Female | Flirting           | 51% (187)
Female   | Female | Non-flirting       | 67% (368)
Female   | Male   | Flirting           | 22% (170)
Female   | Male   | Non-flirting       | 64% (385)
Male     | Female | Flirting           | 43% (76)
Male     | Female | Non-flirting       | 68% (149)
Male     | Male   | Flirting           | 33% (64)
Male     | Male   | Non-flirting       | 62% (158)
Among third-party observers, females observing females had the highest accuracy, though their perception of flirting is still only 18 percentage points higher when flirting occurs than when it doesn't (they perceived flirting 51% of the time when it was present, versus 100% - 67% = 33% of the time when it wasn't).
Third-party observers in all categories had a larger bias towards perceiving flirting than the people who were actually in the conversation. Though this experimental setup also had a larger percentage of people actually flirting, so this bias was actually reasonably accurate to the data they were shown.
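To spell out how I'm reading the table: accuracy in the non-flirting rows is the rate of correctly saying "not flirting", so the rate of perceiving flirting in those rows is 100% minus the listed accuracy (that interpretation is my assumption). A quick check, with the numbers copied from the table above:

```python
# (observer, target): (accuracy when flirting, n, accuracy when not flirting, n)
rows = {
    ("Female", "Female"): (51, 187, 67, 368),
    ("Female", "Male"):   (22, 170, 64, 385),
    ("Male",   "Female"): (43,  76, 68, 149),
    ("Male",   "Male"):   (33,  64, 62, 158),
}

flirting_n = non_flirting_n = 0
for (observer, target), (hit, n_f, correct_reject, n_nf) in rows.items():
    false_alarm = 100 - correct_reject  # perceived flirting when there was none
    print(f"{observer} observing {target}: perceived flirting {hit}% when flirting, "
          f"{false_alarm}% when not; gap = {hit - false_alarm} points")
    flirting_n += n_f
    non_flirting_n += n_nf

share_flirting = 100 * flirting_n / (flirting_n + non_flirting_n)
print(f"Share of observed conversations with actual flirting: {share_flirting:.0f}%")
```

The female-observing-female gap is the 18 percentage points mentioned above, and about 32% of the conversations shown to observers involved actual flirting.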
Though, again, this study looks shoddy and should be taken with a lot of salt.