One thing that makes the dynamic even worse is that if you correctly warn about a danger, and people do respond appropriately, the disaster never materializes. Then less informed people are likely to consider you to have cried wolf anyway.
And in modern sensibilities, being seen to ‘cry wolf’—by even once raising an alarm that isn’t consummated with disaster—is something people seem to really fear.
Don't people fear that they will be met with disbelief when the time actually comes? Say you decided to cry wolf in 2024 and failed to convince anyone for lack of evidence, since GPT-4o was just a flattering, not-so-smart 'friend'; then the evidence came and you cried again. Would that make others less likely to listen than if you had only cried AFTER the evidence came?
Given that we live in a world where people are prone to outrage and over-updating when someone warns of a possible problem that winds up (in hindsight) not being a big deal, this constitutes a valid reason to not warn of possible problems, so as to conserve credibility for when you most need it. (But there are other considerations too; no comment on what’s the best policy all things considered.)
BUT, I think the OP is making a different point: We shouldn’t resign ourselves to living in that world! Instead we can say: People should stop being like that! People should stop being prone to outrage and over-updating when someone warns of a possible problem that winds up (in hindsight) not being a big deal. Let us criticize people for being that way, and let’s try to get them to change, including by writing nice blog post explanations of why this is so dumb and bad.
Unfortunately, there are two parties involved.
The shepherd may have a 1:100 ratio of warning cost to wolf cost, but for his neighbors it might be more like 1:10. Giving the twentieth false alarm may be worthwhile for the shepherd, but responding to it wouldn't be worthwhile for his neighbors. And then nobody would respond to the twenty-first warning, even if there were a wolf.
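A rough expected-value sketch of that divergence, with made-up numbers (the 5% credence and the unit costs are my own illustration, not from the comment above):

```python
# Rough expected-value sketch of the shepherd/neighbor divergence.
# All numbers are illustrative.

def worth_acting(p_wolf: float, alarm_cost: float, wolf_cost: float) -> bool:
    """Act on a warning iff the expected loss averted exceeds the cost of acting."""
    return p_wolf * wolf_cost > alarm_cost

p_wolf = 0.05  # the shepherd's honest credence that this particular alarm is a real wolf

# Shepherd: raising an alarm costs 1, a devoured flock costs 100 (ratio 1:100).
print(worth_acting(p_wolf, alarm_cost=1, wolf_cost=100))  # True  -> worth warning
# Neighbor: responding costs 1, their stake in the flock is only 10 (ratio 1:10).
print(worth_acting(p_wolf, alarm_cost=1, wolf_cost=10))   # False -> not worth responding
```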
I think the definition of crying wolf is announcing a wolf attack when you don't really believe there is one. It makes you untrustworthy, rather than discrediting the general notion of a wolf attack.
So long as you accurately report your beliefs regarding an upcoming wolf attack, you don't need to worry about crying wolf. People will see your predictions, assess how often you are correct, and use your predictions to adjust their own to the degree that they consider optimal. Trying to rig this system by over- or under-reporting just throws everybody off; nobody is starting from the assumption that you are infallible.
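As a toy sketch of what that adjustment could look like (the forecaster's history here is entirely invented), a listener might recalibrate a stated probability against how often similar past statements panned out:

```python
# Toy sketch: a listener recalibrates a forecaster's stated probability using
# the forecaster's own track record. The history is invented for illustration.

history = [(0.8, True), (0.8, False), (0.7, True), (0.9, True), (0.8, False)]  # (stated, happened)

def recalibrated(stated: float, history: list[tuple[float, bool]], width: float = 0.15) -> float:
    """How often things actually happened when this forecaster said something similar."""
    similar = [happened for p, happened in history if abs(p - stated) <= width]
    if not similar:
        return stated  # no nearby track record, so take them at their word
    return sum(similar) / len(similar)

print(recalibrated(0.8, history))  # 0.6 -- the listener discounts a habitual over-claimer
```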
It seems like, on this view, if you want people to believe you, you should make predictions that are hard to prove incorrect (for instance, by giving no timetable). Ideally, people should discount your predictions by exactly as much as the vagueness in the prediction justifies, but real humans don't behave ideally.
I don't see how that follows. If someone's predictions are all nebulous, unfalsifiable claims, then we start out viewing them as a useless predictor, and never update our beliefs about them.
Plenty of people have said things akin to "there will eventually be a wolf attack", and while they're quick to laud themselves when one inevitably happens, they don't generally get rewarded for this. People who make unlikely predictions and win? They get a spotlight.
Come to think of it, I'm surprised we don't see a bit more clout being awarded to high-scoring Polymarket predictors. A guy with a public record of making millions of dollars a year by predicting military outcomes that others bet against seems like he should get a slew of prestigious consultant positions, but I haven't noticed it happening.
It is a fact about wolves and rationality that you should warn people about wolves quite a few times for every actual wolf attack.
In particular, there is an asymmetry between the costs of having one’s flock devoured and averting a non-eventuating wolf attack. If the carnage is a hundred times worse, then it’s worth up to ninety-nine false alarms to stop it.
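To spell the arithmetic out (a minimal sketch, assuming one false alarm costs 1 unit and an unaverted attack costs 100):

```python
# The 'ninety-nine false alarms' arithmetic, in illustrative units.
alarm_cost = 1      # cost of one false alarm
attack_cost = 100   # cost of an unaverted attack: a hundred times worse

# A warning is worth making whenever the chance of a real wolf exceeds this ratio:
threshold = alarm_cost / attack_cost           # 0.01

# Warning at exactly that threshold means, in expectation,
# about ninety-nine false alarms for every real wolf:
false_alarms_per_wolf = (1 - threshold) / threshold
print(threshold, false_alarms_per_wolf)        # 0.01 99.0
```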
The original fable was about a boy who would continually lie about wolves, and that is definitely poor form.
But in modern parlance, ‘crying wolf’ seems to be used for just being openly alarmed about things that turn out ok—I don’t hear much implication of deceit.
And in modern sensibilities, being seen to ‘cry wolf’—by even once raising an alarm that isn’t consummated with disaster—is something people seem to really fear. I think multiple people have asked me about whether AI safety people might have ‘cried wolf’ about some earlier GPT model. I’m not aware of anyone doing that, but the idea that they might have is so tantalizing that it bears investigating. Because if even a few people somewhere did, it would be such a nice embarrassing blow to AI safety people.
And I probably responded in the tempting way: jumping to assure them that I don't recall hearing any such fears from these quarters. But I think that worsens public thought norms by implicitly buying into the unspoken premise that it would be quite shameful and naive to have raised even one warning.
And so relatedly, probably people who see real risks from AI are scared to voice them, lest they be seen to ‘cry wolf’ and tank the credit of the movement for the next round of dangers. Because it is taken for granted that one should only get one chance to raise an alarm. That the first warning must be for the most undeniably big, bad, real wolf.
This is not the wolf lookout system we want.
‘Warnings’ are usually about fairly bad events, and therefore they tend to be worth making when the probability of those events is still low. This creates a real difficulty for society in adjusting people’s credit when the low-probability events they have warned of do not come to pass. Most of the time, if the person is right, the events still shouldn’t happen! The person wasn’t saying they were likely! Yet you don’t want to let the alarmist off the hook, with plausible deniability for arbitrarily many alarms.
I think the solution to this difficulty should look much more quantitative, like collecting rich track records of the predictions made by a person or a movement, and scoring them well. The present solution of childishly denouncing any unmet danger is insane.
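For concreteness, here is one standard way such a track record could be scored (the Brier score; an illustration of the general idea, not a specific proposal from the post):

```python
# Minimal sketch of quantitative track-record scoring, using the Brier score.
# All predictions and outcomes below are made up for illustration.

def brier_score(predictions: list[tuple[float, bool]]) -> float:
    """Mean squared gap between stated probabilities and what actually happened.
    0.0 is perfect, 0.25 is what always saying 50% earns; lower is better."""
    return sum((p - float(happened)) ** 2 for p, happened in predictions) / len(predictions)

# A forecaster who warns of several 10% dangers that (as expected) mostly don't
# materialize still scores well -- much better than one who shouted 90% each time.
measured_warner = [(0.10, False), (0.10, False), (0.10, False), (0.10, True)]
loud_alarmist   = [(0.90, False), (0.90, False), (0.90, False), (0.90, True)]

print(brier_score(measured_warner))  # 0.21
print(brier_score(loud_alarmist))    # 0.61
```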
And meanwhile if there are bad risks that have a low chance of appearing on every warning, we should still warn of them, and not be too much cowed by innumerate customs.