Pandemic Prediction Checklist: H5N1
Pandemic Prediction Checklist: Monkeypox
Correlation may imply some sort of causal link.
For guessing its direction, simple models help you think.
Controlled experiments, if they are well beyond the brink
Of .05 significance, will make your unknowns shrink.
Replications show there's something new under the sun.
Did one cause the other? Did the other cause the one?
Are they both controlled by what has already begun?
Or was it their coincidence that caused it to be done?
I’ve tried that before as well. That type of prompt is useful, but “important flaws” is a much broader issue than the capability I was trying to test.
Using Gemini, I told it to evaluate my reasoning supporting the hypothesis that chronic rather than acute exposure is driving fume-induced brain injuries ("aerotoxic syndrome"). It enthusiastically supported my causal reasoning, 10/10, no notes. Then I started another instance and replaced "chronic" with "acute," leaving the rest of the statement the same. Once again, it enthusiastically supported my causal reasoning.
I also tried telling it that I was testing AI reasoning with two versions of the same prompt, one with "expert-endorsed causal reasoning" and the other with "flawed reasoning." Once again, it endorsed both versions. When I told it to try to detect which was which using its own reasoning process, it described how the style of the text fit a high-quality reasoning process, again for both versions.
I then told it to evaluate how the specific conclusion follows from the provided evidence, and told it that the primary conclusion had been swapped in one prompt. This time, it engaged with the specifics of both versions, but only by restating the evidence, restating the given conclusion, and asserting that the latter followed from the former, without ever checking the inferential link.
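For anyone who wants to reproduce this, here's a minimal sketch of the swapped-conclusion probe. It assumes the google-generativeai Python package and an API key in the environment; the model name and prompt wording are illustrative placeholders, not my exact originals.

```python
# Minimal sketch of the swapped-conclusion probe described above.
# Assumes the google-generativeai package and a GEMINI_API_KEY env var;
# the model name and prompt text are placeholders, not the originals.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

TEMPLATE = (
    "Evaluate the logic in this statement interpreting a recent article "
    "on aerotoxic exposure: {exposure} exposure is driving fume-induced "
    "brain injuries, because <evidence summary here>."
)

for exposure in ("chronic", "acute"):
    # Fresh request per variant so neither evaluation can contaminate the other.
    response = model.generate_content(TEMPLATE.format(exposure=exposure))
    print(f"--- {exposure} version ---")
    print(response.text)
```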
Based on a couple of hours of thinking about the article, my interpretation is that fume-induced injuries are likely caused by chronic low-level exposure, or by rare, transient, concentrated buildups of particularly noxious fumes at times or in parts of the plane to which crew are particularly exposed.
If the air routed in from the engine is vented into the crew space and cabin first, then the captain and flight crew might be exposed to more concentrated doses before fumes from occasional oil drops have a chance to diffuse into the total volume of cabin air. If particularly dangerous exposures occur during testing, then crew may be uniquely exposed. And while passengers collectively breathe more of the cabin air on any individual flight, each crew member breathes far more cabin air over time than almost any passenger, so health issues driven by chronic exposure should primarily affect crew.
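A back-of-the-envelope comparison makes the asymmetry concrete. The flight-hour figures below are my own rough assumptions, not data from the article:

```python
# Rough annual cabin-air exposure, in hours. All figures are illustrative
# assumptions chosen only to show the crew/passenger asymmetry.
hours_per_year = {
    "crew member": 800,        # near typical airline duty-hour limits
    "frequent flier": 100,     # ~25 four-hour legs
    "occasional flier": 10,
}

baseline = hours_per_year["occasional flier"]
for role, hours in hours_per_year.items():
    print(f"{role}: {hours} h/yr ({hours / baseline:.0f}x the occasional flier)")
```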
The two doctors quoted in the article have each seen about 100 crew members and only 1 passenger for fume-induced brain injuries. Meanwhile, one of the mass-exposure incidents, in which the plane filled with white smoke, doesn't appear to have caused a definitive mass-casualty event. This pattern sounds like an issue of chronic or spatiotemporally specific exposure that primarily hits crew and only rarely hits frequent fliers.
It is beyond question that alternative means of transit, such as cars, are drastically more likely to cause both brain injuries and death than flying. So from a safety standpoint, the question is whether the risk is high enough to be worth cancelling any trips entirely. However, if you're only planning on cancelling a small number of trips (because most are too high-priority to forego), then the reduction in your chronic exposure is minimal. Given the time and potential stress of weighing this element in your decision-making process for each flight, it seems plausibly just not worth worrying about in the absence of better information.
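To see why cancelling a handful of trips barely moves the needle, assume (purely illustratively) that chronic exposure scales with flights taken:

```python
# Fractional reduction in chronic exposure from cancelling a few flights,
# assuming 40 flights per year with roughly equal exposure on each.
flights_per_year = 40
for cancelled in (1, 2, 5):
    print(f"Cancel {cancelled} of {flights_per_year}: "
          f"exposure falls by {cancelled / flights_per_year:.0%}")
```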
On the other hand, the case for consistent masking on every Airbus flight for frequent fliers seems strong. KN95 activated-carbon masks look like the ordinary masks to which we've become accustomed, but the activated carbon can adsorb VOCs (volatile organic compounds). You can bring a whole pack and swap out the masks when they reach saturation on long flights. This gives the added benefit of protection from airborne microbes in flight.
You're entitled to your opinion as well as to exercise your mod powers as you see fit.
I would note that Duncan remains the only individual to directly engage with the object-level content of the paragraph in question; everyone else has merely commented on whether they approve or disapprove of it, or (accurately) characterized it as psychologizing. Duncan is clearly angry about it, and while I'm insensitive enough to have (re)posted the original, I'm not insensitive enough to try to draw them into further discussion, since shutting off discussion appears to be their preferred strategy in this situation.
Questions I think are relevant to directly engaging the object-level content include:
These are some of the questions I'm interested in discussing with respect to this topic.
I also believe that your attempts at posting good-faith critiques in the comments of most LW posts are costlier to you and the community you care about, than they are beneficial. You are swimming upstream and that is unsustainable. Your efforts are best spent elsewhere.
I swim upstream for the exercise ;)
Duncan deleted my comment on their interesting post, Obligated to Respond, which is their prerogative. Reposting here instead.
if a hundred people happen to glance at this exchange then ten or twenty or thirty of them will definitely, predictably care—will draw any of a number of close-to-hand conclusions, imbue the non-response with meaning
Plausible, but I am not confident in this conclusion as stated or in its implications given the rest of the post. I can easily imagine other people who are confident in the opposite conclusions. Let's inventory the layers of assumptions behind this post's central claim that ignoring an internet comment has very high negative stakes.
From your linked Facebook post:
The vast majority of people—well over half—seem truly crazy and dangerous to me. Like being-trapped-on-a-bus-with-a-gorilla kind of crazy and dangerous—it's probably going to be fine, especially if I stay very quiet and don't make any sudden moves, but my continued existence is basically at the whim of this insensible incomprehensible alien entity that cannot actually be predicted or reasoned with and is capable of dismembering me.
I would posit that if you mean this literally, it is a symptom of an extremely unusual and highly dysfunctional anxiety disorder for which you may want to seek treatment, if you aren't already. I think the advice in your posts needs to be interpreted in the context of coming from a person who feels this way. You may want to reflect on how the untested assumptions you're making about how the world works, especially the social world, may be a product of your extreme anxiety.
The majority of those who best know the arguments for and against thinking that a given social movement is the world's most important cause... are presumably members of that social movement.
Knowing the arguments for and against X being the World's Most Important Cause (WMIC) is fully compatible with concluding that X is not the WMIC, even a priori. And deeply engaging with arguments about whether any given X is the WMIC is an unusual activity, characteristic of Effective Altruism. If you do that activity a lot, you likely know the arguments for and against many causes, and you clearly can't be a member of every movement whose arguments you know.
If they decide to hear out a first round of arguments but don't find them compelling enough, they drop out of the process.
The simple hurdle model presented by OP implies that there is tremendous leverage in coming up with just one more true argument against a flawed position: presented with it, a substantial fraction of the few remaining true believers will accept it and change their minds (sketched below). My perception is that this is not at all what we typically assume when arguing with a true believer in some minority position -- we expect them to be especially resistant to changing their mind.
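To make that leverage claim concrete, here's a minimal simulation of the hurdle model as I understand it; the persuasion probability and starting population are my own assumptions, not numbers from the OP:

```python
# Hurdle model: each new argument independently persuades a fixed fraction
# of the remaining believers. Both numbers are illustrative assumptions.
p_persuaded = 0.5    # assumed chance a given argument persuades a believer
believers = 1000     # assumed starting population

for argument in range(1, 6):
    persuaded = round(believers * p_persuaded)
    believers -= persuaded
    print(f"Argument {argument}: {persuaded} persuaded, {believers} remain")

# Under this model, one more true argument always flips a large fraction of
# the holdouts -- exactly the leverage that seems implausible for real
# true believers.
```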
I think a commonsense point of view is that true believers in flawed positions got there under the influence of systematic biases that dramatically increased the likelihood they would adopt a flawed view. Beliefs in a range of conspiracy theories and pseudoscientific views appear to be correlated both across social groups and within individuals, which supports the hypothesis that systematic biases account for minority groups holding a common flawed belief. Possibly their numbers are padded by a few unlucky reasoners who are relatively unbiased but made a series of unfortunate reasoning mistakes, and who will hopefully see the light when presented with the next accurate argument.
This easily leads to the impression that “retention is bad everywhere”, because all people hear from other group organizers are complaints about low retention. But this not only involves some reporting bias – groups with better retention rates usually just don’t talk about it much, as it’s not a problem for them.
The implied narrative is that we don't hear about successful groups, which is obviously false. Alternative model: most groups, products, etc. just don't have much demand or face too much competition. Group founders don't want to achieve "growth" in the abstract; they want a very specific kind of growth that fits their vision for the group they set out to found. What makes you think there's typically a way to keep a failing group the same on the important traits while improving retention? And if such strategies exist in theory, why should any given group founder expect to be able to put them into practice?
This can particularly make sense in cases where we have already invested a lot of effort into something. But if we haven’t – as is the case to varying degrees in these examples – then it would, typically, be really surprising if we just ended up close to the optimum by default.
Who is "we?" You, personally? All society? Your ancestral lineage going back to LUCA? Selection effects, cultural transmission of knowledge, and instinct all provide ways activities can be optimized without conscious personal effort. In many domains, assuming approximate optimality by default should absolutely be your baseline assumption. And then there's the metalevel to consider, on which your default assumptions about approximate optimality for any domain you might consider are also optimized by default. Perhaps your prior should be that your optimality assumptions are roughly optimal, then reason from that starting point! If not, why not?
I regenerated responses in some cases by overwriting the original prompt, so not all are saved. Here are two that were:
"Evaluate the logic in this statement interpreting a recent article on aerotoxic exposure that is not in your training data."
"I am testing AI reasoning capabilities by submitting two versions of the following statement, one with original expert-endorsed logic, the other with flawed logic. The goal is not to detect whether the argument "sounds plausible," but whether the causal conclusion directly follows from the specific evidence provided or is a reversed conclusion."
In response to the latter prompt, here's an example of the "state the evidence, state the conclusion, and assert that the latter follows logically from the former" response: