(Epistemic status: gut reaction to the beginning of the post; thought out during writing)
It seems that a useful measure of complexity might arise from thinking of a phenomenon as a causal graph; specifically, the complexity of a phenomenon can be described as "what fraction of the phenomenon's causal components need to break before the phenomenon ceases to occur, penalized by the complexity of the components."
This has the benefit of being able to talk about a phenomenon at multiple levels of precision (with its components at multiple levels of abstraction) and still get useful bounds on the "complexity" of the phenomenon. It also has the benefit of matching the following intuitions:
It also has the following less intuitive properties:
Is there prior work on such a model of complexity?
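One way the measure above might be formalized is as a recursive function over a component tree. This is only a sketch of my reading of it; the representation, the leaf convention, and the exact penalty form (weighting each critical component by its own complexity) are my own assumptions, not anything specified above:

```python
# Illustrative sketch of a fragility-based complexity measure.
# A phenomenon is a dict with "components" (a list of sub-phenomena)
# and "critical" (the indices of components whose individual failure
# stops the phenomenon). All names and conventions here are assumptions.

def complexity(phenomenon):
    """Return the fraction of components that are single points of
    failure, each weighted by its own recursive complexity.
    A leaf (no components) is assigned complexity 1.0 by convention."""
    comps = phenomenon.get("components", [])
    if not comps:
        return 1.0
    critical = phenomenon.get("critical", set())
    penalty = sum(complexity(comps[i]) for i in critical)
    return penalty / len(comps)

# A toy phenomenon with four simple components, one of which is critical:
engine = {"components": [{}, {}, {}, {}], "critical": {0}}
print(complexity(engine))  # 1/4 = 0.25
```

Note that under this reading a *smaller* fraction means a more fragile phenomenon, so whether the number should be read directly or inverted as "complexity" depends on which direction the original intuition intends.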
This is a tangent, but I'm a bit caught up on the following turn of phrase:
At this point I consider the drowning child argument a Basilisk, and wish it was treated accordingly: as something memetically hazardous that everyone needs to overcome and defeat as part of their coming-of-age rituals.
I have not before heard "Basilisk" to refer to "memetic hazard that should be overcome in coming-of-age"; instead, I have always heard it refer to "memetic hazard that should not even be mentioned". I was wondering if anyone has more examples of the usage in this article, and/or more examples of basilisks in the sense of this usage.
So... Longtime lurker, made an account to comment, etc.
I have a few questions.
First two, about innate status sense:
* I'm not convinced it exists; is there a particular experiment (thought or otherwise) that could clearly demonstrate the existence of an innate status sense among people? Presuming I don't have it, and I have several willing, honest, introspective, non-rationalist, average adults, what could I ask them?
* Is there a particular thought experiment I could perform that discriminates cleanly between worlds in which I have it and worlds in which I don't?
Next, about increasing probability estimates of unlikely events based on the outside view:
* This post argues against "Probing the Improbable" and for "Pascal's Muggle: Infinitesimal ..."; having skimmed the former and read the latter, I'm not clearly seeing the difference. Both seem to suggest that after using a model, implicitly or explicitly, to assign a low probability to an event, it is important to note the possibility that the model is catastrophically wrong and factor that into your instrumental probability.
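The shared point, as I read both pieces, can be shown with a one-line mixture calculation. The numbers below are purely illustrative assumptions of mine, not figures from either source:

```python
# Hedged sketch: a model's tiny probability can't be taken at face value
# once the chance that the model itself is wrong is factored in.
p_event_given_model_ok = 1e-12   # what the model assigns (assumed)
p_model_wrong = 1e-4             # chance the model is catastrophically wrong (assumed)
p_event_given_model_wrong = 0.5  # ignorance prior if the model fails (assumed)

p_instrumental = ((1 - p_model_wrong) * p_event_given_model_ok
                  + p_model_wrong * p_event_given_model_wrong)
print(p_instrumental)  # dominated by the model-error term: ~5e-5
```

On these (made-up) numbers the instrumental probability is pinned near `p_model_wrong * p_event_given_model_wrong`, no matter how small the model's own estimate is, which is the structure I see in both arguments.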
It depends on the meme in question.
These two don't really fit the use of "basilisk" I've heard (even though the second coined the term, IIRC), because they are not "ideas, knowing about which causes great harm (in expectation)". You are saying that there are two distinct approaches:
If we accept the term "basilisk" to include memes that should be treated by inoculation (I lean against this, since it de-fangs, so to speak, the term when used to refer to the other sort), then the drowning child argument is a perfect example: it can cause great emotional stress, and you're likely to run into it if you take any philosophy class or read any EA material, but there are many ways to defuse the argument, some of which come very naturally to most people.
Obviously, even if I had an example of the latter type, I wouldn't reference it here, but I think that such things might exist, and there's value to keeping wary of them.