Alex Rozenshteyn

Comments

Book Review: The Elephant in the Brain

It depends on the meme in question.

  • Some are relatively harmless, like The Game: they are easy to overcome and cause minimal suffering to those who don't overcome them.
  • Some respect the use-mention distinction, like those described in Blit and the comp.basilisk FAQ, making it possible to learn and think about them without suffering their effects.

These two don't really fit the use of "basilisk" I've heard (even though the second coined the term, IIRC), because they are not "ideas, knowing about which causes great harm (in expectation)".

You are saying that there are two distinct approaches:

  • Inoculation: the idea is close enough to omnipresent that someone is very likely to run into it (or invent it); for basilisks of this sort, focusing on prevention and treatment is probably best.
  • Containment: the idea is esoteric, and/or it cannot be treated; for basilisks of this sort, the only solution is to signal-boost the possibility of their existence and to insist on the virtue of silence on any instances actually found.

If we accept the term "basilisk" to include those that should be treated by inoculation (I'm leaning against this, as it de-fangs, so to speak, the term when used to refer to the other sort), then the drowning child argument is a perfect example: it can cause great emotional stress, and you're likely to run into it if you take any philosophy class or read any EA material, but there are many ways to defuse the argument, some of which come very naturally to most people.

Obviously, even if I had an example of the latter type, I wouldn't reference it here, but I think that such things might exist, and there's value to keeping wary of them.

A Candidate Complexity Measure

(Epistemic status: gut reaction to the beginning of the post; thought out during writing)

It seems that a useful measure of complexity might arise from thinking of a phenomenon as a causal graph; specifically, the complexity of a phenomenon can be described as "what fraction of causal components of the phenomenon need to break before the phenomenon ceases to occur, penalized by the complexity of the components" (a toy sketch of this follows the lists below).

This has the benefit of being able to talk about a phenomenon at multiple levels of precision (with its components at multiple levels of abstraction) and still get useful bounds on the "complexity" of the phenomenon. It also has the benefit of matching the following intuitions:

  • a phenomenon where every atom needs to be in just the right place is more complex than a phenomenon with some redundancy/fault-tolerance
  • a phenomenon describable by a small number of complex components is more complex than one describable by the same number of simple components
  • a phenomenon describable by a small number of complex components is simpler than one which can only be described by a large number of simple components

It also has the following less intuitive properties:

  • a phenomenon that is "caused" by many simple components but is extremely fault-tolerant is simpler than one caused by a few simple components, each absolutely necessary
  • whether a fault-tolerant phenomenon made up of fragile complex components is more or less complex than a fragile phenomenon made up of fault-tolerant complex components is up in the air; of course, if the components are of equal complexity, the former is simpler, but if instead each component is made of the same number of atoms, the question doesn't have an obvious (to me) answer
  • the "abstraction penalty" is a degree of freedom; I was imagining it as an additive or multiplicative constant, but different penalties (e.g. "+ 1", "* 1.5", or "+ 0") may lead to differently interesting notions of complexity
  • you don't need to ground out; that is, you can pick arbitrary atoms of complexity and still make meaningful relative claims of complexity
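
To make the measure concrete, here is the toy sketch mentioned above: a minimal formalization where a phenomenon is a set of causal components with a fault-tolerance threshold. The names, the recursion, and the specific additive penalty are all my own illustrative choices, not anything pinned down by the description above:

```python
from dataclasses import dataclass, field
from typing import List

# The "abstraction penalty" degree of freedom, taken here as an additive constant.
ABSTRACTION_PENALTY = 1.0

@dataclass
class Phenomenon:
    """A phenomenon as a set of causal components plus a fault-tolerance
    threshold: the minimum number of components that must break before
    the phenomenon ceases to occur."""
    components: List["Phenomenon"] = field(default_factory=list)
    min_breaks_to_fail: int = 1  # 1 = fully fragile; len(components) = maximally redundant

def complexity(p: Phenomenon) -> float:
    # Leaves are the chosen "atoms of complexity"; fix their cost at 1.
    if not p.components:
        return 1.0
    n = len(p.components)
    # Fragility: the reciprocal of the fraction of components that must break.
    fragility = n / p.min_breaks_to_fail
    # Penalize by the average complexity of the components, plus the per-level
    # abstraction penalty, so finer-grained descriptions are not free.
    avg_component_cost = sum(complexity(c) for c in p.components) / n
    return fragility * avg_component_cost + ABSTRACTION_PENALTY

# Every one of 4 atomic parts must work:
fragile = Phenomenon([Phenomenon() for _ in range(4)], min_breaks_to_fail=1)
# Same 4 parts, but the phenomenon survives until 3 of them break:
redundant = Phenomenon([Phenomenon() for _ in range(4)], min_breaks_to_fail=3)

print(complexity(fragile))    # 4/1 * 1 + 1 = 5.0
print(complexity(redundant))  # 4/3 * 1 + 1 ≈ 2.33
```

Under this toy version the fragile phenomenon scores higher than the fault-tolerant one, matching the first intuition above, and swapping the penalty (e.g. "* 1.5" or "+ 0") gives the other variants mentioned.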

Is there prior work on such a model of complexity?

Book Review: The Elephant in the Brain

This is a tangent, but I'm a bit caught up on the following turn of phrase:

At this point I consider the drowning child argument a Basilisk, and wish it was treated accordingly: as something memetically hazardous that everyone needs to overcome and defeat as part of their coming-of-age rituals.

I have not before heard "Basilisk" used to refer to "memetic hazard that should be overcome in coming-of-age"; instead, I have always heard it used to refer to "memetic hazard that should not even be mentioned". I was wondering if anyone has more examples of the usage in this article, and/or more examples of basilisks in this sense.

Hero Licensing

So... Longtime lurker, made an account to comment, etc.

I have a few questions.

The first two are about the innate status sense:

  • I'm not convinced that it exists; is there a particular experiment (thought or otherwise) that could clearly demonstrate the existence of an innate status sense among people? Presuming I don't have it, and I have several willing, honest, introspective, non-rationalist, average adults, what could I ask them?

  • Is there a particular thought experiment I could perform that discriminates cleanly between worlds in which I have it and worlds in which I don't?

Next, about increasing probability estimates of unlikely events based on the outside view:

  • This post argues against "Probing the Improbable" and for "Pascal's Muggle: Infinitesimal ..."; having skimmed the former and read the latter, I'm not clearly seeing the difference. Both seem to suggest that after using a model, implicitly or explicitly, to assign a low probability to an event, it is important to note the possibility that the model is catastrophically wrong and factor that into your instrumental probability.
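
To check my reading, here is roughly the shared move as I understand it, with made-up numbers; the decomposition is my paraphrase via the law of total probability, not a formula quoted from either piece:

```python
# Illustrative numbers only: a model assigns an astronomically small probability
# to an event, but the model itself might be catastrophically wrong.
p_event_given_model_sound = 1e-20    # the model's own output
p_model_unsound = 1e-6               # credence that the model/argument is flawed
p_event_given_model_unsound = 1e-3   # a much less confident fallback estimate

p_event = ((1 - p_model_unsound) * p_event_given_model_sound
           + p_model_unsound * p_event_given_model_unsound)

print(p_event)  # ~1e-9: dominated by the chance that the model is wrong,
                # not by the model's own tiny output
```

If that's what both are saying, the instrumental probability can never fall much below the product of "the model is wrong" and "the event happens anyway", which is why the difference between the two pieces isn't obvious to me.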