Thank you! I'd seen the poll but not the repo.
Re-reading this, it strikes me that an entity communicating purely on the first level is itself a drone, not an agent. It is a slave to the territory, and can only report the territory's condition, even when doing so may harm it. (See Kant's thought experiment about an ax murderer who enters your home and demands to know where your friend is hidden.)
Any chance you could point me to some keywords/authors/texts on this topic? I'd love to learn more.
My research into animal mimicry, which closely resembles Baudrillardian simulacra, makes me think the slide in language/signaling from the first to the second step is a potentially intractable problem. Once some association in information-space develops a reputation among situated actors, and is recognized as open to manipulation that benefits some of those actors at the cost of others... well, there's no way to shake off the free-riders of dishonest signaling.
Let's say that a black-and-red phenotype on a butterfly develops a reputation among predators as inedible (the butterfly releases toxins when eaten). Now it's protected, great! What used to be a lose-lose (predator eats toxins, butterfly gets eaten) is transformed into a win-win (predator avoids toxins, butterfly survives) by the power of information: honest signaling benefits everyone. This is "step 1."
Unfortunately, the next step is other, non-toxic butterflies "noticing" (which is to say, evolution exploiting) this statistical association, and protecting themselves by dishonestly displaying the protected black-and-red phenotype. This works alright at first, but it's driven by frequency-dependent selection: the more dishonest signalers, the less protection for everyone, toxic or not. This is "step 2."
But the actually toxic butterflies—the original honest signalers—can't go anywhere. They're just stuck. One might happen to evolve a new phenotype, but that phenotype isn't protected by any reputational association, and it will take a very long time for the new signal-association to take hold in predators. Once other insects have learned how to replicate the proxy-association or symbol that protected the honest signalers, the honest signalers can only wait it out until the signal is no longer protective.
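The two steps can be sketched as a toy replicator-dynamics simulation. Everything quantitative here is my own illustrative assumption, not anything from the biology: predators discount the signal in proportion to the fraction of mimics bearing it, attacked mimics die while attacked toxic models survive half the time (the predator drops them after tasting), and mimics get a fecundity bonus for skipping toxin production.

```python
# Toy frequency-dependent selection in Batesian mimicry ("step 2" above).
# All parameters are made-up for illustration.

def step(mimic_frac, dilution=0.9, fecundity_bonus=1.5):
    """One generation; returns (attack_rate, new_mimic_fraction)."""
    attack = dilution * mimic_frac            # more mimics -> weaker signal
    mimic_fitness = fecundity_bonus * (1 - attack)   # eaten if attacked
    model_fitness = 1 - 0.5 * attack                 # sometimes spat out
    total = mimic_frac * mimic_fitness + (1 - mimic_frac) * model_fitness
    return attack, mimic_frac * mimic_fitness / total

def simulate(generations=20, mimic_frac=0.01):
    history = []
    for _ in range(generations):
        attack, mimic_frac = step(mimic_frac)
        history.append((attack, mimic_frac))
    return history

for attack, frac in simulate():
    print(f"mimic fraction {frac:.3f} | attack rate on the signal {attack:.3f}")
```

With these made-up numbers the mimic fraction climbs until honest and dishonest signalers have equal fitness, and the attack rate on the once-protective signal rises with it: the protection erodes for everyone, which is the frequency-dependence described above.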
You may have noticed this is a very similar mechanism to Goodhart's Law; as far as I can tell, the mechanism is the same. In both cases, a publicly visible signal proxies for a hidden quality that outsiders have no access to. (E.g., the lemon problem in used-car sales, or size and confidence as proxies for fighting ability in macaque hierarchies.) It can be easier and more reliable to just learn and copy the proxy than to evolve the hidden quality and hope observers catch on. (Think of how many black-and-red butterflies got munched before the predators learned.) It's a bleak problem; I haven't been able to make much progress on it, though I'd be super curious to hear if you think I've made errors in my premises, or if there's literature in game theory on this problem.
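The lemon problem is probably the cleanest formalization of this proxy structure, so here is a toy version of Akerlof's unraveling argument. The specific numbers are my illustrative assumptions: car quality is private to sellers and uniform on [0, 2000], and buyers value any car 25% above its seller's valuation but can only condition on the offer price.

```python
# Toy "market for lemons": a visible price proxies for hidden quality,
# and the proxy unravels. Numbers are illustrative assumptions.

def unravel(offer=1000.0, buyer_premium=1.25, rounds=12):
    """At offer p, only sellers with cars worth <= p sell, so the average
    quality traded is p / 2; buyers rationally re-offer premium * p / 2."""
    offers = [offer]
    for _ in range(rounds):
        avg_quality_traded = offers[-1] / 2   # adverse selection at work
        offers.append(buyer_premium * avg_quality_traded)
    return offers

for p in unravel():
    print(f"offer: {p:8.2f}")
```

Even though every trade would create surplus here (buyers value cars strictly more than sellers do), the hidden quality drives the offer toward zero. Copying a proxy signal erodes its information content the same way in the butterfly case.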
Yes, I think it all depends on whether you find compelling the criticisms of Socratic dialogue, logical positivism, and "tree falls in a forest"-type questions that have been raised on this board since the late '00s.
I agree, and I think many conceptual-engineering-type philosophers would agree, about natural language. The problem is that when you apply rigorous analysis to a "naturally" grown structure like "truth" or "knowledge," you run into serious issues. Kevin Scharp's project, for example, is just to improve the philosophical terms, not to interfere with mainstream use.
Though I don't know much about it, I take "meaning as use" as a vague proto-version of the more explicit theories of fuzziness, polysemy, and "family resemblance" that Wittgenstein would develop later in his life. In some sense, it merely restates descriptivism; in another, less literal sense, it's a tonal subversion of more classical understandings of meaning.
Conceptual engineering takes a very different stance from mere descriptivism; it specifically holds that philosophers ought to "grasp the language by its reins" and carve up words and concepts in more useful ways. "Useful," of course, depends on the field, but in metaphysics, e.g., the disambiguation would focus on evading common language traps. In that way, it's a bit like Yudkowsky's "Taboo Your Words."
Thanks for reading!
Yes, so the premise of Chalmers's lecture, and of many other texts being published right now in conceptual engineering (a quickly growing field), is first to treat and define "conceptual engineering" using conceptual engineering—a strange ouroboros. Other philosophers are doing more applied work; see Kevin Scharp's version of conceptual engineering in his work on truth, or Sally Haslanger's version, "ameliorative analysis." But broadly, Chalmers's tentative definition is fine as a generic-enough umbrella: constructing, analyzing, renovating, etc. Right now, really anything in the ballpark of what "conceptual engineering" intuitively connotes is a fine description.
One place to start, as Cappelen does in his monographs on the subject, is with Nietzsche's Will to Power, so I'll quote that here:
Philosophers … have trusted in concepts as completely as they have mistrusted the senses: they have not stopped to consider that concepts and words are our inheritance from ages in which thinking was very modest and unclear. … What dawns on philosophers last of all: they must no longer accept concepts as a gift, nor merely purify and polish them, but first make and create them, present them and make them convincing. Hitherto one has generally trusted one's concepts as if they were a wonderful dowry from some sort of wonderland: but they are, after all, the inheritance from our most remote, most foolish as well as most intelligent ancestors. … What is needed above all is an absolute skepticism toward all inherited concepts.
Might add to the main post as well for clarity.
EDIT: Also, to be clear, my problem is not that Chalmers attempts to offer a definition. It's that, when presented with an intellectual problem, his first recourse in designing a solution is to consult a dictionary. And to make it worse, the concept he is looking up in the dictionary is a metaphor that a scholar twenty years ago thought was a nice linguistic turn of phrase.
If there were a single allele that coded for half of the irreducibly complex eye, it could become fixed even though having half an eye is, strictly speaking, worse than having no eye at all.
I understand this was a toy example, so I feel bad nitpicking, but I've never quite understood why this example is so popular. While eyeballs are incredibly complex, one must imagine that "half an eyeball" is in fact very advantageous: it can likely sense light and some movement.
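That said, the drift mechanism in the quoted sentence is real in small populations, and easy to see in a toy Wright-Fisher simulation. The population size, fitness penalty, and starting frequency below are made-up numbers of mine, not anything from the post:

```python
import random

# Toy Wright-Fisher model: in a small population, drift can fix an allele
# even when it is mildly deleterious. Parameters are illustrative.

def wright_fisher(n=20, p=0.5, fitness=0.95, generations=1000, rng=None):
    """Return True if the deleterious allele fixes, False otherwise."""
    rng = rng or random.Random()
    for _ in range(generations):
        if p in (0.0, 1.0):
            break
        # Selection: the deleterious allele is weighted down before sampling
        w = p * fitness / (p * fitness + (1 - p))
        # Drift: binomial sampling of the next generation's n allele copies
        p = sum(rng.random() < w for _ in range(n)) / n
    return p == 1.0

rng = random.Random(0)
runs = 500
fixations = sum(wright_fisher(rng=rng) for _ in range(runs))
print(f"deleterious allele fixed in {fixations}/{runs} small populations")
```

With a population of only 20 allele copies, sampling noise regularly overwhelms a 5% fitness penalty, so the worse allele fixes in a substantial minority of runs. That's one way to read the slack/randomness connection: small populations have the slack to wander downhill.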
Thought the connection of slack to randomness was provocative, though!