As far as I can tell, I agree with what you say - this seems like a good account of how the cryptographer's constraint cashes out in language.
To your confusion: I think Dennett would agree that it is Darwinian all the way down, and that their disagreement lies elsewhere. Dennett's account of how "reasons turn into causes" is made on Darwinian grounds, and it compels Dennett (but not Rosenberg) to conclude that purposes deserve to be treated as real, because (compressing the argument a lot) they have the capacity to affect the causal world.
Not sure this is useful?
I'm inclined to map your idea of "reference input of a control system" onto the concept of homeostasis, homeostatic set points and homeostatic loops. Does that capture what you're trying to point at?
(Assuming it does) I agree that homeostasis is an interesting puzzle piece here. My guess for why this didn't come up in the letter exchange is that D/R are trying to resolve a related but slightly different question: the nature and role of an organism's conscious, internal experience of "purpose".
Purpose and its pursuit have a special role in how humans make sense of the world and themselves, in a way non-human animals don't (though it's not a binary).
The suggested answer to this puzzle is that, basically, the conscious experience of purpose and intent (and the attribution of this conscious experience to other creatures) is useful and thus selected for.
Why? Because they are meaningful patterns in the world. An observer with limited resources who wants to make sense of the world (i.e. an agent that wants to do sample complexity reduction) can abstract along the dimension of "purpose"/"intentionality" to reliably get good predictions about the world. (Except, "abstracting along the dimension of intentionality" isn't an active choice of the observer, but rather a result of the fact that intentions are a meaningful pattern.) The "intentionality-based" prediction does well at ignoring variables that aren't very predictive and capturing the ones that are, in the context of a bounded agent.
Regarding "the meaning of life is what we give it": that's like saying "the price of an apple is what we give it". While true, it doesn't tell the whole story. There are actual market forces that dictate apple prices, just like there are actual Darwinian forces that dictate meaning and purpose.
Agree; the causes that we create ourselves aren't all that governs us - in fact, they're only a small fraction of it, considering physical, chemical, biological, game-theoretic, etc. constraints. And yet, there appears to be an interesting difference between the causes that govern mere animals and those that govern human animals. Which is what I wanted to point at in the paragraph you're quoting and the few above it.
I'm confused about the "purposes don't affect the world" part. If my purpose is to eat an apple, then an apple that would otherwise still be in the world will no longer be there. My purpose has actual effects on the world, so my purpose actually exists.
So, yes, basically this is what Dennett reasons in favour of, and what Rosenberg is skeptical of.
I think the thing here that needs reconciliation - and what Dennett is trying to do - is to explain why, in your apple story, it's justified to use the term "purpose", as opposed to only invoking arguments of natural selection, i.e. saying (roughly) that you (want to) eat apples because this is an evolutionarily adaptive behaviour and has therefore been selected for.
According to this view, purposes are at most a higher-level description that might be convenient for communication but that can be entirely explained away in evolutionary terms. In terms of the epistemic virtues of explanations, you wouldn't want to add conceptual entities without them improving the predictive power of your theory. I.e. adding the concept of purposes to your explanation, when you could just as well explain the observation without that concept, makes your explanation more complicated without buying you predictive power. All else equal, we prefer simple/parsimonious explanations over more complicated ones (cf. Occam's razor).
So, while Rosenberg advocates the "Darwinism is the only game in town" view, Dennett is trying to make the case that, actually, purposes cannot be fully explained away by a simple evolutionary account, because the act of representing purposes (e.g. a parent telling their children to eat an apple every day because it's good for them, an ad campaign promoting the consumption of local fruit, ...) does itself affect people's actions, and thereby purposes become causes.
Thanks :)
> I will note that I found the "Rosenberg's crux" section pretty hard to read, because it was quite dense.
Yeah, you're right - thanks for the concrete feedback!
I wasn't originally planning to make this a public post, and later failed to take a step back and properly model what it would be like for a reader without the context of having read the letter exchange.
I'm considering adding a short intro paragraph to partially remedy this.
While I'm not an expert, I did study political science and am Swiss. I think this post paints an accurate picture of important parts of the Swiss political system. I also think (and admire) that it explains very nicely the basic workings of a naturally fairly complicated system.
If people are interested in reading more about Swiss Democracy and its underlying political/institutional culture (which, as pointed out in the post, is pretty informal and shaped by its historic context), I can recommend this book: https://www.amazon.com/Swiss-Democracy-Solutions-Multicultural-Societies-dp-0230231888/dp/0230231888/
It talks about "consensus democracy", Swiss federalism, political power-sharing, and the scope and limits of citizens' participation in direct democracy, and it covers Switzerland's history as a multicultural, heterogeneous society.
[slight edit to improve framing]
Are there any existing variolation projects that I can join?
FWIW, this is the one I know of: https://1daysooner.org/
That said, the last time I got an update from them (~1 month ago), any execution of these trials was still at least a few months away. (You could reach out to them via the website for more up-to-date information.) Also, there is a limited number of places where the trials can actually take place, so you'd have to check whether there is anything close to where you are.
(Meta: This isn't necessarily an endorsement of your main question.)
That's cool to hear!
We are hoping to write up our current thinking on ICF at some point (although I don't expect it to happen within the next 3 months) and will make sure to share it.
Happy to talk!
[I felt inclined to look for observations of this thing outside of the context of the pandemic.]
Some observations:
I experience this process (either in full or its initial stages) for example when asked about my work (as it relates to EA, x-risks, AI safety, rationality and the like), or when sharing ~unconventional plans (e.g. "I'll just spend the next few months thinking about this") when talking to e.g. old friends from when I was growing up, or people in the public sphere like a dentist, physiotherapist, etc. This also used to be somewhat the case with my family, but I've made some conscious (and successful) effort to reduce it.
My default reaction is [exaggerating a bit for the purpose of pulling out the main contours] to sort of duck; my brain kicks into a process that feels like "oh we need to fabricate a lie now, focus" (though "lying" is a bit misleading - it's more like "what are the fewest, least revealing words I can say about this that will still be taken as a 'sufficient' answer"); my thinking feels restrained, quite the opposite of being able to think freely, clearly and calmly; often there is an experience (reminiscent) of something like shame; also some feeling of helplessness, "I can't explain myself" or "they won't understand me"; sometimes the question feels a bit intrusive, as if they wanted to come in(to my mind?) and break things. (?)
Some reflections: