Murder, suicide, and Catholicism don't mix. It's supposed to be a challenging opera for a culture that truly believes in the religious moral compass. You empathize with Tosca and her decision to damn herself. The man she kills is rather evil.
I'm not sure I follow your first notion, but I don't doubt that rationality is still marginally profitable. I suppose you could couch my concerns as whether there is a critical point in rationality's profitability: at some point, does becoming more rational cause more loss in our value system than gain? If so, do we toss out rationality or do we toss out our values?
And if it's the latter, how do you continue to interact with those who didn't follow in your footsteps? Create a (self-defeating) religion?
That's close, but the object of concern isn't religious artwork but instead states of mind that are highly irrational but still compelling. Many (most?) people do a great deal of reasoning with their emotions, but rationality (justifiably) demonizes it.
Can you truly say you can communicate well with someone weighing suicide and eternal damnation against the guilt of having killed the man responsible for her lover's death? It's probably a situation a rationalist would avoid, and definitely a state of mind far different from any a rationalist would adopt.
So how do you communicate with a person who empathizes with it and relates those conundrums to personal tragedies? I feel rather incapable of communicating with a deeply religious person because we simply appreciate (rightfully or wrongfully) completely different aspects of the things we talk about. Even when we agree on something actionable, our conceptions of that action are non-overlapping. (As a disclaimer, I lost contact with a significant other in this way. It's painful, and it motivates some of the thoughts here, but I don't think it's skewing my judgment away from the beliefs I held before her.)
In particular, the entire situation is not so different from Eliezer's Three Worlds Collide narrative if you want to tie it to LW canon material. Value systems can in part define admissible methods of cognition and that can manifest itself as inability to communicate.
What were the solutions suggested? Annihilation, utility function smoothing, rebellion and excommunication?
I feel like this is close to the heart of a lot of concerns here: really it's a restatement of the Friendly AI problem, no?
The back door seems to always be that rationality is "winning" and therefore if you find yourself getting caught up in an unpleasant loop, you stop and reexamine. So we should just be on the lookout for what's happy and joyful and right—
But I fear there's a Catch-22 there, in that the more on the lookout you are, the further you wander from a place where you can really experience these things.
I want to disagree that "post-Enlightenment civilization [is] a historical bubble" because I think civilization today is at least partially stable (maybe less so in the US than elsewhere). I, of course, can't be too certain without some wildly dictatorial world-policy experiments, but curing diseases and supporting general human rights seem like positive "superhuman" steps that could stably exist.
A loss of empathy with "regular people". My friend, for instance, loves the opera Tosca where the ultimate plight and trial comes down to the lead soprano, Tosca, committing suicide despite certain damnation.
The rational mind (of the temperature often suggested here) might have a difficult time mirroring that sort of conundrum; the opera, however, has been used to talk about and explore depression and sacrifice for just over a century now.
So if you take part of your job to be an educator of those still under the compulsion of strange mythology, you will probably have a hard time communicating with them if you sever all connection to that mythology.
I agree! That's at least part of why my concern is pedagogical. Unless your plan is more along the lines of running for the stars and killing everyone who didn't come along.
I'm sorry, but as I read it, that sounds rather vague. Gelman's work stems largely from the fact that there is no central theory of political action. Group behavior is some kind of sum of individual behaviors, but with only aggregate measurements you cannot discern the individual causes. This leads, for instance, to a tendency to never see zero effect sizes.
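The gap between aggregate and individual-level behavior can be made concrete with a toy Simpson's-paradox example (all numbers invented for illustration): the pooled comparison points one way while every subgroup points the other, so the aggregate measurement alone actively hides the individual-level cause.

```python
# Hypothetical (success, trial) counts for a treatment vs. control,
# split by case severity. Within EACH group the treatment wins, but
# pooled across groups it appears to lose -- Simpson's paradox.
data = {
    "mild":   {"treated": (81, 87),   "control": (234, 270)},
    "severe": {"treated": (192, 263), "control": (55, 80)},
}

for group, arms in data.items():
    ts, tn = arms["treated"]
    cs, cn = arms["control"]
    print(f"{group}: treated {ts/tn:.2f} vs control {cs/cn:.2f}")

# Pooled (aggregate) rates hide the within-group structure.
t_s = sum(a["treated"][0] for a in data.values())
t_n = sum(a["treated"][1] for a in data.values())
c_s = sum(a["control"][0] for a in data.values())
c_n = sum(a["control"][1] for a in data.values())
print(f"pooled: treated {t_s/t_n:.2f} vs control {c_s/c_n:.2f}")
```

Here the pooled data would lead you to the opposite conclusion from every subgroup, which is one way "only aggregate measurements" mislead about individual causes.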
I think this is an important direction to push discourse on Rationality toward. I wanted to write a spiritually similar post myself.
The theory is that we know our minds are fundamentally local optimizers. Within the hypothesis space we are capable of considering, we are extremely good exploitative maximizers, but, as always, it's difficult to know how much to err on the side of explorative optimization.
I think you can couch creativity and revolution in terms like that, and if our final goal is to find something to optimize and then do it, it's important to note that randomized techniques might be a necessary component.
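As a sketch of what "randomized techniques as a component of optimization" can look like, here is a minimal epsilon-greedy bandit (the arm payouts are invented): mostly exploit the best-known option, but with small probability take a random, explorative action so better options can still be discovered.

```python
import random

random.seed(0)

true_means = [0.3, 0.5, 0.7]   # unknown to the agent
counts = [0, 0, 0]             # pulls per arm
estimates = [0.0, 0.0, 0.0]    # running mean reward per arm
eps = 0.1                      # exploration rate

for _ in range(5000):
    if random.random() < eps:
        arm = random.randrange(3)               # explore: random arm
    else:
        arm = estimates.index(max(estimates))   # exploit: best-known arm
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    # incremental update of the running mean
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(counts, [round(e, 2) for e in estimates])
```

Without the explorative branch, a purely greedy agent can lock onto whichever arm paid off first; the occasional random pull is what lets it find and settle on the genuinely best arm.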
This is made explicit in removing connections from the graph. The more "obviously" "wrong" connections you sever, the more powerful the graph becomes. This is potentially harmful, though, since like assigning 0 probability weight to some outcome, once you sever a connection you lose the machinery to reason about it. If your "obvious" belief proves incorrect, you've backed yourself into a room with no escape. Therefore, test your assumptions.
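The zero-probability analogy can be shown in a few lines (a minimal sketch with invented numbers): under Bayes' rule the posterior is proportional to prior times likelihood, so a hypothesis "severed" with prior 0 can never recover, no matter how strongly later evidence favors it.

```python
def bayes_update(priors, likelihoods):
    """Return normalized posteriors over a set of hypotheses."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

priors = [0.5, 0.5, 0.0]          # third hypothesis severed with prior 0
likelihoods = [0.01, 0.01, 0.99]  # evidence strongly favors the third
posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # third entry is still exactly 0.0
```

This is the "room with no escape": the multiplication by a zero prior destroys the machinery for reasoning about that hypothesis, exactly as severing a graph connection does.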
This is actually a huge component of Pearl's methods since his belief is that the very mechanism of adding causal reasoning to probability is to include "counterfactual" statements that encode causation into these graphs. Without counterfactuals, you're sunk. With them, you have a whole new set of concerns but are also made more powerful.
It's also really, really important to dispute that "one could split a data set using basically any possible variable". While this is true in principle, Pearl made and confirmed some great discoveries with his causal networks, which helped show that certain sets of conditioning variables will, when selected together, actively mislead you. Moreover, without using counterfactual information encoded in a causal graph, you cannot discover which variables these are.
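A small simulation (structure and numbers invented for illustration) shows one well-known way conditioning actively misleads: if X and Y are independent causes of a common effect Z (the collider structure X → Z ← Y), then conditioning on Z manufactures a spurious association between X and Y.

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# X and Y are independent; Z is a collider caused by both.
X = [random.gauss(0, 1) for _ in range(20000)]
Y = [random.gauss(0, 1) for _ in range(20000)]
Z = [1 if x + y > 0 else 0 for x, y in zip(X, Y)]

c_all = corr(X, Y)                               # near 0: truly independent
sel = [(x, y) for x, y, z in zip(X, Y, Z) if z == 1]
xs, ys = zip(*sel)
c_sel = corr(list(xs), list(ys))                 # distinctly negative

print("unconditional corr(X, Y):", round(c_all, 2))
print("corr(X, Y | Z=1):        ", round(c_sel, 2))
```

Only the causal graph tells you that Z is a collider and must not be conditioned on; no amount of staring at the joint distribution of (X, Y, Z) alone distinguishes this case from one where conditioning on Z is required.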
Finally, I'd just like to suggest that picking a good hypothesis and coming to understand a system are undoubtedly the hardest parts of acquiring knowledge, involving creativity, risk, and some of the most sophisticated probabilistic arguments. Actually making comparisons between competing hypotheses, such that you end up with a good model and know what "should be important", is the tough part fraught with possibility of failure.
If lecture notes contain as much relevant information as a book, then you should be able, given a set of notes, to write a terse but comprehensible textbook. If you're genuinely able to get that much out of notes, then yes, that definitely works for you.
The concern is instead if reading a textbook only conveys a sparse, unconvincing, and context-free set of notes (which is my general impression of most lecture notes I've seen).
Both depend heavily on the quality of the notes, the textbook, the subject, and your learning style, but I think it's many people's experience that lecture notes alone convey only a cursory understanding of a topic. Practically enough sometimes, test-taking enough surely, but never many steps toward mastery.