a theory about why the rationalist community has trended a bit more right wing over time that ive considered for a while now, though i doubt im the first one to have this thought.
a lot of the community in the late 00s/early 2010s were drawn from internet atheist circles, like me. but the thing that was selected for there wasn't nonbelief in god, or even skepticism qua skepticism, but something like, unusual amounts of irritation when one sees the dominant culture endorse a take that is obviously bad. at the time, the obviously bad but endorsed takes were things like "homosexuality is a sin and therefore bad", "intelligent design", and christians refusing to actually follow the teachings of jesus in terms of turning the other cheek, loving thy neighbours, and minding the logs in their own eyes.
there will always be people who experience unusual amounts of irritation when they see the culture endorse (or passively accept) a take that is obviously bad, and this is great, because those people are great. but internet christians don't really exist anymore? instead the obviously wrong things that most internet-goers see by default are terrible strawmanny sjw takes: "IQ is a fake white supremacist notion", "there are no biological differences between men and women", "indigenous people get to do the blood and soil thing but no one else gets to do that for unexplained reasons". so the people who show up now tend to be kinda mad about the sjws.
i am not saying that the sjw takes are unusually bad[1]; lots of other popular communities have even worse takes. but bad social justice takes are unusually endorsed by cultural gatekeepers, the way e.g. k-pop stans aren't, and that's the thing that lots of protorationalists really can't stand.
after coming up with this theory, i became a lot less sad about the community becoming [edit: more] right wing. because it makes it a lot easier to believe that the new people are still my people in the most important ways. and it doesn't seem unlikely to me that the bright-eyed youngsters finding the community in 2030 would be irritated by, and unusually fixated on disproving, an entirely different set of beliefs trendy in the culture by then.
actually, i think that the non-strawman versions of the sjw takes listed are all genuinely really interesting and merit at least some consideration. ive been reading up on local indigenous history recently and it's the most fascinating topic i've rabbit holed in on in ages.
I'm not persuaded that rationalists actually did turn towards the right. For example, when I looked at the proportion of people who identified as liberal or left-wing in a few years sampled across the history of the LessWrong survey, the number seems consistent over time. Why do you think they did?
I agree that for a while, the main culture war rats engaged in was the anti-wokeism one, which made us look more right wing. But I don't know if it e.g. led to more American rats voting Republican (my guess is that the proportion of rats voting Republican has in fact gone down over this time period because of Trump).
ah, i think i misspoke by saying "the community becoming right wing" in my original post. that is a strong overstatement, I'll correct that.
i agree that rationalists are still very progressive, but i think there's also been a noticeable but small rightward shift. some examples of what ive noticed outside of reflexive allergy responses to social justice posts:
i think a lot of the above examples are quite path-dependent, and im sympathetic to some of their developments. im even fine if some would like to make the claim that these are all indicators of the community becoming more well-calibrated in a certain sense. but it does kind of seem like a real rightward trend to me?
i also don't think this shift will result in significantly more rats voting republican, but that's because i think voting republican is more of a signal of red tribe belonging than it is of like, actual political belief. one of my good friends from this community is a republican who hasn't voted red in the presidential elections in ages.
sidelined in the discourse. individual people and organizations loosely affiliated with rationality are doing really cool things around reproductive tech, and this is of course much more important.
- increasing endorsement/linking of right wing figures like hanania and cremieux
Idk, back in the day LessWrong had a reasonable amount of discussion of relatively right-wing figures like Moldbug and other neoreactionaries, or on the less extreme end, people like Bryan Caplan. And there's always been an undercurrent of discussion of e.g. race and IQ.
low confidence but i feel like i can kind of assume that the median rat has libertarian sympathies now in a way that i couldn't before?
I feel like the median rat had strong libertarian sympathies 10 years ago.
i think these facts can be consistent with a theory like, the rationalists went from being 15% right wing to 20% right wing in the last ten years?
I think that shifting from 15% to 20% over ten years is so plausible under the null hypothesis that it doesn't really cry out for explanation, and any proposed explanation has to somehow explain why it didn't lead to a larger effect!
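To put a rough number on that: a back-of-the-envelope sampling check, assuming around 300 respondents per survey wave (a figure I'm inventing for illustration; actual LW survey sizes vary by year). The standard error of the difference between two independently measured proportions is

$$\mathrm{SE}(\hat{p}_1 - \hat{p}_2) \approx \sqrt{\frac{0.15 \cdot 0.85}{300} + \frac{0.20 \cdot 0.80}{300}} \approx 0.031,$$

so a five-point gap is only about 1.6 standard errors, comfortably within ordinary sampling noise for two survey waves, even before allowing for unremarkable real drift.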
Every so often I stumble across a question in need of a survey, and as it happens I have one.
(Smaller response rate than I'd like though, I should try that chart on the ACX survey sometime.)
i think that the non-strawman versions of the sjw takes listed are all genuinely really interesting and merit at least some consideration. ive been reading up on local indigenous history recently and it's the most fascinating topic i've rabbit holed in on in ages.
I am interested in what/who you recommend reading here.
Rationalists turned towards the right because the left[1] became the outgroup, while the right[2] became the fargroup.
The above is somewhat glib but nonetheless true and important; see the classic Hanania article on what kinds of communities and epistemic bubbles the two sides create, and how the kind of anti-intellectualism of the right that would immediately turn rationalists off instead became an "out of sight, out of mind" type of deal.
Also, see this (from Scott):
Republicans still “threaten” me in the sense of being able to enact policies that harm me. And people less privileged than I am face even more threats – a person dependent on food stamps has a lot to fear from Republican victories. But Republicans aren’t taking over my social circle or screaming in my face. In a purely social context they start to seem more like cartoonish and distant figures of evil, rather than neighbors and coworkers. The average Trump voter no longer seems like an uncanny-valley version of me; they seem like some strange inhabitant of a far-off land with incomprehensible values, just like ISIS.
I don't think this really tracks. I don't think I've seen many people want to "become part of the political right", and it's not even the case that many people voted for republicans in recent elections (indeed, my guess is fewer rationalists voted for republicans in the last three elections than previous ones).
I do think it's the case that on a decade scale people have become more anti-left. I think some of that is explained by background shift. Wokeness is on the decline, and anti-wokeness is more popular, so baserates are shifting. Additionally, people tend to be embedded in coastal left-leaning communities, so they develop antibodies against wokeness.
Maybe this is what you were saying, but "out of sight, out of mind" implies a miscalibration about attitudes on the right here, where my sense is people are mostly reasonably calibrated about anti-intellectualism on the right, but approximately no one was considering joining that part of the right, or was that threatened by it on a personal level, and so it doesn't come up very much.
Hmm. I have no doubt you are more personally familiar with and knowledgeable of the rationality community than I am, especially when it comes to the in-person community, so I think it's appropriate for me to defer here a fair bit.
Nevertheless, I think I still disagree to some extent, or at least remain confused on a few matters about the whole "miscalibration about attitudes on the right" thing. I linked a Wei Dai post upthread titled "Have epistemic conditions always been this bad?" which begins (emphasis mine):
In the last few months, I've gotten increasingly alarmed by leftist politics in the US, and the epistemic conditions that it operates under and is imposing wherever it gains power. (Quite possibly the conditions are just as dire on the right, but they are not as visible or salient to me, because most of the places I can easily see, either directly or through news stories, i.e., local politics in my area, academia, journalism, large corporations, seem to have been taken over by the left.)
I have not seen corresponding posts or comments on LW worrying about cancellations from the political right (or of targeted harassment of orgs that collaborated with the Biden administration or other opponents of Trump, etc., as we are currently seeing in practice).
I also recall seeing several "the EA case for Trump" posts, the most popular of which was written by prominent LW user Richard Ngo, who predicted the Trump administration would listen to right-wing tech elites like Musk, Thiel, (especially!) Vivek etc. ("over the next 5–10 years Silicon Valley will become the core of the Republicans") and reinvigorate institutions in Washington, cleansing them of the draconian censorship regimes, bureaucracies that strangle economies, and catastrophic monocultures. This... does not seem to have panned out, in any of the areas I've just mentioned. Others are analyzed here; my personal contribution is that several rats I know who are Hanania fans (and voted for Trump) were very surprised that Trump 2.0 was not a mere continuation of Trump 1.0 and instead turned very hostile to free trade and free markets.
(I did not see any corresponding "Rats for Harris" or "EAs for Harris" posts; maybe that's a selection effect problem on my end?)
Moreover, many of the plans written last year on this very site for how the AI safety community should reach out to the executive branch, either to communicate issues about AI risk or to try to get them to implement governance strategies, etc., seemed... not to engage with the reality of what having actual Donald Trump in power would mean in this respect? Or for example, they did not engage with the possibility of having David Sacks be the official US AI Czar and dismiss everything that's not maximally supportive of AI and tech bros? Maybe AI governance people in their private conversations are adding in stuff like "and let's make sure we personally give an expensive gift to Trump through his lackeys when we meet with the agency, otherwise we'll be dismissed outright," but I'm not seeing public acknowledgements of how to deal with Trump being the president from those whose plans and desires route through the US executive taking bold international action when it comes to AI.
Also, very many (definitely a majority of) users on the EA Forum, and even top brass at GiveWell, seemed shocked and entirely unprepared when USAID was shut down. I don't have all the links handy right now, but this certainly seems to reflect a failure to predict what the Trump administration would do, even though Project 2025 talked a fair bit about how to restructure and crack down on USAID. Perhaps you wouldn't consider the EA and rationality communities to be the same, but the overlap seems quite substantial to me.
(I did not see any corresponding "Rats for Harris" or "EAs for Harris" posts; maybe that's a selection effect problem on my end?)
Are you somehow implying the community isn't extremely predominantly left? If I remember the stats correctly, for US rationalists, it's like 60% democrats, 30% libertarians, <10% republicans. The reason why nobody wrote a "Rats for Harris" post is because that would be a very weird framing with the large majority of the community voting pretty stably democratic.
Almost the entirety of my most recent comment is just about the question of whether rationalists were or weren't miscalibrated about the anti-intellectualism etc. of the Trump campaign.
Trump is good at making people see whatever they want to see in him, even if it is different things for different people. That's what makes him a successful politician.
Many rationalists enjoy uncritical contrarianism: they say things that defy common sense to signal how much smarter they are, and even if that's not the way to make best predictions, it is a way to occasionally make a weird prediction that turns out to be correct, so you can be proud of it and conveniently forget many other similar predictions that turned out to be wrong.
So yeah, this is a bad combination, because no matter how much evidence we get, the game of pretending that everything Trump does is a 5D-chess move is too enjoyable. Trump does things; if some of them happen to be good, it is "I told you so", and if some of them happen to be bad, it is "just wait, I am sure this is all a part of a greater plan". But the only plan is to get more power for Trump; the consequences for the economy, society, education, science, etc. are mere side effects. Anyone who still doesn't get it is too addicted to wishful thinking.
(I wonder about Project 2025. I don't know the details, but it wouldn't surprise me to find out that even its authors are disappointed by Trump. At least this review on the EA Forum sounds to me much smarter and more coherent than anything the Trump administration actually did.)
huh, yeah, I think this is a pretty reasonable alternate hypothesis.
i do notice that there's starting to be promising intellectual stuff coming from a right wing perspective again. i think this trend will continue and eventually there will be some enterprising zoomer publication that cracks the nut and gains genuine mainstream respectability as some sort of darling heterodox publication.
this would mean that even if the outgroup/fargroup distinction is the dominant force at play, it doesn't indicate a permanent spiral towards right wing ideals in the community, as long as there continues to be new blood. it's still all downstream of what's going on in mainstream culture, yeah?
As further evidence for my position (and honestly also yours, they're not necessarily in conflict), I bring up Wei Dai's "Have epistemic conditions always been this bad?", where he explains he has "gotten increasingly alarmed by leftist politics in the US, and the epistemic conditions that it operates under and is imposing wherever it gains power" but also mentions:
Quite possibly the conditions are just as dire on the right, but they are not as visible or salient to me, because most of the places I can easily see, either directly or through news stories, i.e., local politics in my area, academia, journalism, large corporations, seem to have been taken over by the left.
i do notice that there's starting to be promising intellectual stuff coming from a right wing perspective again
Could you give me some references of what you're talking about? I'd be very excited to read more about this. Most of what I've seen in terms of promising changes in the political sphere these days has been the long-overdue transition of the Democratic party mainstream to the Abundance agenda and the ideas long championed by Ezra Klein, Matt Yglesias, and Noah Smith, among others.
I've seen much less on the right, beyond stuff like Hanania's Substack (which is very critical of the right these days). The IQ realignment seems real, with an ever-increasing share of Elite Human Capital moving to the left in the face of the Trump administration's attacks on liberal ideals, constitutionalism, science funding, mainstream medical opinions (with the appointment of and cranky decisions taken by RFK Jr.), etc.
i think this trend will continue and eventually there will be some enterprising zoomer publication that cracks the nut and gains genuine mainstream respectability as some sort of darling heterodox publication
I'd love to be wrong about this, but I think it's very unlikely this will actually happen. Modern epistemic conditions and thought bubbles seem to make the rise of genuine heterodoxy in the mainstream basically impossible. In modern times, the left requires ideological conformity[1] while the right demands personal loyalty.[2]
Heterodox organizations can only really float about in centrist waters, mostly populated by the center-left these days. The political left will demand too much agreement on issues like crime, immigration, transgender rights, rent control, etc., for heterodoxy to be tolerated. And while the political right embraces new blood of all kinds, that's only if all criticism of the Trump administration is censored, preventing honest discourse on the most important political fights of this age.
A lot has to do with how what it means to be left/right has changed.
Rationalists usually don't like following authorities. That was left-wing coded in the late 00s/early 2010s and is more right-wing coded today.
I valued Glenn Greenwald's political views two decades ago and I value them today. On all the issues that are most important to him, Glenn still holds the same views today as two decades ago. However, while Glenn was seen as clearly left-wing back then, he's frequently seen as right-wing today.
Yeah, we need to distinguish "someone had an opinion X, but changed to Y" from "someone's opinion X was perceived as left-wing a decade ago, but is perceived as right-wing now". And maybe also from "someone has always believed X, but expressing such belief could previously get them fired, so they kept quiet about it".
To me it seems that my beliefs have not changed much recently (of course that may be a trick my brain plays on itself, when after updating it creates a false memory that I have always believed the new thing); it's just that when I am surrounded by people who yell at me "IQ is a myth" and I disagree, they call me a right-winger, and when I am surrounded by people who yell at me "charity is stupid, let the poor people die" and I disagree, they call me a left-winger. So whatever people call me seems to me more of a fact about them than about me. (More precisely, all the things they call me, taken together, with the specific reasons why they called me that, that is about me. But which group happened to yell at me today, that is about the group.)
So when we say that "the rationalist community is recently a bit more right wing", what specifically does it mean?
Also, we were already called right-wing in the past. Are we really more right-wing today compared to back then, when we had debates about neoreaction, or is this just an overreaction to some minor change that happened in recent months?
tl;dr: step one is providing evidence that we are now more right-wing than e.g. 10 years ago
step one is providing evidence that we are now more right-wing than e.g. 10 years ago
honestly this is a pretty reasonable take.
my own experience is that it has, but this could have been for pretty idiosyncratic reasons. scott in his description of the grey tribe characterizes members as, like, feeling vague annoyance that the issue of gay marriage even comes up, right? but because of the pronatalism it feels like fundamental rights to things like abortion and gay acceptance are being re-litigated in the community now (more specifically, the re-litigation has entered the overton window, not that it's an active and ongoing debate). meanwhile technological solutions seem to be sidelined, and this has been quite dismaying for me.
I don't know, the obviously wrong things you see on the internet seem to differ a lot based on your recommendation algorithm. The strawmanny sjw takes you list are mostly absent from my algorithm. In contrast, I see LOTS of absurd right-wing takes in my feed.
i don't actually see strawmanny sjw takes either. my claim is that the default algorithms on large social media sites tend to expose most people to anti-sjw content.
I see. Why do you have this impression that the default algorithms would do this? Genuinely asking, since I haven't seen convincing evidence of this.
from Of the Affection of Fathers to Their Children (Michel de Montaigne, late 1500s):
The village-women where I live call in the help of goats when they cannot suckle their children themselves; I have now two menservants who never tasted mothers’ milk for more than a week. These nanny-goats are trained from the outset to suckle human children; they recognize their voices when they start crying and come running up. They reject any other child you give them except the one they are feeding; the child does the same to another nanny-goat. The other day I saw an infant who had lost its own nanny-goat as the father had only borrowed it from a neighbour: the child rejected a different one which was provided and died, certainly of hunger.
after reading this, I did a preliminary google search for "nanny goats" and found no productive results on the topic of goat milk being a substitute for breast milk.
Interestingly, the wikipedia article on goat milk had this to say:
Breast milk is the best nutrition for infants. If this is not an option, infant formula is the alternative. EFSA (European Food Safety Association) concluded in 2012 that goat milk protein is suitable as a protein source in infant and follow-on formulas.[5] Ever since, goat milk-based infant formulas have rapidly gained popularity around the world including: the UK, Australia, Germany, Netherlands, China, Korea, Australia and New Zealand. These formulas are not produced by the infant formula multinationals but by companies that focus on specialty infant formulas. In the U.S. goat milk infant formula is not yet available.
The American Academy of Pediatrics (AAP) recognizes that goat infant formula has been thoroughly reviewed and supports normal growth and development in infants.[6]
There is only information on this topic from the 2000s and later. There is no mention of goat milk being historically used as a substitute for breast milk.
Wait really? I thought Galen wrote pretty extensively about replacing human breast milk with goat's milk in De alimentorum facultatibus?
I wouldn't be surprised, but no, my preliminary google search did not bring up Galen on the first page of results. My comment was more expressing surprise that something that was so commonplace for such a large part of human history has zero references on Wikipedia.
In the article for breast milk, the introductory paragraphs leading to the chart explaining alternatives are spent explaining why cow's milk is an inadequate replacement. Goat milk is one of the columns there, but you actually need to parse the numbers to see that it's much more appropriate, and the text does not help you at all.
Galen's wp article also makes no reference to goat milk.
Mostly it makes me wonder how many other Wikipedia articles are missing huge sections.
Huh, weird, I see it referenced a bit in academic literature but every time I try to trace it back to primary sources I fail. E.g. in Dupras and Tocheri (2007) they say
Together, the results provide more substantial support for the hypothesis that infants at Kellis were breastfed and weaned slowly until 3 years of age, which is consistent with traditional infant feeding and weaning practices documented by Soranus and Galen, two ancient Greek and Roman historians (Green, 1951; Tempkin, 1956). Both Galen and Soranus recommended that supplementary foods, such as a mixture of honey and goat milk, should be introduced at 6 months of age, with gradual cessation of weaning occurring until 3 years of age.
However, when I try to find the primary sources to back up that assertion, the closest I get is that Soranus said, in Gynecology (Temkin translation):
Yet, on the other hand, it is also bad not to change to other food when the body has already become solid-not only because the body becomes moist and therefore delicate if fed on milk for too long a time, but also because in case of sickness the milk easily turns sour. For this reason, when the body has already become firm and ready to receive more solid food, which it will scarcely do successfully before the age of six months, it is proper to feed the child also with cereal food: with crumbs of bread softened with hydromel or milk, sweet wine, or honey wine.
but that does not mention animal milk at all, and then Galen said, in Opera Omnia vol 6 (Kuhn, chatgpt translation)
But when blood has been assisted, and if from any single wound, or if any single thing flows into the belly or into some of the intestines. [...] Now, as for milk, some drink it having cast in a little honey and water and a bit of salt, so that it may not curdle in the stomach. And the best milk, they say, is that of temperate animals, when it remains unmixed, to be drunk immediately
So here's our honey-and-goat's milk mixture - described as something to give someone who has imbalanced humors or something like that, not as something to replace breast milk.
re: Wikipedia: Wikipedia is not a comprehensive store of all human knowledge. Whether it should be a comprehensive store of human knowledge is a contentious topic, but descriptively it is not comprehensive.
it occurred to me belatedly to consider what tools mainstream philosophy has to deal with the "train to crazy town" problem, since i'm running a meetup on it. all of my required and supplemental readings come from various rationalists/eas/adjs and this is kinda insular. claude pointed me to the concept of reflective equilibrium.
per its SEP page,
Equilibrium is reached where principles and judgments have been revised such that they agree with each other. In short, the method of reflective equilibrium is the mutual adjustment of principles and judgments in the light of relevant argument and theory.
it's the "dominant method in moral and political philosophy":
Its advocates suggest that “it is the only rational game in town for the moral theorist” or “the only defensible method” (DePaul 1993: 6; Scanlon 2003: 149; see also Freeman 2007: 35–36; Floyd 2017: 377–378). Though often endorsed, it is far more frequently used. Wherever a philosopher presents principles, motivated by arguments and examples, they are likely to be using the method. They are adjusting their principles—and with luck, their readers’—to the judgments suggested by the arguments and examples. Alternatively they might “bite the bullet” by adjusting initially discordant judgments to accommodate otherwise appealing principles. Either way, they are usually describing a process of reflective equilibrium, with principles adjusted to judgments or vice versa.
its objections section is substantive and points out that this is basically an intellectually empty methodology due to all the shenanigans one can pull when the method is functionally "just think about stuff until the vibes feel right". what's the response to that?
By this point the critic may be exasperated. If you identify a problem with someone’s way of doing philosophy, and they agree that it’s a problem, you might expect them to change how they do it. But the adherent of wide reflective equilibrium accepts the criticism but maintains their method, saying that they have adopted the criticism within the method. To critics this suggests that the method is “close to vacuous” (Singer 2005: 349), absorbing methodological controversies rather than adjudicating them (McPherson 2015: 661; Paulo 2020: 346; de Maagt 2017: 458).

It just takes us back to the usual philosophical argument about the merits and demerits of various methods of argument and of various theories. The method of reflective equilibrium is then not a method in moral philosophy at all. (Raz 1982: 309)

Defenders of wide reflective equilibrium describe it in similar terms to the critics, while rejecting their negative evaluation. Its ability to absorb apparent rivals is seen as a feature, not a bug. [emphasis mine]
i... hate this? it's like ea judo's evil twin. the article ends by pointing out a bunch of philosophical methods and theories that are incompatible with reflective equilibrium but basically shrugs its shoulders and goes oh well, it's the dominant paradigm and no one serious is particularly interested in tearing it down.
i kinda thought that ey's anti-philosophy stance was a bit extreme but this is blackpilling me pretty hard lmao. semantic stopsign ass framework
In case you haven't seen it, there's an essay on the EA forum about a paper by Tyler Cowen which argues that there's no way to "get off" the train to crazy town. I.e. it may be a fundamental limitation of utilitarianism plus scope sensitivity, that this moral framework necessarily collapses everything into a single value (utility) to optimize at the expense of everything else. Some excerpts:
So, the problem is this. Effective Altruism wants to be able to say that things other than utility matter—not just in the sense that they have some moral weight, but in the sense that they can actually be relevant to deciding what to do, not just swamped by utility calculations. Cowen makes the condition more precise, identifying it as the denial of the following claim: given two options, no matter how other morally-relevant factors are distributed between the options, you can always find a distribution of utility such that the option with the larger amount of utility is better. The hope that you can have ‘utilitarianism minus the controversial bits’ relies on denying precisely this claim. ...
Now, at the same time, Effective Altruists also want to emphasise the relevance of scale to moral decision-making. The central insight of early Effective Altruists was to resist scope insensitivity and to begin systematically examining the numbers involved in various issues. ‘Longtermist’ Effective Altruists are deeply motivated by the idea that ‘the future is vast’: the huge numbers of future people that could potentially exist gives us a lot of reason to try to make the future better. The fact that some interventions produce so much more utility—do so much more good—than others is one of the main grounds for prioritising them. So while it would technically be a solution to our problem to declare (e.g.) that considerations of utility become effectively irrelevant once the numbers get too big, that would be unacceptable to Effective Altruists. Scale matters in Effective Altruism (rightly so, I would say!), and it doesn’t just stop mattering after some point.
So, what other options are there? Well, this is where Cowen’s paper comes in: it turns out, there are none. For any moral theory with universal domain where utility matters at all, either the marginal value of utility diminishes rapidly (asymptotically) towards zero, or considerations of utility come to swamp all other values. ...
I hope the reasoning is clear enough from this sketch. If you are committed to the scope of utility mattering, such that you cannot just declare additional utility de facto irrelevant past a certain point, then there is no way for you to formulate a moral theory that can avoid being swamped by utility comparisons. Once the utility stakes get large enough—and, when considering the scale of human or animal suffering or the size of the future, the utility stakes really are quite large—all other factors become essentially irrelevant, supplying no relevant information for our evaluation of actions or outcomes. ...
Once you let utilitarian calculations into your moral theory at all, there is no principled way to prevent them from swallowing everything else. And, in turn, there’s no way to have these calculations swallow everything without them leading to pretty absurd results. While some of you might bite the bullet on the repugnant conclusion or the experience machine, it is very likely that you will eventually find a bullet that you don’t want to bite, and you will want to get off the train to crazy town; but you cannot consistently do this without giving up the idea that scale matters, and that it doesn’t just stop mattering after some point.
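To make Cowen's dichotomy concrete, here's a minimal formalization (my notation, not Cowen's or the essay's): suppose the theory ranks options by a value function $V(u, x)$ that is increasing in total utility $u$, where $x$ bundles every other morally relevant factor. The condition the essay says must be denied is

$$\forall x_1, x_2 \;\; \forall u_2 \;\; \exists u_1 > u_2 : \; V(u_1, x_1) > V(u_2, x_2),$$

i.e. enough utility on one side always outweighs any configuration of other factors on the other. But if $V$ is increasing in $u$, the only way to block this condition is for $V(\cdot, x)$ to be bounded above in $u$, which forces the marginal value of utility to asymptote toward zero; and that is exactly the "scale stops mattering after some point" horn that scope-sensitive views reject.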
i agree that there doesn't seem to be any sort of rigorous way to get off the crazy train in some principled manner, and that fundamentally it does come down to vibes. but that only makes it worse if people are uncritical/uncurious/uncaring/unrigorous about how said vibes are generated. like, i see angst in the ea sphere about the inconsistency/intransitivity, and various attempts to discuss or tackle it, and this seems useful to me even though it's still mostly groping around in the dark. in academia there seems to be a missing mood.
i kinda thought that ey's anti-philosophy stance was a bit extreme but this is blackpilling me pretty hard lmao
He actually cites reflective equilibrium here:
Closest antecedents in academic metaethics are Rawls and Goodman's reflective equilibrium, Harsanyi and Railton's ideal advisor theories, and Frank Jackson's moral functionalism.
this week's meetup is on the train to crazy town. it was fun putting together all the readings and discussion questions, and i'm optimistic about how the meetup's going to turn out! (i mean, in general, i don't run meetups i'm not optimistic about, so i guess that's not saying much.) im slightly worried about some folks coming in and just being like "this metaphor is entirely unproductive and sucks"; i should consider how to frame the meetup productively for such folks.
i think one of my strengths as an organizer is that ive read sooooo much stuff and so its relatively easy for me to pull together cohesive readings for any meetup. but ultimately im not sure if it's like, the most important work, to e.g. put together a bibliography of the crazy town idea and its various appearances since 2021. still, it's fun to do.
we're getting a dozen people and having to split into 2 groups on the regular! discussion was undirected but fun (one group got derailed bc someone read the shrimp welfare piece and updated toward suffering not being inherently bad in their value system, and this kind of sniped the rest of us).
feel like I didn't get a lot out of it intellectually though, since we didn't engage significantly with the metaphor. it was interesting how people (including me) seem to shy away from the fact that our de facto moral system bottoms out at vibes.
i fear this week's meetup might have an unusually large amount of "guy who is very into theoretical tabletop game design but has never playtested their products which have lovely readable manuals" energy, but i like the topic a lot and am having an unusually hard time killing my darlings :')
actually it was really good! people had lots to say about the subject even without any prompting by the discussion questions. they were nice to have on standby though.
the default number of baguettes to buy per meetup should be increased to 3.