I think it's admirable to say things like "I don't want to [do the thing that this community holds as near-gospel as a good thing to do.]" I also think the community should take it seriously that anyone feels like they're punished for being intellectually honest, and in general I'm sad that it seems like your interactions with EAs/rats about AI have been unpleasant.
That said... I do want to push back on basically everything in this post and encourage you and others in this position to spend some time seeing if you agree or disagree with the AI stuff.
For the "what if I decide it's not a big deal" conclusion:
It seems to me like government-enforced standards are just another case of this tradeoff: they are quite a bit more useful, in the sense of carrying the force of law and applying to all players on a non-voluntary basis, but also harder to implement, because legislators' attention is elsewhere, a good proposal is likely to be turned into something bad during the legislative process, and spending the political capital has an opportunity cost.
This post has already helped me admit that I needed to accept defeat and let go of a large project in a way that I think might lead to its salvaging by others - thanks for writing.
First, congratulations - what a relief to get in (and pleasant update on how other selective processes will go, including the rest of college admissions)!
I lead HAIST and MAIA's governance/strategy programming and co-founded CBAI, which is a source of both conflict of interest and insider knowledge, and my take is that you should almost certainly apply to MIT. MIT has a much denser pool of technical talent, but MAIA is currently smaller and less well-organized than HAIST. Just by being an enthusiastic participant, you could help make it a more robust group, and if you're at all inclined to help organize (which I think would be massively valuable), you could solve an important bottleneck in making MAIA an awesome source of excellent alignment researchers. (If so, I'd love to chat.) You'd be in the HAIST/MAIA social community either way, but I think you'd have more of a multiplier effect by engaging on the MAIA side.
As other commenters have noted, I think there are a few reasons to prefer MIT for your own alignment research trajectory, like a significantly stronger CS department (you can cross-register, but save yourself the commute!), a slightly nerdier and more truth-seeking culture, and better signaling value. (To varying degrees including negative values, these are probably also true for Caltech, Mudd, Olin, and Stanford, per John Wentworth's comment, but I'm more familiar with MIT.)
I also think it will just not take that long to do one more application, since you have another couple of weeks anyway. I would prioritize getting one last app to MIT over the line, and if you find you still have energy, consider doing the same for Caltech, Stanford, and maybe others. Not the end of the world to end up at Harvard by any means, but I do think it would be good for both you and humanity if you wound up at MIT!
I don't think this is the right axis on which to evaluate posts. Posts that suggest donating more of your money to charities that save the most lives, causing less animal suffering via your purchases, and considering that AGI might soon end humanity are also "harmful to an average reader" in a similar sense: they inspire some guilt, discomfort, and uncertainty, possibly leading to changes that could easily reduce the reader's own hedonic welfare.
However -- hopefully, at least -- the "average reader" on LW/EAF is trying to believe true things and achieve goals like improving the world, and presenting them with arguments that they can evaluate for themselves, and that might help them unlock more of their own potential, seems good.
I also think the post is unlikely to be net-negative given the caveats about trying this as an experiment, the different effects on different kinds of work, etc.
Quick note on 2: CBAI is pretty concerned about our winter ML bootcamp attracting bad-faith applicants, and we plan to use a combo of AGISF and references to filter pretty aggressively for alignment interest. Somewhat problematic in the medium term if people find out they can get free ML upskilling by successfully feigning interest in alignment, though...
Great write-up. The Righteous Mind was the first in a series of books that really usefully transformed how I think about moral cognition (including Hidden Games, Moral Tribes, The Secret of Our Success, and The Elephant in the Brain). I think its moral philosophy, however, is pretty bad. In a mostly positive (and less thorough) review I wrote a few years ago (which I don't 100% endorse today), I wrote:
Though Haidt explicitly tries to avoid the naturalistic fallacy, one of the book’s most serious problems is its tendency to assume that people finding something disgusting implies that the thing is immoral (124, 171-4). Similarly, it implies that because most people are less systematizing than Bentham and Kant, the moral systems of those thinkers must not be plausible (139, 141). [Note from me in 2022: In fact, Haidt bizarrely argues that Bentham and Kant were likely autistic and therefore these theories couldn't be right for a mostly neurotypical world.] Yes, moral feelings might have evolved as a group adaptation to promote “parochial altruism,” but that does not mean we shouldn’t strive to live a universalist morality; it just means it’s harder. Thomas Nagel, in the New York Review of Books, writes that “part of the interest of [The Righteous Mind] is in its failure to provide a fully coherent response” to the question of how descriptive morality theories could translate into normative recommendations.
I became even more convinced that this instinct towards relativism is a big problem for The Righteous Mind since reading Joshua Greene's excellent Moral Tribes, which covers much of the same ground. But Greene shows that this is not just an aversion to moral truth; it stems from Haidt's undue pessimism about the role of reason.
Moral Tribes argues that our moral intuitions evolved to solve the Tragedy of the Commons, but the contemporary world faces the "Tragedy of Commonsense Morality," where lots of tribes with different systems for solving collective-action problems have to get along. Greene dedicates much of the section "Why I'm a Liberal" to his disagreements with Haidt. After noting his agreements — morality evolved to promote cooperation, it is mostly implemented through emotions, different groups have different moral intuitions (a source of much conflict), and we should be less hypocritical and self-righteous in our denunciations of other tribes' views — Greene says:
These are important lessons. But, unfortunately, they only get us so far. Being more open-minded and less self-righteous should facilitate moral problem-solving, but it's not itself a solution[....]
Consider once more the problem of abortion. Some liberals say that pro-lifers are misogynists who want to control women's bodies. And some social conservatives believe that pro-choicers are irresponsible moral nihilists who lack respect for human life, who are part of a "culture of death." For such strident tribal moralists—and they are all too common—Haidt's prescription is right on time. But what then? Suppose you're a liberal, but a grown-up liberal. You understand that pro-lifers are motivated by genuine moral concern, that they are neither evil nor crazy. Should you now, in the spirit of compromise, agree to additional restrictions on abortion? [...]
It's one thing to acknowledge that one's opponents are not evil. It's another thing to concede that they're right, or half right, or no less justified in their beliefs and values than you are in yours. Agreeing to be less self-righteous is an important first step, but it doesn't answer the all-important questions: What should we believe? and What should we do?
Greene goes on to explain that Haidt thinks liberals and conservatives disagree because liberals have the "impoverished taste receptors" of only caring about harm and fairness, while conservatives have the "whole palette." But, Greene argues, the other tastes require parochial tribalism: you have to be loyal to something, sanctify something, respect an authority, that you probably don't share with the rest of the world. This makes social conservatives great at solving Tragedies of the Commons, but very bad at the Tragedy of Commonsense Morality, where lots of people worshipping different things and respecting different authorities and loyal to different tribes have to get along with each other.
According to Haidt, liberals should be more open to compromise with social conservatives. I disagree. In the short term, compromise might be necessary, but in the long term, our strategy should not be to compromise with tribal moralists, but rather to persuade them to be less tribalistic.
I'm not a social conservative because I do not think that tribalism, which is essentially selfishness at the group level, serves the greater good. [...]
This is not to say that liberals have nothing to learn from social conservatives. As Haidt points out, social conservatives are very good at making each other happy. [...] As a liberal, I can admire the social capital invested in a local church and wish that we liberals had equally dense and supportive social networks. But it's quite another thing to acquiesce to that church's teaching on abortion, homosexuality, and how the world got made.
Greene notes that even Haidt, after deriding utilitarianism earlier in the book, finds "no compelling alternative to utilitarianism" in matters of public policy. "It seems that the autistic philosopher [Bentham] was right all along," Greene observes. Greene explains Haidt's "paradoxical" endorsement of utilitarianism as an admission that conscious moral reasoning — a camera's "manual mode," as opposed to intuitive "point-and-shoot" morality — isn't so underrated after all. If we want to know the right thing to do, we can't just assume that all of the moral foundations have a grain of truth, figure we're all equally tribalistic, and compromise with the conservatives; we need to turn to reason.
While Haidt is of course right that sound moral arguments often fail to sway listeners, "like the wind and the rain, washing over the land year after year, a good argument can change the shape of things. It begins with a willingness to question one's tribal beliefs. And here, being a little autistic might help." He then cites Bentham criticizing sodomy laws in 1785 and Mill advocating gender equality in 1869. And then he concludes: "Today we, some of us, defend the rights of gays and women with great conviction. But before we could do it with feeling, before our feelings felt like 'rights,' someone had to do it with thinking. I'm a deep pragmatist [Greene's preferred term for utilitarians], and a liberal, because I believe in this kind of progress and that our work is not yet done."
My response to both paragraphs is that the relevant counterfactual is "not looking into/talking about AI risks." I claim that there is at least as much social pressure from the community to take AI risk seriously and to talk about it as there is to reach a pessimistic conclusion, and that people are very unlikely to lose "all their current friends" by arriving at an "incorrect" conclusion if their current friends are already fine with the person not having any view at all on AI risks.