If anyone wants to have a voice chat with me about a topic that I'm interested in (see my recent post/comment history to get a sense), please contact me via PM.
My main "claims to fame":
Yes, on the surface all you did was point out an overlap between Rationalists and other groups, but what I don't understand is why you chose to emphasize this particular overlap, instead of, for example, the overlap between us and conservatives in wanting to stop ASI from being built, or simply leaving the Rationalists out of this speech and talking about us another time, when you could speak with more nuance.
My hypotheses:
I'm leaning strongly towards 2 (as 1 seems implausible given the political nature of the occasion), but still find it quite baffling, in part because it seems like you probably could have found a better way to accomplish what you wanted without as many negative consequences (i.e., alienating the community that originated much of the thinking on AI risk, and making future coalition-building between our communities more difficult).
I think I'll stop here and not pursue this line of questioning/criticism further. Perhaps you have some considerations or difficulties that are hard to talk about and for me to appreciate from afar.
“rationalists”
Thanks, I had missed this in my reading. It does seem a strange choice to include in the speech (in a negative way) if the goal is to build a broad alliance against building ASI. Many rationalists are against building ASI in our current civilizational state, including Eliezer, who started the movement/community.
@geoffreymiller, can you please explain your thought process for including this word in your sentence? I'm really surprised that you seem to consider yourself a rationalist (using "we" in connection with rationalism, and arguing against people who do not consider you to be a community member "in good standing"[1]) and yet talk about us in an antagonistic/unfriendly way in front of others, without some overriding reason that I can see.
I had upvoted a bunch of your comments in that thread, thinking that we should consider you a member in good standing.
Thanks for this explanation, it definitely makes your position more understandable.
and on top of that there is the abstract idea of "good", saying you shouldn't hurt the weak at all. And that idea is not necessitated by rational negotiation. It's just a cultural artifact that we ended up with, I'm not sure how.
I can think of 2 ways:
If it's 1, then I'm not sure why extrapolation and philosophy would pick out the "good" and leave the "nasty stuff". It's not clear to me why aligning to culture would be better than aligning to individuals in that case.
If it's 2, then we don't need to align with culture either - AIs aligned with individuals can rederive the "good" with competent philosophy.
Does this make sense?
So for AIs maybe this kind of carry-over to philosophy is also the best we can hope for.
It seems clear that technical design or training choices can make a difference (but nobody is working on this). Consider the analogy of the US vs. Chinese education systems, where the US system seems to produce a lot more competence and/or interest in philosophy (relative to STEM) than the Chinese system. And comparing humans with LLMs, it sure seems like LLMs are on track to exceed (top) human level in STEM while being significantly less competent in philosophy.
I think religion and the institutions built up around it (such as freedom of religion) are a fairly clear counterexample to this. They are in part a coordination technology built upon a shared illusion (e.g., that God exists), with safeguards against its "misuse" built up from centuries of experience. If you destroy the illusion at the wrong time (i.e., before better replacements are ready), you could cause a lot of damage, at least in the short run, and possibly even in the long run given path dependence.
It seems to me that Richard isn't trying to bring back ethnonationalism, or even trying to "add just that touch of ethnic pride back into the meme pool", but just trying to diagnose "how the western world got so dysfunctional". If ethnonationalism and the taboo against ethnonationalism are both bad (as an ethnic minority, I'm personally pretty scared of the former), then maybe we should get rid of the taboo and defend against ethnonationalism by other means, similar to how there is little to no taboo against communism[1], but it hasn't come close to taking power or reapproaching its historical high-water mark in the west.
If you doubt this: there's an advisor to my local school district who is a self-avowed Marxist and professor of education at the state university, and who writes book reviews like this one:
"For decades the educational Left and critical pedagogues have run away from Marxism, socialism, and communism, all too often based on faulty understandings and falling prey to the deep-seated anti-communism in the academy. In History and Education Curry Stephenson Malott pushes back against this trend by offering us deeply Marxist thinking about the circulation of capital, socialist states, the connectivity of Marxist anti-capitalism, and a politics of race and education. In the process Malott points toward the role of education in challenging us all to become abolitionists of global capitalism." (Wayne Au, Associate Professor in the School of Educational Studies at the University of Washington Bothell; Editor of the social justice teaching magazine Rethinking Schools; Co-editor of Mapping Corporate Education Reform: Power and Policy Networks in the Neoliberal State)
Some thoughts that taking this perspective triggers in me:
Can you explain your affinity for virtue ethics a bit more? E.g., was there a golden age in history that you read about, where a group of people ran on virtue ethics and it worked out really well? I'm trying to understand why you seem to like it a lot more than I do.
Re government debt, I think that is actually driven more by increasing demand for a "risk-free" asset, with the supply going up more or less passively (what politician is going to refuse to increase debt and spending, as long as people are willing to keep buying it at a low interest rate?). And from this perspective it's not really a problem, except that everyone gets used to the higher spending while some of the processes increasing the demand for government debt might only be temporary.
AI-written explanation of how financialization causes increased demand for government debt
Financialization isn't a vague blob; it's a set of specific, concrete processes, each of which acts like a powerful vacuum cleaner sucking up government debt.
Let's trace four of the most important mechanisms in detail.
Derivatives (options, futures, swaps) are essentially financial side-bets on the movement of an underlying asset. The total "notional" value of these bets is in the hundreds of trillions, dwarfing the real economy.
Analogy: A giant, global casino. The more tables and higher-stakes games the casino runs (financialization), the more high-quality security chips (government bonds) it needs to hold in its vault to ensure all winnings can be paid out.
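A minimal numeric sketch of the margin channel: as a bet's value moves day to day, the losing side posts collateral, often in the form of government bonds. All numbers below (notional size, daily moves) are made up for illustration, not real market parameters:

```python
# Toy sketch of variation margin on a derivatives position: each day the
# loser posts collateral equal to the mark-to-market move. The notional
# and the price path are assumed purely for illustration.

notional = 100e6                       # assumed $100M swap position
daily_moves = [0.002, -0.001, 0.004]   # assumed daily value changes (fraction of notional)

posted = 0.0
for day, move in enumerate(daily_moves, 1):
    call = notional * abs(move)        # collateral call, often met with gov bonds
    posted += call
    print(f"day {day}: margin call ${call/1e6:.2f}M, cumulative ${posted/1e6:.2f}M")
```

The point is just that collateral demand scales with the size and volatility of the casino: more notional and bigger moves mean more bonds locked up in margin accounts.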
After the 2008 financial crisis, global regulators (through frameworks like Basel III) sought to make banks safer. They did this by forcing them to hold more "safe stuff" against their risky assets.
Analogy: A building code for banks. The regulators say, "For every floor of risky office space you build (loans), you must add a corresponding amount of steel-reinforced concrete to the foundation (government bonds)." To build a taller skyscraper, you have no choice but to buy more concrete.
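To make the "building code" concrete, here's a minimal sketch of the ratio logic; the flat 10% ratio is an assumption for illustration, not the actual Basel III requirement:

```python
# Minimal sketch of a Basel-style rule: safe assets must scale with the
# risky balance sheet. The 10% ratio is assumed for illustration only.

def required_bonds(risky_loans: float, safe_ratio: float = 0.10) -> float:
    """Government bonds the bank must hold against its loan book."""
    return risky_loans * safe_ratio

# Growing the loan book mechanically forces more bond buying:
for loans in (100e9, 200e9):
    print(f"loans ${loans/1e9:.0f}B -> required bonds ${required_bonds(loans)/1e9:.0f}B")
```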
The pool of professionally managed money (pensions, insurance funds, endowments) has exploded. These institutions have very specific, long-term promises to keep.
Analogy: A pre-order system for future cash. An insurance company is like a business that has accepted millions of pre-orders for cash to be delivered in 20, 30, and 40 years. To guarantee they can fulfill those orders, they go to the most reliable supplier (the government) and place their own pre-orders for cash (by buying bonds) that will arrive on the exact same dates.
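A sketch of the "pre-order" logic, assuming made-up payout sizes and a flat 4% yield: discounting each promised payout back to today gives the amount of zero-coupon government bonds the insurer buys now:

```python
# Toy liability-matching sketch: an insurer covers fixed future payouts by
# buying zero-coupon government bonds maturing on the same dates.
# Payout amounts and the 4% yield are assumed for illustration.

payouts = {20: 10e6, 30: 10e6, 40: 10e6}   # years from now -> $ promised
yield_ = 0.04                               # assumed flat annual yield

def bond_price_today(face_value: float, years: float, y: float = yield_) -> float:
    """Cost today of a zero-coupon bond paying face_value in `years` years."""
    return face_value / (1 + y) ** years

total = sum(bond_price_today(amt, t) for t, amt in payouts.items())
for t, amt in sorted(payouts.items()):
    print(f"year {t}: buy ${bond_price_today(amt, t)/1e6:.2f}M of bonds for ${amt/1e6:.0f}M owed")
print(f"bond demand today: ${total/1e6:.2f}M")
```

Every new pension or insurance promise thus translates, via discounting, into bond purchases today, which is why the growth of managed money pulls in government debt.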
Finance is no longer national; it is a single, interconnected global system. This system requires a neutral, trusted asset for settling international balances and storing wealth.
An analogy I like is with China's Land Finance (土地财政), where the government funded a large part of its spending by continuously selling urban land to real estate developers to build apartments and offices on, which was fine as long as urbanization was ongoing but is now causing problems as that process slows down (along with a bunch of other issues/complications). I think of government debt as a similarly useful resource or asset, that in part enables more complex financial products to be built on top, but may cause a problem one day if demand for it slows down.
ETA: To make my point another way, I think the modern monetary system (with a mix of various money and money-like assets serving somewhat different purposes, including fiat money, regulated bank deposits, and government debt) has its own internal logic, and while distortions exist, some distortions are inevitable under any system (only second-best solutions are possible, due to bounded rationality and principal-agent problems). If you want to criticize it, I think you have to go beyond "debt that will never be repaid" (which sounds like you're trying to import intuitions from household/interpersonal finance, where it's clearly bad to never pay one's debts, into a very different situation), and talk about what specific distortions you're worried about, how the alternative is actually better (taking into account its own distortions), and/or how/why the system is causing the erosion of virtue ethics.
I have heard rumor that most people who attempt suicide and fail, regret it.
After doing some research on this, I think this is unlikely to be true. The only quantitative study I found says that among its sample of suicide attempt survivors, 35.6% are glad to have survived, while 42.7% feel ambivalent, and 21.6% regret having survived. I also found a couple of sources agreeing with your "rumor", but one cited just a suicide awareness trainer as its source, while the other cited the above study as the only evidence for its claim, somehow interpreting it as "Previous research has found that more than half of suicidal attempters regret their suicidal actions." (Gemini 2.5 Pro says "It appears the authors of the 2023 paper misinterpreted or misremembered the findings of the 2005 study they cited.")
If this "rumor" was true, I would expect to see a lot of studies supporting it, because such studies are easy to do and the result would be highly useful for people trying to prevent suicides (i.e., they can use it to convince potential suicide attempters that they're likely to regret it). Evidence to the contrary are likely to be suppressed or not gathered in the first place, as almost nobody wants to encourage suicides. (The above study gathered the data incidentally, for a different purpose.) So everything seems consistent with the "rumor" being false.
First, I think there’s enough overlap between different reasoning skills that we should expect a smarter than human AI to be really good at most such skills, including philosophy. So this part is ok.
Supposing this is true, how would you elicit this capability? In other words, how would you train the AI (e.g., what reward signal would you use) to tell humans when they (the humans) are making philosophical mistakes, and to present humans with only true philosophical arguments/explanations? (As opposed to presenting the most convincing arguments, which may exploit flaws in human psychology or reasoning, or telling humans what they most want to hear or what's most likely to get a thumbs-up or high rating.)
Fourth—and this is the payoff—I think the only good outcome is if the first smarter than human AIs start out with “good” culture, derived from what human societies think is good.
"What human societies think is good" is filled with pretty crazy stuff, like wokeness imposing its skewed moral priorities and empirical beliefs on everyone via "cancel culture", and religions condemning "sinners" and nonbelievers to eternal torture. Morality is Scary talks about why this is generally the case, why we shouldn't expect "what human societies think is good" to actually be good.
Also, wouldn't "power corrupts" apply to humanity as a whole if we manage to solve technical alignment and not align ASI to the current "power and money"? Won't humanity be the "power and money" post-Singularity, e.g., each human or group of humans will have enough resources to create countless minds and simulations to lord over?
I'm hoping that both problems ("morality is scary" and "power corrupts") are philosophical errors that have technical solutions in AI design (i.e., AIs can be designed to help humans avoid/fix these errors), but this is highly neglected and seems unlikely to happen by default.
If you get around to writing that post, please consider/address: