Nicholas Weininger
Nicholas Weininger has not written any posts yet.

Nice post. It prompts two questions, which you may or may not be the right person to answer:
Trump's SCOTUS appointees are indeed willing to rule against him and to uphold a coherent legal theory of democracy under the rule of law, one that, agree with it or not, is clearly not equivalent to "whatever my side wants, it gets." The same sadly cannot be said of his lower-court judges, notably Aileen Cannon, whose presence on the bench in his home district drastically decreases the otherwise high likelihood of his being convicted and imprisoned for crimes he has obviously, and self-confessedly, committed. Cannon is exactly the sort of lawless, toadying party hack that would-be fascist dictators around the world love to appoint to the judiciary, and we should expect many more like her to be appointed if Trump wins in 2024. This may prove to be the single biggest piece of damage to US democracy in the next decade.
I used to be a middle manager at Google, and I observed mazedom manifesting there in two main ways:
If you try to make your organization productive by focusing your time on intensively coaching the people under you to be better at their jobs, your org will indeed become more productive, but your career will not advance. This is because nobody at the level above you will be able to tell that the productivity increase is due to your efforts; your reports' testimony to that effect will not provide adequate social proof, because they are by definition less senior than you. To advance your career you must instead give priority to activities
It seems very odd to have a discussion of arms race dynamics that is a purely theoretical exploration of possible payoff matrices, without a historically informed discussion of what seems like the most obviously analogous case, namely nuclear weapons research during the Second World War.
US nuclear researchers famously (IIRC, pls correct me if wrong!) thought there was a nontrivial chance their research would lead to human extinction, not just because nuclear war might do so but because e.g. a nuclear test explosion might ignite the atmosphere. They forged ahead anyway on the theory that otherwise the Nazis were going to get there first, and if they got there first they... (read more)
As a fellow Unionist, I would add that this leaves out another important Unionist/successionist argument: namely, that if x-risk is really a big problem, then developing powerful AI is likely the best method of reducing the risk that all intelligence, biological or not, goes extinct from the solar system.
The premises of this argument are pretty simple. Namely:
If there are many effective "recipes for ruin" to use Nielsen's phrase, humans will find them before too long with or without powerful AI. So if you believe there is a large x-risk arising from recipes for ruin, you should believe this risk is still large even if powerful AI is never developed. Maybe it... (read more)