But the word "treaty" has a specific meaning that often isn't what people mean.
It doesn't really have a specific meaning; it has multiple specific meanings. In the US Constitution, it means an agreement approved by two-thirds of the Senate. The Vienna Convention on the Law of Treaties, Article 2(1)(a), defines a treaty as:
“an international agreement concluded between States in written form and governed by international law, whether embodied in a single instrument or in two or more related instruments and whatever its particular designation.”
Among international agreements, US law distinguishes treaties (executive + 2/3 of the Senate), congressional–executive agreements (executive + simple majorities in both the House and the Senate), and sole executive agreements (just the executive).
Both treaties (in the US constitutional sense) and congressional–executive agreements can be supreme law of the land, and can thus do things that a sole executive agreement, which does not create law, cannot. From the perspective of other countries, both treaties (in the US constitutional sense) and congressional–executive agreements are treaties.
If you tried to do an AI ban via a sole executive agreement, companies would likely sue and argue that the US president does not have the power to do that, and the current US Supreme Court would likely declare the AI ban ineffectual.
When passing serious agreements like NAFTA, where there's a lot of opposition to parts of the agreement, it turned out to be easier to gather simple majorities plus the 60 Senate votes needed to move past the filibuster than 67 Senate votes, so they were passed as congressional–executive agreements. If getting votes for an AI treaty is hard, it's likely that it would also be passed as a congressional–executive agreement. If you do lobbying, you probably should know the term "congressional–executive agreement" and be able to say it when a national security expert asks you what you want.
Using "international agreement" in the MIRI writing has the problem that it suggests a sole executive agreement would do the job when it wouldn't.
Since 2014, it is both the case that taboos against non-Woke speech have become less universal, and the case that ethnonationalism has (I think?) become considerably more prominent, which I take as at least limited evidence that such heuristic A was paying rent.
You can also take it as evidence for the opposite. Trump's first election came at the height of the taboo against non-Woke speech. A world where the taboos caused the current rise of ethnonationalism does look like the world we are seeing.
I think a key feature of how we as humans choose heuristics is that we have a state of the world in mind that we want, and we choose the heuristics we use to reach that state. It's one of the points of jimmy's sequence that I think is underread.
It's relatively easy to coherently imagine a world where most people aren't engaging in drunk driving and instead pick designated drivers. It's easy to imagine a world in which more and bigger buildings get built.
On the other hand, it's hard to imagine how 2040 would look if we stopped the building of AGI. For me, that makes “If something has a >10% chance of killing everyone according to most experts, we probably shouldn’t let companies build it.” a heuristic that feels more intellectual than embodied. For it to feel embodied, I would need a vision of what a future produced by that heuristic would look like.
As far as concrete imagination goes, I'm also not sure what "we" in that sentence means. Note that you don't have any unclear "we" in either the YIMBY or the Mothers Against Drunk Driving examples you describe.
This is because the conditions that call for CBT didn't apply. If you have someone who hasn't already considered the evidence against their fears, and who is open to doing so, then CBT is a novel insight and falls right out of the framework. She had already considered it and concluded that her fear was irrational, and was closed to considering the evidence that the fear was wrong because she thought she had already done that. So the opposite of CBT is what fell out.
This point seems very valuable and a bit hidden in the footnote.
I think that while the example in this post is good as an illustration, it's not good at getting people to actually apply the concept.
You don't need someone to suspend you to actually think through whether the objections your mind comes up with might have merit after all. Many rationalists are very good at labeling a thought as irrational and then not giving it any more attention. The mental move of actually giving the thought more attention might be very valuable. Yesterday, I had a case where I was struggling with akrasia, and after actually spending some time thinking through the reasons against doing what I wanted to do, the akrasia vanished.
There are many treaties, and treaties get violated many times for various reasons. Waging a war because a treaty gets violated is not the standard response.
LessWrong: some people portray eating meat as evil, and not eating meat as the bare minimum to be a decent person. But it may be more persuasive to portray eating meat as neutral, and to portray not eating meat as an insanely awesome opportunity to do a massive amount of good.
If you do the math, the problem is that "not eating meat" does not do a massive amount of good in any utility calculation that people provide. When people calculate dollar figures for how much you would need to donate to offset eating meat, those numbers aren't that high.
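To make the shape of that offset calculation concrete, here is a minimal back-of-envelope sketch. Every number in it (meals per year, offset cost per meal) is a hypothetical placeholder for illustration, not a figure from any actual analysis:

```python
# Hypothetical back-of-envelope meat-offset calculation.
# All inputs are illustrative placeholders, not real estimates.
meat_meals_per_year = 2 * 365    # assumption: two meat-containing meals per day
offset_cost_per_meal = 0.05      # assumption: dollars donated per meal to offset it

annual_offset_cost = meat_meals_per_year * offset_cost_per_meal
print(f"Annual donation to offset meat eating: ${annual_offset_cost:.2f}")
# With these placeholder inputs the total is $36.50/year -- the point
# being that plausible inputs tend to produce numbers that "aren't that high".
```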
Eliezer did write Death with Dignity, which seems to assert that doom is inevitable, so the book not making that case is a meaningful step.
Currently, it seems that NVIDIA often manages to sell GPUs to China via Singaporean third parties, circumventing US trade restrictions.
It seems to me that the CIA and NSA should take up the task of analyzing when that happens and then relaying the information so that the shipments can be stopped. While it's always hard to know from the outside what the CIA and NSA are up to, it seems like they are currently failing at that job.
It might be valuable to push for the CIA and NSA to invest more resources in that task, for two reasons:
1) It's a way to push US AI policy in the right direction that's not in conflict with Trump administration policy and thus has a better chance of being implemented.
2) It could create a power center inside the CIA/NSA whose self-interest lies in expanding control of GPUs, because they might get more resources if there's more that needs controlling.