I agree. The question is when to start being lazy about what.
The idea seems correct. If you identify as X, and an authority that you respect says "all true Xs believe Y", you are more likely to accept Y uncritically. Especially if other Xs around you accept it; that creates social pressure.
Politics is considered a minefield here, so you would have to write the article carefully, to avoid tribalism. (Basically, do not mix "this is an analysis of how political tribes work" with "this is my favorite political tribe" in the same article.) It would probably be better to use multiple examples from various sides, rather than making it all about e.g. liberals. It would also be better to use historical examples, e.g. from Ancient Rome, rather than contemporary ones. With contemporary examples, you risk starting a disagreement: "all Xs believe Y -- no, they don't -- yes they do -- you only say that because you hate Xs -- no, here is proof that Z, a famous X, says Y -- Z is a fringe guy, most Xs are not like him" etc.
I think it would be better if you could somehow demonstrate that a change happened -- that at some moment in history, most Xs did not believe Y, then someone popular said "all Xs believe Y", and later most Xs believed Y. (With sources for both "previously didn't believe" and "later believed".) Of course, this is a hypothetical ideal, I am not sure whether there is enough information for something like this.
Have you read "A Fable of Science and Politics"? I am thinking specifically about the part:
Society is still divided along Blue and Green lines, and there is a “Blue” and a “Green” position on almost every contemporary issue of political or cultural importance. The Blues advocate taxes on individual incomes, the Greens advocate taxes on merchant sales; the Blues advocate stricter marriage laws, while the Greens wish to make it easier to obtain divorces; the Blues take their support from the heart of city areas, while the more distant farmers and watersellers tend to be Green; the Blues believe that the Earth is a huge spherical rock at the center of the universe, the Greens that it is a huge flat rock circling some other object called a Sun. Not every Blue or every Green citizen takes the “Blue” or “Green” position on every issue, but it would be rare to find a city merchant who believed the sky was blue, and yet advocated an individual tax and freer marriage laws.
What I mean is that it's not surprising if someone who identifies as X adopts an opinion that seems obviously X-ish (e.g. if a Christian believes in the afterlife). It is surprising if most Xs adopt an opinion that to an outsider would seem unrelated to X (e.g. if Christians believe we need to wage war on someone, or keep taxes low). It becomes more obvious when you compare different countries, or different decades in the same country, and see how the opinions change, so that in one place it is obvious that "a true X would support Y", while in another place it is obvious that "a true X would oppose Y".
For example, eugenics was considered an obviously left-wing topic before WW2 (the needs of society outweigh the freedom of the individual, unrestricted reproduction is a religious value, religions are right-wing, social engineering is left-wing), but a right-wing topic after WW2 (it is associated with the Nazis, the Nazis attacked the Soviet Union, the Soviet Union is left-wing, therefore the Nazis and eugenics are right-wing). Similarly, from the perspective of Republican Americans, Russia used to be bad, and now it's kinda good. Or from the perspective of an anti-racist, colorblindness used to be the ideal ("I have a dream"), and now it is considered a form of bigotry. Feminists used to insist there is absolutely no such thing as a male brain or a female brain; now if you say the same thing, you may be accused of transphobia. Socialists used to be strongly pro-technology during the days of the Soviet Union and Sputnik; these days they are more likely to oppose technology as a white cishet men's thing and deny that AI could be useful.
...and now, let's consider my previous paragraph. Good, because it provided specific examples. Bad, because it was unbalanced (most examples come from the left, so it may seem like I am suggesting that the left is more likely to do these things than the right, which I do not actually believe; it's just easier for me to remember examples from the left), and because it contains sensitive current issues, so if someone disagrees with even one of the examples, they will probably get furious after reading it, and the debate will be about whether that one example is correct, rather than about the general principle of "those who can define your identity can define your specific beliefs".
If the note is a burden, I'd say it is a problem of the note-taking system rather than of the note itself.
(That said, I think it is possible that all existing systems suck, and we need to invent something much better.)
Patrick McKenzie reminds you that for best results in professional work you want to adopt the diction and mannerisms of a professional, including when talking to AI.
Can you do that in two steps? Like, in one window ask the AI "here is my question, rephrase it the way a professional would say it", and then copy the result to another window?
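A minimal sketch of that two-step idea, assuming the OpenAI Python client; the model name, prompts, and example question are just placeholders:

```python
# Two-step sketch: first ask the model to rephrase the question the way
# a professional would, then send the rephrased version as a fresh query.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

raw_question = "my printer thing is broken again, how do i fix it??"

# Step 1: in one "window" (conversation), ask for a professional rephrasing.
professional = ask(
    "Here is my question; rephrase it the way a professional would say it:\n"
    + raw_question
)

# Step 2: paste the rephrased question into a new conversation.
print(ask(professional))
```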
Is this an ad?
When I look at pointy-toed shoes in a shop, I wonder: does anyone actually have feet like that?
Or are people just suffering in the name of fashion? Or do they buy shoes a few sizes larger and keep the pointy end empty?
Just a silly idea: If many people start using LLMs, and as a result learn to better translate their intuitions into explicit descriptions... perhaps this could help us solve alignment.
I mean, a problem with alignment is that we have some ideas of good, but can't make them explicit. But maybe the reason is that in the past, we had no incentive to become good at expressing our ideas explicitly... instead we had an incentive to bullshit. However, when everyone uses LLMs to get things done, that will create an incentive to be good at expressing your ideas, so that the LLM can implement them more faithfully.
I find it interesting and unfortunate that there aren't more economically left-wing thinkers influenced by Yudkowsky/LW thinking about AGI.
Maybe it's just my bubble -- and I really do not want to offend anyone, only to report honestly on what I observe around me -- but understanding economics seems right-wing coded. More precisely, when I talk to right-wing people about economics, there is a mix of the descriptive and the normative, but when I talk to left-wing people about economics, it is normative only: what should be done, in their opinion, often ignoring the second-order effects. Describing economics as it is seems like expressing approval; and approving of capitalism is right-wing.
Basically, if you made a YouTube video containing zero opinion on how things should be, only explaining the basic things about supply and demand (like, how scarcity makes things more expensive in a free market) and similar stuff, people listening to the video would label you as right-wing. Many of those who identify as left-wing would even dismiss the video as right-wing propaganda.
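(As a toy illustration of the supply-and-demand point above -- my own sketch, not anything from a real video: with linear demand and supply curves, making the good scarcer raises the equilibrium price.)

```python
# Toy worked example: linear demand Qd = 100 - 2P and supply Qs = s + 3P,
# where a smaller intercept s means the good is scarcer at every price.
# Setting Qd = Qs gives the equilibrium price P = (100 - s) / 5.
def equilibrium_price(supply_intercept: float) -> float:
    return (100 - supply_intercept) / 5

print(equilibrium_price(20))   # normal supply -> P = 16.0
print(equilibrium_price(-10))  # scarce supply -> P = 22.0
```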
So, if my understanding is correct, this seems like a problem the left wing needs to solve internally. There is not much we can do as rationalists when someone makes not understanding something a signal of loyalty.
Perhaps it wasn't obvious previously.
I suspect the usual dynamic at companies is that when others start doing something, you better start doing it too, or it will seem like negligence. For example, if you are a company lawyer, and other companies have NDAs, you better prepare one for your company, too. Because the risks are asymmetric -- if you do the same thing everyone else does, and something bad happens, well, that's the cost of doing business; but if you do something different from everyone else, and something bad happens, that makes you seem incompetent.
Level 1 allows you to work on something alone. So if you can't find other people who would cooperate with you (e.g. because you are a little autistic), level 1 is the only one you can work at. Your options are to work at level 1 alone, or serve other people, or fail.
Which I suspect might also have been Scott Adams' problem. He could see that the other levels work better, but he couldn't work at them efficiently. So he just kept writing about how cool those levels are, sometimes impressing people who were at the same skill level as him. But he never achieved those levels himself.
Elon Musk... I am not very familiar with him. I suspect that he is sufficiently non-autistic to be able to use the higher levels efficiently, and yet he radiates enough nerd vibes (naturally? or is that an image cultivated on purpose?) to convince the nerds that he is one of them, so they keep simping for him and working for him.