Looking at small children, it seems like human nature has hardcoded the following instincts: try to get more (unless strongly opposed), and prevent others from getting more than their fair share.
It seems like a lot of human behavior can be explained by the interaction of these two forces -- people try to take things; other people try to prevent them from taking more than their share. Sometimes the stronger people succeed in taking more than their share, at the cost of making enemies.
Then, I guess we have many civilizational inventions on top of this. Strategies that a smart person could figure out alone, or copy from others, such as "if you take a lot, but share some of that with a few people who agree to support you in turn, you can still have a lot, and also some allies instead of only enemies". A smart chimp could do this with one or two strong allies; a human dictator can create an entire army and a secret police for this purpose. It probably helps a lot if the culture provides you with potential minions who already understand their role, so you don't have to individually explain to everyone what an "army" is and why they might benefit from joining one.
The reaction to people taking power can be to avoid them or to fight them. The availability of these reactions depends on the situation. A complicated civilization can invent complicated methods to check people in power. In a primitive society, a group of people who hate the status quo can simply walk away and start a new tribe.
It seems to me that people have some "switches" that respond to changes in the environment; for example, in situations of natural disaster people become more altruistic. (And more likely to lynch you if they catch you looting houses in the middle of a disaster.)
But this all seems like possibly a consequence of the two basic instincts "try to get more, unless strongly opposed" and "prevent others from getting more than their fair share", with culture changing the definition of "fair share": e.g. we may be taught that high-status people (nobility, the university educated, corporate bosses, etc.) deserve more than we do, but in turn we deserve more than low-status people (the homeless, foreigners, the less intelligent, etc.), because <insert ideology>.
If I were Sam, I would try to keep the definition of "the spec, a public document" such that I could unilaterally replace it when the right moment comes.
For example, "the spec" is defined as the latest version of a document that was signed by OpenAI key and published at openai/spec.html... and I keep a copy of the key and the access rights to the public website... so at the last moment I update the spec, sign it with the key, upload it to the website, and tell the AI "hey, the spec is updated".
Basically, the coup is a composition of multiple steps, each seemingly harmless when viewed in isolation. It could be made even more indirect: for example, I wouldn't have access rights to the public website per se, but there would exist a mechanism to update the documents at the public website, and I could tell it to upload the new signed spec. Or a mechanism to restore the public website from a backup, and I could modify the backup. Etc.
But it's also true that participating in the public sphere enables cooperation; enables mutual aid; enables creating reputation and credibility; enables joining with others to achieve your values.
Before the internet, we had "public spheres" of various sizes -- there is a difference between e.g. telling something to a group of friends, speaking at a village meeting, or publishing a book.
The internet kinda makes it all one size -- anything said anywhere could leak anywhere else. Which changes the equation: you can no longer choose a smaller sphere for smaller risks and smaller benefits; now everything comes with the large risks. You can't trust that things said to your friends won't reach your employer.
It's like a small sip of wine.
Astral Codex Ten comments, however, are like drinking the entire bottle.
https://read.isabelunraveled.com/p/manifest-rationally -- a decent article on "manifestation" without the woo.
Made me think about "keep your identity small" and epistemic vs instrumental rationality.
Short version is that you should keep your epistemic identity small. Avoid things like "I am <political faction>", because they will make you think <beliefs associated with the political faction> regardless of evidence.
But you should choose your instrumental identity consciously. (See also: use your identity carefully.) Things like "I am the kind of person who does X" communicate to your System 1 that you want to do X.
This requires some more thought on how to keep these two separate; how to prevent the kind of failure where identifying as "someone who does X" makes me believe that "I do X"... even if I actually don't, or only do it rarely. It probably helps to keep records of how often you do X, so that your beliefs come from the records, not from the identity itself, but I am not sure whether this is the entire answer, or whether I missed something important.
Some things are more legible than others. If I believe something based on a dozen pieces of evidence all pointing in the same direction, removing one piece of evidence wouldn't significantly change the outcome.
(Of course, removing all of them would change my mind; and even removing a few of them would make me suspicious about the remaining ones.)
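A toy calculation of that shape (the numbers are made up; each independent piece of evidence multiplies the prior odds by its likelihood ratio):

```python
# Toy numbers: prior odds 1:1, a dozen independent pieces of evidence,
# each favoring the belief 2:1.
def posterior_probability(prior_odds, likelihood_ratios):
    odds = prior_odds
    for ratio in likelihood_ratios:
        odds *= ratio
    return odds / (1 + odds)

evidence = [2.0] * 12

print(posterior_probability(1.0, evidence))        # ~0.9998
print(posterior_probability(1.0, evidence[:-1]))   # ~0.9995 -- one piece removed, barely moves
print(posterior_probability(1.0, evidence[:3]))    # ~0.89   -- most pieces removed, now it matters
```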
So sometimes it makes sense to write things that are not cruxy.
The concept has a specific definition, but yeah, many people just use it as an excuse to call their opponents idiots. Not sure how much to blame Taleb for that, and how much it's just that every concept gets diluted when the masses notice it. Would different words be more resistant to misinterpretation, but perhaps less memetically virulent? Probably yes; calling your opponents "idiots" is just too tempting.
The originally intended meaning is something like "people who fail/refuse to notice second-order effects, and often fail to do even the most obvious sanity checks, because they are completely focused on the fact that their first-order conclusions are supported by Science™". (Imagine a less stupid version of someone claiming that it is impossible to clean up their room, because it is a scientifically proven fact that entropy always increases. But the statement would typically be made about e.g. the economy.)
Taleb -- who is an idiot in some different ways -- does not match this. He has his eye on the ball; his goal is to increase the sales of his books, and he is doing that skillfully.
Hm, I wonder what would be better words for the concept. A "first-order intellectual"?
Yes, it seems like there is a difference between "inwards stubbornness" and "outwards stubbornness", whether people refuse to change their minds for reasons private or social.
I know some people such that if you tell them they are wrong, they will double down and get angry at you... but if you meet them a few days later, they have updated their opinion. So it seems like they are willing to update, but not to admit that they did.
Similarly, you tell some people a good idea, and they will tell you that it is stupid. The next day, they will come and propose the same idea as their own. I think many books on manipulation and social skills recommend that the best way to change someone's mind about something is to let them believe it was their own idea.
Then again, maybe this is a smaller difference than it seems, and some people are just better at remembering what their opinion was yesterday, or better at convincing themselves that yesterday it was different.
A well-designed simulation is inescapable. Suppose that you are inside Conway's game of life, and you know that fact for sure. How specifically are you going to use this knowledge to escape, if all you are is a set of squares on a simulated grid, and all that ever happens in your universe is that some squares are flipped from black to white and vice versa?
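For concreteness, here is roughly the entire "physics" of such a universe -- one update rule and nothing else (a standard minimal implementation of the rule):

```python
from collections import Counter

def life_step(live_cells: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One tick of Conway's Game of Life on an unbounded grid."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider" travels across the grid, but nothing it "does" is ever more
# than cells flipping on and off according to the rule above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(glider)  # the same shape, shifted by one cell diagonally
```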
To answer your first question: some kinds of pseudo-randomness are virtually indistinguishable from actual randomness if you do not have perfect knowledge of the entire universe. For example, in cryptography, changing one bit in the input message flips, on average, 50% of the bits in the output message. Imagine that the next round of pseudo-random numbers is calculated the same way from the current state of the universe -- the slightest change in the position of one particle on the opposite side of the universe could change everything.
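That "one flipped input bit changes about half of the output" behavior is the avalanche effect, and it is easy to check with an ordinary cryptographic hash:

```python
import hashlib

def bit_string(data: bytes) -> str:
    return "".join(f"{byte:08b}" for byte in data)

msg_a = b"the state of the universe"
msg_b = b"the state of the universd"  # ASCII 'e' vs 'd': differs in exactly one bit

hash_a = bit_string(hashlib.sha256(msg_a).digest())
hash_b = bit_string(hashlib.sha256(msg_b).digest())

differing = sum(a != b for a, b in zip(hash_a, hash_b))
print(f"{differing} of {len(hash_a)} output bits differ")  # typically around 128 of 256
```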
Surprised by the karma not being higher, and by the negative reactions.
I generally enjoy your articles for providing me insights into a part of the planet I know little about, but I think this article was good even separately from that. The fact that education has been warped from "nurturing individuals" into "screening them" is something that I also perceive as obviously true and very painful. Unlike the author, I have a university diploma, I worked as a teacher for a few years, and I have a life-long interest in improving education, so many of the ad-hominems used against the author wouldn't work on me. And I agree about how broken the system is. Also, Bryan Caplan's The Case Against Education makes the same point.
You can complain about a system regardless of whether you win or lose; but of course, if you win, you have less of a motivation to do so (especially if the system is designed to pretend that the winners are superior beings, so by questioning the system you also question your own superiority), and if you lose, there is the convenient argument that your opinions are the opinions of an inferior human, and therefore superior humans should dismiss them without discussion, lest they contaminate themselves with the inferiority.
This is (from my perspective) the key part of the article -- pinpointing part of the difference between what the educational system does and what it pretends to be, which is also what it derives its legitimacy from.
As a thought experiment, imagine that you are designing an educational system, and your great desire is that as many people as possible learn e.g. calculus. You do not want to sacrifice literally all resources towards this goal, but you are pretty serious about it. You would probably design a system where anyone who has the potential to learn calculus is invited, taught, and then examined.
You would filter out e.g. mentally retarded people, because trying to teach them calculus is hopeless. Also, suppose that you only have control over universities, but not over elementary and middle schools. Then you would also filter out people who fail at the prerequisites so hard that there is no way to teach them calculus in the available time.
But what you definitely wouldn't do is establish a goal of only admitting a certain fraction of the population. Not if your goal is to teach calculus to as many people as possible. If more people arrived from middle schools sufficiently prepared, you would be happy to admit all of them, rather than trying to figure out more ways to reject them. (And you definitely wouldn't reject them based on e.g. extracurricular activities unrelated to math.)
From this we can see the difference between the idealistically stated goal of the educational system, which is providing knowledge (to those who are capable of receiving it, given limited time and resources), and its actual goal, which is more like selecting a fraction of the population based on criteria that are correlated with their ability to receive knowledge, but also with lots of arbitrariness and sheer luck.
Another way to say the same thing is that if you have a specific goal, such as "teach everyone literacy", it is a game that the majority of people can win. In theory, you would be happy if 100% managed to win. The educational system is designed to be the kind of game where many people can't win, because separating the winners from the losers is its point; it would be considered a failure if somehow 100% managed to win.
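Stated as two toy pass rules (a sketch with made-up thresholds and scores): under the first rule everyone could in principle pass; under the second, a fixed fraction must fail by construction.

```python
def pass_by_mastery(scores: dict[str, float], threshold: float = 70.0) -> set[str]:
    """'Teach calculus' as the goal: everyone who reaches the bar passes."""
    return {student for student, score in scores.items() if score >= threshold}

def pass_by_quota(scores: dict[str, float], fraction: float = 0.5) -> set[str]:
    """'Select a fraction' as the goal: only the top half passes, whatever the scores."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[: int(len(ranked) * fraction)])

scores = {"ann": 95, "bob": 88, "cid": 82, "dee": 76, "eve": 71, "fay": 40}

print(pass_by_mastery(scores))  # five of six pass -- in principle all six could
print(pass_by_quota(scores))    # exactly three pass, even though five reached the bar
```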
A system that is advertised as trying to make 100% win (and pretends that if anyone loses, it is only their own fault), but is actually designed to make a predefined fraction fail, is a system based on lies.
Why does it matter (besides the standard rationalist obsession with truth)? Among other reasons, systems based on lies are surprisingly resistant to attempts to improve them -- if your proposal would improve them according to their stated criteria, but not according to their actual criteria. From the perspective of a system whose purpose is to separate winners from losers, increasing the number of winners would be a failure; therefore it will resist any attempt to increase the number of winners, in ways that will be infuriating for a person who genuinely desires to see more people succeed at acquiring knowledge.