In the case of people, however, group member 2 is often incorrect, because the intelligence difference is small enough that some level of judgement is usually possible. This is unlike a superintelligent AI, where it is currently not even known how to judge its level of intelligence, because it can self-improve in short order.
I agree that people often do not intuitively find it horribly scary to let an AI join their group (i.e., let the AI out of the box), yet they are intuitively wary of letting intelligent people* join their group, for the reasons you listed above. In general, people should be wary of both, and more so of the AI, since it does not share a common genetic history, brain structure, or biological needs, and is therefore a greater unknown. This lack of intuitive fear is much like how many animal species, such as the dodo, lacked any fear of humans on first encounter.
From Guns, Germs, and Steel: The Fates of Human Societies, by Jared Diamond:
Just as modern humans walked up to unafraid dodos and island seals and killed them, prehistoric humans presumably walked up to unafraid moas and giant lemurs and killed them too.
Lack of intuitive fear can cause extinction.
*Where "intelligent people" means people more intelligent than the group by a good degree, but not intelligent enough to bridge the inferential divide.
Well, if we're talking about real-world analogies to the AI box test, I have a minor caveat: sometimes, on Less Wrong, I see what seems to me to be the implied message that the more intelligent not only have an advantage over the less intelligent, but that the more intelligent can ipso facto completely control the less intelligent, at least in the context of hypotheticals and puzzles. This may be a wise assumption when we're dealing with a self-improving AGI, or with Eliezer in the context of his famous tests. But in my own experience, I find it difficult to control some minds that are on some level weaker than my own. Think of training cats, or calming down a screaming toddler.
I also suspect that, without too much trouble, I could go to the seedier sides of any big city or a sleazy traveling carnival and find a fair number of people who might not have anything like my academic credentials, but who would be able to con me out of my money if I were foolish enough to listen to them. Is that different from playing the AI box game with Eliezer? I don't know, because we don't have transcripts of the two games.
Who here would be confident in his or her ability to win the AI box game against an experienced professional grifter of average intelligence cast in the AI role? For that matter, if such a game could be arranged, who -- if cast in the role of the AI -- would be confident in his or her ability to win the AI box game against a cranky toddler?
Any smart person who really knows how to control the actions of less intelligent people could potentially make a fortune advising corrections facilities, juvenile halls, and schools with severe chronic discipline problems.
I'm in general agreement with your post, but being good at X is quite different from being good at teaching how to do X.
This assumes that we are talking about a single linear measure of intelligence, which doesn't seem to be the case with normal humans. For example, the same person can be above average in spatial reasoning but below average in verbal reasoning.
The relevant analogy here would be social intelligence, so the person you should be most suspicious of is the one who has displayed the greatest ability to manipulate social situations. Ironically, though, if they are socially intelligent, they should be able to keep you from suspecting them.
The following is a minor curiosity that occurred to me regarding real-world analogies to the AI-box concept.
Fundamentally, the reason that we fear a randomly-chosen super-intelligent AI is twofold: