Abstract: In the FOOM debate, Eliezer emphasizes 'optimization power', something like intelligence, as the main thing that makes both evolution and humans so powerful. A different choice of abstractions says that the main thing that has given various organisms - from single-celled creatures to wasps to humans - an advantage is the capability to form superorganisms, thus reaping the gains of specialization and shifting evolutionary selection pressure to the level of the superorganism. There seem to be several ways in which a technological singularity could involve the creation of new kinds of superorganisms, which would then reap benefits above and beyond those that individual humans can achieve, and which would quite likely have very different values. This strongly suggests that even if one is not worried about the intelligence explosion (because, e.g., one finds a hard takeoff improbable), one should still be worried about the co-operative explosion.

After watching Jonathan Haidt's excellent new TED talk yesterday, I bought his latest book, The Righteous Mind: Why Good People Are Divided by Politics and Religion. At one point, Haidt discusses evolutionary superorganisms - cases where previously separate organisms have joined together into a single superorganism, shifting evolution's selection pressure to operate on the level of the superorganism and avoiding the usual pitfalls that block group selection (excerpts below). With their increased ability to co-operate, these new superorganisms can often out-compete simpler organisms.

Suppose you entered a boat race. One hundred rowers, each in a separate rowboat, set out on a ten-mile race along a wide and slow-moving river. The first to cross the finish line will win $10,000. Halfway into the race, you’re in the lead. But then, from out of nowhere, you’re passed by a boat with two rowers, each pulling just one oar. No fair! Two rowers joined together into one boat! And then, stranger still, you watch as that rowboat is overtaken by a train of three such rowboats, all tied together to form a single long boat. The rowers are identical septuplets. Six of them row in perfect synchrony while the seventh is the coxswain, steering the boat and calling out the beat for the rowers. But those cheaters are deprived of victory just before they cross the finish line, for they in turn are passed by an enterprising group of twenty-four sisters who rented a motorboat. It turns out that there are no rules in this race about what kinds of vehicles are allowed.

That was a metaphorical history of life on Earth. For the first billion years or so of life, the only organisms were prokaryotic cells (such as bacteria). Each was a solo operation, competing with others and reproducing copies of itself. But then, around 2 billion years ago, two bacteria somehow joined together inside a single membrane, which explains why mitochondria have their own DNA, unrelated to the DNA in the nucleus. These are the two-person rowboats in my example. Cells that had internal organelles could reap the benefits of cooperation and the division of labor (see Adam Smith). There was no longer any competition between these organelles, for they could reproduce only when the entire cell reproduced, so it was “one for all, all for one.” Life on Earth underwent what biologists call a “major transition.” Natural selection went on as it always had, but now there was a radically new kind of creature to be selected. There was a new kind of vehicle by which selfish genes could replicate themselves. Single-celled eukaryotes were wildly successful and spread throughout the oceans.

A few hundred million years later, some of these eukaryotes developed a novel adaptation: they stayed together after cell division to form multicellular organisms in which every cell had exactly the same genes. These are the three-boat septuplets in my example. Once again, competition is suppressed (because each cell can only reproduce if the organism reproduces, via its sperm or egg cells). A group of cells becomes an individual, able to divide labor among the cells (which specialize into limbs and organs). A powerful new kind of vehicle appears, and in a short span of time the world is covered with plants, animals, and fungi. It’s another major transition.

Major transitions are rare. The biologists John Maynard Smith and Eörs Szathmáry count just eight clear examples over the last 4 billion years (the last of which is human societies). But these transitions are among the most important events in biological history, and they are examples of multilevel selection at work. It’s the same story over and over again: Whenever a way is found to suppress free riding so that individual units can cooperate, work as a team, and divide labor, selection at the lower level becomes less important, selection at the higher level becomes more powerful, and that higher-level selection favors the most cohesive superorganisms. (A superorganism is an organism made out of smaller organisms.) As these superorganisms proliferate, they begin to compete with each other, and to evolve for greater success in that competition. This competition among superorganisms is one form of group selection. There is variation among the groups, and the fittest groups pass on their traits to future generations of groups.

Major transitions may be rare, but when they happen, the Earth often changes. Just look at what happened more than 100 million years ago when some wasps developed the trick of dividing labor between a queen (who lays all the eggs) and several kinds of workers who maintain the nest and bring back food to share. This trick was discovered by the early hymenoptera (members of the order that includes wasps, which gave rise to bees and ants) and it was discovered independently several dozen other times (by the ancestors of termites, naked mole rats, and some species of shrimp, aphids, beetles, and spiders). In each case, the free rider problem was surmounted and selfish genes began to craft relatively selfless group members who together constituted a supremely selfish group.

These groups were a new kind of vehicle: a hive or colony of close genetic relatives, which functioned as a unit (e.g., in foraging and fighting) and reproduced as a unit. These are the motorboating sisters in my example, taking advantage of technological innovations and mechanical engineering that had never before existed. It was another transition. Another kind of group began to function as though it were a single organism, and the genes that got to ride around in colonies crushed the genes that couldn’t “get it together” and rode around in the bodies of more selfish and solitary insects. The colonial insects represent just 2 percent of all insect species, but in a short period of time they claimed the best feeding and breeding sites for themselves, pushed their competitors to marginal grounds, and changed most of the Earth’s terrestrial ecosystems (for example, by enabling the evolution of flowering plants, which need pollinators). Now they’re the majority, by weight, of all insects on Earth.

Haidt's argument is that color politics and other political mind-killingness are due to a set of adaptations that temporarily let people merge into a superorganism and set individual interest aside. To a lesser extent, so are moral intuitions about things such as fairness and proportionality. Yes, it's a group selection argument. Haidt acknowledges that group selection has been unpopular in biology for a while, but notes that it has also been making a comeback recently, and cites e.g. the work on multi-level selection as supporting his thesis. I mention some of his references (which I have not yet read) below.
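The logic of multi-level selection is easy to see in a toy model. Below is a minimal sketch (mine, not Haidt's; all parameter values are arbitrary illustrative assumptions): groups of cooperators and free riders play a public goods game, free riders out-reproduce cooperators within any mixed group, but when groups themselves reproduce in proportion to their total payoff - the analogue of cohesive superorganisms out-competing less cohesive ones - cooperation can nevertheless take over.

```python
import random

# Toy trait-group model of multi-level selection. All numbers are
# arbitrary illustrative assumptions, not anything from Haidt's book.
BENEFIT, COST = 5.0, 1.0   # each cooperator pays COST; the shared pot grows by BENEFIT
N_GROUPS, SIZE, GENERATIONS = 30, 12, 100

def payoffs(group):
    """Within a group, free riders share the pot without paying the cost."""
    pot = BENEFIT * sum(group) / len(group)
    return [1.0 + pot - (COST if cooperator else 0.0) for cooperator in group]

def generation(groups, group_selection):
    # Between-group selection: groups found daughter groups in proportion
    # to their total payoff; with it switched off, all groups persist equally.
    totals = [sum(payoffs(g)) for g in groups]
    weights = totals if group_selection else [1.0] * len(groups)
    parents = random.choices(groups, weights=weights, k=N_GROUPS)
    # Within-group selection: members reproduce in proportion to payoff,
    # so free riders out-reproduce cooperators inside any mixed group.
    return [random.choices(g, weights=payoffs(g), k=SIZE) for g in parents]

for group_selection in (False, True):
    random.seed(0)
    groups = [[random.random() < 0.5 for _ in range(SIZE)]
              for _ in range(N_GROUPS)]
    for _ in range(GENERATIONS):
        groups = generation(groups, group_selection)
    frac = sum(sum(g) for g in groups) / (N_GROUPS * SIZE)
    print(f"group selection {'on: ' if group_selection else 'off:'} "
          f"cooperator fraction = {frac:.2f}")
```

In runs of this toy model, switching group selection off lets free riders win within every group and cooperation collapses, while switching it on lets groups that fix on cooperation found most of the daughter groups - Haidt's "selection at the higher level becomes more powerful" in miniature.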

Anyway, the reason why I'm bringing this up is that I've been re-reading the FOOM debate of late, and in Life's Story Continues, Eliezer references some of the same evolutionary milestones as Haidt does. And while Eliezer also mentions that cells provided a major co-operative advantage that allowed for specialization, he views this merely through the lens of optimization power, and dismisses e.g. unicellular eukaryotes with the words "meh, so what".

Cells: Force a set of genes, RNA strands, or catalytic chemicals to share a common reproductive fate.  (This is the real point of the cell boundary, not "protection from the environment" - it keeps the fruits of chemical labor inside a spatial boundary.)  But, as we've defined our abstractions, this is mostly a matter of optimization slope - the quality of the search neighborhood.  The advent of cells opens up a tremendously rich new neighborhood defined by specialization and division of labor.  It also increases the slope by ensuring that chemicals get to keep the fruits of their own labor in a spatial boundary, so that fitness advantages increase.  But does it hit back to the meta-level?  How you define that seems to me like a matter of taste.  Cells don't quite change the mutate-reproduce-select cycle.  But if we're going to define sexual recombination as a meta-level innovation, then we should also define cellular isolation as a meta-level innovation. (Life's Story Continues)

The interesting thing about the FOOM debate is that both Eliezer and Robin talk a lot about the significance of co-operation, but neither ever quite takes it up explicitly. Robin talks about the way that isolated groups typically aren't able to take over the world, because it's much more effective to co-operate with others than to try to do everything yourself, or because information within the group tends to leak out to other parties. Eliezer talks about the way that cells enabled specialization, and how writing allowed human culture to accumulate and people to build on each other's inventions.

Even as Eliezer talks about intelligence, insight, and recursion, one could view this, too, as a discussion of the power of specialization, co-operation and superorganisms - for intelligence seems to consist of a large number of specialized modules, all somehow merged to work in the same organism. And Robin seems to view large groups of people as acting as some kind of loose superorganism, thus beating smaller groups that try to do things alone:

Independent competitors can more easily displace one another than interdependent ones.  For example, since the unit of the industrial revolution seems to have been Western Europe, Britain who started it did not gain much relative to the rest of Western Europe, but Western Europe gained more substantially relative to outsiders.  So as the world becomes interdependent on larger scales, smaller groups find it harder to displace others. (Outside View of Singularity)

[Today] innovations and advances in each part of the world depend on advances made in all other parts of the world. … Visions of a local singularity, in contrast, imagine that sudden technological advances in one small group essentially allow that group to suddenly grow big enough to take over everything. … The key common assumption is that of a very powerful but autonomous area of technology.  Overall progress in that area must depend only on advances in this area, advances that a small group of researchers can continue to produce at will. And great progress in this area alone must be sufficient to let a small group essentially take over the world. …

[Consider also] complaints about the great specialization in modern academic and intellectual life.  People complain that ordinary folks should know more science, so they can judge simple science arguments for themselves. … Many want policy debates to focus on intrinsic merits, rather than on appeals to authority.  Many people wish students would study a wider range of subjects, and so be better able to see the big picture.  And they wish researchers weren’t so penalized for working between disciplines, or for failing to cite every last paper someone might think is related somehow.

It seems to me plausible to attribute all of these dreams of autarky to people not yet coming fully to terms with our newly heightened interdependence. … We picture our ideal political unit and future home to be the largely self-sufficient small tribe of our evolutionary heritage. … I suspect that future software, manufacturing plants, and colonies will typically be much more dependent on everyone else than dreams of autonomy imagine. Yes, small isolated entities are getting more capable, but so are small non-isolated entities, and the latter remain far more capable than the former. The riches that come from a worldwide division of labor have rightly seduced us away from many of our dreams of autarky. We may fantasize about dropping out of the rat race and living a life of ease on some tropical island. But very few of us ever do. (Dreams of Autarky)

Robin has also explicitly made the point that it is the difficulty of co-operation which suggests that we can keep ourselves safe from uploads or AIs with hostile intentions:

What if uploads decide to take over by force, refusing to pay back their loans and grabbing other forms of capital? Well for comparison, consider the question: What if our children take over, refusing to pay back their student loans or to pay for Social Security? Or consider: What if short people revolt tonight, and kill all the tall people?

In general, most societies have many potential subgroups who could plausibly take over by force, if they could coordinate among themselves. But such revolt is rare in practice; short people know that if they kill all the tall folks tonight, all the blond people might go next week, and who knows where it would all end? And short people are highly integrated into society; some of their best friends are tall people.

In contrast, violence is more common between geographic and culturally separated subgroups. Neighboring nations have gone to war, ethnic minorities have revolted against governments run by other ethnicities, and slaves and other sharply segregated economic classes have rebelled.

Thus the best way to keep the peace with uploads would be to allow them as full as possible integration in with the rest of society. Let them live and work with ordinary people, and let them loan and sell to each other through the same institutions they use to deal with ordinary humans. Banishing uploads into space, the seas, or the attic so as not to shock other folks might be ill-advised. Imposing especially heavy upload taxes, or treating uploads as property, as just software someone owns or as non-human slaves like dogs, might be especially unwise. (If Uploads Come First)

Situations like war or violent rebellion are, arguably, the cases where the "human superorganism adaptations" kick in the strongest - where people have the strongest propensity to view themselves primarily as part of a group, and where they are the most ready to sacrifice themselves for the interests of the group. Indeed, Haidt quotes (both in the book and the TED talk) former soldiers who say that there is something unique about the states of consciousness that war can produce:

So many books about war say the same thing, that nothing brings people together like war. And that bringing them together opens up the possibility of extraordinary self-transcendent experiences. I'm going to play for you an excerpt from this book by Glenn Gray. Gray was a soldier in the American army in World War II. And after the war he interviewed a lot of other soldiers and wrote about the experience of men in battle. Here's a key passage where he basically describes the staircase.

Glenn Gray: Many veterans will admit that the experience of communal effort in battle has been the high point of their lives. "I" passes insensibly into a "we," "my" becomes "our" and individual faith loses its central importance. I believe that it is nothing less than the assurance of immortality that makes self-sacrifice at these moments so relatively easy. I may fall, but I do not die, for that which is real in me goes forward and lives on in the comrades for whom I gave up my life.

So Robin, in If Uploads Come First, seems to basically be saying that uploads are dangerous if we let them become superorganisms. Usually, individuals have a large number of their own worries and priorities, and even if they would have much to gain by co-operating, they cannot trust each other, or resist the temptation to free-ride, well enough to work together effectively and become dangerous.

Incidentally, this provides an easy rebuttal to the "corporations are already superintelligent" claim - while corporations have a variety of mechanisms for trying to provide their employees with the proper incentives, anyone who's worked for a big company knows that the employees tend to follow their own interests, even when these conflict with those of the company. It's certainly nothing like the situation within a cell, where the survival of each organelle depends on the survival of the whole cell. If the cell dies, the organelles die; if the company fails, the employees can just get a new job.
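The difference in incentives can be made concrete with a back-of-the-envelope calculation. The numbers below are made up purely for illustration: the structural point is that an employee keeps most of their fallback payoff if the firm dies, while an organelle's "payoff" is entirely conditional on the cell surviving, so shirking can be rational for the former and ruinous for the latter.

```python
# Made-up numbers purely for illustration; the structural point is the
# fallback term and how strongly the collective's survival depends on effort.

def expected_payoff(effort, p_survive_effort, p_survive_shirk,
                    reward, effort_cost, fallback):
    """Expected payoff of one member of a collective: the inside reward if
    the collective survives, the fallback if it does not."""
    p = p_survive_effort if effort else p_survive_shirk
    inside = reward - (effort_cost if effort else 0.0)
    return p * inside + (1.0 - p) * fallback

# Employee: one person shirking barely dents the firm's survival odds,
# and a comparable job (the fallback) awaits if the firm fails anyway.
for effort in (True, False):
    v = expected_payoff(effort, 0.95, 0.90, reward=10, effort_cost=3, fallback=8)
    print(f"employee,  effort={effort}: {v:.2f}")   # shirking wins (9.80 > 7.05)

# Organelle: a failing part can kill the cell, and there is no fallback -
# if the cell dies, the organelle dies with it.
for effort in (True, False):
    v = expected_payoff(effort, 0.95, 0.50, reward=10, effort_cost=3, fallback=0)
    print(f"organelle, effort={effort}: {v:.2f}")   # effort wins (6.65 > 5.00)
```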

It would seem to me that, whatever your take on the intelligence explosion, our evolutionary history strongly suggests that new kinds of superorganisms - larger and more cohesive than human groups, and less dependent on crippling their own rationality in order to maintain group cohesion - would be a major risk for humanity. This is not to say that an intelligence explosion wouldn't be dangerous as well - I have no idea what a mind that could think 1,000 times faster than me could do - but a co-operative explosion should be considered dangerous even if you thought a hard takeoff via, say, recursive self-improvement was impossible. And many of the ways of creating a superorganism (see below) seem to involve processes that could conceivably lead to the superorganisms having values quite different from humans'. Even if no single superorganism could take over, that's not much of a comfort for the ordinary humans caught in the crossfire.

How might a co-operative explosion happen? I see at least three possibilities:

  • Self-copying artificial intelligences. An AI doesn't need to have the evolved idea of a "self" whose interests need to be protected, above those of identical copies of the AI. An AI could be programmed to only care about the completion of a single goal (e.g. paperclips), and it could then copy itself freely, knowing that all of those copies will be working towards the same goal.
  • Upload copy clans. Carl Shulman discusses this possibility in Whole Brain Emulation and the Evolution of Superorganisms. Some people might have a view of personal identity which accepts the possibility of somebody deleting you, if close-enough copies of you exist. In a world where uploading is possible, such people could copy themselves and have those copies work together to further the goals of the joint organism. If the copies were willing to be deleted or experimented on, they could find ways of modifying their brains that further increased their devotion to the superorganism. Furthermore, each copy could consent to being deleted if its interests seemed to be drifting apart from those of the organism as a whole (a toy sketch of such a drift check follows this list).
  • Mind coalescences. In Coalescing Minds: Mind Uploading-Related Group Mind Scenarios, Harri Valpola and I discuss the notion of coalesced minds: hypothetical minds created by merging two brains together through a sufficient number of high-bandwidth neural connections. In a world where uploading was possible, creating mind coalescences could be relatively straightforward. Several independent organisms could then literally join together to become a single entity.
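As promised above, here is a toy sketch of the drift-check idea from the copy-clan scenario. Everything in it - the vector representation of values, the thresholds, the retirement rule - is a hypothetical illustration of mine, not a proposal from Shulman's paper:

```python
import random

# Hypothetical sketch: each copy's values are caricatured as a numeric
# vector that random-walks as the copy accumulates experiences; copies
# that drift too far from the clan's shared values are retired (with the
# prior consent described above) and replaced by fresh copies.

REFERENCE = [1.0, 0.0, 0.5]   # the clan's agreed-upon value vector
TOLERANCE = 0.3               # maximum allowed divergence before retirement
CLAN_SIZE, CYCLES = 100, 50

def drift(values, noise=0.05):
    """A copy's values random-walk slightly every cycle."""
    return [v + random.gauss(0.0, noise) for v in values]

def divergence(values):
    """Euclidean distance from the clan's reference values."""
    return sum((v - r) ** 2 for v, r in zip(values, REFERENCE)) ** 0.5

random.seed(1)
clan = [list(REFERENCE) for _ in range(CLAN_SIZE)]
retired_total = 0
for _ in range(CYCLES):
    clan = [drift(copy) for copy in clan]
    compliant = [c for c in clan if divergence(c) <= TOLERANCE]
    retired_total += CLAN_SIZE - len(compliant)
    if not compliant:                  # degenerate case: restart from the reference
        compliant = [list(REFERENCE)]
    while len(compliant) < CLAN_SIZE:  # replace retirees with fresh copies
        compliant.append(list(random.choice(compliant)))
    clan = compliant

print(f"{retired_total} copies retired over {CYCLES} cycles; "
      f"worst remaining divergence = {max(map(divergence, clan)):.2f}")
```

The point of the sketch is only that a copy clan could hold its values fixed at the price of continually culling drifted members - a kind of within-superorganism selection pressure that ordinary human groups cannot apply.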

Below are some more excerpts from Haidt's book:

Many animals are social: they live in groups, flocks, or herds. But only a few animals have crossed the threshold and become ultrasocial, which means that they live in very large groups that have some internal structure, enabling them to reap the benefits of the division of labor. Beehives and ant nests, with their separate castes of soldiers, scouts, and nursery attendants, are examples of ultrasociality, and so are human societies.

One of the key features that has helped all the nonhuman ultra-socials to cross over appears to be the need to defend a shared nest. [...] Hölldobler and Wilson give supporting roles to two other factors: the need to feed offspring over an extended period (which gives an advantage to species that can recruit siblings or males to help out Mom) and intergroup conflict. All three of these factors applied to those first early wasps camped out together in defensible naturally occurring nests (such as holes in trees). From that point on, the most cooperative groups got to keep the best nesting sites, which they then modified in increasingly elaborate ways to make themselves even more productive and more protected. Their descendants include the honeybees we know today, whose hives have been described as “a factory inside a fortress.”

Those same three factors applied to human beings. Like bees, our ancestors were (1) territorial creatures with a fondness for defensible nests (such as caves) who (2) gave birth to needy offspring that required enormous amounts of care, which had to be given while (3) the group was under threat from neighboring groups. For hundreds of thousands of years, therefore, conditions were in place that pulled for the evolution of ultrasociality, and as a result, we are the only ultrasocial primate. The human lineage may have started off acting very much like chimps, but by the time our ancestors started walking out of Africa, they had become at least a little bit like bees.

And much later, when some groups began planting crops and orchards, and then building granaries, storage sheds, fenced pastures, and permanent homes, they had an even steadier food supply that had to be defended even more vigorously. Like bees, humans began building ever more elaborate nests, and in just a few thousand years, a new kind of vehicle appeared on Earth—the city-state, able to raise walls and armies. City-states and, later, empires spread rapidly across Eurasia, North Africa, and Mesoamerica, changing many of the Earth’s ecosystems and allowing the total tonnage of human beings to shoot up from insignificance at the start of the Holocene (around twelve thousand years ago) to world domination today.

As the colonial insects did to the other insects, we have pushed all other mammals to the margins, to extinction, or to servitude. The analogy to bees is not shallow or loose. Despite their many differences, human civilizations and beehives are both products of major transitions in evolutionary history. They are motorboats.

The discovery of major transitions is Exhibit A in the retrial of group selection. Group selection may or may not be common among other animals, but it happens whenever individuals find ways to suppress selfishness and work as a team, in competition with other teams. Group selection creates group-related adaptations. It is not far-fetched, and it should not be a heresy to suggest that this is how we got the groupish overlay that makes up a crucial part of our righteous minds. [...]

According to Tomasello, human cognition veered away from that of other primates when our ancestors developed shared intentionality. At some point in the last million years, a small group of our ancestors developed the ability to share mental representations of tasks that two or more of them were pursuing together. For example, while foraging, one person pulls down a branch while the other plucks the fruit, and they both share the meal. Chimps never do this. Or while hunting, the pair splits up to approach an animal from both sides. Chimps sometimes appear to do this, as in the widely reported cases of chimps hunting colobus monkeys, but Tomasello argues that the chimps are not really working together. Rather, each chimp is surveying the scene and then taking the action that seems best to him at that moment. Tomasello notes that these monkey hunts are the only time that chimps seem to be working together, yet even in these rare cases they fail to show the signs of real cooperation. They make no effort to communicate with each other, for example, and they are terrible at sharing the spoils among the hunters, each of whom must use force to obtain a share of meat at the end. They all chase the monkey at the same time, yet they don’t all seem to be on the same page about the hunt.

In contrast, when early humans began to share intentions, their ability to hunt, gather, raise children, and raid their neighbors increased exponentially. Everyone on the team now had a mental representation of the task, knew that his or her partners shared the same representation, knew when a partner had acted in a way that impeded success or that hogged the spoils, and reacted negatively to such violations. When everyone in a group began to share a common understanding of how things were supposed to be done, and then felt a flash of negativity when any individual violated those expectations, the first moral matrix was born. (Remember that a matrix is a consensual hallucination.) That, I believe, was our Rubicon crossing.

Tomasello believes that human ultrasociality arose in two steps. The first was the ability to share intentions in groups of two or three people who were actively hunting or foraging together. (That was the Rubicon.) Then, after several hundred thousand years of evolution for better sharing and collaboration as nomadic hunter-gatherers, more collaborative groups began to get larger, perhaps in response to the threat of other groups. Victory went to the most cohesive groups—the ones that could scale up their ability to share intentions from three people to three hundred or three thousand people. This was the second step: Natural selection favored increasing levels of what Tomasello calls “group-mindedness”—the ability to learn and conform to social norms, feel and share group-related emotions, and, ultimately, to create and obey social institutions, including religion. A new set of selection pressures operated within groups (e.g., nonconformists were punished, or at very least were less likely to be chosen as partners for joint ventures) as well as between groups (cohesive groups took territory and other resources from less cohesive groups).

Shared intentionality is Exhibit B in the retrial of group selection. Once you grasp Tomasello’s deep insight, you begin to see the vast webs of shared intentionality out of which human groups are constructed. Many people assume that language was our Rubicon, but language became possible only after our ancestors got shared intentionality. Tomasello notes that a word is not a relationship between a sound and an object. It is an agreement among people who share a joint representation of the things in their world, and who share a set of conventions for communicating with each other about those things. If the key to group selection is a shared defensible nest, then shared intentionality allowed humans to construct nests that were vast and ornate yet weightless and portable. Bees construct hives out of wax and wood fibers, which they then fight, kill, and die to defend. Humans construct moral communities out of shared norms, institutions, and gods that, even in the twenty-first century, they fight, kill, and die to defend.

Haidt's references on this include, though are not limited to, the following:

Okasha, S. (2006) Evolution and the Levels of Selection. Oxford: Oxford University Press.

Hölldobler, B., and E. O. Wilson. (2009) The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies. New York: Norton.

Bourke, A. F. G. (2011) Principles of Social Evolution. New York: Oxford University Press.

Wilson, E. O., and B. Hölldobler. (2005) “Eusociality: Origin and Consequences.” Proceedings of the National Academy of Sciences of the United States of America 102:13367–71.

Tomasello, M., A. Melis, C. Tennie, E. Wyman, E. Herrmann, and A. Schneider. (Forthcoming) “Two Key Steps in the Evolution of Human Cooperation: The Mutualism Hypothesis.” Current Anthropology.

Comments:

How might a co-operative explosion happen?

Rootkits for the human brain. Large groups of people turned into a human botnet.

"Botnet" is offensive, we prefer to be called Catholic.

(FWIW various LessWrongers who have studied the issue don't agree with Eliezer's bias against arguments from group selection. (Um, me, for example; what finally convinced me was a staggeringly impressive chapter (IIRC 'twas "The Coevolution of Institutions and Preferences") from Microeconomics: Behavior, Institutions, and Evolution, though I also remember being swayed by various papers published by NECSI.) I'd be very interested in any opinions from well-read biologists or economists.)

The analogies between biological and social evolution are limited. Not only does group selection work in social evolution, but social evolution is Lamarckian in that it retains acquired traits. So you need to be careful when reasoning from one to another; I think that is one reason people keep trying to "justify" group selection in biology.

The "new" group selection (e.g. here and here) works with both organic and cultural evolution.

Dogs pass on fleas they acquired during their lifespan to their offspring - much as humans pass on ideas they acquired during their lifespan to their offspring. Both the fleas and the ideas can mutate inside their hosts - and those changes are passed on as well.

The differences between organic and cultural evolution are thus frequently overstated. Critically, Darwinian evolutionary theory applies to both realms.

except it's more like viruses than fleas: significant amounts of evolution can happen within a single host generation, and entirely different species can cross-pollinate if they end up within the same host.

Depends on yer memes - but sure, often more like viruses.

"Species" is one of the more tricky areas - if there's much interbreeding, then maybe it's not two species. It isn't just memes, though - bacteria and viruses exhibit this too, as you say.

Yea, I oversimplified a bit.

Not only does group selection work in social evolution, but social evolution is Lamarckian in that it retains acquired traits

Isn't modern opinion that vanilla natural selection is also non-negligibly Lamarckian? (I suppose it's very possible that the sources I've read over-stated the Lamarckian factors.)

When you have a parenthetical inside a parenthetical inside a parenthetical, is it time to break out the square brackets?

gjm:

No, it's time to take out some of the round ones.

I find that even the trivial heuristic "delete all parentheses" usually improves what I write.

(But it's no fun if you can't construct [all kinds of {silly }] elaborate nested parentheses [in your comments {in case that wasn't clear}])

You forgot a period.

Well, that's the danger with using parentheses.

The heuristic I generally use is "use parentheses as needed, but rewrite if you find that you're needing to use square brackets." Why? Thinking about it, I believe this is because I see parentheses all the time in professional texts, but almost never parentheticals inside parentheticals.

But as I verbalize this heuristic, I suddenly feel like it might lend the writing a certain charm or desirable style to defy convention and double-bag some asides. Hmm.

(A related heuristic for those with little time is to assume that lots of parentheses is correlated with lack of writing ability is correlated with low intelligence is correlated with inability to contribute interesting ideas, thus allowing you to ignore people that (ab-)use lots of parentheses. I admit to using this heuristic sometimes.)

gjm:

I find that people who use a lot of parentheses tend to be intelligent, and I think this screens off the alleged inference from lots of parentheses to inability to contribute interesting ideas.

I don't know whether I'm right in thinking there's a parentheses/intelligence correlation, but if I am there's a reasonably plausible explanation. Why would someone use lots of parens? Because when they think about something, a bunch of other related things occur to them too and they want to avoid oversimplifying. Of course it's even better to think of the related things and then find ways to express yourself that don't depend on overloading your prose with parentheses, but most people who use few parentheses aren't in that category.

I don't know whether I'm right in thinking there's a parentheses/intelligence correlation, but if I am there's a reasonably plausible explanation. Why would someone use lots of parens?

((Well), that's (easy).) (((Heavy) users (of (parentheses))) tend to (be ((LISP) weenies))), and ((learning (LISP)) gives ((a) boost (of) ((15) to (30) (IQ) points), (at least))).

(First impression: You're talking about the 130 vs. 145 distinction whereas I'm talking about the 145 vs. 160 distinction (which you characterize as "even better"). (Can-barely-stand-up drunk (yet again!), opinions may or may not be reflectively endorsed, let alone right.))

gjm:

Yes, it's plausible that we're talking about different distinctions. But even in the range 145-160 I am very, very unconvinced that using fewer parens is a good sign of intelligence. Perhaps you have some actual evidence? Unfortunately, people with an IQ of 160 are scarce enough that it'll probably be difficult to distinguish a real connection from a spurious one where it just happens that the smartest people are also being careful about writing style.

(Increasingly contemptuous of your too-drunk-to-stand signalling extravaganza; my comments may be distorted in consequence.)

Yes, I think I have evidence -- of about 5 people I know of 160+ IQ, none use many parentheses, whereas I know of a greater than 1 in 6 fraction in the immediate predecessor-S.D. that fall into the parenthesis-(ab)using category. Of course, even I myself don't put much faith in that data.

(Is my drunkenness-signaling (failed) signaling or (failed) counter-signaling (ignoring externalities in the form of diminished credibility)? I can't tell.)

gjm:

Is treating "data" as plural rather than singular correlated with difference between high and very high IQs in your experience? :-)

(I wonder whether I'm evidence one way or another here. I'm somewhere around 150, I think, and I used to use an awful lot of parens and have forced myself not to because I think not doing so is better style. But I'm more concerned with writing style than many other people I know who are about as clever as I am.)

((Counter-signalling is a special case of signalling. It isn't necessarily (failed) just because I don't like it.))

((()))

Is treating "data" as plural rather than singular correlated with difference between high and very high IQs in your experience? :-)

In my experience that seems to correlate a lot more with conscientiousness and caring about writing style after screening off intelligence. (Also: fuck!—I hate when I forget to treat "data" as plural.)

I used to use an awful lot of parens and have forced myself not to because I think not doing so is better style.

Same here, at least when it comes to writing for a truly general audience or for myself.

(Side note: another thing that confuses me is that intelligence doesn't seem to me to be overwhelmingly correlated with spelling ability. Not quite sure what to make of this; thus far I've attributed it to unrelated selection effects on who I've encountered. Would be interested in others' impressions.)

I have found entirely the opposite; it's very strongly correlated with spelling ability - or so it seems from my necessarily few observations, of course. I know some excellent mathematicians who write very stilted prose, and a few make more grammatical errors than I'd have expected, but they can all at least spell well.

I have the opposite impression, but now that I have that correlation it's hard to make further unbiased observations.

I know many very intelligent good spellers, several very intelligent mediocre spellers, and one or two very intelligent, apparently incorrigibly atrocious spellers. I don't know any moderate-intelligence good spellers; I know a few moderate-intelligence atrocious spellers, and quite a few moderate-intelligence mediocre spellers. I don't know very many dumb people socially, and mostly don't know how good their spelling is, as they don't write much. People I met on the Internet don't really count, as I filter too much on spelling ability to begin with.

(Since you two seem to be mostly using the mentioned IQ scores as a way to indicate relative intelligence, rather than speaking of anything directly related to IQ and IQ tests, this is somewhat tangential; however, Mr. Newsome does mention some actual scores below, and I think it's always good to be mindful when throwing IQ scores around. So when speaking of IQ specifically, I find it helpful to keep in mind the following.

There are many different tests, which value scores differently. In some tests, scores higher than about 150 are impossible or meaningless; and in all tests, the higher the numbers go the less reliable [more fuzzy] they are. One reason for this, IIRC, is that smaller and smaller differences in performance will impact the result more, on the extreme ends of the curve; so the difference in score between two people with genius IQs could be a bad day that resulted in a poorer performance on a single question. [There is another reason, the same reason that high enough scores can be meaningless; I believe this is due to the scarcity of data/people on those extreme ends, making it difficult or impossible to normalize the test for them, but I'm not certain I have the explanation right. I'm sure someone else here knows more.])

(Hence my use of parentheses: it's a way of saying, "you would be justified in ignoring this contribution". Nesov does a similar thing when he's nitpicking or making a tangential point.)

That would ruin the aesthetic.

No, that time passed when you merely had a single parenthetical inside a parenthetical. But when you have a further parenthetical inside the former two, is it then time to break out the curly brackets?

The "new" group selection (e.g. here, here and here) has been demonstrated to be pretty-much equivalent to the standard and uncontroversial inclusive fitness framwork in a raft of papers.

Here's Marek Kohn writing in 2008:

There is widespread agreement that group selection and kin selection — the post-1960s orthodoxy that identifies shared interests with shared genes — are formally equivalent.

That's not to say that group selection is useless - since it involves different models and accounting methods.

There are still a few dissenters. E.g. Nowak, Tarnita and Wilson (2010) apparently disagree - saying:

Group selection models, if correctly formulated, can be useful approaches to studying evolution. Moreover, the claim that group selection is kin selection is certainly wrong.

These folk apparently don't grok the topic too well.

For a more modern and knowledgeable group selection critique, see Stuart West on video, covering much the same topic.

[anonymous]:

It's important to remember that a given quantity of intelligence / brain matter / computational power is much more powerful as a single organism than it is as a collection of them. There are problems that a human can solve easily that five cats never could, no matter how they cooperated.

[anonymous]:

You say that now, but just wait until someone tries to implement a UTM on a colony of cats in a highly-structured environment.

I'm fairly certain this has been attempted in Dwarf Fortress.

Well, to be sure, you are taking that out of context: you are comparing CAT intelligence with HUMAN intelligence, which is not a fair comparison. Compare cat intelligence in groups with cat intelligence in isolation - that would be fair. "Standing on the shoulders of giants": no matter how clever or superintelligent you might be, you cannot invent everything all by yourself. The keyboard you type on, the monitor you use, and the internet itself are examples of a co-operative explosion. Each of us creates a piece of the puzzle that becomes the big picture; none of us is capable of pulling off the internet or space travel alone. We need the research of millions of individuals, piled up over centuries, to make this happen, and I guess that is what the author is trying to convey here. He is not comparing Artificial Narrow Intelligence (animal-level) vs Artificial General Intelligence (human-level) vs Artificial Intelligence (perfected human intelligence) vs Artificial Super Intelligence (exponentially capable intelligence). This blog/forum is itself a live example of a co-operative explosion: it evolved over a few years from ideas contributed by thousands of intelligent individuals at different points in time, all with different points of view, and the net result is a beautiful collection of knowledge. Imagine hiring a few people and asking them to build this blog - how foolish that proposition would sound!

Haidt's argument is that color politics and other political mind-killingness are due to a set of adaptations that temporarily let people merge into a superorganism and set individual interest aside.

This seems more likely to be part of a general set of adaptations and norms for being nice to those like you (often kin or tribe members who you have reciprocal relationships with) and not so nice to strange-looking outsiders, who are not in reciprocal relationships either with you, or with other group members - and are thus poorly motivated to cooperate with you. Such explanations are based on kin selection and reciprocity - and typically make little or no mention of group selection or "superorganisms".

There's a field known as "tag-based cooperation" - which is all about the game-theoretic basis of color politics. Here's one of the papers that launched that field.

For some who were less impressed, here's my blog post on Jonathan Haidt's talk - and here's Jerry Coyne's response - on the same topic.

Interesting, thanks. I'm revising my faith in Haidt's theories downwards.

EDIT: Just noticed, Haidt defends himself in the comments.

EDIT2: David S. Wilson comments on Coyne's post.

EDIT: Just noticed, Haidt defends himself in the comments.

From there:

The debate is whether group-level selection (GS) played ANY role, or whether everything about our moral/political/religious lives can be explained straightforwardly, without contortions, at the level of the individual.

This way of framing the debate just seems daft to me. Individuals care for others. In particular, they care for their kin. We have known the details of why individuals care for kin since the 1960s. It should not be group selection vs individual selection - it should be group selection vs kin selection - and kin selection basically won this battle back in the 1980s. That is not to say that group selection is wrong, it's just not a favoured set of models and terminology.

EDIT2: David S. Wilson comments on Coyne's post.

FWIW, that's about a different post by Coyne, from some time back.

FWIW, that's about a different post by Coyne, from some time back.

Ah, good catch.

Glenn Gray: Many veterans will admit that the experience of communal effort in battle has been the high point of their lives. "I" passes insensibly into a "we," "my" becomes "our" and individual faith loses its central importance. I believe that it is nothing less than the assurance of immortality that makes self-sacrifice at these moments so relatively easy. I may fall, but I do not die, for that which is real in me goes forward and lives on in the comrades for whom I gave up my life.

...

Incidentally, this provides an easy rebuttal to the "corporations are already superintelligent" claim - while corporations have a variety of mechanisms for trying to provide their employees with the proper incentives, anyone who's worked for a big company knows that the employees tend to follow their own interests, even when these conflict with those of the company. It's certainly nothing like the situation within a cell, where the survival of each organelle depends on the survival of the whole cell. If the cell dies, the organelles die; if the company fails, the employees can just get a new job.

This seems to be a testable claim: Are military groups more efficient than companies at jobs companies typically do, given equivalent money/resources? For extra credit, do the same test for life-threatening jobs in which cooperation is paramount, such as coal mining, or working on overhead power lines. I don't think this is the case; otherwise the military wouldn't contract out such jobs to private-sector businesses.

Police corps and fire departments may qualify here, since they do exhibit some similarity with the military. But they occupy small niches - they surely do not justify a claim that "superorganisms" are always more efficient.

This seems to be a testable claim: Are military groups more efficient than companies at jobs companies typically do, given equivalent money/resources?

How do you compare the efficiency of people doing different jobs?

[anonymous]:

At your own peril.

while corporations have a variety of mechanisms for trying to provide their employees with the proper incentives, anyone who's worked for a big company knows that the employees tend to follow their own interests, even when these conflict with those of the company. It's certainly nothing like the situation within a cell, where the survival of each organelle depends on the survival of the whole cell. If the cell dies, the organelles die; if the company fails, the employees can just get a new job.

These observations might not hold for uploads running on hardware paid for by the company, which would give the company+upload-tech combination superior cooperation options compared to current forms of collaboration. Also, company-owned uploads would have most of their social network inside the company, and in particular not with uploads owned by competitors. Hence the natural group boundary would not be "uploads" versus "normals", but company boundaries.

Hence the natural group boundary would not be "uploads" versus "normals", but company boundaries.

Or maybe governments - if they get their act together.

Dividing your country into competing companies hardly seems very efficient.

Why yes, cooperation is one way of making an optimization process more powerful.

Robin has a post that in part addresses the question of how much value sharing can improve cooperation:

On the general abstract argument, we see a common pattern in both the evolution of species and human organizations — while winning systems often enforce substantial value sharing and loyalty on small scales, they achieve much less on larger scales. Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world. Value coordination seems hard, especially on larger scales.

This is not especially puzzling theoretically. While there can be huge gains to coordination, especially in war, it is far less obvious just how much one needs value sharing to gain action coordination. There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination. It is also far from obvious that values in generic large minds can easily be separated from other large mind parts. When the parts of large systems evolve independently, to adapt to differing local circumstances, their values may also evolve independently. Detecting and eliminating value divergences might in general be quite expensive.

In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.

My own intuition is that high fidelity value sharing (the kind made possible by mind copying / resets) would be a major breakthrough, and not just an incremental improvement as Robin suggests.

[anonymous]:

My own intuition is that high fidelity value sharing (the kind made possible by mind copying / resets) would be a major breakthrough, and not just an incremental improvement as Robin suggests.

Wouldn't the indexicality of human values lead to Calvin problems, if that's the kind of mind you're copying?

[This comment is no longer endorsed by its author]

My own intuition is that high fidelity value sharing (the kind made possible by mind copying / resets) would be a major breakthrough, and not just an incremental improvement as Robin suggests.

We do have high fidelity copying today. We can accurately copy anything we can represent as digital information - including values. While we can copy values, one problem is that we can't easily convince people to "install" them. Instead the values of others often get rejected by people's memetic immune system as attempts at manipulation.

If we can copy values, or represent them as digital information, I haven't heard about it.

The closest thing I've seen is tools for exporting values into an intersubjective format like speech, writing, art, or behavior. As you point out, the subsequent corresponding import often fails... whether that's because of explicit defense mechanisms, or because the exported data structure lacks key data, or because the import process is defective in some way, or for some other reason, is hard to tease out.

Maybe you mean something different from me by the term 'values'. The values I was referring to are fairly simple to write down. Many of them are so codified in legal systems and religious traditions.

If I tell you that I like mulberries more than blackberries, then that's information about my values represented digitally. The guts of the value information really is in there. Consequently, you can accurately make predictions about what I will do if presented with various food choices - without actually personally adopting my values.
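To make the disagreement concrete, here is roughly the kind of digital value information being claimed - a minimal sketch with made-up items, in which a stated preference ordering, encoded as plain data, lets an observer predict choices without adopting the values:

```python
# Made-up items and ordering; the point is only that stated preferences,
# once written down, are copyable data that supports prediction.

stated_preferences = ["mulberries", "blackberries", "raspberries"]  # best first
rank = {item: i for i, item in enumerate(stated_preferences)}

def predict_choice(options):
    """Predict the pick: the option ranked highest in the stated ordering,
    or None where the recorded values are silent."""
    known = [o for o in options if o in rank]
    return min(known, key=rank.__getitem__) if known else None

print(predict_choice(["blackberries", "mulberries"]))   # -> 'mulberries'
print(predict_choice(["raspberries", "blackberries"]))  # -> 'blackberries'
print(predict_choice(["durian", "jackfruit"]))          # -> None
```

The third call is where the objection in the reply below bites: the encoding predicts nothing outside the few comparisons actually recorded, which is what makes it low-fidelity.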

Yeah, we apparently mean different things. To my mind the statement "I like mulberries more than blackberries" is not even close to being a high-fidelity copy of my relative preferences for mulberries and blackberries; I could not reconstruct the latter given only the former.

I would classify it as being value information which can be accurately copied. I never meant to suggest that it was an accurate copy of what was in your mind. For instance, your mental representation could be considered to include additional information about what mulberries and blackberries are - broadly related to what might be found in an encyclopedia of berries. The point is that we can represent people's values digitally - to the point where we can make quite good predictions about what choices they will make under controlled conditions involving value-derived choices. Values aren't especially mysterious, they are just what people want - and we have a big mountain of information about that which can be represented digitally.

You mean "values aren't especially mysterious", I expect.

I agree that they're not mysterious. More specifically, I agree that what it means to capture my value information about X and Y is to capture the information someone else would need in order to accurately and reliably predict my relative preferences for X and Y under a wide range of conditions. And, yes, a large chunk of that information is what you describe here as encyclopedic knowledge.

So for (X,Y)=(mulberry, blackberry) a high-fidelity copy of my values, in conjunction with a suitable encyclopedia of berries, would allow you to reliably predict which one I would prefer to eat with chocolate ice cream, which one I would prefer to spread as jam on rye bread, which one I would prefer to decorate a cake with, which one I would prefer to receive a pint of as a gift, how many pints of one I'd exchange for a pint of the other, etc., etc., etc.

Yes?

Assuming I've gotten that right... so, when you say:

We do have high fidelity copying today. We can accurately copy anything we can represent as digital information - including values
...do you mean to suggest that we can, today, create a high-fidelity copy of my values with respect to mulberries and blackberries as described above?

(Obviously, this is a very simple problem in degenerate cases like "I like blackberries and hate mulberries," but that's not all that interesting.)

If so, do you know of any examples of that sort of high-fidelity copy of someone's values with respect to some non-degenerate (X,Y) pair actually having been created? Can you point me at one?

I can't meet your "complex value extraction" challenge. I never meant to imply "complete" extraction - just that we can extract value information (like this) and then copy it around with high fidelity. Revealed preferences can be good, but I wouldn't like to get into quantifying their accuracy here.

OK.
I certainly agree that any information we know how to digitally encode in the first place, we can copy around with high fidelity.
But we don't know how to digitally encode our values in the first place, so we don't know how to copy them. That's not because value is some kind of mysterious abstract ethereal "whatness of the if"... we can define it concretely as the stuff that informs, and in principle allows an observer to predict, our revealed preferences... but because it's complicated.
I'm inclined to agree with Wei_Dai that high-fidelity value sharing would represent a significant breakthrough in our understanding of and our ability to engineer human psychology, and would likely be a game-changer.

But we don't know how to digitally encode our values in the first place, so we don't know how to copy them.

Well, we do have the idea of revealed preference. Also, if you want to know what people value, you can often try asking them. Between them, these ideas work quite well.

What we can't do is build a machine that optimises them - so there is something missing, but it's mostly not value information. We can't automatically perform inductive inference very well, for one thing.

I suspect I agree with you about what information we can encode today, and you seem to agree with me that there's additional information in our brains (for example, information about berries) that we use to make those judgments which revealed preferences (and to a lesser extent explicitly articulated preferences) report on, which we don't yet know how to encode.

I don't really care whether we call that additional information "value information" or not; I thought initially you were claiming that we could in practice encode it. Thank you for clarifying.

Also agreed that there are operations our brains perform that we don't know how to automate.

This sounds like a semantic quibble to me. Okay, maybe the main problem is not in copying but in "installing", but wouldn't mind copying effectively make "installation" much easier, as well?

It wasn't intended as a semantic quibble - the idea was more to say: is there really a "major breakthrough" here? If so, what does it consist of? I was arguing against it being "high fidelity value sharing".

Mind copying would indeed bypass the "installation" issue.