An article in a recent issue of Science (Elisa Thébault & Colin Fontaine, "Stability of ecological communities and the architecture of mutualistic and trophic networks", Science 329, Aug 13 2010, p. 853-856; free summary here) studies two kinds of ecological networks: trophic (predator-prey) and mutualistic (in this case, pollinators and flowers). It examines the effects of two properties of networks: modularity (the presence of small, highly-connected subsets that have few external connections) and nestedness (the likelihood that species X has the same sort of interaction with multiple other species). (It's unfortunate that the authors never define modularity or nestedness formally, but the informal definitions are still useful. I'm going to call nestedness "sharing", since they don't claim their definition implies nesting one network inside another.) They measured the impact of different degrees of modularity and nestedness, in trophic vs. mutualistic networks, on persistence (the fraction of species still alive at equilibrium) and resilience (1/time to return to equilibrium after a perturbation). They used both simulated networks and data from real-world ecological networks.
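Since the paper never gives formal definitions, here's a toy version of "sharing" of my own devising (not the paper's metric): the mean Jaccard overlap of partner sets across all species pairs. A fully modular network scores low; a fully nested one scores high. All names here are hypothetical.

```python
# Toy "sharing" metric (my own, not the paper's): mean Jaccard overlap
# of partner sets over all pairs of species.
def sharing(partners):
    """partners: dict mapping species -> set of partner species."""
    species = list(partners)
    overlaps = []
    for i, a in enumerate(species):
        for b in species[i + 1:]:
            union = partners[a] | partners[b]
            if union:
                overlaps.append(len(partners[a] & partners[b]) / len(union))
    return sum(overlaps) / len(overlaps) if overlaps else 0.0

# A "modular" network: two isolated compartments with no cross-links.
modular = {"A": {"x"}, "B": {"x"}, "C": {"y"}, "D": {"y"}}
# A "nested"/sharing network: every species visits the same partners.
nested = {"A": {"x", "y"}, "B": {"x", "y"}, "C": {"x", "y"}, "D": {"x", "y"}}
```

On these examples the modular network scores 1/3 (only within-compartment pairs overlap) and the nested one scores 1, which matches the intuition that modularity and sharing pull in opposite directions.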
What they found is that, in trophic networks, modularity is good (it increases persistence and resilience) and sharing is bad; while in mutualistic networks, modularity is bad and sharing is good. Furthermore, in trophic networks, extinctions tend to make the network more modular and less sharing over time; in mutualistic networks, the opposite occurs.
The commonsense explanation is that, if species X is exploiting species Y (trophic), the interaction decreases the health of species Y; so having more exploiters of Y is bad for both X and Y. OTOH, if species X benefits from species Y, X gets a secondhand benefit from any mutually-beneficial relationships that Y has; and if Y also benefits from X (mutualistic), then neither X nor Y will adapt to prevent a third species Z from also forming a mutualistic relationship with Y. (The theory does not address a mixture of trophic and mutualistic interactions in a single network.)
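The asymmetry between the two interaction types is easy to see in a toy two-species model. This is my own sketch, not the paper's model: each species grows logistically, plus a term for the interaction, positive if it benefits from the other species and negative if it is exploited.

```python
# Hypothetical two-species dynamics (my own toy model, not the paper's):
#   dX/dt = X * (1 + s_xy * Y - X)
# where s_xy > 0 means X benefits from Y, s_xy < 0 means X is harmed by Y,
# and the -X term is self-limitation. Simple Euler integration.
def simulate(s_xy, s_yx, steps=2000, dt=0.01):
    x = y = 0.5
    for _ in range(steps):
        dx = x * (1.0 + s_xy * y - x)
        dy = y * (1.0 + s_yx * x - y)
        x = max(x + dt * dx, 0.0)
        y = max(y + dt * dy, 0.0)
    return x, y

alone = simulate(0.0, 0.0)       # no interaction: each settles near 1.0
trophic = simulate(0.5, -0.5)    # X exploits Y: Y ends below its solo level
mutual = simulate(0.5, 0.5)      # mutualism: both end above their solo levels
```

In the trophic run the exploited species equilibrates around 0.4, below its stand-alone level of 1.0, so adding more exploiters of Y would indeed hurt everyone who depends on Y; in the mutualistic run both species settle around 2.0, so extra mutualists of Y are a free bonus to X.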
The effect is strong - see this figure:
This shows that, when nodes have exploitative (trophic) relationships, and you simulate evolution starting from a random network, the network almost always becomes more modular and less sharing over time; while the opposite occurs when nodes have mutually-beneficial relationships. (The few cases along the line y=x are, I infer, not cases where this effect was weak, but cases where the initial random network happened to be one or two species away from a local equilibrium.)
Armed with this knowledge, we can look at the structure of different cultures, governments, and religions, and guess whether they're likely to be exploitative or mutualistic. Feudalism is an extremely hierarchical, compartmentalized social structure, in which every person has one trophic relationship with one superior. We can look at its org chart and predict that it's exploitative, without knowing anything more about it. The less-hierarchical, loopy org chart of a democracy is more compatible with mutualistic relationships - at least at the top. Note that I'm not talking about the directionality of control in the relationships, as is usual when discussing democracy; I'm talking about the mere presence of multiple relationships per party. The Catholic church has a hierarchical organization, and is perhaps not coincidentally richer, relative to its members' incomes, than any Protestant church - except for the Mormons, with assets of about $6000/member, whose organizational structure I know little about (read this if interested). I do know that the Mormon church historically combined church and state, thus halving the number of power relationships its citizens participated in.
The governmental structure of a democracy is not dramatically different from the structure of a monarchy. What's really different is the economic structure of a free market, with many more shared relationships when compared, for instance, to monopolistic medieval economies, or mercantilistic colonial economies. It may be that the free market, not democracy, is responsible for our freedom.
The employer-employee relationship appears trophic. Employees are typically forbidden from working for more than one employer. Consultants, on the other hand, have many clients. So do doctors and lawyers. Not surprisingly, all of them get paid more per hour than employees.
Even if you're an employee, you can compare the internal structures of different companies. Within a company of selfish agents, each person would ideally like their own relationships with others to be exploitative, but all other relationships to be mutualistic. The company owner would like to exploit the management and the workers, while having management and workers interact mutualistically; the management, in turn, would prefer to exploit the workers. You may be able to look at the internal structure of a company and see how far down the exploitative pattern penetrates. If it's a hierarchy of private fiefdoms all the way down, beware.
Any artificial intelligence will have internal structure. Artificial intelligences, unlike humans, do not come in standard-sized reproductive units, walled off computationally; therefore, there might not be cleanly-defined "individuals" (literally, non-divisible people) in an AI society. But the bulk of the computation, and hence the bulk of the potential consciousness, will be within small, local units (due to the ubiquity of power-law distributions, the efficiency of fractal transport and communication networks, and the speed of light). So it is important to consider the welfare of these units when designing AIs - at least, if we intend our initial designs to persist.
A hierarchical AI design is more compatible with exploitative relationships - even if control is bidirectional. Again, control is not the issue; the mere presence of links is. A decentralized, agent-based AI - in the sense of agent-based software, often modelled on the free market, with software agents bidding on tasks - would be more amenable to mutualistic relationships.
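To make the contrast concrete, here's a minimal sketch of market-style task allocation among software agents. The design, names, and cost model are all hypothetical; the point is only that every agent can bid on every task, so each agent maintains many relationships rather than answering to a single superior.

```python
# Minimal sketch of market-style task allocation (hypothetical design).
class Agent:
    def __init__(self, name, costs):
        self.name = name
        self.costs = costs  # task type -> this agent's cost to perform it

    def bid(self, task):
        # An agent bids its own cost; it declines tasks it can't do.
        return self.costs.get(task)

def allocate(task, agents):
    """Award the task to the lowest bidder. Every agent may bid on every
    task, so the relationship graph is many-to-many, not a hierarchy."""
    bids = [(a.bid(task), a) for a in agents if a.bid(task) is not None]
    return min(bids, key=lambda b: b[0])[1] if bids else None

agents = [Agent("vision", {"classify": 1.0, "plan": 5.0}),
          Agent("planner", {"plan": 1.0, "classify": 4.0})]
winner = allocate("plan", agents)  # the cheaper bidder wins
```

Contrast this with a hierarchical design, where each module receives tasks from exactly one parent - the one-superior org chart of feudalism.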
A final caution
The work cited shows that having exploitative vs. mutualistic interactions causes compartmentalized vs. highly-shared networks to arise. It does not show the converse: that constructing compartmentalized or highly-shared networks causes exploitative or mutualistic interactions, respectively, to arise. That would be helpful to know, but remains to be demonstrated. For intelligent free agents, one argument that it would is that an agent with many relationships can cut off any partner that becomes exploitative. (This might not be true within an AI.)
Finally, as just noted, plants and insects are not intelligent agents; and AI components might not be completely free agents. Each of the domains above has important differences from the others, and results might not transfer as easily as this post suggests.