Some people say that the difference between Republicans and Democrats is that Republicans are conservative in the sense of opposing change, while Democrats are liberal in the sense of promoting change.  But this isn't true - both parties want change; neither especially cares how things were done in the past.

Some people say that Republicans are fiscally conservative, while Democrats are fiscally liberal.  But this isn't true.  Republicans and Democrats both run up huge deficits; they just spend the money on different things.

Some people say Democrats are liberal in the sense of favoring liberty.  But this isn't true.  Republicans want freedom to own guns and run their businesses as they please, while Democrats want the freedom to have abortions and live as they please.

Someone - it may have been George Lakoff - observed that Republicans want government to be their daddy, while Democrats want government to be their mommy.  That's the most helpful distinction that I've heard.  Republicans want a government that's stern and protects them from strangers.  Democrats want a government that's forgiving and takes care of all their needs.

I was thinking about this because of singletons.  Some people are in favor of creating a singleton AI to rule the universe.  I assume that, as with party affiliation, people choose a position for emotional rather than rational reasons.  So which type of person would want a singleton - a daddy-seeking Republican, or a mommy-seeking Democrat?

I think the answer is, Both.  Republicans and Democrats would both want a singleton to take care of them; just in different ways.  Those who don't want a singleton at all would be Libertarians.

Regardless of whether you think a singleton is a good idea or a bad idea - does this mean that Americans would overwhelmingly vote to construct a singleton, if they were given the choice?

And would the ideas about how to design that singleton break down along party lines?


it may have been George Lakoff - observed that Republicans want government to be their daddy, while Democrats want government to be their mommy.

It was Lakoff; it's a central point of "Don't Think of an Elephant" and the more academic book that it's based on. And it's "Strict Father" and "Nurturant Parent"; the Democratic (excuse me, "progressive") view is (as he says, at least) supposed to be gender neutral.

It's also an interesting case of Bullseye bias. He has a neat theory that sounds pretty good. Then he picks political positions that support that view and frames them accordingly. There's no particular reason we would expect the strict father vs. the nurturant parent to have a particular view on abortion rights (e.g. the nurturant parent could value the unborn child; the strict father could insist the child correct her mistakes), other than that he applies them to D's and R's, and we expect D's and R's to have particular views on abortion rights. Oops, I mean "Progressives" and Republicans.

In other words, it sounds like a good theory because it's framed effectively; it doesn't really offer predictive power. I can't see it explaining, for example, environmental attitudes, arctic drilling, ethanol subsidies, agricultural subsidies generally, tort reform, military spending, free speech, gay rights, or many other issues. More precisely, if you didn't know what the political views already were, and you were asked to tell the attitude of a nurturant parent vs. a strict father with respect to, say, agricultural subsidies, you could come up with a convincing explanation of why each parent-type would take either side. The theory lacks predictive value, but it's framed effectively enough to sound convincing. There's not nothing to it; there's just a lot less than Lakoff wants there to be.

You don't think it explains environmental attitudes, military spending, or gay rights?

Between Lakoff's idea, and the idea that Republicans favor the rich and Democrats favor the poor, I think you can explain a lot of the attitudes of the parties. Neither idea on its own is sufficient.

-Why wouldn't a nurturant parent want someone else's kid to be fighting in Iraq, so we don't have to fight here?

-Why would a strict father encourage his son to recklessly destroy natural resources at the expense of future generations?

-Why would a nurturant parent want to encourage their child to choose an alternative lifestyle that could cause them to suffer degradation and discrimination for much of their adult life?

-Why must a strict father take serious issue with which gender his child is attracted to?

Unrelated to those three issues - why would a strict father oppose his child getting a decent science education, and why oppose a decent biology education in particular? Why wouldn't a nurturant parent? Why would a nurturant parent want to prevent his (adult) child from carrying a sidearm to protect himself? Why wouldn't a strict father want to prevent this?

When we hear "strict father" and "nurturant parent" those terms make intuitive sense to us, and we then filter information within that framework. Because the terms are vaguely defined, it's easy for us to interpret new information as fitting within this framework. Because there exists a way of describing nearly any view as belonging to either a nurturant parent or a strict father, this framework has no predictive power and very limited descriptive power.

When we are asked to pair "strict father" and "gay rights," the availability heuristic gives us an image of a man with a military haircut throwing his gay son out of the house, or something roughly analogous. With "nurturant parent" and "gay rights," the availability heuristic gives us an image of someone encouraging their gay son to be himself. I would imagine there are plenty of parents we would call nurturant who strongly discourage their kids from "being gay," and there are probably many fathers we would call strict who have no problem with their sons or daughters being gay. The discrepancy is probably much larger for something like environmentalism or health care, and there probably isn't much of a pattern for agricultural subsidies.

There is nothing about strictness or nurturance that dictates any particular political attitude. We simply associate these attitudes with people, and we associate those people with political viewpoints. The reality outside of our simplistic, uninformed mental picture is likely a whole lot more complex.

I'm pretty sure that the set of people interested in entertaining the question lean pretty heavily libertarian, and also that if in the course of creating a Friendly AI you need to make any value judgment that correlates significantly along party lines, you're almost certainly Doing It Wrong.

if in the course of creating a Friendly AI you need to make any value judgment that correlates significantly along party lines, you're almost certainly Doing It Wrong.

This is exactly what I thought when I saw the original post, but did not then have time to write.

I think saying that the set of people interested in entertaining the question would lean libertarian is evidence that this does indeed break down on party lines, as I suggested. I find it bizarre to suppose that Republicans or Democrats interested in the Singularity would want to build a singleton without even asking whether that was the right thing to do. But I lean Libertarian.

You may be right about the Doing It Wrong observation. But it's difficult to say anything meaningful about making Friendly AI if we dismiss approaches that are Doing It Wrong, because the set of approaches that are Doing It Right is minuscule or empty.

In a case like that, where you can't find the right way, it may be valuable to discuss approaches even if they are known to be wrong, in the hope that the analysis generalizes to approaches that are correct?

I think saying that the set of people interested in entertaining the question would lean libertarian is evidence that this does indeed break down on party lines, as I suggested.

No, I think the source of the correlation is merely that entertaining libertarianism and entertaining the possibility of being governed by an AI both require significant willingness to depart from the mainstream. Most people just write them both off as crazy.

Assume that the mainstream will confront the question eventually. What will they decide to do? In other words, can we predict that there is a singleton in our future, based on the predominant emotional needs that people express in their choice of political party today? That's my question.

based on the predominant emotional needs that people express in their choice of political party today

Can you really translate emotional needs into future policy? Doesn't it depend on how the policy is framed? In particular, if both sides can produce reasons for a policy (as you say here), then bipartisan support does not seem terribly more likely to me than one side's rhetoric framing the issue and the other side's reason vanishing.

Can you really translate emotional needs into future policy?

If you don't think you can do that, I advise you not to go into politics.

I think the answer is, Both. Republicans and Democrats would both want a singleton to take care of them; just in different ways. Those who don't want a singleton at all would be Libertarians.

I think no mainstream political party would be able to endorse, or convince the public to accept, "letting machines rule humans", which is how it will inevitably be framed. The US government (of whatever party) or military might try to build a singleton, but they would not tell the public about it.

Is this with or without Summer Glau's backing?

Some people are in favor of creating a singleton AI to rule the universe. [...] Republicans and Democrats would both want a singleton to take care of them; just in different ways. Those who don't want a singleton at all would be Libertarians.

This quote (especially the last sentence) shows a total lack of understanding of what "ruling the universe" by AI is about. It's not about tribal chiefs and liberty, no. You won't find a technical description of how things should be in the history books -- only crazy dogma and clumsy rules of thumb.

I think that Phil was talking about what Americans in general would say that they want from an AI. It should go without saying that what they would say would reveal fundamental misunderstandings of what they were talking about.

On the contrary, it would seem so.

There is a distinction between normative "what these people should choose" and descriptive "what would be these people's (uninformed) choice". I suspect Phil is taking both to give the same result in this post. (Phil?)

Neither of those alternatives makes any sense to me. An uninformed choice would be a random choice. What they "should" choose sounds like what they would choose if they were much better reasoners than they are.

In what sense did you mean to describe the choice of Democrats/Libertarians/etc. about a singleton? Actual informed choice? Does this choice disagree with what you think they should choose (in the sense of what they would choose if they were smarter and more informed than they can ever actually be, but with the same values, which are not necessarily equal to yours)?

An uninformed choice would be a random choice.

Nothing is random in this sense. A choice made with limited ability to reason, even if "informed", still involves a fair amount of noise. Not being reliably informed of what the question actually means is but a grade of the same problem. Thus, I meant "uninformed choice" in the sense of not being reliably the correct choice, for one reason or another.

The simple answer is that political parties are just teams, and their members only appear to agree because they're biased towards whichever policy they believe their team has chosen. To predict a party's position on a policy, you can pattern-match against existing policies, but that only works because the politicians do it too. There is no overarching theme.

I want a singleton because the alternatives seem worse.

Are you trying to make some kind of a meta-point with this post, or do you actually mean it to be read seriously and in a non-ironic way?

Seriously and non-ironic. Does it not seem of value to you to ask whether people will want a singleton? Do you have any better way of answering the question?

I was more referring to the discussion of Democrats and Republicans, which seemed especially shallow and poorly argued. For example, you argue (well, actually you state without argument) that neither especially cares how things were done in the past; but saying that they both want some changes, and taking that as evidence that there is no statistically significant difference between how those two parties think about the past and traditional norms rooted in it... well, that's just silly.

How about this not-thoroughly-researched but possibly-mostly-correct summary:

  • Republicans want 19th-century American weapons laws; Democrats want 18th-century European weapons laws.
  • Republicans want 19th-century business laws; Democrats want 18th-century business laws (strict governmental control, strong workers' guilds).
  • Republicans want 20th-century abortion law; Democrats want 1st through 19th-century abortion law.
  • Republicans want 19th-century environmental laws; Democrats want medieval environmental laws (vast nature preserves which the population must leave untouched).
  • Republicans want medieval taxation (special tax exemptions for the wealthy) or post-Augustan Roman taxation (a flat head tax); Democrats want pre-Augustan Roman taxation (taxation primarily of real estate).
  • Republicans want a 20th-century American interventionist large military; Democrats want an 18th or 19th-century American isolationist small military.

The only pattern I see is that Republicans seem to have shorter memories.

Regardless of whether you think a singleton is a good idea or a bad idea - does this mean that Americans would overwhelmingly vote to construct a singleton, if they were given the choice?

Next can we have a post that excludes black people, women or chemists?

And would the ideas about how to design that singleton break down along party lines?

It would break down along the lines of the rest of the world* nuking the @%$# out of you. And I mean this to be read seriously and in a non-ironic way. A singleton project instigated by an American public(ish) political process would not be tolerated by the other countries with the power to stop it. Even with that counterfactualized away, I would bet that (at least) one of the American military services would make a play for control before the system went critical.

With great power comes great chance that people will be desperate to stop you while they can.

  • Except us down here in Australia. We'd be saying 'Please! You can be our Mommy AND our Daddy! Just let us hang out with you.'

Next can we have a post that excludes black people, women or chemists?

I'm an American. I'm writing about extrapolations of the American psyche from American politics. Would you rather have me assume that this generalizes to the entire world?

If you feel left-out, write your own post on your country.

I'm an American. I'm writing about extrapolations of the American psyche from American politics. Would you rather have me assume that this generalizes to the entire world?

No, I don't particularly mind. But do note that assuming the rest of the world would allow the American public to build a singleton is a rather massive counterfactual. I also imagine that being in a world where the rest of the world was not a threat would have a rather significant impact on the American psyche.

If you feel left-out, write your own post on your country.

My reference to Australia (unfortunately) already says a lot about my country's psyche as it relates to this kind of topic.

I think that any powerful human group would try to stop any other group from building a singleton of any kind. (Ditto for a first-ever FAI if people appreciate its power correctly.) To allow a singleton to be built by someone else is a total, extreme, irrevocable surrender of independence.

Which is why anyone trying to build a singleton would do so (or is doing so) in secret.

Having a generally-human-Friendly singleton built by someone else is better than existential risk (I believe), even if worse than a personal singleton.

Yes. But when it's being built by someone else in a not very transparent setting, what probability do you assign to its being generally-human-Friendly instead of particular-human-Friendly towards its creators? And how do you make sure its creators don't commit fatal mistakes?

Suppose a group (foreign government, startup, etc) is building a singleton. They say it's for the benefit of all mankind and other generally nice things. They won't let you inspect it up close and don't reveal their theories & tech, for fear of competition. Even if you offer to help, they say the value of your help is too small to risk your subverting their goals. You have the choice of:

  1. Stopping them by force - use your biggest bombs, no questions asked.
  2. Conquering them, trying to recover their technology, and using it to try to build your own singleton.
  3. As above, but releasing all information and trying to make your singleton-building process 100% transparent and open to anyone who wants to participate.
  4. Ignoring them.
  5. Defending them against any attackers.

Edit: 6. Ignoring them but starting your own singleton project and trying to finish first. Variations: (6a) secret project, (6b) public but not transparent project, just like theirs, and (6c) transparent project inviting others to assist.

Anyone choosing 6 would have to present a very good reason for not choosing 2 or 3 instead, since competition is an obvious problem, winner takes all.

What do you do? I argue that most people/governments/armies will choose 1 or 2.

Yes. But

... We are straying a bit from 'any', 'any other' and 'any kind' here.

And yes, the only credible alternative to 1 and 2 would be 4 in those cases where the agent predicts their own singleton to foom first.

We are straying a bit from 'any', 'any other' and 'any kind' here.

Well, I think any group would tend to consider any other group 'not transparent and reliable enough', given the extremely high stakes involved. A development effort that most people felt was open enough would be so open that there would effectively be no separate groups.

So, in any situation where the description of "one powerful group building a singleton and another group watching them" applies, I stand by my prediction.

How about if the figurehead made the effort to signal that he is too silly for a sophisticated body to consider a threat? After all, I'd sooner put plastic bottles of water on my roof than admit that some anime-obsessed nerd was a bigger threat than me.

(Yes.) There are some people that I would not try to kill if they were getting close to creating a singleton.

I'd like to note that if you vote down this post below -5, it ought to stop appearing in Recent Posts, though I'm not sure this has been tested properly. For some reason the score goes on saying "0" rather than showing a negative number even after voted down.