Today's post, Traditional Capitalist Values, was originally published on 17 October 2008. A summary (taken from the LW wiki):


Before you start talking about a system of values, try to actually understand the values of that system as believed by its practitioners.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Entangled Truths, Contagious Lies, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


Before you start talking about a system of values, try to actually understand the values of that system as believed by its practitioners.

Dubious advice. Better advice: try to actually understand the values of that system as practiced, and look at the behaviors that are rewarded or punished by practitioners.

I think Eli has the better starting point. If you want to understand a strange philosophical system, you have to start by figuring out what its members believe, how they think about the world, and what they profess that they ought to do. Once you understand that, you'll want to go on to look at how their actual actions differ from their beliefs and how the predictions of their theories differ from reality. But if you don't start with their beliefs, you'll never be able to predict how an adherent would actually respond to any novel situation.

This creates some reference-class problems. You say "X is bad, look how Y is practicing it", and then I say "what Y does is not really X; in fact nobody has tried X yet; I am the first person who will try it, so please don't compare me with people who did something else".

In other words, if you can successfully claim that your system is original, you automatically receive a "get out of jail free" card. On the other hand, some systems really are original... or at least modified enough to possibly work differently from their old versions.

But it certainly is legitimate to look at how the (arguably not the same) system is practiced, what its failures are, and to ask its proponents: "How exactly are you planning to fix this? Please be very, very specific."

EY says that real value systems "are phrased to generate warm fuzzies in their users". If we move from phrasings to beliefs, as you and ewbrownv suggest, that's a step in the right direction in my view. And that step requires looking at actions.

Classification is challenging, sure, especially in social matters. If you want to make predictions, you are generally well advised to do clustering and pattern matching and to pay attention to base rates. If there are many who have said A, B, C, and D, and done W, X, Y, and Z, it's a good bet that the next ABCD advocate will do all or most of WXYZ too. And usually, what we're most interested in predicting is actions other than utterances. For that, past data on actions constitutes the most vital information.
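To make the clustering-and-base-rates intuition concrete, here is a minimal sketch (Python, with hypothetical belief/action labels and a made-up predict_actions helper, not anything from the original discussion): predict a new advocate's likely actions from the base rates observed in the closest-matching historical cluster of professed beliefs.

```python
from collections import Counter

# Hypothetical records: (professed beliefs, observed actions) of past advocates.
history = [
    ({"A", "B", "C", "D"}, {"W", "X", "Y"}),
    ({"A", "B", "C", "D"}, {"W", "X", "Z"}),
    ({"A", "B", "C"},      {"W", "Y"}),
]

def predict_actions(professed, history, min_rate=0.6):
    """Predict likely actions from base rates within the closest belief cluster."""
    # Cluster: the past advocates whose professed beliefs overlap the query most.
    best = max(len(professed & beliefs) for beliefs, _ in history)
    cluster = [actions for beliefs, actions in history
               if len(professed & beliefs) == best]

    # Base rate: how often each action appears within that cluster.
    counts = Counter(a for actions in cluster for a in actions)
    return {a for a, c in counts.items() if c / len(cluster) >= min_rate}

print(sorted(predict_actions({"A", "B", "C", "D"}, history)))
# ['W', 'X']: actions taken by a majority of the closest-matching cluster
```

The point of the sketch is only that past actions, not past utterances, carry the predictive weight: the professed beliefs just select the reference class.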

For that, past data on actions constitutes the most vital information.

Exactly. Which is why manipulating the past data is essential in politics: exaggerating the differences between ABCD and A'B'C'D', and/or claiming that WXYZ never happened and was just enemy propaganda. You can teach people to cluster incorrectly, if you make your clustering criteria well known.

(As an example, you can teach people to classify "national socialism" as something completely opposite and unrelated to "socialism"; and you can also convince them that whatever experiences people had with "socialism" in the past are completely unrelated to the experiences we would have with "socialism" in the future. This works even on people who had first-hand experience, and works better on people who didn't.)

Perhaps it could be useful to look at all reasonable value systems, try to extract the good parts, and put them all together.

Moving from values to applied rationality, it could also be useful to create a collection of biases by asking the rational supporters of each value system: "Which bias or lack of understanding do you find most frustrating when dealing with otherwise rational people from the opposite camp?" Filter the rational answers, and make a social rationality textbook.

For example, a capitalism proponent may be frustrated by people not understanding the relation between "what is seen and what is not seen". A socialism proponent may be frustrated by people not understanding that "the market can stay irrational longer than you can stay solvent". Perhaps not the best examples, but I hope you get the idea.

I'm frustrated by naive capitalists' failure to understand externalities.

On the other side, I'm frustrated by people failing to understand that technological progress creates new jobs in the process of destroying old ones, and that this is a net good, even though the people losing their jobs are the most visible. I suppose that's closely related to your "seen vs. unseen" point. Related: Jevons paradox.

On the other side, I'm frustrated by people failing to understand that technological progress creates new jobs in the process of destroying old ones, and that this is a net good, even though the people losing their jobs are the most visible.

Is this still considered obvious these days? The problem with it is that the new jobs that still need people to do them are getting more difficult. We seem to have actually viable self-driving cars now, which hints that needing hand-eye coordination in diverse environments no longer guarantees that a job requires a human.

If we ever get automated natural language interfaces to be actually good, that's another massive sector of human labor, customer service, that just became replaceable with a bunch of $10 microprocessors and a software license. So, do we now assure everyone that good natural language interfaces will never happen, even though self-driving cars were obviously never going to work in the real world either, except that now they appear to?

At least the people in high-abstraction knowledge work can be at peace knowing that if automation ever gets around to doing their jobs better than them, they probably won't need to worry very long about unemployment, on account of everybody probably ending up dead.


There's a lot of status quo bias here. Once upon a time, elevators and telephones had operators, but no longer.

The problem with it is that the new jobs that still need people to do them are getting more difficult.

This is an important fact, if true. There are obvious lock-in effects. For example, unemployed auto workers have skills that are no longer valued in the market because of automation. But the claim that replacement jobs are systematically more difficult, so that the newly unemployed lack the capacity to learn the new jobs, is a much stronger claim.

But the claim that replacement jobs are systematically more difficult, so that the newly unemployed lack the capacity to learn the new jobs, is a much stronger claim.

Yes. It's obviously true that useful things that are easier to automate will get automated more, so the job loss should grow from the easily automated end. The open question is how well human skill distributions and the human notion of 'difficulty' match up with what is easier to automate. It's obviously not a complete match: as a human job, bookkeeping is considered to require more skill than warehouse work, yet bookkeeping is much more easily automated than warehouse work.

Human labor in basic production (farming, mining, manufacturing) basically relies on humans coming with built-in hand-eye coordination and situational awareness that has so far been impossible to automate satisfactorily. Human labor in these areas mostly consists of following instructions, though, so get good enough machine solutions for hand-eye coordination and situational awareness in the real world, and most just-following-orders, dealing-with-dumb-matter human labor is toast.

Then there's the simpler service labor where you deal with other humans and need to model humans successfully. This is probably more difficult, AI-wise. Then again, these jobs are also less essential; people don't seem to miss the telephone and elevator operators much. Human service personnel are an obvious status signal, but if the automated solution is 100x cheaper, actual human service personnel are going to end up a luxury good, and the nearby grocery store and fast-food restaurant probably won't be hiring human servers if they can make do with a clunky automated order and billing system. In addition to being more scarce, high-grade customer service jobs at status-conscious organizations are going to require more skills than a random grocery store cashier job.

This leaves us mostly with various types of abstract knowledge work, which are generally considered the types of job that require the most skill. Also, one people-facing job sector where the above argument about replacing humans with automated systems that aren't full AIs won't work is the various security professions. There, you can't do away with modeling other humans very well and being very good at social situational awareness.

Human service personnel are an obvious status signal, but if the automated solution is 100x cheaper, actual human service personnel are going to end up a luxury good

On the other hand, the wealth said automated solutions will generate means that luxury goods will be a lot more affordable.

Edit: The downside is that this means most jobs will essentially consist of playing status games; I believe the common word for this is decadence.
