Preface: I am just noting that people seem to base their morality on some rather ill-defined intuitive notion of complexity. If you think it is not workable for AI, or something like that, such a thought clearly does not yet constitute a disagreement with what I am writing here.

More preface: The utilitarian calculus is the idea that what people value can be described simply in terms of summation. Complexity is another kind of f(a,b,c,d) that behaves vaguely like a 'sum', but is not as simple as summation. If a,b,c,d are strings in a programming language, the above expression would often be written as f(a+b+c+d), using + to mean concatenation, which is something fundamentally different from summation of real-valued numbers. But it can appear confusingly close: for a,b,c,d that don't share much information among themselves, the result will behave a lot like a function of a sum of real numbers. It will, however, diverge from that sum-like behaviour as a,b,c,d come to share more information among themselves, much as our intuitions about what is right diverge from sum-like behaviour when you start considering exact duplicates of people which only diverged a few minutes ago.
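To make this concrete, here is a rough sketch (my own toy illustration, not a formalization) that uses compressed length under zlib as a crude, computable stand-in for complexity; the four 'people' are just random byte strings, which is of course an assumption made purely for illustration:

```python
import os
import zlib

def c(x: bytes) -> int:
    # Compressed length as a crude, computable stand-in for complexity.
    return len(zlib.compress(x, 9))

# Four unrelated "people": independent random strings share no information.
independent = [os.urandom(10_000) for _ in range(4)]

# Four near-duplicates of one string: they share almost all their information.
base = os.urandom(10_000)
duplicates = [base[:i] + b'x' + base[i + 1:] for i in (10, 20, 30, 40)]

for name, group in (('independent', independent), ('near-duplicates', duplicates)):
    whole = c(b''.join(group))
    parts = sum(c(x) for x in group)
    print(name, 'whole:', whole, 'sum of parts:', parts)
# For the independent strings the two numbers are close (sum-like behaviour);
# for the near-duplicates the whole is far below the sum of the parts.
```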

It's a very rough idea, but it seems to me that a lot of common-sense moral values are based on some sort of intuitive notion of complexity. Happiness via highly complex stimuli that pass through highly complex neural circuitry inside your head seems like a good thing to pursue; happiness via wire, resistor, and battery seems like a bad thing. What makes the idea of literal wireheading and hard pleasure-inducing drugs so revolting to me is the simplicity, the banality of it. I have far fewer objections to e.g. hallucinogens (I never took any myself, but I am also an artist, and I can guess that other people may have lower levels of certain neurotransmitters, making them unable to imagine what I can imagine).

Complexity-based metrics have the property that they easily eat for breakfast huge numbers like "a dust speck in each of 3^^^3 eyes", and even infinity. The torture of a conscious being for a long period of time can easily be a more complex issue than even an infinite number of dust specks.

Unfortunately, complexity metrics like Kolmogorov complexity are noncomputable on arbitrary input, and are large for truly random values. But insofar as the scenario is specific and has been arrived at by computation, that computation's complexity sets an upper bound on the complexity of the scenario. The mathematics may also not be here yet. We have an intuitive notion of complexity in which totally random noise is not very complex, a very regular signal is not either, but some forms of patterns are highly complex.

This may be difficult to formalize. We could, of course, only define the complexities when we are informed of the properties of something, without being able to compute them for arbitrary input from scratch: if we label something as 'random numbers', the complexity is low; if it is encrypted volumes of the works of Shakespeare, then even though we wouldn't be able to distinguish it from random in practice (assuming good encryption), since we are told what it is, we can assign it higher complexity.

This also aligns with whatever it is that evolution has been maximizing on the path leading up to H. sapiens (note that for the most part, evolution's power has gone into improving bacteria; the path leading up to H. sapiens is a very special case). Maybe we for some reason try to extrapolate this [note: for example, a lot of people rank their preference of animals as food by the complexity of the animal's behaviours, which makes humans the least desirable food; we have anti-whaling treaties], maybe it is a form of goal convergence between the brain as an intelligent system and evolution (both employ hill climbing to arrive at solutions), or maybe we evolved a system that aligns with where evolution was heading because that increased fitness [edit: to address a possible comment, we have another system based on evolution - the immune system - which works by evolving antibodies using somatic hypermutation; it's not inconceivable that we use some evolution-like mechanism to tweak our own neural circuitry, given that our circuitry does undergo massive pruning in the early stages of life].

102 comments

People don't want you to be happy for complex reasons. They want you to be happy for specific reasons that just happen not to be simple.

People want you to be happy because you enjoy some piece of fine art, not because the greatest common divisor of the number of red and black cars you saw on the way to work is prime.

0Dmytry12y
What is complex about the greatest common divisor being prime? It's a laughably simple thing compared to image recognition of any kind, which is involved in appreciating a piece of fine art. I can easily write the thing that will check the GCD of the numbers of coloured cars for primality; I can't write human-level image recognition software. The latter is so bloody complex in comparison, it's not even funny.
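For illustration, here is roughly that check, minus the car counting (a sketch; the counts are assumed to be given, since the counting is where all the actual difficulty lives):

```python
from math import gcd

def is_prime(n: int) -> bool:
    # Trial division is plenty for car-count-sized numbers.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def gcd_of_counts_is_prime(red_cars: int, black_cars: int) -> bool:
    return is_prime(gcd(red_cars, black_cars))

print(gcd_of_counts_is_prime(12, 8))   # gcd = 4 -> False
print(gcd_of_counts_is_prime(15, 10))  # gcd = 5 -> True
```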
8orthonormal12y
You're not engaging with his point. By writing what he did, he was inviting you to consider the idea of a genuinely arbitrary complex concept, without the bother of writing one out explicitly (because there's no compact way to do so - the compact ways of representing complex concepts in English are reserved for the complex concepts that we do care about).
-5Dmytry12y
-1gjm12y
How are you going to count the red and black cars without image recognition?
0Dmytry12y
With a very simple kind that I can write easily, not the human-like kind that takes immense effort. Detecting a car in an image isn't as hard as it sounds. Having been shown pictures of cats and dogs or other arbitrary objects, and then telling cats and dogs apart - that's hard. Bonus points for not knowing which is which and finding out that there are two different types of item.
0gjm12y
Surely identifying cars isn't that much easier than identifying cats. I dare say it's somewhat easier; cars commonly have uniform colours, geometrical shapes, and nice hard edges. But are you really sure you could easily write a piece of software that, given (say) a movie of what you saw on your way to work, would count the number of red and black cars you saw? (Note that it needs to determine when two things in different frames are the same car, and when they aren't.)
0Dmytry12y
Well, processing a movie that was taken by eyes is somewhat difficult indeed. Still, the difficulty of free-form image recognition at human level is so staggering that this doesn't come close. Cars are recognizable with various hacks.

Complexity in morality is like consistency in truth. Let me explain.

If all of your beliefs are exactly correct, they will be consistent. But if you force all your beliefs to be consistent, there's no guarantee they'll be correct. You can end up fixing a right thing to make it consistent with a wrong thing.

Just so with morality; humans value complex things, but this complexity is a result, not a cause, and not something to strive for in and of itself.

0Dmytry12y
Good point; however, note that we call systems of consistent beliefs 'mathematics'. It is unlinked from reality, but is extremely useful as long as one understands the sort of truth that consistency provides. Consistency produces conditional truth - the truth of a statement like "if A is true, then B is true". Without mathematics, there is no improving of beliefs.

The sequences contain a preemptive counterargument to your post, could you address the issues raised there?

3lukstafi12y
I read Dmytry's post as a hint, not a solution. Since obviously pursuing complexity at "face value" would be pursuing entropy.
2Dmytry12y
Yep. It'd be maximized by heating you up to the maximum attainable temperature, or by throwing you into a black hole, depending on how you look at it.
2lukstafi12y
We can have a low-information reference class with instances of high entropy, the "heat soup". But then, picking a reference class is arbitrary (we can contrive a complex class of heat soup flavors).
-2Dmytry12y
I don't like EY's posts about AI. He's not immune to the sunk cost fallacy, and the worst form of sunk cost fallacy is when one denies outright (with a long handwave) any possibility of a better solution, having sunk the cost into the worse one. Ultimately, if the laws of physics are simple, he's just flat out factually wrong that morality doesn't arise from simple rules. His morality arose from those laws of physics, and insofar as he's not a Boltzmann brain, his values aren't incredibly atypical. edit: To address it further: he does raise a valid point that there is no simple rule. The complexity metrics, though, are by no means a simple 'rule'; they are incomputable and thus aren't even a rule.
9cousin_it12y
Physics can contain objects whose complexity is much higher than that of physics. Do you have a strong argument why randomness didn't make a big contribution to human morality?
0Dmytry12y
Well, suppose I were to make just a rough evolution sim, given a really powerful computer. Even if it evolves a society with principles we can deem moral only once in a trillion societies - which is probably way too low, given that much of our principles are game-theoretic - that just adds 40 bits to the description for indexing those sims. edit: and the idea of the evolution sim doesn't really have such a huge complexity; any particular evolution sim does, but we don't care which evolution simulator we are working with; we don't need the bits for picking one specific one, just the bits for picking a working one.
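(The 40-bit figure is just the indexing cost for one society in a trillion: log2(10^12) = 12 × log2(10) ≈ 39.9 bits.)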
4cousin_it12y
Game-theoretic principles might be simple enough, but the utility function of a FAI building a good future for humanity probably needs to encode other information too, like cues for tasty food or sexual attractiveness. I don't know any good argument why this sort of information should have low complexity.
1Dmytry12y
You may be over-fitting there. The FAI could let people decide what they want when it comes to food and attractiveness. Actually, it had better, or I'd have some serious regrets about this FAI.
1cousin_it12y
That's reasonable, but to let people decide, the FAI needs to recognize people, which also seems to require complexity...
1faul_sname12y
If your biggest problem is on the order of recognizing people, the problem of FAI becomes much, much easier.
0Dmytry12y
Well, and the uFAI needs to know what "paperclips or something" means (or any real-world goal at all). It's an obstacle faced by all contestants in the race. We humans learn what is another person and what isn't. (Or have evolved it; it doesn't matter.)
4endoself12y
If you get paperclips slightly wrong, you get something equally bad (staples is the usual example, but the point is that any slight difference is about equally bad), but if you get FAI slightly wrong, you don't get something equally good. This breaks the symmetry.
0Dmytry12y
I think if you get paperclips slightly wrong, you get a crash of some kind. If I get a ray-tracer slightly wrong, it doesn't trace electrons instead of photons. edit: To clarify: it's about the definition of a person vs. the definition of a paperclip. You need a very broad definition of a person for the FAI, so that it won't misidentify a person as a non-person (misidentifying dolphins as persons won't be a big problem), and you need a very narrow definition of a paperclip for the uFAI, so that a person holding two papers together is not a paperclip. It's not always intuitive how broad definitions compare to narrow ones in difficulty, but it is worth noting that it is ridiculously hard to define paperclip-making so that a Soviet factory anxious to maximize the paperclips would make anything at all, while it wasn't particularly difficult to define what a person is (or to define what 'money' is so that a capitalist paperclip factory would make paperclips to maximize profit).
1cousin_it12y
I agree that paperclips could also turn out to be pretty complex.
0othercriteria12y
I don't think "paperclip maximizer" is taken as a complete declarative specification of what a paperclip maximizer is, let alone what it understands itself to be. I imagine the setup is something like this. An AI has been created by some unspecified (and irrelevant) process and is now doing things to its (and our) immediate environment. We look at the things it has done and anthropomorphize it, saying "it's trying to maximize the quantity of paperclips in the universe". Obviously, almost every word in that description is problematic. But the point is that the AI doesn't need to know what "paperclips or something" means. We're the ones who notice that the world is much more filled with paperclips after the AI got switched on. This scenario is invariant under replacing "paperclips" with some arbitrary "X", I guess under the restriction that X is roughly at the scale (temporal, spatial, conceptual) of human experience. Picking paperclips, I assume, is just a rhetorical choice.
0Dmytry12y
Well, I agree. That also goes for whatever process determines something to be a person. The difference is that the FAI doesn't have to create persons; its definition doesn't need to correctly process things from the enormous space of possible things that may or may not be persons. It can have a very broad definition that will include dolphins, and it will still be OK. The intelligence is, to some extent, self-defeating when finding a way to make something real; the easiest Y that is inside the set X will be picked, by design, as instrumental to making more of some kind of X. I.e. you define X to be something that holds papers together; the AI thinks and thinks and sees that a single atom, under some circumstances common in the universe (very far away in space), can hold papers together; it finds the Casimir effect, which makes a vacuum able to hold two conductive papers together; and so on. The X has to be resistant against such brute-forcing for the optimum solution. Whether the AI can come up with some real-world manufacturing goal that it can't defeat in such a fashion is open to debate. Incomputable things seem hard to defeat. edit: Actually, would you consider the case of a fairly stupid nano-manufacturing AI destroying us, and itself, with grey goo to be an unfriendly AI? That seems to be a particularly simple failure mode for a self-improving system, FAI or uFAI, under bounded computational power. And a failure mode for likely non-general AIs, as we are likely to employ such AIs to work on biotechnology and nanotechnology.
0othercriteria12y
It doesn't sound like you are agreeing with me. I didn't make any assumptions about what the AI wants or whether its instrumental goals can be isolated. All I supposed was that the AI was doing something. I particularly didn't assume that the AI is at all concerned with what we think it is maximizing, namely, X. As for the grey goo scenario, I think that an AI that caused the destruction of humanity not being called unfriendly would indicate an incorrect definition of at least one of "AI", "humanity", or "unfriendly" ("caused" too, I guess).
2Dmytry12y
Can you be more specific? I have an AI that's iterating parameters of some strange attractor - defined within it - until it finds unusual behaviour. I can make an AI that would hill-climb and search for improvements to the former AI. edit: Now, the worst thing that can happen is that it makes a mind-hack image that kills everyone who looks at it. That wasn't the intent, but the 'unusual behaviour' might get too unusual for a human brain to handle. Is that a serious risk? No, it's a laughable one.
0othercriteria12y
Implicit in my setup was that the AI reached the point where it was having noticeable macroscopic effects on our world. This is obviously easiest when the AI's substrate has some built-in capacity for input/output. If we're being really generous, it might have an autonomous body, cameras, an internet connection, etc. If we're being stingy, it might just be an isolated process running on a computer with its inputs limited to checking the wall-clock time and outputs limited to whatever physical effects it has on the CPU running it. In the latter case, doing something to the external world may be very difficult but not impossible. The program you have doing local search in your example doesn't sound like an AI; even if you stuck it in the autonomous body, it wouldn't do anything to the world that's not a generic side-effect of its running. No one would describe it as maximizing anything.
2Dmytry12y
Well, it is maximizing whatever I defined for it to maximize, usefully for me, and in a way that is practical. In any case, you said, "All I supposed was that the AI was doing something." My AI is doing something. Yeah, and it's rolling forward and clamping its manipulators until they wear out. Clearly you want it to maximize something in the real world, not just do something. The issue is that the only things it can do approximately this way are things like shooting at the colour blue. Everything else requires a very detailed model, and maximization of something in the model, followed by carrying out the actions in the real world - which, interestingly, is entirely optional, and which even humans have trouble getting themselves to do (when I invent something and am sure to my satisfaction that it will work, it is boring to implement; it is a common problem). Edit: and one other point: without a model, all you can do is try random stuff on the world itself, which is not at all intelligent (and resembles Wheatley in Portal 2 trying to crack the code).
0TheOtherDave12y
...or perhaps "destruction".
0[anonymous]12y
Sorry, I don't understand what exactly you are proposing. A utility function is a function from states of the universe to real numbers. If the function contains a term like "let people decide", it should also define "people", which seems to require a lot of complexity. Or are you coming at this from some other perspective, like assigning utilities to possible actions rather than world states? That's a type error and also very likely to be Bayesian-irrational.
-2Will_Newsome12y
Randomness is Chaitin's omega is God implies stochasticity (mixed Strategies) implies winning in the limit due to hypercomputational advantages universally if not necessarily contingently. Hence randomness isn't at odds as such with morality. Maybe Schmidhuber's ideas about super-omegas are relevant. Doubt it.
6ArisKatsaris12y
Plus the process of a few hundred million years of evolutionary pressures. Do you think simulating those years and extrapolating the derived values from that simulation is clearly easier and simpler than extrapolating the values from e.g. a study of human neural scans/human biochemistry/human psychology?
2David_Gerard12y
It's not clear to me how the second is obviously easier. How would you even do that? Are there simple examples of doing this that would help me understand what "extrapolating human values from a study of human neural scans" would entail?
2Dmytry12y
One could e.g. run a sim of bounded-intelligence agents competing with each other for resources, then pick the best one, which will implement tit for tat and the more complex solutions that work. It was already the case that for the iterated prisoner's dilemma there wasn't some enormous number of amoral solutions, to the great surprise of AI researchers of the time, who wasted their efforts trying to make some sort of nasty, sneaky, Machiavellian AI. edit: anyhow, I digress. The point is that when something is derivable via simple rules (even if impractically), like the laws of physics, that should enormously boost the likelihood that it is derivable in some more practical way.
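A minimal sketch of the kind of round-robin tournament being referenced (the strategy set and payoff values are illustrative assumptions, not the original tournament entries):

```python
# A toy round-robin iterated prisoner's dilemma with a small, arbitrary strategy set.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else 'C'

def grim_trigger(my_hist, their_hist):
    return 'D' if 'D' in their_hist else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def always_cooperate(my_hist, their_hist):
    return 'C'

def play(s1, s2, rounds=200):
    h1, h2, total = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        total += PAYOFF[(m1, m2)]             # accumulate player 1's score only
        h1.append(m1)
        h2.append(m2)
    return total

strategies = {'tit_for_tat': tit_for_tat, 'grim_trigger': grim_trigger,
              'always_defect': always_defect, 'always_cooperate': always_cooperate}

# Every strategy plays every strategy (including itself); sum each one's own score.
totals = {name: sum(play(f, g) for g in strategies.values())
          for name, f in strategies.items()}
print(sorted(totals.items(), key=lambda kv: -kv[1]))
```

In this toy version the retaliating cooperators (tit for tat, grim trigger) end up with the top scores, while always-defect comes last.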
0faul_sname12y
Would "yes" be an acceptable answer? It probably is harder to run the simulations, but it's worth a shot at uncovering some simple cases where different starting conditions converge on the same moral/decision making system.
1Vaniver12y
You may want to check out this post instead; it seems like a much closer response to the ideas in your post.
1Dmytry12y
I'm not proposing an AI; I'm noting that humans seem to use some intuitive notion of complexity to decide what they like. edit: also, has Eliezer ever written a Rubik's-cube-solving AI? Or anything even remotely comparable? It's easy to pontificate about how other people think wrongly when you aren't having to solve anything. The way engineers think works for making me a car. The way Eliezer thinks works for making him an atheist. Big difference. (I am an atheist too, so this is not a religious stab, and I like Eliezer's sequences; it's just that problem solving is something we are barely at all capable of, and adding any extra crap to shoot down the lines of thought which may in fact work does not help you any.) edit: also, the solution: you just do hill climbing with n-move look-ahead; see the sketch below. As a pre-processing step you may search for sequences that climb the hill out of any condition. It's a very general problem-solving method, hill climbing with n-move look-ahead. If you want the AI to invent hill climbing, well, I know of one example, evolution, and it does increase some kind of complexity along the line leading up to mankind, who invents better hill climbing, even though complexity is not the best solution to 'reproducing the most'. If the point is making an AI that comes up with the very goal of solving Rubik's cube, that gets into AGI land, but using the cube to improve one's own problem-solving skill is the way it is for us. I like to solve the cube into some pattern. An alien may not care what pattern to solve the cube into, just as long as he pre-commits to something random, and it's reachable.
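Here is a minimal sketch of that method - hill climbing with n-move look-ahead - applied to a toy puzzle (sorting a scrambled tuple by adjacent swaps) rather than an actual cube; the move set and scoring function are stand-ins I picked for illustration:

```python
import itertools

# Toy puzzle: sort a tuple using adjacent swaps.  The move set and the scoring
# function stand in for cube moves and a cube-distance heuristic.
def moves(state):
    return range(len(state) - 1)              # move i swaps positions i and i+1

def apply_move(state, i):
    s = list(state)
    s[i], s[i + 1] = s[i + 1], s[i]
    return tuple(s)

def score(state):
    # Higher is better: number of adjacent pairs already in order.
    return sum(a <= b for a, b in zip(state, state[1:]))

def lookahead_hillclimb(state, depth=3, max_steps=50):
    goal = len(state) - 1
    for _ in range(max_steps):
        if score(state) == goal:              # solved
            break
        best_seq, best_score = None, score(state)
        # Exhaustively search all move sequences up to `depth` moves ahead.
        for n in range(1, depth + 1):
            for seq in itertools.product(moves(state), repeat=n):
                s = state
                for m in seq:
                    s = apply_move(s, m)
                if score(s) > best_score:
                    best_seq, best_score = seq, score(s)
        if best_seq is None:                  # stuck: no improvement within reach
            break
        for m in best_seq:                    # commit to the best improving sequence
            state = apply_move(state, m)
    return state

print(lookahead_hillclimb((4, 1, 3, 2, 5)))   # -> (1, 2, 3, 4, 5)
```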

To address some of the topic digression: my point is not the theoretical question of whether you can or can't derive the FAI rules this way. The point here is that we, humans, seem to use some intuitive notion of complexity - for lack of a better word - to rank moral options. The wireheading objection is a particularly striking example of this.

0[anonymous]12y

Just a note - I'd change your last sentence as it seems to imply some form of Lamarckianism and will probably get your post downvoted for that, when I'm sure that wasn't your intent...

I don't understand why this post and some of Dmytry's comments are downvoted so hard. The idea might be far-fetched, but certainly not crazy, self-contradictory or obviously false.

My personal impression has been that emotions are a result of a hidden subconscious logical chain, and can be affected by consciously following this chain, thus reducing this apparent complexity to something simple. The experiences of others here seem to agree, from Eliezer's admission that he has developed a knack for "switching off arbitrary minor emotions" to Alicorn...

I can't speak to the downvoting, but for my part I stopped engaging with Dmytry altogether a while back because I find their habit of framing interactions as adversarial both unproductive and unpleasant. That said, I certainly agree that our emotions and moral judgments are the result of reasoning (for a properly broad understanding of "reasoning", though I'd be more inclined to say "algorithms" to avoid misleading connotations) of which we're unaware. And, yes, recapitulating that covert reasoning overtly frequently gives us influence over those judgments. Similar things are true of social behavior when someone articulates the underlying social algorithms that are ordinarily left covert.

-1Dmytry12y
Sorry for that; it was a bit of a leak from how the interactions here about AI issues are rather adversarial in nature, in the sense that the ambiguity - unavoidable in human language - of anything that disagrees with the opinion here is resolved in favour of the interpretation that makes the least amount of sense. AI is, definitely, a very scary risk. Scariness doesn't result in the most reasonable processing. I do not claim to be immune to this.

I agree that some level of ambiguity is unavoidable, especially on initial exchange.
Given iterated exchange, I usually find that ambiguity can be reduced to negligible levels, but sometimes that fails.
I agree that some folks here have the habit you describe, of interpreting other people's comments uncharitably. This is not unique to AI issues; the same occurs from time to time with respect to decision theory, moral philosophy, theology, various other things.
I don't find it as common here as you describe it as being, either with respect to AI risks or anything else.
Perhaps it's more common here than I think but I attend to the exceptions disproportionally; perhaps it's less common here than you think but you attend to it disproportionally; perhaps we actually perceive it as equally common but you choose to describe it as the general case for rhetorical reasons; perhaps your notion of "the interpretation that makes the least amount of sense" is not what I would consider an uncharitable interpretation; perhaps something else is going on.
I agree that fear tends to inhibit reasonable processing.

-1Dmytry12y
Well, I think it is the case that fear is a mind-killer to some extent. Fear rapidly assigns a truth value to a proposition, using a heuristic. That is necessary for survival. Unfortunately this value makes a very bad prior.
8TheOtherDave12y
Yup, that's one mechanism whereby fear tends to inhibit reasonable processing.

Excellent use of fogging in this conversation, Dave.

5cousin_it12y
Seconding TheOtherDave's thanks. I stumbled on this technique a couple days ago, it's nice to know that it has a name.
3TheOtherDave12y
Upvoted back to zero for teaching me a new word.
0[anonymous]12y
Ambiguity should be resolved by figuring out the intended meaning, irrespective of the intended meaning's merits, which should be discussed separately from the procedure of ambiguity resolution.

I don't understand why this post and some of Dmytry's comments are downvoted so hard.

I'm going with the position that the post got the votes that it deserved. It's not very good thinking, and Dmytry goes out of his way to convey arrogance and condescension while he posts. It doesn't help that rather than simply being uninformed of prior work he explicitly and belligerently defies it - that changes a response of sympathy with his efforts and 'points for trying' to an expectation that he says stuff that makes sense. Of course that is going to get downvoted.

The idea might be far-fetched, but certainly not crazy, self-contradictory or obviously false.

It isn't self-contradictory, just the other two.

Seriously, complexity maximisation and "This also aligns with what ever it is that the evolution has been maximizing on the path leading up to H. Sapiens." That is crazy and obviously false.

It is not such a big leap to suggest that our snap moral judgments likewise result from a complex, or at least hidden, subconscious reasoning.

Of course that is true! But that isn't what the post says. There is a world of difference between "our values are complex" and "we value complexity".

0Dmytry12y
Netting a zero average, though I guess pointing that out is not a very good thing for votes.
1wedrifid12y
I don't understand what you are trying to convey.
1TheOtherDave12y
I understood it to mean that comments about karma tend to get downvoted.
0David_Gerard12y
Because someone's going through mass-downvoting. Note that your defence got downvoted by them too. When someone gets a downvote for posting the actual answer to the question, there's little going on but blue-green politics with respect to local tropes.
4Manfred12y
This is often the first explanation proposed, but is wrong most of the time. Charity, context, etc. etc.
1wedrifid12y
Not only that, his defense got downvoted by me before the post itself did and with greater intent to influence. It doesn't take local tropes to prompt disagreement here. Not thinking that human values can be attributed to valuing complexity is hardly a weird and unique-to-lesswrong position. In fact Eliezer-values (in Fun-Theory) are if anything closer to what this post advocates than what can be expected in the mainstream.
2Dmytry12y
edit: oh wait, you are speaking of shminux. I was thinking of the answer to a question.
0Dmytry12y
Actually I had 2 upvotes on that answer, then it got to -1. I think I'm just going to bail out, because for that same post about the Rubik's cube I could have gotten a lot of 'thanks man' replies on e.g. a programming contest forum, or the like, if there was Rubik's cube talk like this. edit: or wait, it was at -1, then at +2, then at -1. Also, on the evolution part of it: it is the case that evolution is a crappy hill climber (and mostly makes better bacteria), but you can look at the human lineage and reward something that's increasing along this line, to avoid wasting too much time on bacteria. E.g. by making agents play some sort of games of wit against each other, where bacteria won't get a free pass.
1wedrifid12y
Consistent downvotes can be considered a signal, sent by the voter consensus, that they would prefer that you either bail or change your behavior. Unfortunately the behavior change in question here amounts to adopting a lower-status role (i.e. more willing to read and understand the words of others, less inclined to insult and dismiss others out of hand, more likely to change your mind about things when things are explained to you). I don't expect or presume that others will willingly adopt a lower-status role - even when doing so would increase their status in the medium term. I must accept that they will do what they wish to do, and continue to downvote and oppose the behaviors that I would see discouraged. It is quite possible - in fact my model puts it as highly likely - that your current style of social interaction would result in far greater social success at other locations. LessWrong communication norms are rather unique in certain regards.
-3Dmytry12y
You guys are very willing to insult me personally, but I am trying not to go personal (albeit it is rather difficult at times). That doesn't mean I don't say things that members of the community may take personally; still, in the last couple of days I've noticed that borderline personal insults here are tolerated way more than I'd consider normal, while any stabs at the community (or shared values) are not, and that disagreements tend to be taken more personally than is normal in technical discourse.
-9Dmytry12y