People don't want you to be happy for complex reasons. They want you to be happy for specific reasons that just happen not to be simple.
People want you to be happy because you enjoy some piece of fine art, not because the greatest common divisor of the number of red and black cars you saw on the way to work is prime.
Complexity in morality is like consistency in truth. Let me explain.
If all of your beliefs are exactly correct, they will be consistent. But if you force all your beliefs to be consistent, there's no guarantee they'll be correct. You can end up fixing a right thing to make it consistent with a wrong thing.
Just so with morality; humans value complex things, but this complexity is a result, not a cause, and not something to strive for in and of itself.
The Sequences contain a preemptive counterargument to your post; could you address the issues raised there?
To address some topic digression: my point is not the theoretical question of whether you can or can't derive the FAI rules this way. The point here is that we humans seem to use some intuitive notion of complexity - for lack of a better word - to rank moral options. The wireheading objection is a particularly striking example of this.
Just a note - I'd change your last sentence, as it seems to imply some form of Lamarckism and will probably get your post downvoted for that, when I'm sure that wasn't your intent...
I don't understand why this post and some of Dmytry's comments are downvoted so hard. The idea might be far-fetched, but certainly not crazy, self-contradictory or obviously false.
My personal impression has been that emotions are a result of a hidden subconscious logical chain, and can be affected by consciously following this chain, thus reducing this apparent complexity to something simple. The experiences of others here seem to agree, from Eliezer's admission that he has developed a knack for "switching off arbitrary minor emotions" to Alicorn...
I can't speak to the downvoting, but for my part I stopped engaging with Dmytry altogether a while back because I find their habit of framing interactions as adversarial both unproductive and unpleasant. That said, I certainly agree that our emotions and moral judgments are the result of reasoning (for a properly broad understanding of "reasoning", though I'd be more inclined to say "algorithms" to avoid misleading connotations) of which we're unaware. And, yes, recapitulating that covert reasoning overtly frequently gives us influence over those judgments. Similar things are true of social behavior when someone articulates the underlying social algorithms that are ordinarily left covert.
I agree that some level of ambiguity is unavoidable, especially on initial exchange.
Given iterated exchange, I usually find that ambiguity can be reduced to negligible levels, but sometimes that fails.
I agree that some folks here have the habit you describe, of interpreting other people's comments uncharitably. This is not unique to AI issues; the same occurs from time to time with respect to decision theory, moral philosophy, theology, various other things.
I don't find it as common here as you describe it as being, either with respect to AI risks or anything else.
Perhaps it's more common here than I think but I attend to the exceptions disproportionally; perhaps it's less common here than you think but you attend to it disproportionally; perhaps we actually perceive it as equally common but you choose to describe it as the general case for rhetorical reasons; perhaps your notion of "the interpretation that makes the least amount of sense" is not what I would consider an uncharitable interpretation; perhaps something else is going on.
I agree that fear tends to inhibit reasonable processing.
I don't understand why this post and some of Dmytry's comments are downvoted so hard.
I'm going with the position that the post got the votes it deserved. It's not very good thinking, and Dmytry goes out of his way to convey arrogance and condescension as he posts. It doesn't help that, rather than simply being uninformed of prior work, he explicitly and belligerently defies it - that changes a response of sympathy with his efforts and 'points for trying' into an expectation that he says stuff that makes sense. Of course that is going to get downvoted.
The idea might be far-fetched, but certainly not crazy, self-contradictory or obviously false.
It isn't self-contradictory, just the other two.
Seriously, complexity maximisation and "This also aligns with whatever it is that evolution has been maximizing on the path leading up to H. sapiens." That is crazy and obviously false.
It is not such a big leap to suggest that our snap moral judgments likewise result from complex, or at least hidden, subconscious reasoning.
Of course that is true! But that isn't what the post says. There is a world of difference between "our values are complex" and "we value complexity".
Preface: I am just noting that we humans seem to base our morality on some rather ill-defined intuitive notion of complexity. If you think it is not workable for AI, or something like that, that thought does not yet constitute a disagreement with what I am writing here.
More preface: Utilitarian calculus is the idea that what people value can be described simply in terms of summation. Complexity is another kind of f(a,b,c,d) that behaves vaguely like a 'sum', but is not as simple as summation. If a, b, c, d are strings in a programming language, the expression would often be written as f(a+b+c+d), using + to mean concatenation, which is something fundamentally different from summation of real-valued numbers. But it can appear confusingly close: for a, b, c, d that don't share much information among themselves, the result behaves a lot like a function of a sum of real numbers. It diverges from that sum-like behaviour, however, as a, b, c, d come to share more information among themselves - much as our intuitions about what is right diverge from sum-like behaviour when you start considering exact duplicates of people which only diverged a few minutes ago.
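To make the "sum-like until the parts overlap" point concrete, here is a minimal sketch using zlib-compressed size as a crude, computable stand-in for description length; the helper name c and the random byte strings are purely illustrative, not anything from the original discussion:

```python
import os
import zlib

def c(s: bytes) -> int:
    """Crude, computable stand-in for description length: zlib-compressed size in bytes."""
    return len(zlib.compress(s, 9))

# Four unrelated "values": the complexity of the whole is close to the sum of the parts.
independent = [os.urandom(1000) for _ in range(4)]
print(sum(c(s) for s in independent))   # roughly 4x the size of one part
print(c(b"".join(independent)))         # about the same: behaves like a sum

# Four near-duplicates ("diverged for a few minutes"): the shared part is counted only once.
base = os.urandom(1000)
near_duplicates = [base + os.urandom(20) for _ in range(4)]
print(sum(c(s) for s in near_duplicates))   # still roughly 4x one part
print(c(b"".join(near_duplicates)))         # far smaller: strongly sub-additive
```

Real Kolmogorov complexity is uncomputable; a general-purpose compressor only gives an upper bound, but it already shows the qualitative behaviour described above: independent parts add up, while shared information is counted roughly once.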
It's a very rough idea, but it seems to me that a lot of common-sense moral values are based on some sort of intuitive notion of complexity. Happiness via highly complex stimuli passing through highly complex neural circuitry inside your head seems like a good thing to pursue; happiness via wire, resistor, and battery seems like a bad thing. What makes the idea of literal wireheading and hard, pleasure-inducing drugs so revolting to me is the simplicity, the banality of it. I have far fewer objections to e.g. hallucinogens (I never took any myself, but I am also an artist, and I can guess that other people may have lower levels of certain neurotransmitters, making them unable to imagine what I can imagine).
Complexity-based metrics have the property that they easily eat for breakfast huge numbers like "a dust speck in each of 3^^^3 eyes", and even infinity. The torture of a conscious being for a long period of time can easily be a more complex matter than even an infinite number of dust specks.
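To see why up-arrow-sized numbers pose no difficulty for a description-length metric, here is a small sketch; the function up_arrow is just my own throwaway encoding of Knuth's up-arrow notation (which 3^^^3 refers to), not part of the original discussion:

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow: a (n arrows) b.  up_arrow(3, 3, 3) is the 3^^^3 of the dust-speck example."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3^3  = 27
print(up_arrow(3, 2, 3))   # 3^^3 = 7625597484987
# Don't call up_arrow(3, 3, 3): the value is far too large to ever compute.
# The point is that the *description* "up_arrow(3, 3, 3) people each get one dust speck"
# fits in a few short lines, so a description-length metric assigns the scenario a tiny value
# no matter how astronomically large the count is.
```

A specific, detailed torture scenario, by contrast, may take far more bits to pin down than this, which is the sense in which such metrics "eat huge numbers for breakfast".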
Unfortunately, complexity metrics like Kolmogorov complexity are noncomputable on arbitrary input, and are large for truly random values. But insofar as the scenario is specific and has been arrived at by computation, that computation's complexity sets an upper bound on the complexity of the scenario. The mathematics may also simply not be there yet. We have an intuitive notion of complexity in which totally random noise is not very complex, and a very regular signal is not either, but certain kinds of patterns are highly complex.
This may be difficult to formalize. We could, of course, only assign complexities when we are told something's properties, rather than computing them from scratch for arbitrary input: if we label something as 'random numbers', its complexity is low; if it is an encrypted volume of the works of Shakespeare, then even though we would not be able to distinguish it from randomness in practice (assuming good encryption), once we are told what it is we can assign it a higher complexity.
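As a very rough illustration of the gap between a computable proxy and the intuitive notion, here is another zlib sketch; the sample data and variable names are just for illustration, and XOR with a random keystream merely stands in for "good encryption":

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Compressed size: a computable upper bound on description length."""
    return len(zlib.compress(data, 9))

n = 4000
noise   = os.urandom(n)          # truly random bytes
regular = b"ab" * (n // 2)       # a very regular signal
quote   = (b"To be, or not to be, that is the question: whether 'tis nobler in the mind "
           b"to suffer the slings and arrows of outrageous fortune... ")
text    = (quote * (n // len(quote) + 1))[:n]                 # patterned text (repeated, so still compressible)
key     = os.urandom(n)
encrypted_text = bytes(a ^ b for a, b in zip(text, key))      # one-time-pad stand-in for encryption

for name, data in [("noise", noise), ("regular", regular),
                   ("text", text), ("encrypted text", encrypted_text)]:
    print(f"{name:15s} {compressed_size(data)} bytes")
```

The compressor scores the random block and the encrypted text as maximally complex and the regular signal as trivial, whereas the intuitive notion above would call the random noise not very complex at all; and the encrypted Shakespeare is indistinguishable from noise to any such metric unless we are told what it is.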
This also aligns with whatever it is that evolution has been maximizing on the path leading up to H. sapiens (note that for the most part, evolution's power has gone into improving bacteria; the path leading up to H. sapiens is a very special case). Maybe we for some reason try to extrapolate this [note: for example, a lot of people rank their preference for animals as food by the animal's complexity of behaviour, which makes the human the least desirable food; we have anti-whaling treaties], maybe it is a form of goal convergence between the brain as an intelligent system and evolution (both employ hill climbing to arrive at solutions), or maybe we evolved a system that aligns with where evolution was heading because that increased fitness [edit: to address a possible comment, we have another system based on evolution - the immune system - which works by evolving antibodies via somatic hypermutation; it's not inconceivable that we use some evolution-like mechanism to tweak our own neural circuitry, given that our circuitry does undergo massive pruning in the early stages of life].