Comments

mc1soft · 4mo · 82

I'm glad to see a post on alignment asking about the definition of human values.  I propose the following conundrum.  Suppose that humans, if asked, say they value a peaceful, stable society.  I accept the assumption that the human mind contains one or more utility optimizers.  But I point out that those optimizers likely operate at the individual, family, or local-group level, while the stated "value" concerns society at large.  So humans are not likely "optimizing" at the same scope at which they "value".

This leads to game theory problems, such as the last-turn problem and the notorious instability of cooperation over public goods (the commons).  According to the theory of cliodynamics put forward by Turchin et al., utility maximization by subsets of society leads to the implementation of wealth pumps that produce inequality, and to excess reproduction among elites, which drives elite competition in a cyclic pattern.  A historical database of over a hundred such cycles, drawn from many regions and periods, suggests that a cycle becomes violent, or at least very destructive, about 90% of the time, and that the will to reduce the number of elites and turn off the wealth pump emerges through elite cooperation less than 10% of the time.
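To make the commons instability concrete, here is a minimal public-goods-game sketch in Python.  The player count, multiplier, and endowment are assumed values of my own, not taken from Turchin or any particular study; the sketch only shows why free-riding is individually rational even when universal contribution is best for the group.

```python
# Toy public-goods game: each player either contributes 1 unit or free-rides.
# Contributions are pooled, multiplied, and shared equally, so free-riding is
# individually rational even though universal contribution maximizes the
# group payoff.  (Illustrative parameters only.)

N_PLAYERS = 4
MULTIPLIER = 1.6          # pooled contributions are scaled by this, then split evenly
ENDOWMENT = 1.0

def payoff(my_contribution, others_contributions):
    """Payoff to one player given everyone's contributions."""
    pool = (my_contribution + sum(others_contributions)) * MULTIPLIER
    return ENDOWMENT - my_contribution + pool / N_PLAYERS

# Compare contributing vs. free-riding when everyone else contributes:
others = [ENDOWMENT] * (N_PLAYERS - 1)
print("cooperate:", payoff(ENDOWMENT, others))   # 1.6
print("free-ride:", payoff(0.0, others))         # 2.2 -> defection pays more
```

As long as the multiplier is smaller than the number of players, keeping a unit beats contributing it no matter what the others do, which is why experiments with repeated public-goods games generally see contributions decay.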

I add the assumption that there is nothing special about humans: any entities (AIs or extraterrestrials) that align with the value goals and optimization scopes described above will produce similar results.  Game-theoretic mathematics says nothing about evolutionary history and does not take species preferences into account, after all, because it does not seem to need to.  Even social insects, presumably optimizing at much larger (but still not global) scopes, fall victim to large-scale cyclic wars (I'm thinking of ants here).

So is alignment even a desirable goal?  Perhaps we should instead ensure that AI does not aid the wealth pump, elite competition, and the mobilization of the immiserated commoners (Turchin's terminology)?  But the goal of many, perhaps most, AI researchers is to "make a lot of money" (witness the recent episode with Sam Altman and the support from OpenAI employees for his profit-oriented strategy over the board's objection, as well as the fact that most of the competing entities developing AI are profit-oriented - and competing!).  Yet some other goal (e.g., stabilization of society) might have wildly unpredictable results (stagnation comes to mind).

mc1soft · 8mo · 181

Thank you - the best of many good LessWrong posts.  I am currently trying to figure out what to tell my 9-year-old son, but your letter could "almost" have been written to me.  I'm not in whichever Bay Area you mean (Seattle? San Francisco?); I worked for NASA, and the region around it is also called the bay area here.  Very much, success is defined by others.  Deviating from that produces accolades at first, even research dollars, but finally the "big machine" moves in a different direction way over your head and it's all for naught.

My son asked point blank whether he should work for NASA.  He loves space stuff.  I told him, also point blank, "No!"  You will never have any ownership at such an organization.  It will be fun, but then it is not yours anymore, you are reassigned, and finally you retire and lose your identity, your professional email address, most of your friends - unless you become a consultant or contractor, which many at retirement age do not have the energy to do, and besides, it's demeaning to take orders from people you used to give orders to.  Ultimately the projects are canceled, no matter how successful.  The entire Shuttle program is gone, despite the prestige it brought the US.  Now the US is just another country that failed.  I didn't even like the Shuttle, but I recognized its value.

This was John Kennedy's original aim for the Moon project: to increase the esteem of the US.  It worked; we won the Cold War.  Now we are just another country that can fail, or even disappear, or descend into political chaos, and hardly anyone cares; many would be glad to see us go.  Why care?  Family is most important, right?  How is your family going to exist in one of the other countries?  We came here from Europe for a reason that still exists.  I considered moving to Russia in about 2012, when I married a Russian.  I admired their collaboration with NASA, and I felt free from some of the US cultural nonsense.  Imagine the magnitude of that mistake had I done it!

What should your goals be?  You are free to have none, or to have silly ones, in the US.  However, natural selection will keep giving us people who have families, because we don't live forever - not very long at all, in fact.  I'm 73, imagine that.  I don't really have the energy to program very much anymore.  The very small programs I still write count for something, like social-science research in cooperation theory, game theory, and the effects of wealth in cooperation games.  My AI programs are toys.  Programming paradigms pass away too fast to keep up with when you get older.  You won't believe me, but chances are I'm writing from an age way past yours.

So, family first.  I was god knows how late - too much programming.  Then I got into designing chips; it's just like programming, it just costs more.  One day I had a herniated disk and could not sit at my desk, so I retired; then came a heart attack from lack of exercise.  It piles up on you.  What goal is important enough to get you up when you are hurting and keep you pursuing it?  I'll tell you: survival of your family into the future.  Nothing else matters.  Your friends become the parents of your kids' friends, even if you have nothing in common with them, because it helps your kid.  This is weird.  I don't especially like human culture.  But it's much harder to change than various reformers and activist groups think, and that way lies social chaos and possible disintegration.

But anyway, I decided that turning my interests in programming, math, and physics toward the social sciences would, in the long run, help my son more, if I can make some tiny contribution to the field.  More people need to do this, and to do it without just arguing from within the context of their own cultural bias.  Human nature is not uniquely human: simple mathematical entities running as programs show many of the same characteristics.  Don't believe me?  Read this: Wealth-relative effects in cooperation games - ScienceDirect.  There is a simulation you can tweak and play with linked there.
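For readers who want a feel for what such a simulation looks like, here is a toy agent sketch in the same spirit.  It is emphatically not the model from the linked paper: the pairing rule, payoff table, and wealth-dependent choice rule are purely my own illustrative assumptions.

```python
# A minimal agent sketch in the spirit of wealth-and-cooperation simulations.
# NOT the linked paper's model; every rule and parameter here is an assumption.
import random

PAYOFFS = {  # standard prisoner's dilemma payoffs to the row player (assumed)
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

class Agent:
    def __init__(self, wealth=10.0, coop_bias=0.5):
        self.wealth = wealth
        self.coop_bias = coop_bias  # baseline probability of cooperating

    def choose(self, mean_wealth):
        # Assumed rule: agents richer than average cooperate a bit more readily.
        p = self.coop_bias + (0.2 if self.wealth > mean_wealth else -0.2)
        return "C" if random.random() < max(0.0, min(1.0, p)) else "D"

def run(agents, rounds=100):
    for _ in range(rounds):
        a, b = random.sample(agents, 2)          # random pairwise encounters
        mean_w = sum(x.wealth for x in agents) / len(agents)
        ma, mb = a.choose(mean_w), b.choose(mean_w)
        a.wealth += PAYOFFS[(ma, mb)]
        b.wealth += PAYOFFS[(mb, ma)]

agents = [Agent(wealth=random.uniform(5, 15)) for _ in range(20)]
run(agents)
print(sorted(round(a.wealth, 1) for a in agents))  # final wealth distribution
```

The point is only that a few dozen lines of mechanical rules already produce recognizable "social" dynamics such as diverging wealth and shifting cooperation rates; for the real analysis, see the linked paper and its interactive simulation.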

I do research in cooperation and game theory, including some work on altruism, and also some hard science work.  Everyone looks at the Rorschach blot of human behavior and sees something different.  Most of the disagreements have never been settled.  Even experiment does not completely settle them.  

My experience from having children and observing them in their first few months of life is more definitive.  They come with values and personal traits that are not very malleable and not directly traceable to their parents - sometimes to grandparents instead (who were already dead, so it had to be genetic).

My experience with AI researchers is that they are looking for a shortcut.  No one's career will permit raising an experimental AI as a child; the AI's technology would be obsolete before the experiment was complete.  This post is wishful thinking that a shortcut is available.  It is speculative, anecdotal, and short on references to careful experiment.  Good luck with that.

A related phenomenon, which I have encountered in life but not in systematic research, is that an exceptionally valuable turn is treated as a last turn, and someone will defect.  This was evident in at least two states during the tobacco lawsuits.  In Texas, the attorney general went to jail for cheating.  In Mississippi, where some relatives of mine were on the legal team, one of the lawyers tried to claim all the credit, to the point that the lawyers got involved in a separate lawsuit against each other and felt more animosity toward one another than toward the tobacco-company lawyers (for whom it was not a last turn; they were planning to survive).  (The claimant later went to jail, but for unrelated bribery, not for that.)

I was doing a bit of web research on the last-turn dilemma when I found your post.  The dilemma is that in a finitely repeated game, reasoning backward from the known final turn makes it rational to defect from the very first turn, yet human behavior is not consistent with that (except when game theory is explained to the players, or when players have high enough IQ to figure it out themselves).  The puzzle is to explain why.
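For concreteness, here is a small Python sketch of the backward-induction argument behind the last-turn dilemma.  The payoff numbers are the standard textbook prisoner's-dilemma values, assumed for illustration rather than taken from anything above.

```python
# Backward induction in a finitely repeated prisoner's dilemma.
# Standard (assumed) one-shot payoffs to the row player:
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def equilibrium_payoff(rounds_left):
    """Per-player payoff in the subgame-perfect outcome, found by reasoning
    backward from the last turn."""
    if rounds_left == 0:
        return 0
    # On the last turn defection strictly dominates (5 > 3 and 1 > 0), so the
    # continuation value is the same no matter what happens now; each player
    # therefore maximizes the current round alone, and defects again.
    return PAYOFF[("D", "D")] + equilibrium_payoff(rounds_left - 1)

print(equilibrium_payoff(10))       # 10: mutual defection every round
print(10 * PAYOFF[("C", "C")])      # 30: what sustained mutual cooperation would pay
```

In experiments, real players typically cooperate for most of a finite game and defect only near the end, which is exactly the gap between the theory and observed behavior described above.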

I believe the components of an answer may be present in existing research, but they have not been tied together.  A rather complex experiment, which I am not equipped to perform, would be required for verification.  If any of you would like to discuss this, get in touch with me; I don't really want to discuss a possible paper in a forum.  https://shulerresearch.wordpress.com/ (contact form)