I'm not really part of the LessWrong community at the moment, but I think evolutionary dynamics will be the next thing.
Not just of AI, but of posthumans, uploads, etc. Someone will need to figure out what kinds of selection pressures there should be so that things don't go to ruin in an explosion of variety.
All competitive situations against ideal learning agents are anti-inductive in this sense, because they can note regularities in their own actions and avoid them in the future just as well as you can note those regularities and exploit them. The usefulness of induction rests on the relative speeds of induction of the learning agents (the toy sketch below illustrates this).
As such, anti-induction appears in situations like bacterial resistance to antibiotics: we spot a chink in the bacteria's armour, and we can predict that that chink will become less prevalent and our strategy less useful.
So I wouldn't mark markets as special, just the most extreme example.
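A minimal toy sketch of the relative-speed point (my own setup, nothing from the post: matching pennies, majority-vote prediction over a sliding window of the opponent's moves, with window size standing in for induction speed):

```python
import random
from collections import Counter

def predict(history, window):
    """Guess the opponent's next move: majority vote over their recent moves."""
    recent = history[-window:]
    if not recent:
        return random.choice([0, 1])
    return Counter(recent).most_common(1)[0][0]

def play(rounds=10000, fast_window=5, slow_window=50):
    """Matching pennies: 'fast' wins by matching 'slow', 'slow' wins by mismatching.
    A shorter window means faster re-induction on the opponent's behaviour."""
    fast_hist, slow_hist = [], []
    fast_wins = 0
    for _ in range(rounds):
        fast_move = predict(slow_hist, fast_window)      # match the prediction
        slow_move = 1 - predict(fast_hist, slow_window)  # play against the prediction
        fast_wins += fast_move == slow_move
        fast_hist.append(fast_move)
        slow_hist.append(slow_move)
    return fast_wins / rounds

print(f"fast learner's win rate: {play():.2f}")  # typically well above 0.5
```

Each agent only loses during the short lag while its model of the other is stale; whoever re-induces faster eats the difference.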
I find neither that convincing. Justice is not a terminal value for me, so I might sacrifice it for Winning. I preferred reading the first, but that is no indication of what a random person may prefer.
With international affairs, isn't stopping the aggression the main priority? That is, stopping the death and suffering of humans on both sides? Sure, it would be good to punish the aggressors rather than the retaliators, but if that doesn't stop the fighting, it just means more people are dying.
Also, there is a difference between the adult and the child: the adult relies on the law of the land for retaliation, while the child takes it upon himself when he continues the fight. That is, the child is a vigilante, and he may punish disproportionately, e.g. breaking a leg in return for a dead leg.
I don't really have a good enough grasp on the world to predict what is possible; it all seems too unreal.
One possibility is to jump one star back towards Earth and then blow up that star, if it is the only link to the new star.
Re: "MST3K Mantra"
Illustrative fiction is a tricky business. If this is to be part of your message to the world, it should be as coherent as possible, so you aren't accidentally lying to make a better story.
If it is just a bit of fun, I'll relax.
I wonder why the babies don't eat each other. There must be huge selective pressure to winnow down your fellows to the point where you don't need to be winnowed. This would in turn select for being small-brained, large, and quick-growing, at the least. There might also be selective pressure to be partially distrusting of your fellows (assuming there was some cooperation), which might carry over into adulthood.
I also agree with the points Carl raised. It doesn't seem very evolutionarily plausible.
"Except to remark on how many different things must be known to constrain the final answer."
What would you estimate the probability is that each of those things is correct?
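To make the worry concrete (my numbers, purely illustrative): if the final answer rests on ten roughly independent pieces of knowledge and each is assigned 0.9 confidence, the conjunction only gets 0.9^10 ≈ 0.35, so a conclusion built from many parts decays fast.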
Reformulate it as least regret after a certain time period, if you really want to worry about the genie's resource usage; a fixed horizon bounds how long it can keep optimizing.
That's true. Communities that can encourage truth-speaking and exploration will probably get more of both, and be richer for it in the long term.