maximkazhenkov

Comments

The Axiological Treadmill

I'm still confused about your critique, so let me ask you directly: In the scenario outlined by the OP, do you expect humans to eventually evolve to stop feeling pain from electrical shocks?

The Axiological Treadmill

Evolution can't dictate what's harmful and what's not; bigger peacock tails can be sexually selected for until they become too costly for survival, at which point an equilibrium sets in. In our scenario, since pain-inducing stimuli are generally bad for survival, there is no selection pressure to raise the pain threshold for electrical shocks past a certain equilibrium point. Because we start out with a nervous system that associates electrical shocks with pain, this pain becomes a pessimistic error beyond the equilibrium point and never gets fixed, i.e. humans still suffer under electrical shocks, just not so badly that they would rather kill themselves.

Suffering is not rare in nature, because actually harmful things are common and suffering is an adequate response to them.

Why then is it possible to suffer pain worse than death? Why do people and animals suffer just as intensely beyond their reproductive age?

The Axiological Treadmill

Yes, that's what pessimistic errors are about. I'm not sure what exactly you're critiquing though?

The Axiological Treadmill
The obvious reason that Moloch is the enemy is that it destroys everything we value in the name of competition and survival. But this is missing the bigger picture.

No, it isn't. What do I care which values evolution originally intended to align us with? What do I care which direction dysgenic pressure will push our values in the future? Those aren't my values, and that's all I need to know.

After all, if you forget to shock yourself, or choose not to, then you are immediately killed. So the people in this country will slowly evolve reward and motivational systems such that, from the inside, it feels like they want to shock themselves, in the same way (though maybe not to the same degree) that they want to eat.

No, there is no selection pressure to shock yourself more than the required amount; anything beyond that is still detrimental to your reproductive fitness. Once we've evolved to barely tolerate the pain of electric shocks so as not to kill ourselves, the selection toward greater pain tolerance stops, and people will still suffer a great deal, because evolution has no incentive to fix pessimistic errors. You could perhaps engineer scenarios in which humans would genuinely evolve to like a dystopia, but that certainly doesn't apply to most cases; otherwise suffering would already be a rare occurrence in nature.
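To make the equilibrium claim concrete, here is a minimal toy simulation (my own sketch; the step-function fitness, population size, and mutation rate are illustrative assumptions, not anything from the post). Selection raises pain tolerance only until everyone survives the mandatory shocks; past that point the fitness gradient vanishes, so the pain itself is never selected away:

```python
import random

SHOCK_PAIN = 100        # painfulness of the mandatory shock (arbitrary units)
POP_SIZE = 500
GENERATIONS = 200
MUTATION_SD = 2.0       # per-generation mutation in heritable pain tolerance

def survives(tolerance: float) -> bool:
    # Step-function fitness: you live iff you can make yourself take the
    # shock; tolerance beyond that point buys no extra fitness.
    return tolerance >= SHOCK_PAIN

population = [random.gauss(SHOCK_PAIN, 10.0) for _ in range(POP_SIZE)]

for gen in range(GENERATIONS + 1):
    survivors = [t for t in population if survives(t)]
    if gen % 50 == 0:
        print(f"gen {gen:3d}: mean tolerance "
              f"{sum(survivors) / len(survivors):6.1f}")
    # Fixed-size next generation: offspring of random survivors, mutated.
    population = [random.gauss(random.choice(survivors), MUTATION_SD)
                  for _ in range(POP_SIZE)]

# Typical output: mean tolerance settles a few units above SHOCK_PAIN and
# stops climbing. Selection only enforces "tolerate the shock or die"; it
# never pushes tolerance toward painlessness, so the residual suffering
# (the pessimistic error) persists indefinitely.
```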

Are there non-AI projects focused on defeating Moloch globally?

Well, this requirement doesn't appear particularly stringent compared to the ability to suppress overpopulation and other dysgenic pressures that would be necessary for such a global social system. It would have to be totalitarian anyway (though not necessarily centralized).

It is also useful to ask whether there are alternative existential opportunities in case super-intelligent AI doesn't turn out to be a thing. For me, that's the most intriguing aspect of the FAI problem: there are plenty of existential risks to go around, but FAI as an existential opportunity is unique.

Are there non-AI projects focused on defeating Moloch globally?
  • Maybe one-shot Prisoner's Dilemma is rare and Moloch doesn't turn out to be a big issue after all (see the sketch after this list)
  • On the other hand, perhaps the FAI solution is just sweeping all the hard problems under the AI-alignment rug and isn't any more viable than engineering a global social system that is stable over millions of years (possibly using human genetic engineering)
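For context on the first bullet, a minimal sketch of why the one-shot Prisoner's Dilemma is the canonical Moloch situation (standard textbook payoffs, my own illustration, not from the thread): defection is a dominant strategy, so rational one-shot players land on the outcome both of them dis-prefer:

```python
# (my_move, their_move) -> my payoff, standard Prisoner's Dilemma values
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move: str) -> str:
    # Pick whichever of my moves maximizes my payoff against their move.
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

for their in "CD":
    print(f"against {their}: best response is {best_response(their)}")

# Defection is the best response to both moves, so one-shot play ends at
# (D, D) with payoff 1 each, even though (C, C) would give each player 3.
# If such situations are rare, Moloch-style coordination failures may be
# a smaller problem than they appear.
```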
The Case for Human Genetic Engineering

That's just a label for the process by which eukaryotes came about; it makes no statement about the likelihood of that process, or am I missing something?

Are there non-AI projects focused on defeating Moloch globally?
Singleton solutions -- there will be no coordination problems if everything is ruled by one royal dynasty / one political party / one recursively self-improving artificial intelligence.

Royal dynasties and political parties are not Singletons by any stretch of the imagination; infighting is Moloch. But even if we assume an immortal, benevolent human dictator, a dictator only exercises power through keys to power and still has to constantly fight off competition for it. Stalin didn't start the Great Purge for shits and giggles; rather, it's a pattern that keeps repeating with rulers throughout history.

The hope with artificial superintelligence is that, due to the wide design space of possible AIs, we can perhaps pick one that is sub-agent stable, free of mesa-optimization, and more powerful by a huge margin than all other agents in the universe combined. If no AI can satisfy these conditions, we are just as doomed.

Primitivism solutions -- all problems will be simple if we make our lifestyle simple.

That's not defeating Moloch, that's surrendering completely and unconditionally to Moloch in its original form of natural selection.

Reported for GPT-spamming
