
It wouldn't really change the overall outcome. What matters most is that the total number of talented people grows exponentially, not the contribution of any specific individual.

I'm not sure they mean that. Perhaps it would be better to specify the actual values you want implemented. But then, of course, people will disagree, including the humans who are actually trying to build AGI.

If you donate to AI alignment research, it doesn't mean that you get to decide which values are loaded. Other people will decide that. You will then be forced to eat the end result, whatever it may look like. Your mistaken assumption is that there is such a thing as "human values" which, if implemented, would produce a world that is good for human beings in general. In reality, people have their own values, and they include terms for "stopping other people from having what they want", "making sure my enemies suffer", "making people regret disagreeing with me", and so on.

AI alignment isn't the only problem. Most people's values are sufficiently unaligned with my own that I find solving AI alignment unattractive as a goal. Even if I had a robust lever to push, such as donating to an AI alignment research org or a lobbying think tank, and it was actually cost-effective, the end result would still be values unaligned with mine being loaded. So there are two steps rather than one: first, you have to make sure the people who create the AI have values aligned with yours, and then you have to make sure the AI has values aligned with the people creating it.

Frankly, this is hopeless from my perspective. Just the first step is impossible. I know this from years of discussions and debates with my fellow human beings, and from observing politics. The most basic litmus test for me is whether they would force fates worse than death on people who explicitly disagree. In other words, whether suffering is mandatory, or whether people will respect other people's right to choose a painless death as an ultima ratio solution for their own selves (not forcing it on others). This is something so basic and trivial, and yet so existential, that I consider it a question where no room for compromise is possible from my perspective. And I observe that, even though public opinion robustly favors some forms of suicide rights, the governments of this world have completely botched the implementation. And that is just one source of disagreement, the one I choose as a litmus test because the morally correct answer is so obvious and non-negotiable from my perspective.

The upside opportunities from the alleged utopias we could achieve if we get the Singularity right also suffer from this problem. I used to think that if you can just make life positive enough, the downside risks might be worth taking. So we could implement (voluntary) hedonic enhancements, experience machines, and pleasure-wireheading offers to make it worthwhile for those people who want it. These could be so good that they would outweigh the risk, and investing in such a future life could be worth it. But of course those technologies are decried as "immoral" too, by the same types of "moralists" who decry suicide rights. To quote former LessWrong user eapache:

...the “stimmer”'s (the person with the brain-stimulating machine) is distinctly repugnant in a way that feels vaguely ethics-related.

...Anything that we do entirely without benefit to others is onanistic and probably wrong.

https://www.lesswrong.com/posts/e2jmYPX7dTtx2NM8w/when-is-it-wrong-to-click-on-a-cow

There is a lot of talk about "moral obligations" and "ethics", and very little about individual liberty and the ability to actually enjoy life to its fullest potential. People, especially the "moral" ones, demand Sacrifices to the Gods, and the immoral ones are just as likely to create hells as utopias. I see no value in loading their values into an AI, even if it could be done correctly and cost-effectively.

Luckily, I don't care about the fate of the world in reflective equilibrium, so I can simply enjoy my life with lesser pleasures and die before AGI takes over. At least this strategy is robust: it doesn't rely on convincing hostile humans (beyond deterring straightforward physical attacks in the near term, which I do with basic weaponry), let alone on solving the AGI problem. I "solve" climate change the same way.

In exceptional circumstances, this might be your wise understanding of their enlightened self-interest even when at cross-purposes to their present desires: e.g. taking your dog to the vet, preventing a suicide.


I just want to point out that your model of their enlightened self-interest can be severely wrong: e.g. some people see suicide as a rational means to avoid fates worse than death (including fates only slightly worse than death, which covers a lot of ordinary human life that you're not supposed to complain about). This is why I value having suicide as an option. And if you give yourself permission to coerce others into losing this option without their consent, you might be making them worse off according to their own enlightened self-interest, while motivating them to hate you at the same time.