
Answer by Andrew Vlahos, Jul 17, 2023

If that were the case, we would be doomed far worse than if alignment were extremely hard. It's only because of all the writing people like Eliezer have done about how hard alignment is and how we are not on track, plus the many examples of total alignment failures already observed in existing AIs (like these or these), that I have any hope for the future at all.

Remember, the majority of humans draw their morality from a religion that says most people are tortured in hell for all eternity (or, in the eastern religions, tortured in a Naraka for a time vastly longer than the actual age of the universe so far, which is basically the same thing). Even atheists who think these religions are false often still believe they have good moral teachings: for example, the writer of the popular webcomic Freefall is an atheist transhumanist libertarian, and his seriously proposed AI alignment method is to teach AIs to support the values taught in human religions.

Even if you avoid this extremely common failure mode, planned societies run for the good of everyone are still absolutely horrible. Almost all utopias in fiction suck even when they go the way the author says they would. In the real world, when the plans hit real human psychology, economics, and so on, the result is invariably disaster. Imagine living in an average kindergarten all day, every day, and that's one of the better options. The life I had was more like Camazotz from A Wrinkle in Time, and it didn't end when school let out.

We also wouldn't be allowed to leave. Already today, for the supposed good of the beneficiaries, runaways are generally forcibly returned to their homes and terminally ill people in constant agony are forced to stay alive. The implication of your idea being true would be that you should kill yourself now while you still have the chance.

The good news is that, instead, only the tiny minority of people able to notice problems right in front of them (even without suffering from them personally) has any chance of achieving successful alignment.

This actually isn't true: nuclear power was already becoming cheaper than coal and so on, and improvements have continued to be available. The problem is actually regulatory: starting around 1970, regulation made even the same technology MUCH more expensive. This was avoidable, and some other countries, like France, managed to keep it cheaper than alternative sources. This talks about it in detail. Here's a graph from the article: [Devanney Figure 7.11: USA unit cost versus capacity. From P. Lang, "Nuclear Power Learning and Deployment Rates: Disruption and Global Benefits Forgone" (2017)]

I'd love to do this, but I would have a hard time paying out because, for reasons beyond my control and caused by other people's irrationality, I'm on SSI (although that might change in a few years). In the US, people on SSI can't save more than $2,000 in liquid assets without losing their benefits, so I can't stake much. I probably wouldn't be able to pay out anyway, because every transaction must be justified to the government; small purchases for entertainment would go through, but I'd have a hard time defending paying $1,000 or whatever on a bet. Also, I've tried to work around this with crypto and lost everything I put in to a scam.

I was thinking about just lying about what I could pay back, but being alienated from what seems to be the only sane and good community on the planet would be a much bigger cost. (Other people try to be sane and good, but outside the rationalist community the lesson I've learned is that "ethics" is what people talk about when they are about to make things worse for everyone.)

Yes! Finally someone gets it. And this isn't just from things that people consider bad, but also from what they consider good. For most of my life, "good" meant what people talk about when they are about to make things worse for everyone, and it's only recently that I've had enough hope to even consider cryonics; before that, I thought that anyone having power over me would reliably cause a situation worse than death, regardless of how good their intentions were.

Eliezer is trying to code in a system of ethics that would remain valid even if the programmers are wrong about important things, and therefore he is one of very few people with even a chance of successfully designing a good AI; almost everyone else is just telling the AI what it should do. That's why I oppose the halt in AI research he wants.

Actually I posted a comment below the article, quoting an Alcor representative's clarification: 

"Most Members submit a Statement of Revival Preferences document to state your expectations upon revival.

Alcor cannot guarantee that it will be followed since it will be many years into the future before you are revived.

I have attached the document for your review." (and the document was very detailed)

So Alcor says that they actually are willing to do this and are trying, although of course they can't guarantee that society won't decide in the future to forcibly revive people against their will anyway.

New update: I can't do this anyway, because I'm getting partial disability (Supplemental Security Income) and Rudi Hoffman said insurance companies won't insure people who receive any disability payments, even if they have a job. I can't even save up for it slowly, because in the US people on SSI are forbidden from saving more than $2,000 in funds (reason: bureaucratic stupidity). Although I can save by putting money into an ABLE account (which has its own bureaucratic complications), the limit is $100,000, which might not be enough if prices adjust for inflation before I have enough. :(

Cryptocurrency won't fix this: I've tried crypto before and got scammed, so it can't be trusted even if the government doesn't catch me trying to evade the law.

Something really frustrating is that the reason I'm even on disability in the first place is because of society's insanity.

An Alcor representative clarified the point:

"Most Members submit a Statement of Revival Preferences document to state your expectations upon revival.

Alcor cannot guarantee that it will be followed since it will be many years into the future before you are revived.

I have attached the document for your review."

So I guess this is already being done.

Actually, I think you did understand my post. What I'm confused about is that I wanted to have the option to specify "I don't want to be brought back unless X and Y"; I asked them, and they said they wouldn't allow me to do this, but you said that they did allow you to do this. I asked a few years ago and got a similar answer.

Could someone else who signed up for Alcor reply to this and say if they got something like that?

But I asked Alcor specifically if something like this would be possible, and they said that it wouldn't be. (CI said the same.)

Not me. However, it made me think of the part in Dr. Seuss where someone watches a bee to make it more productive, someone watches that watcher to make him more productive, someone watches him, and so on.
