Andrew Vlahos


Comments

There are three big problems with this idea. 

First, we don't know how to program an AI to value morality at all. You said "An AI that was programmed to be moral would...", but programming the AI to do even that much is the hard part. Deciding which morals to program in would be easy by comparison.

Second, this wouldn't be a friendly AI. We want an AI that doesn't think it is good to smash Babylonian babies against rocks or torture humans in Hell for all of eternity, as Western religions say, or to torture humans in Naraka for 10^21 years, as the Buddhists say.

Third, you seem to be misunderstanding the probabilities here. Someone once said to consider what the world would be like if Pascal's wager worked, and someone else asked whether they should then also consider the contradictory and falsified parts of Catholicism to be true. I don't think you will get much support for this kind of thing from a group whose leader posted this.

Yes it did; it's clear that my prediction was wrong.

This is true, although I don't think you'll get much interest in it because it's so obvious.

This isn't from Christianity; it actually goes back to hunter-gatherers and had a useful function. See this description of "insulting the meat": https://www.psychologytoday.com/us/blog/freedom-learn/201105/how-hunter-gatherers-maintained-their-egalitarian-ways

(to be clear, I'm not sure whether this still has a useful function or not)

https://waitbutwhy.com/2019/08/giants.html has a pretty convincing (to me) explanation of this. Basically, human psychology is such that people fight at the highest level of conflict available, and when no outside enemies threaten them, they turn on each other internally. For a nice graphical illustration, skip to "Me against my brothers; my brothers and me against my cousins; my cousins, my brothers, and me against strangers."

It would help. However, Twitter makes money from engagement, and no emotion drives engagement better than rage, so they don't want to fix it.

It's like the situation with phone companies. There actually are effective ways to prevent spoofed phone numbers, according to my dad, who works at a telecom company. However, since scammers and telemarketers are by far phone companies' biggest customers, the companies won't make the changes needed to do it.
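(For the curious: the real-world mechanism here is cryptographic caller-ID attestation, as in the STIR/SHAKEN standard deployed in the US, where the originating carrier signs the caller's number and the terminating carrier verifies the signature before displaying it. Below is a minimal sketch of that idea in Python using the cryptography package. The claim format and function names are invented for illustration; the real protocol is much more involved, carrying JWT-style PASSporT tokens in SIP headers.)

```python
# Simplified illustration of signed caller-ID attestation (the idea behind
# STIR/SHAKEN). The claim format and helper names here are invented for this
# sketch, not taken from the actual standard.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# The originating carrier holds a private key; its public key is published
# (via a certificate) so terminating carriers can verify signatures.
carrier_key = ec.generate_private_key(ec.SECP256R1())
carrier_pub = carrier_key.public_key()

def attest_call(private_key, caller: str, callee: str) -> bytes:
    """Originating carrier signs the claim 'this call really comes from caller'."""
    claim = f"{caller}->{callee}".encode()
    return private_key.sign(claim, ec.ECDSA(hashes.SHA256()))

def verify_call(public_key, caller: str, callee: str, signature: bytes) -> bool:
    """Terminating carrier checks the signature before showing the caller ID."""
    claim = f"{caller}->{callee}".encode()
    try:
        public_key.verify(signature, claim, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

sig = attest_call(carrier_key, "+15551234567", "+15557654321")
print(verify_call(carrier_pub, "+15551234567", "+15557654321", sig))  # True
print(verify_call(carrier_pub, "+15550000000", "+15557654321", sig))  # False: a spoofed number fails verification
```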

No. Humans do major harm to each other, often even when they are trying to help. And that's if things go right; an AI based on human behavior has a high chance of causing harm deliberately.

I tried a long time ago and it didn't work.

I'm a tutor, and I've noticed that when students get less sleep they make many more minor mistakes (like dropping a negative sign) and don't learn as well. The effect is strong enough that for a couple of students I started guessing, at the end of sessions, how much sleep they had gotten over the previous couple of days, then asked them, and was almost always right. Also, at one point I tried a significantly reduced sleep schedule with a consistent wakeup time, and my effectiveness collapsed. I soon burned out and had to spend most of a day napping to catch up on sleep.

At this point I do think getting enough sleep is important, and I have a different hypothesis: the amount of sleep people need simply varies from person to person.
