Most lies are bad, but there are circumstances where lying is necessary and does not make truth the enemy: namely, when telling the truth would cause immediate harm.
When people in Germany sheltered Jews during the Holocaust and a Nazi official asked whether they were hiding anyone, the correct response was "no," even though it was a lie. Similarly, when someone doesn't believe in a religion or is gay, but would be cast out of the home or "honor-killed" if their parents found out, they should lie until they have a way to escape.
This post isn't wrong, but I doubt anyone today (except a few crazy people) disagrees with it. Do you think there is a significant risk of a large-scale human eugenics program happening before direct genetic modification becomes cheap enough to make this irrelevant?
Sorry, that was the biggest one I could find.
The problem is that crushing poverty is one source of misery, but not the only one. This implies that very poor countries would see clear benefits from industrializing, but factors like cultural pressure and instability also matter, so once resources are plentiful those other factors dominate and additional industry doesn't change much.
Thanks for your well-explained response! I'll keep your reasons in mind for future posts.
Really? That's your argument? Do you really think people wouldn't have small-talk topics, or understand authority figures, or learn anything, without these classes? If after reading this you still think those courses are essential to learning those skills, let alone that they teach them efficiently, I eagerly await your reply.
I didn't say that she learned nothing of value; I said that the marginal value of reading additional books at this point is close to zero. The first few books were probably different. Also, one incompetent professor isn't close to the only reason I have for opposing affirmative action. Finally, I didn't simply "not think of them as different": when I first heard the argument he was making, I didn't even have the mindset to understand it, which is clear evidence against the claim that "every white person has internalized racism against black people and these are the stages of racism awareness." One paragraph is not my entire mindset.
There is a fourth option: the "safe" set of values can be misaligned with humans' actual values. Some values that humans hold are either missing from the "safe" set entirely, or are represented there by something that does not quite match what it was meant to capture.
As a specific example, consider how a human might have defined values a few centuries ago: "Hmm, what value system should we build our society on? Aha! The seven heavenly virtues! Every utopian society must encourage chastity, temperance, charity, diligence, patience, kindness, and humility!" Then, later, someone tries to add happiness to the list. But since happiness was never part of the constrained optimization function, it becomes a challenge to optimize for it.
This is NOT something that could only happen in the past. If an AI based its values today on what the majority agrees is a good idea, things like marijuana would be banned, and survival would be replaced by "security" or something else slightly wrong.
Just to be clear, the /s tag means sarcasm. I've seen it used elsewhere on the internet, but I'm not sure how commonly it is understood yet.
The first presidential election I could vote in was Hillary vs. Trump. Also, I wasn't in a swing state anyway.
I suuuure felt influential as someone who didn't fall for Us vs. Them /s