Comments

Bohaska · 12d · 10

Was the Renaissance caused by the new elite class, the merchants, focusing more on pleasure and having fun, compared to the lords, who focused more on status and power?

Bohaska · 3mo · 10

Hmm, is there a collection of historical terrorist attacks related to AI?

Bohaska · 3mo · 80

But Manifold adds 20 mana of liquidity per new trader, so the market becomes less elastic over time. The liquidity doesn't stay at 50 mana.
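As a rough illustration of why more liquidity means less elastic prices, here is a minimal sketch using a plain constant-product market maker as a stand-in for Manifold's actual mechanism (which is a weighted variant); the pool sizes, the `price_yes`/`bet_yes` helpers, and the 100-mana bet are made up for illustration, not Manifold's real numbers.

```python
# Minimal sketch: same bet, different liquidity, different price impact.
# Uses a plain constant-product pool (y * n = k), NOT Manifold's exact math.

def price_yes(y: float, n: float) -> float:
    """Implied probability of YES for a constant-product pool."""
    return n / (y + n)

def bet_yes(y: float, n: float, m: float):
    """Bet m mana on YES: m mana mints m YES + m NO shares, the NO shares
    go into the pool, and YES shares are withdrawn until y * n = k again."""
    k = y * n
    n_new = n + m
    y_new = k / n_new          # YES shares left in the pool
    shares = (y + m) - y_new   # YES shares the bettor receives
    return y_new, n_new, shares

for liquidity in (50, 50 + 20 * 100):  # 50 mana vs. 50 + 20 mana per trader * 100 traders
    y = n = float(liquidity)           # symmetric pool, market at 50%
    before = price_yes(y, n)
    y2, n2, _ = bet_yes(y, n, 100)     # a single 100-mana YES bet
    print(f"liquidity={liquidity}: {before:.2f} -> {price_yes(y2, n2):.2f}")
```

Under these assumptions, a 100-mana bet against a 50-mana pool swings the price from 0.50 to about 0.90, while the same bet against a 2,050-mana pool only moves it to about 0.52.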

Bohaska · 4mo · 20

After reading this and your dialogue with lsusr, it seems that Dark Arts arguments are logically consistent, and that the most effective way to rebut them is not to challenge them head-on on the issue itself.

In the comments, jimmy and madasario asked for a way to detect stupid arguments. My current answer is: take the argument to its logical conclusion, check whether that conclusion accurately predicts reality, and if it doesn't, the argument is probably wrong.

For example, you mentioned an argument that the US needs to send troops to the Arctic because Russia has hypersonic missiles capable of a first strike on the US: their range is too short to reach the US from the Russian mainland, but long enough to reach it from the Arctic.

If this were really true, we would see it treated as a national emergency, with the US taking swift action to stop Russia from placing missiles in the Arctic. We don't see that.

Now, for some arguments (e.g. AI risk, cryonics), the truth is more complicated than this, but it’s a good heuristic for telling whether you need to investigate an argument more thoroughly or not.

Bohaska · 4mo · 10

We do agree that suffering is bad, and that if a new clone of you would experience more suffering than happiness, creating it would be bad. But does the suffering really outweigh the happiness they'd gain?

You have experienced suffering in your life. But still, do you prefer to have lived, or do you prefer to not have been born? Your copy will probably give the same answer.

(If your answer is genuinely “I wish I wasn’t born”, then I can understand not wanting to have copies of yourself)

Bohaska · 4mo · 10

I do believe your main point is correct; it's just that most people here already know it.

Bohaska · 4mo · 20

Ethical worth may not be finite, but resources are. If we value ants more, that means we should give more resources to ants, which leaves fewer resources for humans.

From your comments on how you value reducing ant suffering, your framework regarding ants seems to be "don't harm them, but you don't need to help them either": reducing suffering, but not maximising happiness.

Utilitarianism says that we should also value the happiness of all beings with subjective experience and try to make them happier, which raises the question of how to do that if we value animals. I'm a bit confused: how can you value not intentionally making them suffer, yet not also conclude that we should give them resources to make them happier?

Bohaska · 4mo · 20

The reason it's considered good to double the ant population is not necessarily that it'll be good for the existing ants; it's that it'll be good for the new ants created. Likewise, the reason it would be good to create copies of yourself is not that you will be happy, but that your copies will be happy, which is also a good thing.

Yes, under utilitarianism, making more ants is only good if ants have subjective experience, because utilitarianism only values subjective experiences. Though if your model of the world says that ant suffering is bad, doesn't that imply you believe ants have subjective experience?

Bohaska · 4mo · 10

Why would our CoffeeFetcher-1000 stay in the building and continue to fetch us coffee? Why wouldn't it instead leave, after (for example) writing a letter of resignation pointing out that there are starving children in Africa who don't even have clean drinking water, let alone coffee, so it's going to hitchhike/earn its way there, where it can do the most good [or substitute whatever other activity it could do that would do the most good for humanity: fetching coffee at a hospital, maybe]?

Why can't you just build an AI whose goal is to fetch its owner coffee, and not to maximize the good it'll do?

[This comment is no longer endorsed by its author]
Bohaska · 4mo · 20

I think you just got the wrong audience. People assume you're referring to effective altruism charities and aid. The average LessWrong reader already believes that traditional aid is ineffective, so this post is mostly old information. Your criticisms of aid sound a bit ignorant because people pattern-match your post to criticism of charities like GiveDirectly, when studies have shown that GiveDirectly has quite a good cost-benefit ratio.

Your post is accurate, but redundant to EAs. 

 Also, slightly unrelated, but what do you think about EA charities? Have you looked into them? Do you find them better than traditional charities? 
