Jiro

Comments (sorted by newest)
We're Not The Center of the Moral Universe
Jiro2d40

People will pay the same amount to save 200,000 birds as 2,000.

People don't want to save X birds as a terminal goal. They want to save a significant number of birds at a reasonable cost. And the fact that the question asks about saving X birds would, in most contexts outside a poll specifically designed to avoid this, provide information about what number is significant and what number can be saved at a reasonable cost. Since people asked the question with 2,000 and people asked the question with 200,000 are being given what would normally be different information, different answers are expected.

Also, the phrase "anthropic principle" appears nowhere in this post. Having nothing important happen until humans came around is exactly what we should expect: being able to make a graph and being important are strongly correlated, and by the anthropic principle, the graph is always being made "now", so importance will always look as if it is centered on the present.

Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs
Jiro5d20

"You said it's mutually exclusive here and you said the opposite there" is adversarial in the sense that pointing out any sort of error or inconsistency is adversarial. In other words, it is literally adversarial, but it's a good kind of adversarial. You can't point out an inconsistency without being adversarial!

The Tale of the Top-Tier Intellect
Jiro7d60

Why can’t there be a rock-paper-scissors–like structure, where in some position, 12. …Ne4 is good against positional players and bad against tactical players?

I would say that in that situation, the move is bad, but being a positional player after your opponent makes that move is also bad.
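To make the structure of that argument concrete, here is a minimal sketch with invented evaluation numbers (nothing here is from the original post): once 12...Ne4 is on the board, the opponent is free to pick whichever style of reply refutes it, so the move's objective value is its value against the best response, not its value against an opponent who insists on staying positional.

```python
# Hypothetical evaluations (in pawns, from the mover's point of view) of two
# candidate 12th moves against two styles of reply. The numbers are invented
# purely to illustrate the shape of the argument.
evaluations = {
    "12...Ne4":   {"positional reply": +0.5, "tactical reply": -1.0},
    "12...quiet": {"positional reply": +0.1, "tactical reply": +0.1},
}

def objective_value(move: str) -> float:
    # Chess involves no hidden commitment to a style: after the move appears,
    # the opponent can simply play whichever reply is best for them, so the
    # move is only worth its evaluation against the best (worst-for-us) reply.
    return min(evaluations[move].values())

for move in evaluations:
    print(move, objective_value(move))
# 12...Ne4   -> -1.0  (bad: it is refuted by switching to the tactical reply)
# 12...quiet -> +0.1
# Symmetrically, choosing the "positional reply" to 12...Ne4 concedes +0.5 to
# the mover, which is the second half of the comment: playing positionally
# against that move is also a mistake.
```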

The main way I've seen people turn ideologically crazy [Linkpost]
Jiro9d-60
Resolving Newcomb's Problem Perfect Predictor Case
Jiro12d20

A human computer programmer would read your code after you submit it and decide whether your program chooses 1-boxing or 2-boxing.

The human computer programmer is not immune to the halting problem, so he can't always do this.
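A minimal sketch of why that matters, with hypothetical code not taken from the original post: a submitted strategy can make its one-box/two-box choice hinge on whether some other computation ever finishes, and deciding that by reading the source is exactly the halting problem.

```python
# Hypothetical submitted strategy. Whether it ever returns "two-box" depends on
# whether an odd perfect number exists (an open mathematical question, used
# here only as a stand-in for "a computation whose termination cannot be
# settled just by reading the code"). A predictor limited to inspecting the
# source cannot decide, in general, which branch such a program ends up in.

def choose_boxes() -> str:
    n = 3
    while True:
        if sum(d for d in range(1, n) if n % d == 0) == n:
            return "two-box"   # reached only if an odd perfect number exists
        n += 2
    # If no odd perfect number exists, this loop never terminates and the
    # program never makes a choice at all; telling those two cases apart by
    # inspection is what the halting problem forbids in general.
```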

[Thought Experiment] If Human Extinction "Improves the World," Should We Oppose It? Species Bias and the Utilitarian Challenge
Jiro18d20

EA strives for maximization of MEASURABLE and SPECIFIED value(s), but the value dimensions need not be (I’d argue CAN not be) objectively chosen.

That implies that if I want to make things better for Americans specifically, that would be EA.

Reminder: Morality is unsolved
Jiro18d80

Your answer depends heavily on what the rule says you could be swapped with (and on what it even means to be swapped with something of different intelligence, personality, or circumstances: are you still you?). Saying "every human on Earth" isn't getting rid of a nitpick; it's forcing an answer.

The main way I've seen people turn ideologically crazy [Linkpost]
Jiro18d42

Some ideas inherently affect a lot of people. Anything involving government or income redistribution, including Marxism, falls into that category. Anything that's about what all people should do, such as veganism, also does.

You are inherently going to be arguing with a lot of stupid people, or a lot of "super fired up" people, when you argue for ideas that affect that many people. And you should have to. Most people wouldn't be able to correctly and logically articulate why you shouldn't steal their car, let alone anything related to Marxism or veganism, but I would say their objections should still have some bearing on whether you do it.

Reminder: Morality is unsolved
Jiro18d20

Minimum acceptable outcome.

That's a key point that a lot of people are missing when it comes to AI alignment.

Scenarios that people are most worried about, such as the AI killing or enslaving everyone, or making paperclips with no regard for anyone who is made of resources and might be affected by that, are immoral by pretty much any widely used human standard. If the AI disagrees with some humans about morality, but the disagreement stays within the range over which modern Western humans already disagree, the AI is for all practical purposes aligned.

Omelas Is Perfectly Misread
Jiro25d-40

Nobody means literally nobody by "nobody says X".

Posts

9 · Some suggestions (desperate pleas, even) · 8y · 5