Jiro

Comments, sorted by newest
The Tale of the Top-Tier Intellect
Jiro · 1d

Why can’t there be a rock-paper-scissors–like structure, where in some position, 12. …Ne4 is good against positional players and bad against tactical players?

I would say that in that situation, the move is bad, but being a positional player after your opponent makes that move is also bad.
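A toy sketch in Python may make this concrete (the payoff numbers are invented purely for illustration): because chess is zero-sum, a move is evaluated against the opponent's best response, not against a fixed style, so a move that only works against positional players comes out bad.

```python
# Toy model: score of 12...Ne4 for the player who makes it, per opponent style.
# The numbers are hypothetical, only to illustrate the evaluation rule.
payoff = {
    ("Ne4", "positional"): +0.5,   # works against a positional reply
    ("Ne4", "tactical"):   -0.7,   # refuted by a tactical reply
    ("other", "positional"): 0.0,
    ("other", "tactical"):   0.0,
}

def value(move):
    # Zero-sum evaluation: assume the opponent chooses the reply style
    # that minimizes your score (the best response).
    return min(payoff[(move, style)] for style in ("positional", "tactical"))

print(value("Ne4"))    # -0.7: under best response, the move is simply bad
print(value("other"))  #  0.0
```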

The main way I've seen people turn ideologically crazy [Linkpost]
Jiro · 3d [collapsed comment]
Resolving Newcomb's Problem Perfect Predictor Case
Jiro · 6d

A human computer programmer would read your code after you submit it and decide whether your program chooses 1-boxing or 2-boxing.

The human computer programmer is not immune to the halting problem, so he can't always do this.
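A minimal sketch of the standard reduction, with hypothetical names (decides_one_boxing is an assumed analyzer, not a real function): if such an analyzer always worked, it could be used to decide the halting problem.

```python
def decides_one_boxing(chooser):
    """Assumed analyzer: returns True iff chooser() eventually returns "one-box".
    No such total analyzer can exist; stubbed here only for illustration."""
    raise NotImplementedError("cannot exist for arbitrary programs")

def make_chooser(program, arg):
    """Wrap (program, arg) so the wrapper one-boxes iff program(arg) halts."""
    def chooser():
        program(arg)        # diverges exactly when program(arg) diverges
        return "one-box"    # reached only if program(arg) halted
    return chooser

def halts(program, arg):
    # If decides_one_boxing were total and correct, this would decide
    # the halting problem, which is impossible.
    return decides_one_boxing(make_chooser(program, arg))
```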

[Thought Experiment] If Human Extinction "Improves the World," Should We Oppose It? Species Bias and the Utilitarian Challenge
Jiro · 12d

EA strives for maximization of MEASURABLE and SPECIFIED value(s), but the value dimensions need not be (I’d argue CAN not be) objectively chosen.

That implies that if I want to make things better for Americans specifically, that would be EA.

Reminder: Morality is unsolved
Jiro · 12d

Your answer depends heavily on what the rule says you could be swapped with (and on what it even means to be swapped with something of different intelligence, personality, or circumstances -- are you still you?). Saying "every human on Earth" isn't getting rid of a nitpick; it's forcing an answer.

The main way I've seen people turn ideologically crazy [Linkpost]
Jiro · 12d

Some ideas inherently affect a lot of people. Anything involving government or income redistribution, including Marxism, falls into that category. Anything that's about what all people should do, such as veganism, also does.

You are inherently going to be arguing with a lot of stupid people, or a lot of "super fired up" people, when you argue for ideas that affect them. And you should have to. Most people wouldn't be able to correctly and logically articulate why you shouldn't steal their car, let alone articulate anything related to Marxism or veganism, but I would say their objections should still have some bearing on whether you steal it.

Reminder: Morality is unsolved
Jiro · 12d

Minimum acceptable outcome.

That's a key point that a lot of people are missing when it comes to AI alignment.

The scenarios people are most worried about, such as the AI killing or enslaving everyone, or making paperclips with disregard for anyone who is made of resources and may be impacted by that, are immoral by pretty much any widely used human standard. If the AI disagrees with some humans about morality, but the disagreement stays within the range over which modern Western humans already disagree with each other, the AI is for all practical purposes aligned.

Omelas Is Perfectly Misread
Jiro · 19d

When people say "nobody says X", they don't mean literally nobody.

Omelas Is Perfectly Misread
Jiro · 20d

I didn't mean that there's literally no such thing whatsoever. But "be selfish and ignore the greater good" is constantly derided and rarely even accepted, let alone held up as a moral ideal. The whole reason the rationalist community is tied to EA is its rejection of selfishness.

Obviously self-help books are an exception, in the same way that pro-murder books are an exception to "murder isn't widely accepted".

The Mom Test for AI Extinction Scenarios
Jiro · 20d

It would have given my mom the wrong impression about AI extinction risk (that it sounds crazy)

"It sounds crazy" is a correct impression, by definition. I assume you mean "the wrong impression (that it is crazy)".

But there's a fine line between "I won't mention this because people will get the wrong impression (that it's crazy)" and "I won't mention this because people will get the wrong impression (that it's false)". The former is a subset of the latter; are you going to do the latter and conceal all information that might call your ideas into doubt?

(One answer might be "well, I won't conceal information that would lead to a legitimate disagreement based on unflawed facts and reasoning. Thinking I'm crazy is not such a disagreement". But I see problems with this. If you believe in X, you by definition think that all disagreement with X is flawed, so this doesn't restrict you at all.)

Posts

Some suggestions (desperate pleas, even) · 8y