LESSWRONG

Duncan Sabien (Inactive)

Comments
Banning Said Achmiz (and broader thoughts on moderation)
Duncan Sabien (Inactive) · 7d

I volunteer as tribute

Banning Said Achmiz (and broader thoughts on moderation)
Duncan Sabien (Inactive) · 7d

I was going to type a longer comment for the people who are observing this interaction, but I think the phrase "case in point" is superior to what I originally drafted.

Banning Said Achmiz (and broader thoughts on moderation)
Duncan Sabien (Inactive) · 8d

(It was me, and in the place where I encouraged DrShiny to come here and repeat what they'd already said unprompted, I also offered $5 to anybody who disagreed with the Said ban to please come and leave that comment as well.)

Banning Said Achmiz (and broader thoughts on moderation)
Duncan Sabien (Inactive) · 8d

Just noting that

one should object to tendentious and question-begging formulations, to sneaking in connotations, and to presuming, in an unjustified way, that your view is correct and that any disagreement comes merely from your interlocutor having failed to understand your obviously correct view

is a strong argument for objecting to the median and modal Said comment.

Banning Said Achmiz (and broader thoughts on moderation)
Duncan Sabien (Inactive) · 8d

But I think a lot of Said's confusions would actually make more sense to Said if he came to the realization that he's odd, actually, and that the way he uses words is quite nonstandard, and that many of the things which baffle and confuse him are not, in fact, fundamentally baffling or confusing but rather make sense to many non-Said people.

(My own writing, from here.)

The Problem
Duncan Sabien (Inactive) · 11d

Separately, I will note (shifting the (loose) analogy a little) that if someone were to propose "hey, why don't we put ourselves in the position of wolves circa 20,000 years ago?  Like, it's actually fine to end up corralled and controlled and mutated according to the whims of a higher power, away from our present values; this is actually not a bad outcome at all; we should definitely build a machine that does this to us,"

they would be rightly squinted at.  

Like, sometimes one person is like "I'm pretty sure it'll kill everyone!" and another person responds "nuh-uh!  It'll just take the lightcone and the vast majority of all the resources and keep a tiny token population alive under dubious circumstances!" as if this is, like, sufficiently better to be considered good, and to have meaningfully dismissed the original concern.

It is better in an absolute sense, but again: "c'mon, man."  There's a missing mood in being like "yeah, it's only going to be as bad as what happened to monkeys!" as if that's anything other than a catastrophe.

(And again: it isn't likely to only be as bad as what happened to monkeys.)

(But even if it were, wolves of 20,000 years ago, if you could contrive to ask them, would not endorse the present state of wolves-and-dogs today.  They would not choose that future.  Anyone who wants to impose an analogous future on humanity is not a friend, from the perspective of humanity's values.  Being at all enthusiastic about that outcome feels like a cope, or something.)

The Problem
Duncan Sabien (Inactive) · 11d

No, the edit completely fails to address or incorporate

You have to be careful with the metaphor, because it can lead people to erroneously assume that an AI would be at least that nice, which is not at all obvious or likely for various reasons

...and now I'm more confused about what's going on.  Like, I'm not sure how you missed (twice) the explicitly stated point that there is an important disanalogy here, and that the example given was meant more as an intuition pump.  Instead you seem to be sort of like "yeah, see, the analogy means that at least some humans would not die!" which, um.  No.  It would imply that if the analogy were tight, but I explicitly noted that it isn't, and then highlighted that note when you missed it the first time.

(I probably won't check in on this again; it feels doomy given that you seem to have genuinely expected your edit to improve things.)

The Problem
Duncan Sabien (Inactive) · 11d

I disagree with your "obviously," which seems both wrong and dismissive; it also seems like you skipped over the sentence that was written specifically in the hopes of preventing such a comment:

You have to be careful with the metaphor, because it can lead people to erroneously assume that an AI would be at least that nice, which is not at all obvious or likely for various reasons

(Like, c'mon, man.)

The Problem
Duncan Sabien (Inactive) · 12d

Why would modern technology-using humans 'want' to destroy the habitats of the monkeys and apes that are the closest thing they still have to a living ancestor in the first place?  Don't we feel gratitude and warmth and empathy and care-for-the-monkeys'-values such that we're willing to make small sacrifices on their behalf?

(Spoilers: no, not in the vast majority of cases. :/ )

The answer is "we didn't want to destroy their habitats, in the sense of actively desiring it, but we had better things to do with the land and the resources, according to our values, and we didn't let the needs of the monkeys and apes slow us down even the slightest bit until we'd already taken like 96% of everything and even then preservation and conservation were and remain hugely contentious."

You have to be careful with the metaphor, because it can lead people to erroneously assume that an AI would be at least that nice, which is not at all obvious or likely for various reasons (that you can read about in the book when it comes out in September!).  But the thing that justifies treating catastrophic outcomes as the default is that catastrophic outcomes are the default.  There are rounds-to-zero examples of things that are 10-10000x smarter than Other Things cooperating with those Other Things' hopes and dreams and goals and values.  That humans do this at all is part of our weirdness, and worth celebrating, but we're not taking seriously the challenge involved in robustly installing such a virtue into a thing that will then outstrip us in every possible way.  We don't even possess this virtue ourselves to a degree sufficient that an ant or a squirrel standing between a human and something that human wants should feel no anxiety.

Goodhart's Imperius
Duncan Sabien (Inactive) · 14d

The problem is, evolution generally doesn't build in large buffers.  Human brains are "pretty functional" in the sense that they just barely managed to be adequate to the challenges that we faced in the ancestral environment.  Now that we are radically changing that environment, the baseline "barely adequate" doesn't have to degrade very much at all before we have concerningly high rates of stuff like obesity, depression, schizophrenia, etc.

(There are other larger problems, but this is a first gentle gesture in the direction of "I think your point is sound but still not reassuring."  I agree you could productively make a list of proxies that are still working versus ones that aren't holding up in the modern era.)

Posts

293 · Make More Grayspaces · 1mo · 65 comments
272 · Truth or Dare · 3mo · 58 comments
80 · Review: Conor Moreton's "Civilization & Cooperation" · 1y · 8 comments
362 · Social Dark Matter · 2y · 128 comments
211 · Killing Socrates · 2y · 146 comments
92 · Exposure to Lizardman is Lethal · 2y · 97 comments
40 · Repairing the Effort Asymmetry · 2y · 11 comments
110 · A Way To Be Okay · 3y · 38 comments
256 · You Don't Exist, Duncan · 3y · 107 comments
289 · Basics of Rationalist Discourse · 3y · 193 comments
Wikitag Contributions

NSFW · 3y
A beginner's guide to explaining things · 9y · (+218/-83)
A beginner's guide to explaining things · 9y · (+87)
A beginner's guide to explaining things · 9y · (+4/-5)
A beginner's guide to explaining things · 9y · (+8/-4)
Audience-centric Explanations · 9y · (+42/-80)
Audience-centric Explanations · 9y · (+53/-15)
Audience-centric Explanations · 9y · (+58)
Audience-centric Explanations · 9y · (+18/-29)
Audience-centric Explanations · 9y