All of PoignardAzur's Comments + Replies

Reneging Prosocially

For what it's worth, my own experience interacting with Duncan is that, when he made a commitment and then couldn't meet it and apologized about it, the way he did it really helped me trust him and trust that he was trying to be a good friend.

I agree that you shouldn't talk about it using points and tit-for-tat language (and I think Duncan agrees too? At least he's better at being informal than the article suggests).

But overall, yeah, I agree with the article. The "illusion that friendship is unconditional" works until it doesn't. Or to put it in nerdy ter... (read more)

Reneging Prosocially

The phrase "have your cake and eat it, too" always confused younger-Duncan; I think it’s clearer in its original form "eat your cake and have it, too," or the less poetic "eat your cake and yet still have your uneaten cake." 

The French version is better: "to have the butter and the money from the butter".

How factories were made safe

Not surprised at all. My father is a roofer and mostly works with African immigrants, and to hear him tell it, the biggest difficulty regarding workplace safety is getting them to wear the damn protective gear (mostly hard hats and gloves), for the reasons outlined in the article.

(From what I've read in news articles and the like, the other big problem in the sector is that they'll hire a lot of undocumented immigrants who lie about how qualified they are to get the job, which is another version of the same "workers will break all the safety rules written to protect them if the economic pressure is strong enough" issue.)

Can you control the past?

This feels like the kind of philosophical pondering that only makes sense in a world of perfect spherical cows, but immediately falls apart once you plug in realistic, real-world parameters.

Like... to go back to Newcomb's problem... perfect oracles that can predict the future obviously don't exist. I mean, I know the author knows that. But I think we disagree on how relevant that is?

Discussions of Newcomb's problem usually handwave the oracle problem away; e.g. "Omega’s predictions are almost always right"... but the "almost" is pulling a l... (read more)
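
(To make that concrete, here's a minimal sketch, assuming the standard $1,000 / $1,000,000 payoffs and writing p for the probability that the prediction matches what you actually do; it doesn't settle the causal-vs-evidential debate, it just shows how the expected values move with that "almost".)

```python
# Minimal sketch: expected payoffs in Newcomb's problem as a function of
# predictor accuracy p (the probability that the prediction matches your
# actual choice). Standard payoffs assumed: $1,000 in the transparent box,
# $1,000,000 in the opaque box iff the predictor expected one-boxing.

def one_box_ev(p: float) -> float:
    # With probability p the predictor foresaw one-boxing, so the opaque box is full.
    return p * 1_000_000

def two_box_ev(p: float) -> float:
    # With probability (1 - p) the predictor wrongly expected one-boxing,
    # so you get the full opaque box plus the guaranteed $1,000.
    return (1 - p) * 1_000_000 + 1_000

for p in (0.5, 0.5005, 0.6, 0.9, 0.99):
    print(f"p={p:<6}  one-box EV=${one_box_ev(p):>11,.0f}  two-box EV=${two_box_ev(p):>11,.0f}")
```

How p should be interpreted for real, non-oracle predictors is exactly the part that usually gets handwaved.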

Can you control the past?

Agreed.

I think this type of reflection is the decision theory equivalent of calculating the perfect launch sequence in Kerbal Space Program. If you sink enough time into it, you can probably achieve it, but by then you'll have loooong passed the point of diminishing returns, and very little of what you've learned will be applicable in the real world, because you've spent all your energy optimizing strategies that immediately fall apart the second any uncertainty or fuzziness is introduced into your simulation.

Garrabrant and Shah on human modeling in AGI

I read the beginning of the debate and skimmed the end, so I might have missed something, but it really feels like it's missing a discussion about the economic incentives of AI developers.

Like, the debaters talk about saving the world, and that's cool, but... Let's assume "saving the world" (defined as "halting aging and world hunger, having reliable cryogenics, colonizing other planets, and having enough food and resources to accommodate the growing pool of humans") takes 200 years. After that, you get a post-scarcity society, but in the meantime (or at l... (read more)

rohinmshah (5mo): Both Scott and I are aware of the existence of these incentives, and probably roughly agree on their strength, so I didn't bring them up. (The purpose of this debate was for Scott and me to resolve our disagreements, not to present the relevant considerations / argue for our particular sides / explain things for an audience.) I'd imagine Scott would say something along the lines of "yeah, the incentives are there and hard to overcome, but we are doomed-by-default so any plan is going to involve some hard step".

Another (outer) alignment failure story

Yeah, that was my initial reaction as well.

Modern technologies are getting increasingly complicated... but when you get down to it, a car is just a box with wheels and a combustion engine. There aren't that many ways for an outcome-perception-driven AI to go "oops, I accidentally concealed a human-killing machine gun inside the steering wheel!", especially if the AI has to subcontract to independent suppliers for parts.

Gerald Monroe (8mo): Moreover, tight constraints. Such a machine gun adds weight and cost without benefit to the AI's reward heuristic. A far more likely problem is that it removes structure somewhere, because no collision test needs that material to pass. But the missing structure causes fatalities in crashes a conservatively designed vehicle would survive, or long-term durability problems. Human-designed products have exactly this happen as well, however. The difference is that you could patch in another reward heuristic component and have another design in the prototyping phase that same week. It would let you move fast, break things, and fix them far faster than human organizations can.

Seven Years of Spaced Repetition Software in the Classroom

It could be fun to introduce some of these to novices and make it part of the language A classroom slang -- a kind of introduction to thinking in language B.

There's a kind of slang like what you describe on r/france, where people will intentionally use idiomatic English expressions translated word-for-word into French in nonsensical ways.

E.g. people will say "je suis hors de la boucle" (I'm out of the loop), even though that sounds incomprehensible to someone who doesn't know the English idiom.

Some people get really annoyed about that pseudo-slang, though.

Book review: The Geography of Thought

I'm really not convinced by this review, the excerpts linked from the book, or the theory-crafting in the comment section.

I'm reading a lot of just-so stories, but not a lot of evidence, and what evidence there is seems to come from exactly the kind of papers that would fall prey to the replication crisis.

Motive Ambiguity

What is more annoying is when the people involved do not seem to appreciate the burned value as a bad thing and instead "romanticize" it

Nailed it.

I think people on this forum all share some variation of the same experience, where they observe that everyone around them is used to doing something inefficient, get told by their peers "no, don't do the efficient thing, people will think less of you", do the efficient thing anyway, and find that their life gets straightforwardly easier and nobody notices or cares.

This is especially the case for social norms, when you can get y... (read more)

To listen well, get curious

Yup, I came here to say this.

These days I'm often talking with Duncan Sabien, and sometimes I complain about my problems.

When I do, I almost never expect Duncan to give me solutions (though he sometimes does, because he's a smart person and a good listener). I mostly do it to vent, and to put some words on ideas and grievances I've been stewing on for a while.

I'm going to be a little elitist and say this: the smarter people are, the less you can help them by giving them advice. If people aren't self-actualized, and don't have the skill to think through the... (read more)