How factories were made safe

Not surprised at all. My father is a roofer and mostly works with African immigrants, and to hear him tell it, the biggest difficulty regarding workplace safety is getting them to wear the damn protective gear (mostly hard hats and gloves), for the reasons outlined in the article.

(From what I've heard from journal articles and the like, the other big problem in the sector is that they'll hire a lot of undocumented immigrants who lie about their qualifications to get the job, which is another version of the same "workers will break all the safety rules written to protect them if the economic pressure is strong enough" issue.)

Can you control the past?

This feels like the kind of philosophical pondering that only makes sense in a world of perfect spherical cows, but immediately falls apart once you plug in realistic real-world parameters.

Like... to go back to Newcomb's problem... perfect oracles that can predict the future obviously don't exist. I mean, I know the author knows that. But I think we disagree on how relevant that is?

Discussions of Newcomb's problem usually handwave the oracle problem away; eg "Omega's predictions are almost always right"... but the "almost" is pulling a lot of weight in that sentence. When is Omega wrong? How does it make its decisions? Is it analyzing your atoms? Even if it is, it feels like it should only be able to get an analysis of your personality and how likely you are to pick one or two boxes, not a perfect prediction of which you'll actually pick (indeed, at the time it gives you a choice, it's perfectly possible that the decision you'll make is still fundamentally random, and you might make either choice depending on factors Omega can't possibly control).
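For what it's worth, the standard payoffs make the "how often is Omega wrong" question quantifiable. A quick sketch, assuming the usual payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box iff Omega predicted one-boxing) and collapsing Omega's reliability into a single accuracy probability, which is itself a spherical-cow simplification:

```python
# Expected value of one-boxing vs two-boxing against an imperfect Omega.
# Assumed payoffs: the opaque box holds $1,000,000 iff Omega predicted
# one-boxing; the transparent box always holds $1,000.

def expected_values(accuracy):
    """Return (EV of one-boxing, EV of two-boxing) when Omega's
    prediction is correct with probability `accuracy`."""
    # One-boxing pays off only when Omega correctly predicted it.
    ev_one_box = accuracy * 1_000_000
    # Two-boxing gets the $1,000,000 only when Omega got it wrong,
    # plus the guaranteed $1,000.
    ev_two_box = (1 - accuracy) * 1_000_000 + 1_000
    return ev_one_box, ev_two_box

for p in (0.5, 0.51, 0.9, 0.99):
    one, two = expected_values(p)
    print(f"accuracy={p}: one-box EV={one:,.0f}, two-box EV={two:,.0f}")
```

Under these assumptions, one-boxing only beats two-boxing once accuracy clears roughly 50.05%, so "almost always right" matters far less than the mere fact of being better than a coin flip. How Omega achieves even that (and whether it can at all, given fundamentally random decisions) is exactly the handwaved part.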

I think there are interesting discussions to be had about eg the value of honor, of sticking to precommitments even when the information you have suggests it's better for you to betray them, etc. And on the other hand, there's value in discussing the fact that, in the real world, there are a lot of situations where pretending to have honor is a perfectly good substitute for actually having honor, and wannabe-Omegas aren't quite able to tell the difference.

But you have to get out of the realm of spherical cows to have those discussions.

Can you control the past?


I think this type of reflection is the decision theory equivalent of calculating the perfect launch sequence in Kerbal Space Program. If you sink enough time into it, you can probably achieve it, but by then you'll have loooong passed the point of diminishing returns, and very little of what you've learned will be applicable in the real world, because you've spent all your energy optimizing strategies that immediately fall apart the second any uncertainty or fuzziness is introduced into your simulation.

Garrabrant and Shah on human modeling in AGI

I read the beginning of the debate and skimmed the end, so I might have missed something, but it really feels like it's missing a discussion about the economic incentives of AI developers.

Like, the debaters talk about saving the world, and that's cool, but... Let's assume "saving the world" (defined as "halting aging and world hunger, having reliable cryogenics, colonizing other planets, and having enough food and resources to accommodate the growing pool of humans") takes 200 years. After that, you get a post-scarcity society, but in the meantime (or at least for the first 100 years), you still have a capitalist society with competing interests driven by profit and self-preservation.

During those 100 years, what are the incentives for AI companies to use the methods discussed here, instead of "whatever the hell gives the best results fastest for the cheapest price" (what the debaters call "the default path")?

Especially since the differences between training methods are extremely technical and nuanced, so any regulating entity (governments, the EU, even Google's safety team) would have a hard time establishing specific rules.

Another (outer) alignment failure story

Yeah, that was my initial reaction as well.

Modern technologies are getting increasingly complicated... but when you get down to it, a car is just a box with wheels and a combustion engine. There aren't that many ways for an outcome-perception-driven AI to go "oops, I accidentally concealed a human-killing machine gun inside the steering wheel!", especially if the AI has to subcontract to independent suppliers for parts.

Seven Years of Spaced Repetition Software in the Classroom

It could be fun to introduce some of these to novices and make it part of the language A classroom slang -- a kind of introduction to thinking in language B.

There's a kind of slang like what you describe on r/france, where people will intentionally use idiomatic English expressions translated word-for-word into French in nonsensical ways.

Eg people will say "je suis hors de la boucle" ("I'm out of the loop") even though that sounds incomprehensible to someone who doesn't know the English idiom.

Some people get really annoyed about that pseudo-slang, though.

Book review: The Geography of Thought

I'm really not convinced by this review, the excerpts linked from the book, or the theory-crafting in the comment section.

I'm reading a lot of just-so stories, but not a lot of evidence, and what evidence there is seems like exactly the kind of papers that would fall prey to the replication crisis.

Motive Ambiguity

What is more annoying is when the people involved do not seem to appreciate the burned value as a bad thing and instead "romanticize" it.

Nailed it.

I think people on this forum all share some variation of the same experience, where they observe that everyone around them is used to doing something inefficient, get told by their peers "no, don't do the efficient thing, people will think less of you", do the efficient thing anyway, and find that their life gets straightforwardly easier and nobody notices or cares.

This is especially the case for social norms, when you can get your social circle to buy in. Eg people have really silly ideas about romance and gender roles and patriarchal ideals (eg the girl has to shave and put on makeup, the guy has to pay for everything, everyone must be coy and never communicate), but if you and the person you date agree to communicate openly and respect each other and don't do that crap... well, in my limited experience, it's just easier and more fun?

My point is, it's amazing how much value you can not-burn when you stop romanticizing burning value.

To listen well, get curious

Yup, I came here to say this.

These days I'm often talking with Duncan Sabien, and sometimes I complain about my problems.

When I do, I almost never expect Duncan to give me solutions (though he sometimes does, because he's a smart person and a good listener). I mostly do it to vent, and to put some words on ideas and grievances I've been stewing on for a while.

I'm going to be a little elitist and say this: the smarter people are, the less you can help them by giving them advice. If people aren't self-actualized, and don't have the skill to think through their problems, then, sure, you can listen to them for a while and give them a totally different approach or a new trick that they didn't think of. But there's also a category of people who, by the time they come to you to vent about their problems, have already put enough thought into them that they'll have considered anything you can think of after a 5-minute conversation.

(though of course you might have domain-specific knowledge or they might have overlooked something obvious or they might need support to not pick the easy-but-wrong choice, etc)

To paraphrase Scott Alexander, we should cultivate the skill of appreciating the phatic. Obviously everything in the article is valid and insightful and being curious is absolutely a skill to cultivate, especially in rational communities. But being phatic is a good default.