I think this is more of a disagreement on messaging than a disagreement on facts.

I don't see anyone disputing the claim that "the AI is about as unpredictable as weather," but it's quite a stretch to summarize that as "we have no idea how the AI works."

I understand that abbreviated and exaggerated messaging can be optimal for public communication, but I don't think this post draws a clear enough distinction between direct in-group claims and examples of public messaging.

I would break this into three parts, to avoid misunderstandings from poorly contextualized language:
1. What is our level of understanding of AIs?
2. What is the general public's expectation of our level of understanding?
3. What's the best messaging to resolve this probable overestimation?

I think it’s important to recognize that there are in fact nontrivial things that can be done with increased aural discernment. You might notice subtleties of emotional expression, or small choices in musical interpretation that aren’t perceptible to a novice listener.

If you assume there are only trivial benefits to increased aural discernment, then of course the discussion falls apart: the drawbacks of finding amateur music off-putting aren’t trivial, so aural discernment would just be a net loss in utility.

Sure, there may be larger, more obvious benefits to wearing glasses than to training your musical ear, but it’s arguably just a matter of degree. To me, that’s the point of the parallel: rectifying impaired fidelity of perception and increasing fidelity beyond the typical level are both examples of a difficult-to-calculate pattern of utility changes.

What's supposed to happen with this sentence, cognitively speaking, is that you read the sentence and slot it into a preexisting model of how deterrence and red lines work.

I think it’s a mistake to qualify this interpretation as an example of following decoupling norms. Deterrence and red lines aren’t mentioned in Eliezer’s comment at all; they’re just extra context that you’ve decided to fill in. That’s generally what people do when they read things under contextualizing norms. Interpreting this comment as a suggestion to consider initiating a nuclear exchange is also a contextualized reading, just with a different context filled in.

A highly-decoupled reading, by contrast, would simply interpret “some risk of nuclear exchange” as, well, some unquantified/unspecified risk.

This post doesn't convince me away from the "tautological" objection mentioned at the end.

Caplan is presenting a contrarian position: "Standard signaling models are incorrect. The true cost of making friends is astronomically lower than what most people think it is." It's an intriguing take, but what are those actual and perceived costs?

To me, it's borderline ridiculous to simplify the costs down to "impulse control." Making friends is often difficult, complicated, and at times genuinely risky. It takes so much more effort and commitment than simply saying hello with a smile. Even for the "sincerely friendly" people who have good social intuitions, there's clearly an opportunity cost to having friends.

I'd be interested in an analysis of the perceived vs actual costs of friendship, but I don't see any of that explored in detail here. Even the core claim that "people are impulsive" doesn't seem particularly well supported.
 

First of all, wow, great read! The non-technical explanations in the first half made it easy to map those concepts to the technical notation in the second half.

The hardest part for me to understand was the idea of absolute bits of optimization. I resolved my confusion after some closer examination, but I’d suggest two changes that would strengthen the explanation:

1. The animated graph showing increasing bits of optimization has a very unfortunate zoom level. This, combined with the fat left side of the graph, gave me the false impression that the area was simply shrinking by a third of its total as the optimization bits counted up to a maximum of ~3 (2.93 to be exact). I eventually realized there’s presumably a long tail extending to the right, past the range we can see, but that wasn’t clear from the start. Even knowing this, it’s still hard to get my brain to parse the area as being progressively cut in half. I would hesitate to change the shape of the graph (it’s important to understand that it’s the area being halved, not just horizontal progress), but I think zooming out the domain would make it feel much more like “approaching infinite optimization.”

2. This is a minor point, but I think the “in terms of” in this sentence is needlessly vague:

Then, the absolute optimization of a specific state is in terms of the probability mass above it, that is, to the right of it on the x-axis.

It took a bit for me to understand, “oh, bits of optimization is just calculated as the inverse of that probability mass.” Maybe it would be clearer to say that from the start? Not sure what the exact math terms are, but something like:

Then, the absolute optimization of a state is [the proportional inverse of] the probability mass above it, that is, to the right of it on the x-axis.
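
(For what it’s worth, here’s the relationship as I now understand it, written out explicitly. The symbols below are my own placeholders rather than the post’s actual notation, and I’m assuming the log-of-inverse form that makes the “area halves with each additional bit” picture from point 1 work out.)

% Probability mass above a specific state x_0, i.e. the area to its right:
P_{\text{above}}(x_0) = \int_{x_0}^{\infty} p(x)\,dx

% Absolute optimization of x_0, in bits: the log of the inverse of that mass.
\mathrm{Opt}(x_0) = \log_2 \frac{1}{P_{\text{above}}(x_0)} = -\log_2 P_{\text{above}}(x_0)

% Each extra bit halves the remaining mass, so the ~2.93-bit maximum from point 1
% would correspond to a remaining mass of 2^{-2.93}, roughly 13% of the total.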

Yeah, I think there’s an important distinction to make regarding the intent of the speaker making this remark. Sometimes it’s intended as evidence that the argument is wrong, and yes, that’s a dirty rhetorical trick. But as you’ve listed here, there are several motivations to give this response as a criticism of someone’s genuinely bad/harmful persuasion tactics.

So I guess when hearing this, it’s worth taking a moment to check whether the speaker is using fallacious rhetoric, or attempting to give helpful social advice. (And then, accordingly, “go for the throat,” or kindly thank them for the presentation feedback.)

Agreed.

It seems like the moral of this parable should be “don’t make foolish, incoherent hedges” — however, the final explanations given by Eternidad don’t touch on this at all. I would be more satisfied by this parable if the concluding explanations focused on the problems of naive data aggregation.

The “three reasons” given are useful ideas, but the king’s decision in this story is foolish even if this scenario were all three: a closed game, an iterated game, and only a betting situation. (Just imagine betting on a hundred coin flips that the coin will land on its edge every time.)

Thanks for writing this, I found this dichotomy very interesting to think about!

You list “lack of commitment” as a drawback to entertaining-type relationships. This seems reasonable, but it makes me wonder: how much does this entertaining-to-building scale track low-to-high commitment? I think an idea of commitment level (be it commitment to fidelity, family, some other project, etc.) would describe a similar dynamic and smooth out some odd edge cases.

For instance, I would expect a lot of successful relationships to start out more on the entertaining side of the spectrum, and then shift toward building as the honeymoon phase fades and they decide to have kids (or start some other big project). Modeling this shift takes some extra explaining if these are types of people, but with a commitment model, it’s much more self-evident that most lasting relationships will follow a low-to-high commitment gradient as time goes on.

Also, consider a couple who doesn’t work together, doesn’t want kids, but just very much enjoys each other’s company. Actually, enjoying each other’s company is such a high priority that they strongly commit to a lifelong partnership, and pour significant resources into strengthening this particular relationship. This is a hypothetical couple designed as an edge case, but I don’t think it’s unrealistic. (It’s actually not far from the relationship I’m currently in.) By the building-entertaining model, this couple would qualify as entertaining, but then break most of the expectations that go along with that. With a commitment model, though, they’re clearly high commitment, and meet the expectations that go along with it (the attributes mostly shared with building couples).

Overall, I’m skeptical that this is a clear framing. It seems like most of these attributes are just downstream of relationships being low-commitment or high-commitment.