Said Achmiz

Frame Control

I think that the first red flag, and the first anti-red-flag, are both exactly backwards.

… here’s a non-exhaustive list of some frame control symptoms …

  1. They do not demonstrate vulnerability in conversation, or if they do it somehow processes as still invulnerable. They don’t laugh nervously, don’t give tiny signals that they are malleable and interested in conforming to your opinion or worldview.

This seems good, actually? Why should anyone be interested in conforming to your opinion or worldview? What’s so great about your opinion? (General-‘your’, I mean; I am not referring to OP specifically.) It seems to me that the baseline assumption should be that no one is interested in conforming to your opinion or worldview, unless (and this ought to be expected to be unusual!) you manage to impress them considerably (and even then, such conformance should not be immediate, but should come after much consideration, to take place at leisure, not in the actual moment of conversation!).

More generally: attempting to think deeply and without restriction about the ideas of others, and to change our minds, while actively being subject to social pressures in a live interpersonal setting, is extremely failure-prone and almost always unnecessary. Such a situation is sometimes inescapable, but usually it’s entirely avoidable.

To the extent that this post encourages doing such things, it is encouraging exactly the opposite of rationalist best practices.

(For a related point, see this Schopenhauer quote.)

I once had a long talk with a very smart man who was widely perceived as deeply compassionate and kind, but long after the talk I realized at no point in the conversation had he indicated being impacted by my ideas, despite there being multiple opportunities for him to make at the very least small acknowledgements that I was onto something good.

Why is this the slightest bit surprising, or at all a bad sign or “red flag”? Why should this man have been impacted by your ideas? Aren’t you making some wildly improbable assumptions about how impressive and “impactful” your ideas were/are? (And even if they were impactful, rightly this man ought to have delayed any “impact” until due consideration, as noted above.)

Likewise, why do you assume that he had any good reason to think that you were onto something good? Maybe you weren’t onto anything good? Most people usually aren’t onto anything good, so this, again, ought to be the default assumption.

It took me a long time to realize this because he’d started out the conversation by framing me as special, telling me it was unusual to find someone else who had the ideas I did, that I must have taken a different path.

This seems not at all to contradict the preceding. “Unusual” and “different” do not mean “good” or “worthy of consideration or respect” or even “makes any sense whatsoever”.

So if frame control looks so similar to just being a normal person, what are some signs that someone isn’t doing frame control? Keeping in mind that these are pointers, not absolute, and not doing these doesn’t mean someone is doing frame control.

  1. They give you power over them, like indications that they want your approval or unconditional support in areas you are superior to them. They signal to you that they are vulnerable to you.

This seems bad, actually. It seems to me like a sign of insecurity and unjustified submission. I, for one, have no interest in having my conversation partners signal that they’re vulnerable to me (nor have I any interest in signaling to them that I’m vulnerable).

Rather, it is right and proper that two people should meet as equals—each willing to defend his view, each confident in his own reason and judgment; open to the possibility of his interlocutor having interesting things to say, but expecting to have this possibility prove itself, and not assuming it. In other words: “Speak, and I will listen; you have no special power over me, nor I over you; our minds are free, and we face each other with unfettered reason.”

Kriorus update: full bodies patients were moved to the new location in Tver

Viewing the photos requires a Facebook account. Are they available anywhere else?

A Defense of Functional Decision Theory

We know the error rate of the predictor, so this point is moot.

How do we know it? If the predictor is malevolent, then it can “err” as much as it wants.

A Defense of Functional Decision Theory

Why? Why is it not about which action to take?

It’s obvious right-boxing gives the most utility in this specific scenario only, but that’s not what it’s about.

I reject this. If Right-boxing gives the most utility in this specific scenario, then you should Right-box in this specific scenario. Because that’s the scenario that—by construction—is actually happening to you.

In other scenarios, perhaps you should do other things. But in this scenario, Right is the right answer.

A Defense of Functional Decision Theory

If you commit to taking Left, then the predictor, if malevolent, can “mistakenly” “predict” that you’ll take Right, making you burn to death. Just like in the given scenario: “Whoops, a mistaken prediction! How unfortunate and improbable! Guess you have no choice but to kill yourself now, how sad…”

There absolutely is a better strategy: don’t knowingly choose to burn to death.

[linkpost] Why Going to the Doctor Sucks (WaitButWhy)

Hmm.

Now, this may be a stupid question, but I can’t seem to find the answer on a skim or Cmd-F of the post: why is this… thing… called “the Lanby”? What is a… lanby?

[linkpost] Why Going to the Doctor Sucks (WaitButWhy)

This is a long post, unnecessarily broken up by illustrations, that takes a long time to get to anything resembling a point. It also seems to be written with the purpose of shilling some sort of startup or product.

Now, I don’t necessarily have anything against any of these things, but… they do dramatically lower my estimate of the likelihood that reading the post will be a good use of my time.

With that in mind: is there any chance we could get some sort of “executive summary”? OP, I get the sense that you’re trying to convey that this is something important or valuable—so, would you consider writing a few sentences about what the heck this post is about?

A Defense of Functional Decision Theory

How exactly is it preventable? I’m honestly asking.

It’s preventable by taking the Right box. If you take Left, you burn to death. If you take Right, you don’t burn to death.

If you have a strategy that, if the agent commits to it before the predictor makes her prediction, does better than FDT, I’m all ears.

Totally, here it is:

FDT, except that if the predictor makes a mistake and there’s a bomb in the Left, take Right instead.
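
To make the comparison concrete, here is a minimal sketch of that amended rule alongside pure FDT, in Python. All payoffs are illustrative assumptions, not figures from the post: taking Right is assumed to cost $100 (as in common statements of the Bomb scenario), and burning to death is modeled as a very large negative utility.

    # A toy comparison of decision rules in the Bomb scenario.
    # Assumed payoffs (not from the post): Right costs $100; taking the
    # box that contains the bomb is modeled as a huge negative utility.

    COST_OF_RIGHT = -100
    DEATH = -10**9  # stand-in utility for burning to death

    def utility(choice, bomb_in_left):
        """Utility of taking a box, given where the bomb actually is."""
        if choice == "Left":
            return DEATH if bomb_in_left else 0
        return COST_OF_RIGHT

    def pure_fdt(bomb_in_left):
        """Take Left regardless of what you observe."""
        return "Left"

    def fdt_with_escape_hatch(bomb_in_left):
        """The amendment above: take Right if you can see the predictor erred."""
        return "Right" if bomb_in_left else "Left"

    # The scenario as stipulated: the prediction was mistaken, bomb in Left.
    for rule in (pure_fdt, fdt_with_escape_hatch):
        choice = rule(bomb_in_left=True)
        print(rule.__name__, "takes", choice, "with utility", utility(choice, True))
    # pure_fdt takes Left with utility -1000000000
    # fdt_with_escape_hatch takes Right with utility -100

In the world the scenario stipulates, the amended rule loses at most the assumed $100, and it never knowingly walks into the bomb.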

Meta-discussion from "Circling as Cousin to Rationality"

Would you say that I have an obligation to react to their response, i.e. either admit that I lost an argument, or take the effort to see whether I agree with their interpretation of the information? Right now I am not motivated to do the latter.

Well, first of all, my comment described an interaction between the author of a post or comment (i.e., someone who was putting forth some idea) and an interlocutor who was requesting a clarification (or noting an inconsistency, or asking for a term to be defined, etc.). As far as I can tell, based on a skim of the discussion thread you linked, in that case you were the one who was asking someone else a question about something they had posted, so you would be the interlocutor, and they the author. They posted something, you asked a question, they gave an answer…

Are you obligated to then respond to their response? Well… yes? I mean, what was the point of asking the question in the first place? You asked for some information, and received it. Presumably you had some reason for asking, right? You were going to do something with either the received information, or the fact that none could be provided? Well, go ahead and do it. Integrate it into your reasoning, and into the discussion. Otherwise, why ask?

I can’t easily find it at the moment, but Eliezer once wrote something to the effect that an argument isn’t really trustworthy unless it’s critiqued, and the author responds to the critics, and the critics respond to the response, and the author responds to the critics’ response to his response. But why? What motivates this requirement? As I wrote in the grandparent comment: nothing but normative epistemic principles, i.e. the fact that if we don’t conform to these requirements, we are far more likely to end up mistaken, believing nonsense, etc.

Similarly with your obligation to respond. Why are you thus obligated? Well, if you ask for information from your interlocutor, they provide it, and then you just ignore it… how exactly do you expect ever to become less wrong?
