localdeity

In my humble opinion, the only difference between "bad" Frame Control and "good" Frame Control is in how much the Frame corresponds with objective reality, and hopefully, social reality as well.

Hmm.  I would guess that, if someone is using a wrong frame (let's say it depends on assumptions that are demonstrably false), and you have a better frame in mind, there are still better ways and worse ways to go about communicating this and going from the one to the other.  Like, explicitly saying "It looks like you're assuming X, which is wrong because ..." seems like the most educational and intellectually legible approach, probably best in a good-faith discussion with an intelligent counterpart; whereas e.g. just saying new stuff from a different set of assumptions that doesn't directly engage with what they've said—but initially looks like it does, and takes long enough / goes through enough distracting stuff before it reaches a mismatch that they've forgotten that they'd said something different—is potentially bad.

Now, er, the original post says it uses "frame control" to mean the non-explicit, tricky approach.  It mentions "Trying to demonstrate, through reason and facts, how their box is better", and says "These are all attempts to control your frame, but none of these is what I mean by frame control", and "No; frame control is the “man doesn’t announce his presence, he just stalks you silently” of the communication world."

This is unfortunate, because the bare phrase "frame control" will inevitably be interpreted as "actions that control the frame" without further qualifiers (I'd forgotten that the post had the above definition).  Something like "silent frame control", "frame manipulation", or "frame fuckery" would probably fit better.

To describe the current board state, something like this seems reasonable.

Problem: If I go to a chapter, e.g. https://hpmor.com/chapter/63 , and then I use the dropdown menu from the top to select another chapter, it takes me to e.g. https://hpmor.com/go.php?chapter=36 , which is a "Page not found" page.

I guess I would summarize by saying:

  • If the things you're predicting are completely independent, then naive "calibration" works fine: if you're good at putting things into an "80% likely" bucket, then in practice ~80% of those predictions will be true.
  • If the things you're predicting are highly correlated with each other—e.g. questions like "Will company X fail?", "Will company Y fail?", and so on, when the most likely way for company X to fail involves a general economic downturn that affects all the companies—then even if you were perfect at putting propositions into the 5% bucket, the actual outcomes may look a lot more like "0% became true" or "100% became true" than like "5% became true".  (There's a numeric simulation of this after the list.)
  • Therefore, when evaluating someone's calibration, or creating a set of predictions one plans to evaluate later, one should take these correlations into account.
    • If one expects correlated outcomes, probably the best thing is to factor out the correlated part into its own prediction—e.g. "Chance of overall downturn [i.e. GDP is below X or something]: 4%" and "Chance of company X failing, conditional on overall downturn: 70%" and "Chance of company X failing, conditional on no downturn: 2.3%" (which comes out to ~5% total).
    • If the predictor didn't do this, but there was an obvious-in-retrospect common cause affecting many propositions... well, you still don't know what probability the predictor would have assigned to that common cause, which is unfortunate, and makes it difficult to judge.  Seems like the most rigorous thing you can do is pick one of the correlated propositions, and throw out the rest, so that the resulting set of propositions is (mostly) independent.  If this leaves you with too few propositions to do good statistics with, that is unfortunate.
      • One might think that if you're evaluating buckets separately (e.g. "the 80% bucket", "the 90% bucket"), it's ok if there's a proposition in one bucket that's correlated with a proposition in another bucket; as long as there's no correlation within each bucket, it remains the case that, if the predictor was good, then ~80% of the propositions in the 80% bucket should be true.  But then you can't do a meta-evaluation at the end that combines the results of separate buckets: e.g. if they said "5% company X fails, 10% company Y fails, 15% company Z fails, 20% company Q fails", and there was a downturn and they all failed, then saying "The predictor tended to be underconfident" would be illegitimate.
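
To make the correlated-outcomes point concrete, here's a small simulation in Python.  The numbers are the illustrative ones from the downturn example above, not real data, and the assumption that companies fail independently once you condition on downturn/no-downturn is mine:

```python
import random

# Illustrative numbers from the downturn example above.
P_DOWNTURN = 0.04             # chance of a general economic downturn
P_FAIL_GIVEN_DOWNTURN = 0.70  # chance a given company fails in a downturn
P_FAIL_NO_DOWNTURN = 0.023    # chance it fails otherwise (marginal comes out to ~5%)
N_COMPANIES = 20              # twenty "5% this company fails" predictions

def fraction_true(rng: random.Random) -> float:
    """One 'year' of outcomes: what fraction of the 5%-bucket predictions came true?"""
    downturn = rng.random() < P_DOWNTURN
    p_fail = P_FAIL_GIVEN_DOWNTURN if downturn else P_FAIL_NO_DOWNTURN
    return sum(rng.random() < p_fail for _ in range(N_COMPANIES)) / N_COMPANIES

rng = random.Random(0)
runs = [fraction_true(rng) for _ in range(100_000)]
print(f"average fraction true:      {sum(runs) / len(runs):.3f}")                    # ~0.05
print(f"runs where 0% came true:    {sum(r == 0 for r in runs) / len(runs):.2f}")    # ~0.60
print(f"runs where >=50% came true: {sum(r >= 0.5 for r in runs) / len(runs):.2f}")  # ~0.04
```

So even a perfectly calibrated forecaster, scored on a single year of these correlated propositions, will usually look overconfident (nothing came true) and occasionally look wildly underconfident (most things came true).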

Hmm, I assumed “ansii art” was a typo for “ASCII art”, but apparently it’s instead a typo for “ANSI art”, and GPT knows about it—I guess it knows enough about nfo files (or the context of discussing such files) to pick the right correction.

Perhaps one of these strategies?

  • If the user downvoted the thing (comment/post) they're replying to, then hide the reply from Recent Discussion.  In theory, this would cover both the "heated political debate" case and the "low quality post" case.
  • If the forum downvoted the thing they're replying to (i.e. karma ≤ 0), then hide the reply from Recent Discussion.  This would cover only the "low quality post" case.

If I were implementing this, I would first look at a bunch of samples of comment chains matching the above queries, to see how well theory matches reality.
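
To make the two strategies concrete, here's a minimal sketch in Python.  The data model is entirely hypothetical (these are not LessWrong's actual field names); it only illustrates the two rules:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    """A post or comment (hypothetical schema, for illustration only)."""
    karma: int
    parent: Optional["Item"] = None    # the thing this item replies to, if any
    author_vote_on_parent: int = 0     # -1 if this item's author downvoted the parent

def hide_from_recent_discussion(reply: Item) -> bool:
    """Return True if the reply should be hidden from Recent Discussion."""
    if reply.parent is None:
        return False
    # Strategy 1: the replier themselves downvoted what they're replying to
    # (covers both the heated-political-debate case and the low-quality-post case).
    if reply.author_vote_on_parent < 0:
        return True
    # Strategy 2: the forum downvoted the parent, i.e. its karma is <= 0
    # (covers only the low-quality-post case).
    if reply.parent.karma <= 0:
        return True
    return False
```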

If you can have one thought, then another thought, and the link between the two is only 90% correct, not 99.9% correct...

Then, you don't know how to think.

[...]

You can't build a computer if each calculation it does is only 90% correct. If you are doing reasoning in sequential steps, each step better be 100% correct, or very, very close to that. Otherwise, after even 100 reasoning steps (or even 10 steps), the answer you get will be nowhere near the correct answer.

This is a nice thing to think about.  I'm sure you're aware of it, and some of this will overlap with what you say, but here are the strategies that come to mind, which I have noticed myself following and sometimes make a point of following, when I think I need to:

  • Take multiple different trains of thought—maximizing the degree to which their errors would be independent—and see if they end up in the same place.  Error correction with unreliable hardware is a science.  (There's a rough numeric sketch of this after the list.)
  • Whenever you generate an "interesting" claim, try to check it against the real world.
    • Consider claims "interesting" when they would have significant (and likely observable) real-world consequences, and when they seem "surprising" (this sense built via experience).
  • Have a sense of how confident you are in each step of the chain of reasoning.  (Also built via experience.)
  • Practice certain important kinds of thinking steps to lower your error rate.  (I didn't do this deliberately, but there were logic puzzle books and stuff lying around, which were fun to go through.)
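
To put rough numbers on the quoted claim and on the first bullet above (the numbers are mine and purely illustrative, including the assumption that two wrong chains only rarely land on the same wrong answer):

```python
# Per-step reliability compounds over a chain of reasoning steps,
# assuming step errors are independent.
p_step = 0.9
p_chain = p_step ** 10                        # one 10-step chain error-free: ~0.35
print(f"10 steps at 90% each:    {p_chain:.3f}")
print(f"100 steps at 99.9% each: {0.999 ** 100:.3f}")   # ~0.905

# Now suppose two chains of thought, derived by different routes, agree on an answer.
p_wrong_coincide = 0.05                       # assumed chance two *wrong* chains agree anyway
p_agree_right = p_chain ** 2
p_agree_wrong = (1 - p_chain) ** 2 * p_wrong_coincide
print(f"P(correct | two independent 10-step chains agree): "
      f"{p_agree_right / (p_agree_right + p_agree_wrong):.2f}")   # ~0.85
```

So a single unreliable chain degrades fast, but agreement between independently derived chains recovers much of the lost confidence, which is the sense in which error correction with unreliable components can work.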

There is now a South Park episode titled "Japanese Toilet", which depicts Japanese toilets as being insanely good (obviously exaggerated; I haven't seen more than the clip).  I expect this will cause some fraction of viewers to become curious and look into the reality, and some fraction of those to try out something with a bidet.

I definitely agree that we should start shifting to a norm that focuses on punishing bad actions, rather than trying to infer their mental state.

Do you have limitations to this in mind?  Consider the political issue of abortion.  One side thinks the other is murdering babies; the other side thinks the first is violating women's rightful ownership of their own bodies.  Each side thinks the other is doing something monstrous.  If that's all you need to justify punishment, then that seems to mean both sides should fight a civil war.

("National politics?  I was talking about..."  The one example the OP gives is SBF, and other language alludes to sex predators and reputation launderers, and the explicit specifiers in the first few paragraphs are "harmful people" and "bad behavior"; it's such a wide range that it seems hard to declare anything offtopic.)
