Connor_Flexman

Connor_Flexman's Comments

Connor_Flexman's Shortform

Remember that just like there are a lot of levels to any skill, there are a lot of levels to any unblocking!

It feels to me like perhaps both parties are making a mistake when one person (the discoverer) says, "I finally figured out [how to be emotionally liberated or something]!" and the skeptic is like "whatever, they'll just come back in a few months and say they figured out even more about being emotionally liberated, what a pointless hamster wheel." (Yes, often people are unskilled at this type of thing and the first insight doesn't stick, but I'm talking about the times when it does.)

In these cases, the discoverer will *still find higher levels of this* later on! It isn't that they've discovered the True Truth about [emotional liberation], they've just made a leap forward that resolves lots of their known issues. So even if the skeptic is right that they'll discover another thing in the future that sounds very similar, that doesn't actually invalidate their present insight.

And for the discoverer, often it is seductive to think you've finally solved that domain. Oftentimes most or all of your present issues there feel resolved! But that's because you triangulate from the most pressing issues. In the future, you'll find other cracks in your reality, and need to figure out superficially similar but slightly skewed domains—and thinking you've permanently solved a complicated domain will only hamper this process. But that doesn't mean your insight isn't exactly as good as you think it is.

Maybe Lying Doesn't Exist

I feel torn because I agree that unconscious intent is incredibly important to straighten out, but also think

1. everyone else is already relatively decent at blaming people for their poor intent in the meantime (though there are some cases I'd like to see people catch onto faster), and

2. this is mostly between the person and themselves.

It seems like you're advocating for people to be publicly shamed more for their unconscious bad intentions. That seems both super bad for the social fabric (and witch-hunt-permitting) and, per point (2), unlikely to add much capacity to change; the goal would be much better accomplished by a culture of forgiveness such that the elephant lets people look at it. Are there parts of this you strongly disagree with?

Connor_Flexman's Shortform

Sometimes people are explaining a mental move, and give some advice on where/how it should feel in a spatial metaphor. For example, they say "if you're doing this right, it should feel like the concept is above your head and you're reaching toward it."

I have historically had trouble working well with advice like this, and I don't often see it working well for other people. But I think the solution is that for most people, the spatial or feeling advice is best used as an intermediate/terminal checksum, not as something that is constructive.

For example, if you try to imagine feeling their feeling and then work out what you could do differently to get there, this will usually not work (if it does work fine for you, carry on, this isn't meant for you). The best way for most people to use advice like this is to just notice that your spatial feeling is quite different from theirs, be reminded that you definitely aren't doing the same thing as them, and be motivated to go back and try to understand all the pieces better. You're missing some part of the move or context that is generating their spatial intuition, and you want to investigate the upstream generators, not their downstream spatial feeling itself. (Again, this isn't to say you can't learn tricks for making the spatial intuition constructive, just don't think this is expected of you in the moment.)

For explainers of mental moves, this model is also useful to remember. Mental moves that accomplish similar goals in different people will by default involve significantly different moving parts in their minds and microstrategies to get there. If you are going to explain spatial intuitions (that most people can't work easily with), you probably want to do one of the following:

1) make sure they are great at working with spatial intuitions

2) make sure they know it's primarily a checksum, not an instruction

3) break down which parts generate that spatial intuition in yourself, so if they don't have it then you can help guide them toward the proper generators

4) figure out your own better method of helping them work with it that I haven't discovered

5) remember the goal is not to describe your experience as you experience it, but to teach them the skill, and just don't bring up the spatial intuition as if they should be guided by that right now

Chris_Leong's Shortform

Does FDT make this any clearer for you?

There is a distinction in the correlation, but it's somewhat subtle and I don't fully understand it myself. One silly way to think about it that might be helpful is "how much does the past hinge on your decision?" In smoker's lesion, it is clear the past is very fixed—even if you decide not to smoke, that doesn't affect the genetic code. But in Newcomb's, the past hinges heavily on your decision: if you decide to one-box, it must have been the case that you could have been predicted to one-box, so it's logically impossible for it to have gone the other way.

One intermediate example would be if Omega told you they had predicted you to two-box, and you had reason to fully trust this. In this case, I'm pretty sure you'd want to two-box, then immediately precommit to one-boxing in the future. (In this case, the past no longer hinges on your decision.) Another would be if Omega was predicting from your genetic code, which supposedly correlated highly with your decision but was causally separate. In this case, I think you again want to two-box if you have sufficient metacognition that you can actually uncorrelate your decision from genetics, but I'm not sure what you'd do if you can't uncorrelate. (The difference again lies in how much Omega's decision hinges on your actual decision.)
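To make the "how much does the past hinge on your decision?" framing concrete, here is a toy expected-value sketch in Python. It uses the standard $1,000,000 / $1,000 Newcomb payoffs; the framing in terms of "how the prediction probability responds to your choice" is my own gloss, not something from the comments above.

```python
# Toy Newcomb expected-value sketch (standard $1M / $1k payoffs; the framing in
# terms of how the prediction probability responds to your choice is my own
# gloss on "how much the past hinges on your decision").

def expected_payoff(choice, p_predicted_one_box):
    """Expected dollars, given the probability that Omega's already-made
    prediction was 'one-box' (which may or may not depend on your choice)."""
    million_term = p_predicted_one_box * 1_000_000
    return million_term if choice == "one-box" else million_term + 1_000

accuracy = 0.99  # how reliably the prediction tracks your actual decision

# Standard Newcomb: the prediction hinges on what you actually do.
print(expected_payoff("one-box", accuracy))        # 990,000
print(expected_payoff("two-box", 1 - accuracy))    # 11,000

# Decorrelated case (smoker's lesion, or the genetics variant once you can
# uncorrelate): the probability you were predicted to one-box is some fixed
# number q whichever you choose, so two-boxing is always exactly $1,000 better.
q = 0.3
print(expected_payoff("one-box", q))   # 300,000
print(expected_payoff("two-box", q))   # 301,000
```

The difference between the two blocks is exactly the hinge question: in the first, your choice moves the prediction probability (and one-boxing wins for any accuracy above about 50.05%); in the second, it can't, so the extra $1k dominates.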

Schematic Thinking: heuristic generalization using Korzybski's method

Really like this explanation, especially the third example and conclusion.

I feel like a similar mental move helps me understand and work with all sorts of not-yet-operationalized arguments in my head (or that other people make). If I think people are "too X", and then I think about what else I could have said in that spot, it helps me triangulate toward what I actually mean. I think this is much faster and more resilient to ladder-of-abstraction mistakes (as you mention) than many operationalization techniques, like trying to put numbers on things.

I think my personal mental move is less like being aware of all the things I could have said, and more like being aware that the thing I was saying was a stand-in meant to imply lots of specific things that are implausible to articulate in their own form.

Strong stances

Not core, but when you say

(I don’t know if this is related, but it seems interesting to me that the human mind feels as though it lives in ‘the world’—this one concrete thing—though its epistemic position is in some sense most naturally seen as a probability distribution over many possibilities.)

It's notable that some plausible probabilistic models in neuroscience seem to be set up such that only one path actually fires (is experienced), and the probability only comes in at the level of the structure that weights which path fires.

How feasible is long-range forecasting?

On both the piece and the question, I feel consistently confused that people keep asking "is long-range forecasting feasible?" as a binary in an overly general context, when, as TedSanders mentioned, the answer is trivially no in some cases and trivially yes in others.

I get that if you are doing research on things, you'll probably do research on real-world-esque cases. But if you were trying to prove that long-term forecasting is feasible at all (which Luke's post appears to be doing, since it ends on a note of uncertainty about exactly that), you'd want to start from the easiest case for feasibility: the best superforecaster ever predicting the absolute easiest questions, over and over. That is narrow on forecasters and charitable on difficulty. I'm glad to see Tetlock et al. looking at a narrower group of people this time, but you could go further. And I feel like people are still ignoring difficulty, to the detriment of everyone's understanding.

If you predict coin tosses, you're going to get a ROC AUC of .5. Chaos theory says some features have sensitive dependence on initial conditions at too fine a resolution for us to track, so we won't be able to predict them. Other features are going to sit within basins of attraction that are easy to predict. The AUC curve should absolutely drop off over time like that, because more features slip out of predictability as time goes on. This should not be surprising! The real question is "which questions are how predictable for which people?" (Evidently not the current questions for the current general forecasting pool.)

There are different things one could do to answer that. First, two things NOT to do that I see a lot:

1. Implying low resolution/AUC is a fault without checking calibration (as I maybe wrongly perceive the above graph or post as doing, but have seen elsewhere in a similar context). If you have good calibration, then a .52 AUC can be fine if you say 50% to most questions and 90% to one question; if you don't, that 90% is gonna be drowned out in a sea of other wrong 90%s. (A minimal simulation of this point follows after this list.)

2. Trying to zero out questions that you give to predictors, e.g. "will Tesla produce more or less than [Tesla's expected production] next year?". If you're looking for resolution/AUC, then baselining on a good guess specifically destroys your ability to measure that. (If you ask the best superforecaster to guess whether a series of 80% heads-weighted coin flips comes up with an average more than .8, they'll have no resolution, but if you ask what the average will be from 0 to 1 then they'll have high resolution.) It will also hamstring your ability to remove low-information answers if you try subtracting background, as mentioned in the next list.
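Here is a minimal simulation of point 1 (my own made-up numbers, and a small batch of informed questions rather than literally one, so the AUC lands near the .52 figure): a forecaster who honestly says 50% to a large pile of genuinely 50/50 questions and 90% to a handful they actually know about ends up with an AUC barely above chance, despite being well calibrated and genuinely useful on the questions that matter.

```python
# Minimal sketch of "low AUC but good calibration can still be fine".
# All numbers here are illustrative, not from the post or from any real data.
import random

def auc(labels, scores):
    """AUC as the probability a random positive outranks a random negative
    (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
labels, scores = [], []

# 975 questions that really are coin flips; the forecaster honestly says 50%.
for _ in range(975):
    labels.append(int(random.random() < 0.5))
    scores.append(0.5)

# 25 questions the forecaster has real information about; they say 90%,
# and those events really do happen ~90% of the time.
for _ in range(25):
    labels.append(int(random.random() < 0.9))
    scores.append(0.9)

print("AUC:", round(auc(labels, scores), 3))  # comes out around 0.52
for p in (0.5, 0.9):
    hits = [y for y, s in zip(labels, scores) if s == p]
    print(f"said {p:.0%} -> happened {sum(hits) / len(hits):.0%} of the time")
```

A pure AUC lens would call this forecaster barely better than chance; the calibration readout shows they are not making the "sea of wrong 90%s" mistake.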

Some positive options if you're interested in figuring out what long-term questions are predictable by whom:

1. At the very least, ask questions you expect people to have real information about

2. Ask superforecasters to forecast metadata about questions, like whether people will have any resolution/AUC on subclasses of questions, or how much resolution/AUC differently ranked people will have on subclasses, or whether a prediction market would answer a question better (e.g. if there is narrowly-dispersed hidden information that is very strong). Then you could avoid asking questions that were expected to be unpredictable or wasteful in some other way.

3. Go through and try to find simple features of predictable vs unpredictable long-term questions

4. Amplify the informational signal by reducing the haze of uncertainty not specific to the thing the question is interested in (mostly important for decade+ predictions). One option is to ask conditionals, e.g. "what percent chance is there that CRISPR-edited babies account for more than 10% of births if no legislation is passed banning the procedure" or something if you know legislation is very difficult to predict; another option is to ask about upstream features, like specifically whether legislation will be passed banning CRISPR. (Had another better idea here but fell asleep and forgot it)

5. Do a sort of anti-funnel plot or other baselining of the distribution over predictors' predictions. This could look like subtracting the primary-fit beta distribution from the prediction histogram to see if there's a secondary beta, or looking for higher-order moments or outliers of high credibility, or other signs of a nonrandom prediction distribution that might generalize well. A good filter here is not to anchor them by saying "chances of more than X units" where X is already ~the aggregate mean, but instead make them rederive things (or, to be insidious, provide a faulty anchor and subtract an empirical distribution from around that point). Other tweaked opportunities for baseline subtraction abound. (A rough sketch of the beta-subtraction idea follows below.)
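For the beta-subtraction idea in option 5, here is a rough sketch. It is entirely my own construction: the "insider" cluster, the sample sizes, and the bin choices are all hypothetical. It fits a single beta to the pooled predictions and compares observed bin counts with what that fit would expect; leftover mass in the high bins is the kind of secondary bump worth investigating.

```python
# Rough sketch of option 5's "subtract the primary-fit beta from the prediction
# histogram" idea. Everything here (cluster sizes, parameters) is made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical predictions on one question: most forecasters sit near 30%,
# but a small group with (assumed) inside information clusters near 90%.
crowd = rng.beta(3, 7, size=170)       # primary mass around 0.3
insiders = rng.beta(60, 7, size=30)    # secondary bump near 0.9
predictions = np.concatenate([crowd, insiders])

# Fit a single beta to the pooled predictions (support fixed to [0, 1]).
a, b, loc, scale = stats.beta.fit(predictions, floc=0, fscale=1)

# Compare observed bin counts with what the single fitted beta would expect.
bins = np.linspace(0.0, 1.0, 11)
observed, _ = np.histogram(predictions, bins=bins)
expected = len(predictions) * np.diff(stats.beta.cdf(bins, a, b))

for lo, hi, o, e in zip(bins[:-1], bins[1:], observed, expected):
    print(f"{lo:.1f}-{hi:.1f}  observed {int(o):3d}  expected {e:6.1f}")
# The excess observed mass in the top bins (driven here by the simulated
# insiders) is the kind of nonrandom leftover structure option 5 is after;
# whether such a bump reflects real hidden information is the open question.
```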

If Luke is primarily just interested in whether OpenPhil employees can make long-term forecasts on the kind of thing they forecast on, they shouldn't be looking at resolution/AUC, just calibration, and making sure it's still good at reasonably long timescales. To bootstrap, it would speed things along if they used their best forecasters to predict metadata—if there are classes of questions that are too unpredictable for them, I'm sure they can figure that out, especially if they spot-interviewed some people about long-term predictions they made.

Maybe Lying Doesn't Exist

The folk theory of lying is a tiny bit wrong and I agree it should be patched. I definitely do not agree we should throw it out, or be uncertain whether lying exists.

Lying clearly exists.

1. Oftentimes people consider how best to lie about, e.g., being late. When they settle on telling their boss they were talking to their other boss when they weren't, and they know this is a lie, that's a central case of a lie—definitely not motivated cognition.

To expand our extensional definition to noncentral cases, you can consider some other ways people might tell maybe-lies when they are late. Among others, I have had the experiences of

2. telling someone I would be there in 10 minutes when it was going to take 20, where if you had asked me on the side with no consequences I would immediately have been able to tell you it was 20, even though in the moment I certainly hadn't conceived of myself as lying, and I think people would agree with me this is a lie (albeit a white one)

3. telling someone I would be there in 10 minutes when it was going to take 20, and if you asked me on the side with no consequences I would have still said 10, because my model definitely said 10, and once I started looking into my model I would notice that probably I was missing some contingencies, and that maybe I had been motivated at certain spots when forming my model, and I would start calculating... and I think most people would agree with me this is not a lie

4. telling someone I would be there in 10 minutes when it was going to take 20, where my model was formed epistemically virtuously despite there obviously being good reasons for expecting shorter timescales, and who knows how long it would take me to find enough nuances to fix it and say 20. This is not a lie.

Ruby's example of the workplace fits somewhere between numbers 1 and 2. Jessica's example of short AI timelines I think is intended to fit 3 (although I think the situation is actually 4 for most people). The example of the political fact-checking doesn't fit cleanly because politically we're typically allowed to call anything wrong a "lie" regardless of intent, but I think it falls somewhere between 2 and 3, and I think nonpartisan people would agree that, unless the perpetrators actually could have said they were wrong about the stat, the case was not actually a lie (just a different type of bad falsehood reflecting on the character of those involved). There are certainly many gradations here, but I just wanted to show that there is actually a relatively commonly accepted implicit theory about when things are lies, one that fits with the territory and isn't some sort of politicking map distortion as you seemed to be implying.

The intensional definition you found that included "conscious intent to deceive" is not actually the implicit folk theory most people operate under: they include number 2's "unconscious intent to deceive" or "in-the-moment should-have-been-very-easy-to-tell-you-were-wrong obvious-motivated-cognition-cover-up". I agree the explicit folk theory should be modified, though.

I also want to point out that this pattern of explicit vs implicit folk theories applies well to lots of other things. Consider "identity"—the explicit folk theory probably says something about souls or a real cohesive "I", but the implicit version often uses distancing or phrases like "that wasn't me" [edit: in the context of it being unlike their normal self, not that someone else literally did it] and things such that people clearly sort of know what's going on. Other examples include theory of action, "I can't do it", various things around relationships, what is real as opposed to postmodernism, etc. To not cherry-pick, there are some difficult cases to consider like "speak your truth" or the problem of evil, but under nuanced consideration these fit with the dynamic of the others. I just mention this generalization because LW types (incl. me) learned to tear apart all the folk theories because their explicit versions were horribly contradictory, and while this has been very powerful for us, I feel like an equally powerful skill is figuring out how to put Humpty-Dumpty back together again.

adam_scholl's Shortform

I was initially very concerned about this but then noticed that almost all the tested secondary endpoints were positive in the mice studies too. The human studies could plausibly still be meaningless though.

Has anyone (esp you Jim) looked into fecal transplants for this instead, in case our much longer digestive system is a problem?

adam_scholl's Shortform

Possibly another good example of scientists failing to use More Dakka. The mice studies all showed solid effects, but then the human studies used the same dose range (10^9 or 10^10 CFU) and only about half showed effects! I googled for negative side effects of probiotics, and the Healthline result really had to stretch to find anything bad. Wondering if, as much larger organisms, we should just be jacking up the dosage quite a bit.
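For a rough sense of scale, here is a back-of-envelope sketch. The numbers are my own assumptions (a ~25 g lab mouse, a ~70 kg reference adult), and linear scaling of dose with body mass is itself a crude assumption.

```python
# Back-of-envelope dose scaling. Big caveats: linear mass scaling is a crude
# assumption, and gut transit time, surface area, and colonization dynamics
# could easily matter more than raw body mass.
mouse_mass_kg = 0.025   # typical lab mouse, ~25 g (assumed)
human_mass_kg = 70.0    # reference adult (assumed)
mouse_dose_cfu = 1e9    # low end of the 1e9-1e10 CFU range used in the studies

mass_ratio = human_mass_kg / mouse_mass_kg
print(f"mass ratio: ~{mass_ratio:.0f}x")                                  # ~2800x
print(f"naive mass-scaled dose: {mouse_dose_cfu * mass_ratio:.1e} CFU")   # ~2.8e12
```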
