AnnaSalamon

Comments

Curated conversations with brilliant rationalists

Also one Spencer recorded with me: "Lines of Retreat and Incomplete Maps". Not sure why it isn't above; maybe it was from earlier than the ones listed.

Peekskill Lyme Incidence

monitoring for symptoms and taking a large dose of antibiotics within ~48 hours of symptoms is extremely effective.

I'm confused about this. Can you say more about what your threshold is for "extremely effective," or why you think so? Wikipedia states: "People who receive recommended antibiotic treatment within several days of appearance of an initial EM rash have the best prospects.[106] Recovery may not be total or immediate. The percentage of people achieving full recovery in the United States increases from about 64–71% at end of treatment for EM rash to about 84–90% after 30 months; higher percentages are reported in Europe.[171][172] Treatment failure, i.e. persistence of original or appearance of new signs of the disease, occurs only in a few people.[171] Remaining people are considered cured but continue to experience subjective symptoms, e.g. joint or muscle pains or fatigue.[173] These symptoms usually are mild and nondisabling.[173]"

This leaves me thinking that even with rapid antibiotics, the debilitation per (infection that causes a rash) is significant.

Peekskill Lyme Incidence

Yes, this is something of a crux for me -- what are our odds of noticing, if we're paranoid? Folks say a rash is only present in 70-80% of cases; do you have stats on how often other noticeable symptoms show up in the remaining cases? (How would one generate those stats?)

Jefftk, can you say more about the people you know who didn't initially get the symptoms but later got arthritic symptoms, and about how many people you know who got Lyme altogether, and how sure you are that the arthritic symptoms are from Lyme?

The Practice & Virtue of Discernment

Thanks. I just bounced to LW after getting stuck in a tricky bit of writing, and found this helpful for where I was stuck.

I think the main things I found helpful from your post just now were:

  1. the examples, which for me recalled the habit of righting a wrong question; and
  2. the explicit suggestion that I could take the spirit of "righting a wrong question", or "dissolving a question" -- call it a "virtue" -- and steer toward it in the way I might steer toward curiosity or other virtues.
Trapped Priors As A Basic Problem Of Rationality

I agree an algorithm could do as you describe.

I don't think that's what's happening in me or other people. Or at least, I don't think it's a full description. One reason I don't is that after I've, e.g., been camping for a long time, with a lot of room for quiet, it becomes easier than it has been to notice that I don't have to see things the way I've been seeing them. My priors become "less stuck", if you like. I don't see why that would be, on your (zhukeepa's) model.

Introspectively, I think it's more that sometimes facing an unknown hypothesis (or rather, a hypothesis that'll send the rest of my map into unknownness) is too scary to manage to see as a possibility at all.

Toward A Bayesian Theory Of Willpower

One interesting datum about willpower (which I’ve observed repeatedly in many people and contexts; not sure if it’s officially documented anywhere) is that it’s much easier to take a fully scripted action than to take an action that requires creatively filling in details.

For example, suppose several people are out trying to do “rejection therapy” (a perhaps-dubious game in which folks make requests of strangers that are likely to be rejected, e.g. “Can I listen to the walkman you’re listening to for a moment?” or “Can we trade socks?”). Many many people who set out to ask a stranger such a question will… “decide” not to, once the stranger is actually near them. However, people who have written down exactly which words they plan to say in what order to exactly which stranger, with no room for ambiguity, are (anecdotally but repeatedly) more likely to actually say the words. (Or to follow through on many other difficult habits, I think.)

(I originally noticed this pattern in undergrad, when there was a study group I wanted to leave, but whose keeper I felt flinchy about disappointing by leaving. I planned to leave the group and then … didn’t. And the next week, planned again to leave the group and then didn’t. And the third week, came in with a fully written exact sentence, said my scripted sentence, and left.)

Further examples:

  • It’s often easier to do (the dishes, or exercise, or other ‘difficult’ tasks) if there’s a set time for it.
  • Creativity-requiring tasks are often harder to attempt than more structured/directions-following-y tasks (e.g. writing poetry, especially if you’re “actually trying” at it; or attempting alignment research in a more “what actually makes sense here?” way and a less “let me make deductions from this framework other people are using” way; or just writing a blog post vs critiquing one).

— I’ve previously taken the above observations as evidence for the “subparts of my mind with differing predictions” view — if there are different bits of my (mind/brain) that are involved in e.g. assembling the sentence, vs saying the sentence, then if I need to figure out what words to say I’ll need whatever’s involved in calling the “assemble sentences” bit to also be on board, which is to say that more of me will need to be on board.

I guess you could also try this from the “Bayesian evidence” standpoint. But I’m curious how you’d do it in detail. Like, would you say the prior against “moving muscles” extends also to “assembling sentences”?

Trapped Priors As A Basic Problem Of Rationality

The basic idea of a trapped prior is purely epistemic. It can happen (in theory) even in someone who doesn't feel emotions at all. If you gather sufficient evidence that there are no polar bears near you, and your algorithm for combining prior with new experience is just a little off, then you can end up rejecting all apparent evidence of polar bears as fake, and trapping your anti-polar-bear prior. This happens without any emotional component.

I either don't follow your more general / "purely epistemic" point, or disagree. If a person's algorithm is doing correct Bayesian epistemology, a low prior of polar bears won't obscure the accumulating likelihood ratios in favor of polar bears; a given observation will just be classified as "an update in favor of polar bears maybe being a thing, though they're still very very unlikely even after this datapoint"; priors don't mess with the direction of the update.
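
To make the "direction of the update" point concrete, here's a minimal numerical sketch (my own illustration, not from the post; the likelihood numbers are made up): under correct Bayesian updating, each observation multiplies the odds by the same likelihood ratio regardless of the prior, so a tiny prior slows the climb toward "polar bears are a thing" but never reverses it.

```python
# Toy illustration (numbers invented for this sketch): repeated Bayesian updates
# on "looks like evidence of polar bears", starting from a very low prior.
p_bear = 1e-6        # prior P(polar bears are a real, nearby thing)
lr = 0.8 / 0.1       # likelihood ratio: P(obs | bears) / P(obs | no bears)

for i in range(1, 11):
    odds = p_bear / (1 - p_bear)
    odds *= lr       # each observation multiplies the odds by the same factor
    p_bear = odds / (1 + odds)
    print(f"after observation {i}: P(bears) = {p_bear:.6g}")

# The posterior rises monotonically: the low prior slows the climb,
# but it never flips the direction of any single update.
```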

I guess you're trying to describe some other situation than correct Bayesian updating, when you talk about a person/alien/AI's algorithm being "just a little off". But I can't figure out what kind of "a little off" you are imagining, that would yield this.

My (very non-confident) guess at what is going on with self-reinforcing prejudices/phobias/etc. in humans, is that it involves actively insulating some part of the mind from the data (as you suggest), and that this is not the sort of phenomenon that would happen with an algorithm that didn't go out of its way to do something like compartmentalization.

If there's a mechanism you're proposing that doesn't require compartmentalization, might you clarify it?

The slopes to common sense

I find this helpful; thank you. I wonder if this is part of what was going on in the overcaution-about-covid dynamic mingyuan describes.

Takeaways from one year of lockdown

So there's definitely a measure of social inertia there that has nothing to do with fear of COVID.

My experience is that fear, or at least fear that is in the background and that I am dissociated from, creates social inertia and other inertia for me. (Also grief that I am dissociated from.)

Takeaways from one year of lockdown

AFAICT, fear (especially fear as a background that is just always there, for months and months) has huge effects on me and many others that are bad for thinking, initiative, activated caring, and real companionship (or being conscious at all, sort of), and it takes actively training courage or bravery or action/initiative/activated-caring to overcome this. I notice this a lot in the AI risk context, and sometimes in the "what's happening to America / the West?" context, and also most times that somebody is e.g. afraid they have cancer.

My personal experience of the covid year has been good, but your post seems to me to have a lot in it that bears on the broader thing about fear-in-general, and I think talking more about the detailed effects of fear, whether in the context of this post, covid, AI risk, or anything else, would be amazing.
