All of MichaelStJules's Comments + Replies

Long covid: probably worth avoiding—some considerations

I think we should look further into this study, which seems somewhat reassuring, but I have reservations about it:

Kuodi et al., January 2022, "Association between vaccination status and reported incidence of post-acute COVID-19 symptoms in Israel: a cross-sectional study of patients tested between March 2020 and November 2021" (pdf, not yet peer-reviewed)

Their previous version understated the results, since it didn't include the uninfected. The new version does, and says (cutting out some text before and after, and emphasis added by me):

Methods: We invited

... (read more)
Long covid: probably worth avoiding—some considerations

Also, by Elizabeth (I think her LW post was not updated since some corrections were made).

I would focus on the UK metareview she looked at, since it should better capture the risk of severe brain fog and fatigue. The intelligence study estimated the average drop in IQ by acute symptom severity, but I think there are decreasing marginal returns to IQ, so I'm more worried about a small risk of a big drop (or being unable to even focus on doing an intelligence test, due to brain fog or fatigue), and Taquet et al focused on neuro and psych diagnoses that did n... (read more)

Animal welfare EA and personal dietary options

I think what you're saying is coherent and could in principle explain some comparisons people make, although I think people can imagine what an experience with very little affective value, negative or positive, feels like, and then compare other experiences to that. For example, the vast majority of my experiences seem near neutral to me. We can also tell if something feels good or bad in absolute terms (or we have such judgements).

I also think your argument can prove too much: people would choose to skip all but their peak experiences in their lives, whic... (read more)

Long covid: probably worth avoiding—some considerations

Some other previous back of the envelope calculations (collected here):

By AdamGleave (2 shots and for Delta):

My new estimate from the calculation is 3.0 to 11.7 quality-adjusted days lost to long-term sequelae, with my all-things-considered mean at 45. 

 

By Connor_Flexman:

That being said, we can still roughly estimate risk from definitely having Delta. A healthy 30yo probably has about 4x (3x-10x) less risk than before, due to vaccination, despite Delta causing higher mortality. It almost entirely comes from Long COVID. In absolute terms this is ~

... (read more)
MichaelStJules (8d): Also, by Elizabeth [https://acesounderglass.com/2021/08/30/long-covid-is-not-necessarily-your-biggest-problem/] (I think her LW post [https://www.lesswrong.com/posts/6uwLq8kofo4Tzxfe2/long-covid-is-not-necessarily-your-biggest-problem#Odds_of_long_term_outcomes] was not updated since some corrections were made). I would focus on the UK metareview she looked at, since it should better capture the risk of severe brain fog and fatigue. The intelligence study [https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(21)00324-2/fulltext] estimated the average drop in IQ by acute symptom severity, but I think there are decreasing marginal returns to IQ, so I'm more worried about a small risk of a big drop (or being unable to even focus on doing an intelligence test, due to brain fog or fatigue), and Taquet et al [https://www.thelancet.com/action/showPdf?pii=S2215-0366%2821%2900084-5] focused on neuro and psych diagnoses that did not include brain fog or fatigue. Here's what she had to say based on the metareview: I think this gives us a fairly reliable upper bound on the risk of severe long COVID cases ("affecting daily life", or in the study's wording, "limiting day-to-day function") for healthy people in the given age groups, and a more reliable upper bound than Matt Bell's, since 1. Matt Bell's started from overall prevalence estimates that don't depend on severity and then made adjustments for severity based on other studies, and this seems more prone to bias/error, and 2. the above study is more directly attempting to measure what we care about, and seems unlikely to be biased downwards. There's no comparison group here in this metareview, and this is an absolute risk estimate based on self-reported symptom duration, according to NICE's definition [https://www.bmj.com/content/372/bmj.n136.full] of post-COVID-19 syndrome (PCS), which is supposed to rule out alternative diagnoses at least, but that can still leave room for people misreportin
Long covid: probably worth avoiding—some considerations

One major concern I have with the tail of long COVID is its severity even if/when it does get better after a few years. If I have debilitating long COVID for 3 years and then I recover fully, how will my career be affected, and what kind of person will I be after that? I think it's reasonably likely that it would cause value drift away from effective altruism (in part because being connected to EA while feeling hopeless about my own future productivity seems psychologically painful, and in part because I might become primarily preoccupied with my own recovery and wellbeing),... (read more)

Long covid: probably worth avoiding—some considerations

That's a good point. I think comparing severe symptoms between COVID-positive cases and COVID-negative matched controls would be good evidence about the risk. I don't recall any comparison studies tracking symptom severity between positive and matched negative groups, though, rather than mostly just the presence of symptoms. I do recall studies without comparison groups that tracked severity, which people could use to report non-COVID-related severe symptoms, as you suggest.

Elizabeth (9d): When I looked into this [https://acesounderglass.com/2021/08/30/long-covid-is-not-necessarily-your-biggest-problem/] there was a paper that compared psych sequelae from covid to influenza and flu-like illnesses and found "covid to be modestly worse except for myoneural junction and other muscular diseases, where covid 5xed the risk (although it's still quite low in absolute terms). Dementia risk is also doubled, presumably mostly among the elderly." This was not controlling for age or acute severity, and data was gathered pre-vaccine. (Note: I did this research months ago and haven't done any follow-up, so trust what I wrote then over what I remember now.)
Long covid: probably worth avoiding—some considerations

Q. Getting covid later is probably better than earlier.

As a counter consideration, vaccine effectiveness might wane quickly, and it's likely better to get COVID while better protected than while less protected. See, e.g. https://www.webmd.com/vaccines/covid-19-vaccine/news/20211227/covid-booster-protection-wanes-new-data

That being said, I'm still leaning towards avoiding COVID to avoid long COVID.

Long covid: probably worth avoiding—some considerations

For what it's worth, I agree that the post reads to me as not very balanced, but a lot of the evidence and arguments presented are still worrying, and I am still worried about long COVID. (I also don't put myself above confirmation bias, though.)

G. Overall deaths from everything have been very unusually high at points in 2021, even in 15-64 age group

This could also be explained by things other than COVID or long COVID, too, e.g. lockdown/isolation, less exercise, increased depression, poorer access to healthcare.

Omicron variolation?

Zvi/Scott/Elizabeth's earlier analyses of earlier studies on long COVID which treat it as a minor concern

My impression from Scott's big post was that it was not overwhelmingly likely to be minor, and rather that it was fairly ambiguous. My impression from Elizabeth's analyses (which I read several months ago) is that long COVID is not necessarily a minor concern, but that we're paying disproportionate attention to it relative to other things we can also worry about, e.g. exercise, air quality, other infections, and there can be real tradeoffs between get... (read more)

Long covid: probably worth avoiding—some considerations

I would be surprised if the worst cases, where people can't really work and it lasts about half a year or longer, were mostly psychosomatic, or at least mostly psychosomatic in a way that's easily avoidable by just having different beliefs about long COVID. Can you really believe yourself into debilitating chronic fatigue and brain fog for half a year?

(EDITED: "a year" -> "half a year", since I don't recall long COVID studies going much longer than half a year, when I looked into them, which was probably 3-6 months ago.)

I broadly agree but don't think that proves covid was the culprit. Vague shitty symptoms doctors refuse to grapple with were a problem long before covid, and if people with these symptoms can get better care by calling it long covid than by leaving it open or blaming something else, they'd be stupid not to.

Long covid: probably worth avoiding—some considerations

I thought omicron didn't cause as much loss of sense of smell as previous strains?

Hmm, ya, that seems right. From a quick Google search: https://www.businessinsider.com/loss-of-taste-smell-not-common-covid-symptoms-2022-1.

Maybe Omicron just doesn't get as deep into the body generally, then. That would be a good sign, too.

Long covid: probably worth avoiding—some considerations

EDIT: I forgot (or didn't know) that loss of sense of smell was less likely with Omicron, which Steven Byrnes pointed out. Seems like 12-20% (probably mostly vaccinated?) of Omicron infections vs 7-68% for previous variants (section 3.5, and I'm not sure to what extent vaccination status was considered here). I've also read that Delta was less likely to cause loss of sense of smell than earlier variants, too. If the probability of loss of sense of smell scales proportionally with other brain issues, then I'd guess Omicron is less severe than previous varia... (read more)

Steven Byrnes (9d): I thought omicron didn't cause as much loss of sense of smell as previous strains? I was thinking of that (very tentatively) as a good sign, like "omicron is less of a destroyer of nerve cells", and I think of nerve cells as being unusually difficult to heal, cf. polio. (Omicron does cause brain fog, which is bad but not necessarily associated with the killing of nerve cells, or at least that's my vague impression / guess.) Low confidence on all this.
Animal welfare EA and personal dietary options

I think what was meant is that they'd rather experience nothing at all for the same duration, so they're comparing the concentration camp to non-experience/non-existence, not their average experience.

frankybegs (8d): I don't think that that follows either, though. Because in practice temporarily not experiencing anything basically just means skipping to the next time you are experiencing something. So you may well intuit that you'd rather do that any time the quality of your experience dips a lot. For example, if you have a fine but mostly quite boring job, but your life outside of work is exceptionally blissful, you may well choose to 'skip' the work parts, to not experience them and just regain consciousness when you clock off to go live your life of luxury unendingly. That certainly doesn't mean your time at work has negative value; it's just nowhere near as good as the rest, so you'd rather stick to the bliss. So I would say that no, actually, this intuition merely proves that those experiences you'd prefer not to experience are below average, rather than below zero.
Viliam (17d): In other words, the question is: Would you prefer to experience X, or spend the same amount of time in a coma?
Animal welfare EA and personal dietary options

I think the kind of diet you outline also makes sense on asymmetric consequentialist views, including negative utilitarianism. See Brian Tomasik's writing.  That being said, the net positive lives are for pretty large animals (cows), so you have very little impact on them either way, and the main effects are likely on wild animals, and then you'd want to judge their lives and the effects on them.

Population effects on wild-caught fish (including for fishmeal, fed mostly to farmed fish and shrimp) and other animals in their ecosystems together can be me... (read more)

Animal welfare EA and personal dietary options

See also the discussion on the EA Forum post.

After looking at the evidence, I think conventional factory farmed chickens (for eggs and meat) have net negative lives in expectation (on symmetric ethical views). See my thread.

Hardcode the AGI to need our approval indefinitely?

At least some of these seem possible to make unlikely. If we can force the AGI to have only a few routes to hack or manipulate the panel, the signal, or its effects without a dominating penalty, and strongly defend those routes, then we could avoid worst-case outcomes.

It may conduct operations that appear good (and even are good) but also have a side effect of more easily allowing or hiding future bad actions.

It can only get away with killing everyone if it manages to hack or manipulate the panel, the signal or its effects.

 

It may modify itself directly to

... (read more)
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

But I could get a robot, that has no qualia, but has temperature detecting mechanisms, to say something like “I have detected heat in this location and cold in this location and they are different.” I don’t think my ability to distinguish between things is because they “feel” different; rather, I’d say that insofar as I can report that they “feel different” it’s because I can report differences between them. I think the invocation of qualia here is superfluous and may get the explanation backwards: I don’t distinguish things because they feel different; th

... (read more)
Lance Bush (3mo): I don't know the answer to these questions. I'm not sure the questions are sufficiently well-specified to be answerable, but I suspect if you rephrased them or we worked towards getting me to understand the questions, I'd just say "I don't know." But my not knowing how to answer a question does not give me any more insight into what you mean when you refer to qualia, or what it means to say that things "feel like something." I don't think it means anything to say things "feel like something." Every conversation I've had about this (and I've had a lot of them) goes in circles: what are qualia? How things feel. What does that mean? It's just "what it's like" to experience them. What does that mean? They just are a certain way, and so on. This is just an endless circle of obscure jargon and self-referential terms, all mutually interdefining one another. I don't notice or experience any sense of a gap. I don't know what gap others are referring to. It sounds like people seem to think there is some characteristic or property their experiences have that can't be explained. But this seems to me like it could be a kind of inferential error, the way people may have once insisted that there's something intrinsic about living things that distinguishes them from nonliving things, and that living things just couldn't be composed of conventional matter arranged in certain ways, that they just obviously had something else, some je ne sais quoi. I suspect if I found myself feeling like there was some kind of inexplicable essence, or je ne sais quoi, to some phenomena, I'd be more inclined to think I was confused than that there really was je ne sais quoiness. I'm not surprised philosophers go in for thinking there are qualia, but I'm surprised that people in the lesswrong community do. Why not think "I'm confused and probably wrong" as a first pass?
Why are many people so confident that there is what, as far as I can tell, amounts to something that may be fundamentally incomprehensible, ev
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

The ability to distinguish the experiences in a way you can report on would be at least one functional difference, so this doesn't seem to me like it would demonstrate much of anything.

 

It is a functional difference, but there must be some further (conscious?) reason why we can do so, right? Where I want to go with this is that you can distinguish them because they feel different, and that's what qualia refers to. This "feeling" in qualia, too, could be a functional property. The causal diagram I'm imagining is something like

Unconscious processes (+un... (read more)

Lance Bush (3mo): Do you mean like a causal reason? If so then of course, but that wouldn't have anything to do with qualia. I have access to the contents of my mental states, and that includes information that allows me to identify and draw distinctions between things, categorize things, label things, and so on. A "feeling" can be cashed out in such terms, and once it is, there's nothing else to explain, and no other properties or phenomena to refer to. I don't know what work "qualia" is doing here. Of course things feel various ways to me, and of course they feel different. Touching a hot stove doesn't feel the same as touching a block of ice. But I could get a robot, that has no qualia, but has temperature detecting mechanisms, to say something like "I have detected heat in this location and cold in this location and they are different." I don't think my ability to distinguish between things is because they "feel" different; rather, I'd say that insofar as I can report that they "feel different" it's because I can report differences between them. I think the invocation of qualia here is superfluous and may get the explanation backwards: I don't distinguish things because they feel different; things "feel different" if and only if we can distinguish differences between them. Then I'm even more puzzled by what you think qualia are. Qualia are, I take it, ineffable, intrinsic qualitative properties of experiences, though depending on what someone is talking about they might include more or fewer features than these. I'm not sure qualia can be "functional" in the relevant sense. I don't know. I just want to know what qualia are. Either people can explain what qualia are or they can't. My inability to explain something wouldn't justify saying "therefore, qualia," so I'm not sure what the purpose of the questions is.
I’m sure you don’t intend to invoke “qualia of the gaps,” and presume qualia must figure into any situation in which I, personally, am not able to answer a question yo
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

Ok, I think I get the disagreement now.

I can be in an entire room of people insisting that red has the property of "redness" and that chocolate is "chocolatey" and so on, and they all nod and agree that our experiences have these intrinsic what-its-likeness properties. This seems to be what people are talking about when they talk about qualia. To me, this makes no sense at all. It's like saying seven has the property of "sevenness." That seems vacuous to me.

Hmm, I'm not sure it's vacuous, since it's not like they're applying "redness" to only one thing; r... (read more)

Lance Bush (3mo): One can apply a vacuous term to multiple things, so pointing out that you could apply the term to more than one thing does not seem to me to indicate that it isn't vacuous. I could even stipulate a concept that is vacuous by design: "smorf", which doesn't mean anything, and then I can say something like "potatoes are smorf." The ability to distinguish the experiences in a way you can report on would be at least one functional difference, so this doesn't seem to me like it would demonstrate much of anything. Some of the questions you ask seem a bit obscure, like how I can tell something is hotter. Are you asking for a physiological explanation? Or the cognitive mechanisms involved? If so, I don't know, but I'm not sure what that would have to do with qualia. But maybe I'm not understanding the question, and I'm not sure how that could get me any closer to understanding what qualia are supposed to be. I don't know. Likewise for most of the questions you ask. "What are the functional properties of X?" questions are very strange to me. I am not quite sure what I am being asked, or how I might answer, or if I'm supposed to be able to answer. Maybe you could help me out here, because I'd like to answer any questions I'm capable of answering, but I'm not sure what to do with these.
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

It's more that when people try to push me to have qualia intuitions, I can introspect, report on the contents of my mental states, and then they want me to locate something extra.

 

Are they expecting qualia to be more than a mental state? If you're reporting the contents of your mental states, isn't that already enough? I'm not sure what extra there should be for qualia. Objects you touch can feel hot to you, and that's exactly what you'd be reporting. Or would you say something like "I know it's hot, but I don't feel it's hot"? How would you know it's... (read more)

Lance Bush (3mo): I don't think I can replicate exactly the kinds of ways people framed the questions. But they might do something like this: they'd show me a red object. They'd ask me "What color is this?" I say red. Then they'd try to extract from me an appreciation for the red "being a certain way" independent of, e.g., my disposition to identify the object as red, or my attitudes about red, as a color, and so on. Everything about "seeing red" doesn't to me indicate that there is a "what it's like" to seeing red. I am simply ... seeing red. Like, I can report that fact, and talk about it, and say things like "it isn't blue" and "it is the same color as a typical apple" and such, but there's nothing else. There's no "what it's likeness" for me, or, if there is, I'm not able to detect and report on this fact. The most common way people will frame this is to try to get me to agree that the red has a certain "redness" to it. That chocolate is "chocolatey" and so on. I can be in an entire room of people insisting that red has the property of "redness" and that chocolate is "chocolatey" and so on, and they all nod and agree that our experiences have these intrinsic what-its-likeness properties. This seems to be what people are talking about when they talk about qualia. To me, this makes no sense at all. It's like saying seven has the property of "sevenness." That seems vacuous to me. I can look at something like Dennett's account: that people report experiences as having some kind of intrinsic nonrelational properties that are ineffable and immediately apprehensible. I can understand all those words in combination, but I don't see how anyone could access such a thing (if that's what qualia are supposed to be), and I don't think I do. It may be that I am something akin to a native functionalist. I don't know. But part of the reason I was drawn to Dennett's views is that they are literally the only views that have ever made any sense to me.
Everything else seems like gibberish.
Quick general thoughts on suffering and consciousness

The way I imagine any successful theory of consciousness going is that even if it has a long parts (processes) list, every feature on that list will apply pretty ubiquitously to at least a tiny degree. Even if the parts need to combine in certain ways, that could also happen to a tiny degree in basically everything, although I'm much less sure of this claim; I'm much more confident that I can find the parts in a lot of places than in the claim that basically everything is like each part, so finding the right combinations could be much harder. The full comp... (read more)

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

My main objection (or one of my main objections) to the position is that I don't think I'm self-aware to the level of passing something like the mirror test or attributing mental states to myself or others during most of my conscious experiences, so the bar for self-reflection seems set too high. My self-representations may be involved, but not to the point of recognizing my perceptions as "mine", or at least the "me" here is often only a fragment of my self-concept. My perceptions could even be integrated into my fuller self-concept, but without my awaren... (read more)

Quick general thoughts on suffering and consciousness

I would also add that the fear responses, while participating in the hallucinations, aren't themselves hallucinated, not any more than wakeful fear is hallucinated, at any rate. They're just emotional responses to the contents of our dreams.

Since pain involves both sensory and affective components which rarely come apart, and the sensory precedes the affective, it's enough to not hallucinate the sensory.

I do feel like pain is a bit different from the other interoceptive inputs in that the kinds of automatic responses to it are more like those to emotions, ... (read more)

Steven Byrnes (3mo): Yeah, maybe I should have said "the amygdala responds to the hallucinations" or something. "Emotions" is kinda a fuzzy term that means different things to different people, and more specifically, I'm not sure what you meant in this paragraph. The phrase "automatic responses…to emotions" strikes me as weird because I'd be more likely to say that an "emotion" is an automatic response (well, with lots of caveats), not that an "emotion" is a thing that elicits an automatic response. Again I'm kinda confused here. You wrote "not…but" but these all seem simultaneously true and compatible to me. In particular, I think "hallucination is costly" energetically (as far as I know), and "hallucination is costly" evolutionarily (when done at the wrong times, e.g. while being chased by a lion). But I also think hallucination is controlled by an inference-algorithm hyperparameter. And I'm also inclined to say that the "default" value of this hyperparameter corresponds to "don't hallucinate", and during dreams the hyperparameter is moved to a non-"default" setting in some cortical areas but not others. Well, the word "default" here is kinda meaningless, but maybe it's a useful way to think about things. Hmm, maybe you're imagining that there's some special mechanism that's active during dreams but otherwise inactive, and this mechanism specifically "injects" hallucinations into the input stream somehow. I guess if the story was like that, then I would sympathize with the idea that maybe we shouldn't call it a "hyperparameter" (although calling it a hyperparameter wouldn't really be "wrong" per se, just kinda unhelpful). However, I don't think it's a "mechanism" like that. I don't think you need a special mechanism to generate random noise in biological neurons where the input would otherwise be. They're already noisy.
You just need to "lower SNR thresholds" (so to speak) such that the noise is treated as a meaningful signal that can constrain higher-level models, instead of being
Quick general thoughts on suffering and consciousness

Once you can report fine-grained beliefs about your internal state (including your past actions, how they cohere with your present actions, how this coherence is virtuous rather than villainous, how your current state and future plans are all the expressions of a single Person with a consistent character, etc.), there's suddenly a ton of evolutionary pressure for you to internally represent a 'global you state' to yourself, and for you to organize your brain's visible outputs to all cohere with the 'global you state' narrative you share with others; where

... (read more)
Rob Bensinger (3mo): I'd probably say human babies and adult chickens are similarly likely to be phenomenally conscious (maybe between 10% and 40%). I gather Eliezer assigns far lower probability to both propositions, and I'm guessing he thinks adult chickens are way more likely to be conscious than human babies are, since he's said that "I'd be truly shocked (like, fairies-in-the-garden shocked) to find them [human babies] sentient", whereas I haven't heard him say something similar about chickens.
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness
  1. Does GPT-3 have any internal states/processes that look and act like its own emotions, desires or motivations? These words are in its vocabulary, but so are they in dictionaries. How could we interpret something as aversive to GPT-3? For example (although this isn't the only way it could have such a state), is there an internal state that correlates well with the reward it would get during training?
    1. In mammals, activation of the ACC seems necessary for the affective component of pain, and this of course contributes to aversive behaviour. (Also, evolution ha
... (read more)
Logan Zoellner (3mo): It's easy to show that GPT-3 has internal states that it describes as "painful" and tries to avoid. Consider the following dialogue (bold text is mine). And, just so Roko's Basilisk doesn't come for me if AI ever takes over the world
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

Shouldn't mastery and self-awareness/self-modelling come in degrees? Is it necessary to be able to theorize and come up with all of the various thought experiments (even with limited augmentation from extra modules, different initializations)? Many nonhuman animals could make some of the kinds of claims we make about our particular conscious experiences for essentially similar reasons, and many demonstrate some self-awareness in ways other than by passing the mirror test (and some might pass a mirror test with a different sensory modality, or with some ext... (read more)

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

Of course, many animals have failed the mirror test, and that is indeed evidence of absence for those animals. Still,

  1. Animals could just be too dumb (or rely too little on vision) to understand mirrors, but still self-model in other ways, like in my top comment. Or, they might at least tell themselves apart from others in the mirrors as unique, without recognizing themselves, like some monkeys and pigeons. Pigeons can pick out live and 5-7 second delayed videos of themselves from prerecorded ones.
  2. Animals might not care about the marks. Cleaner wrasse, a spe
... (read more)
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

Thanks, this is helpful.

what are the cognitive causes of people talking about consciousness and qualia

Based on the rest of your comment, I'm guessing you mean talk about consciousness and qualia in the abstract and attribute them to themselves, not just talk about specific experiences they've had.

a cognitive algorithm that looks to me like it codes for consciousness, in the sense that if you were to execute it then it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same w

... (read more)
So8res (3mo): If I were doing the exercise, all sorts of things would go in my "stuff people say about consciousness" list, including stuff Searle says about Chinese rooms, stuff Chalmers says about p-zombies, stuff the person on the street says about the ineffable intransmissible redness of red, stuff schoolyard kids say about how they wouldn't be able to tell if the color they saw as green was the one you saw as blue, and so on. You don't need to be miserly about what you put on that list. Mostly (on my model) because it's not at all clear from the get-go that it's meaningful to "be conscious" or "have qualia"; the ability to write an algorithm that makes the same sort of observable-claims that we make, for the same cognitive reasons, demonstrates a mastery of the phenomenon even in situations where "being conscious" turns out to be a nonsense notion. Note also that higher standards on the algorithm you're supposed to produce are more conservative: if it is meaningful to say that an algorithm "is conscious", then producing an algorithm that is both conscious, and claims to be so, for the same cognitive reasons we do, is a stronger demonstration of mastery than isolating just a subset of that algorithm (the "being conscious" part, assuming such a thing exists). I'd be pretty suspicious of someone who claimed to have a "conscious algorithm" if they couldn't also say "and if you inspect it, you can see how if you hook it up to this extra module here and initialize it this way, then it would output the Chinese Room argument for the same reasons Searle did, and if you instead initialize it that way, then it outputs the Mary's Room thought experiment for the same reason people do".
Once someone demonstrated that sort of mastery (and once I'd verified it by inspection of the algorithm, and integrated the insights therefrom), I'd be much more willing to trust them (or to operate the newfound insights myself) on questions of how the ability to write philosophy papers about qualia relates
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

How do you imagine consciousness would work in the moment for humans without inner/internal monologues (and with aphantasia, unable to visualize; some people can do neither)? And in general, for experiences that we don't reflect on using language in the moment, or at most simple expressive language, like "Ow!"?

2 · Lance Bush · 3mo: The lack of an internal monologue is a distressing question to me. I run a constant inner monologue, and can't imagine thinking differently. There may be some sense in which people who lack an inner monologue lack certain features of consciousness that others who do have one possess.

Part of the issue here is to avoid thinking of consciousness as either a discrete capacity one either has or doesn't have, or even to think of it as existing on a continuum, such that one could have "more" or "less" of it. Instead, I think of "consciousness" as a term we use to describe a set of both qualitatively and quantitatively distinct capacities. It'd be a bit like talking about "cooking skills." If someone doesn't know how to use a knife, or start a fire, do they "lack cooking skills"? Well, they lack a particular cooking skill, but there is no single answer as to whether they "lack cooking skills," because cooking skills break down into numerous subskills, each of which may be characterized by its own continuum along which a person could be better or worse. Maybe a person doesn't know how to start a fire, but they can bake amazing cakes if you give them an oven and the right ingredients.

This is why I am wary of saying that animals are "not conscious" and would instead say that whatever their "consciousness" is like, it would be very different from ours, if they lack a self-model and if a self-model is as central to our experiences as I think it is.

As for someone who lacks an inner monologue, I am not sure what to make of these cases. And I'm not sure whether I'd want to say someone without an inner monologue "isn't conscious," as that seems a bit strange. Rather, I think I'd say that they may lack a feature of the kinds of consciousness most of us have that strikes me, at first glance, as fairly central and important. But perhaps it isn't. I'd have to think more about that, to consider whether an enculturated construction of a self-model requires an inner monologue.
I do think i
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

According to Yudkowsky, is the self-model supposed to be fully recursive, so that the model feeds back into itself, rather than just having a finite stack of separate models each modelling the previous one (like here and here, although FWIW, I'd guess those authors are wrong that their theory rules out cephalopods)? If so, why does this matter, if we only ever recurse to bounded depth during a given conscious experience?

If not, then what does self-modelling actually accomplish? If modelling internal states is supposedly necessary for consciousness, how and... (read more)
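The contrast in the question above, between a genuinely recursive self-model and a finite stack of separate models each modelling the previous one, can be sketched as a toy data structure. This is purely illustrative; the class and function names are my own, not drawn from any theory cited here:

```python
# Toy illustration of two pictures of self-modelling.
# A "finite stack" is a chain of models, each representing the one below it,
# up to a fixed depth. A "recursive" self-model is a single model whose
# contents include a reference to itself, so dereferencing never bottoms out.

class Model:
    def __init__(self, target):
        self.target = target  # the thing this model represents


def finite_stack(world, depth):
    """Build a tower: a model of the world, a model of that model, and so on."""
    m = Model(world)
    for _ in range(depth - 1):
        m = Model(m)
    return m


def recursive_self_model(world):
    """A model that also models itself: unbounded depth in principle."""
    m = Model(world)
    m.self_ref = m  # the model appears in its own contents
    return m


stack = finite_stack("world", depth=3)
rec = recursive_self_model("world")

# The stack bottoms out after three dereferences; the recursive model never does.
assert stack.target.target.target == "world"
assert rec.self_ref.self_ref.self_ref is rec
```

The question in the comment then becomes: if, during any given experience, only boundedly many of these dereferences are ever performed, is there a functional difference between the two structures?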

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

Some other less theory-heavy approaches to consciousness I find promising:

  1. What do unconscious processes in humans tell us about sentience?, and then see Rethink Priorities' table with evidence for various indicators for different species, with a column for unconscious processing in humans. (Disclaimer: I work at Rethink Priorities.)
  2. The facilitation hypothesis: "Phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus." This is compatible with most popular
... (read more)
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

I also don't think GPT-3 has emotions that are inputs to executive functions, like learning, memory, and control.

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

I don't think it's obvious that nonhuman animals, including the vertebrates we normally farm for food, don't self-model (at least to some degree). I think it hasn't been studied much, although there seems to be more interest now. Absence of evidence is at best weak evidence of absence, especially when there's been little research on the topic to date. Here's some related evidence, although maybe some of this is closer to higher-order processes than self-modelling in particular:

  1. See the discussion of Attention Schema Theory here (section "Is an attention sch
... (read more)
2 · MichaelStJules · 3mo: Of course, many animals have failed the mirror test, and that is indeed evidence of absence for those animals. Still:

1. Animals could just be too dumb (or rely too little on vision) to understand mirrors, but still self-model in other ways, like in my top comment. Or, they might at least tell themselves apart from others in the mirrors as unique, without recognizing themselves, like [some monkeys and pigeons](https://www.frontiersin.org/articles/10.3389/fpsyg.2021.669039/full). Pigeons can [pick out live and 5-7 second delayed videos of themselves from prerecorded ones](https://www.sciencedaily.com/releases/2008/06/080613145535.htm).
2. Animals might not care about the marks. Cleaner wrasse, a species of fish, [did pass the mirror test](https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000021) (the multiple phases, including the final self-directed behaviour with the visible mark), and they are particularly inclined to clean things (parasites) that look like the mark, which is where they get their name. I think the fact that they are inclined to clean similar looking marks was argued to undermine the results, but that seems off to me.
3. I would be interested in seeing the mirror test replicated in different sensory modalities, e.g. something that replays animals' smells or sounds back to them, a modification near the source in the test condition, and checking whether they direct behaviour towards themselves to investigate. Some criticisms of past scent mirror tests are discussed [here](https://robertocazzollagatti.com/2018/06/07/self-awareness-in-dogs-needs-no-mirroring/) (paper with criticism [here](https://www.sciencedirect.com/science/article/pii/S0376635717304862)). The issues were addressed recently [here](https://www.tandfonline.com/doi/full/10.1080/03949370.2020.1846628) with wolves. Psychology Today summary
Petrov Day 2021: Mutually Assured Destruction?

Hmm, actually, it's not clear to me whether the site will go down immediately (with the button intact) or after an hour.

Petrov Day 2021: Mutually Assured Destruction?

The site will remain up for one hour with a message that a missile is incoming (based on what I described here), and that message could be a false alarm.

Petrov Day 2021: Mutually Assured Destruction?

I don't think you'll be able to retaliate if the site is down.

2 · holomanga · 4mo: In the message sent to holders of launch codes that's repeated in this post, it says:
Petrov Day 2021: Mutually Assured Destruction?

Since the timer wasn't updating on either site, I assume they weren't testing us (yet).

Petrov Day 2021: Mutually Assured Destruction?

I briefly saw a "Missile Incoming" message with a 60:00 timer (that wasn't updating) on the buttons on the front pages of both LW and the EA Forum, at around 12pm EST, on mobile. Both messages were gone when I refreshed. Was this a bug, or were they testing the functionality, testing us, or preparing to test us?

3 · Neel Nanda · 4mo: The same happened with me; I thought it was an issue with page loading (I was using a very slow browser, and it took a few seconds to correct).
1 · hath · 4mo: Same thing happened to me. Might've been a bug with page loading? I've had similar things happen with other sites.
7 · Bjartur Tómas · 4mo: I suspect it was supposed to be a "false alarm".
Your Dog is Even Smarter Than You Think

I am willing to accept bets that general consensus in 3 years will be that Bunny and the vast majority of dogs in such studies do not have an episodic memory which they can communicate like claimed in this post.

(...)

I am offering 2:1 odds in favour of the other side.

Are you still offering this bet? I'm interested.

To clarify, you mean not just that the consensus will be that such studies find no (strong) evidence for episodic memory, but that dogs (in such studies) do not have an episodic memory that they can communicate like claimed in the post at all?

And, can you clarify what you mean by "like claimed in this post"?

Analogies and General Priors on Intelligence

Does this seem likely? I would guess this is basically true for the sensory and emotional parts, but language and mathematical reasoning seem like a large leap to me, so humans may be doing something qualitatively different from nonhuman animals. Nonhuman animals don't do recursion, as far as I know; or if they can, it's limited to very low recursion depth in practice.

OTOH, this might be of interest; the author argues the human cerebellum may help explain some of our additional capacity for language and tool use:

Long Covid Is Not Necessarily Your Biggest Problem

Ok, ya, some of these seem roughly within an order of magnitude of long COVID (higher or lower, since there's a lot of uncertainty).

I think it's worth mentioning that some of the risks here are more concentrated in older people, but can still be within an order of magnitude of COVID risk for people around my age (28). I would guess only Lyme and CFS would be concerning for a healthy person in their early 20s who doesn't take excessive risks of physical injury (low brain injury and post-ICU syndrome risk). I do wonder about recreational drug use, especially... (read more)

3 · Elizabeth · 5mo: This isn't cruxy for me, but:

- [95% of healthy people](https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-020-09049-x) have been infected with Epstein-Barr, it just doesn't have acute symptoms in many people.
- Both Epstein-Barr and chickenpox are herpes viruses, [all of which](https://www.ncbi.nlm.nih.gov/books/NBK8157/) establish residence in your cells forever. "Postherpetic" doesn't necessarily mean HSV1/2; it includes multiple viruses that are (EB, [cytomegalovirus](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6963600)) or were (chickenpox, pre-vaccine) nearly impossible to avoid without living in a bubble.
Long Covid Is Not Necessarily Your Biggest Problem

For this metareview it's the absolute percentage, not a comparison.

Whoops, sorry, I didn't mean to suggest otherwise.

I'm interested in the other studies you think show a similar number relative to a control group.

Hmm, I only remember this one with a similar number and controls, off the top of my head (I might have been thinking of similar numbers for something else):

https://www.nature.com/articles/s41586-021-03553-9 (I'm focusing on Positive cases in figure 3, who are not hospitalized; I think this paper has gotten relatively more attention in the community... (read more)

Long Covid Is Not Necessarily Your Biggest Problem

This is based on self-reports on survey data, which will again exclude asymptomatic cases: if you use the ⅓ figure and assume no long covid among the asymptomatic, that becomes 1.8% of 25-45 year olds with covid developing long covid that affects their daily life, which is well within the Lizardman Constant.

On the other hand, medicine is notoriously bad at measuring persistent, low-level, amorphous-yet-real effects. The Lizardman Constant doesn’t mean prevalences below 4% don’t exist, it means they’re impossible to measure using naive tools.
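The quoted back-of-the-envelope can be written out explicitly. This is a sketch under my own reading: I'm assuming the "⅓ figure" means a third of infections are asymptomatic, and back-inferring an underlying rate of about 2.7% among symptomatic cases, which the quoted text doesn't state:

```python
# Back-inferring the arithmetic in the quoted passage (my assumptions, not
# the source's): the "1/3 figure" = fraction of infections that are
# asymptomatic, and the survey rate is measured among symptomatic cases only.

frac_asymptomatic = 1 / 3
survey_rate_symptomatic = 0.027  # ~2.7%: inferred so the result matches the quoted 1.8%

# Dilute the symptomatic-only rate across all infections, assuming zero
# long covid among asymptomatic cases:
overall_rate = survey_rate_symptomatic * (1 - frac_asymptomatic)
print(f"{overall_rate:.1%}")  # → 1.8%
```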

1.8% seems simi... (read more)

2 · Elizabeth · 5mo: For this metareview it's the absolute percentage, not a comparison. I'm interested in the other studies you think show a similar number relative to a control group.
Long Covid Is Not Necessarily Your Biggest Problem

Sorry, I was responding to this, but forgot to quote it:

My tentative conclusion is that the risks to me of cognitive, mood, or fatigue side effects lasting >12 weeks from long covid are small relative to risks I was already taking, including the risk of similar long term issues from other common infectious diseases.

(emphasis mine)

My expectation is that compared to other infectious diseases, (long) COVID is

  1. Much much worse, but less common (e.g. cold), or
  2. Much worse and about as common (e.g. flu), or
  3. Not as bad, but much much more common.

And these together ... (read more)

8 · Elizabeth · 5mo: Some very quick numbers (populations may overlap):

- [13.15 CFS diagnoses](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3518652/) per 100,000 person-years (13.58 if you include idiopathic fatigue)
- [430 fibromyalgia diagnoses](https://journals.lww.com/pain/Abstract/2020/06000/A_review_of_the_incidence_and_risk_factors_for.6.aspx) per 100,000 person-years
- [10-20% chance](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6480773/) of "failure to treat" acute Lyme, given Lyme
- [30-80% chance](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6544795/) of post-ICU syndrome, given admission to ICU (but that's not tracking the counterfactual). There are [~4 million ICU admissions](https://healthpolicy.ucsf.edu/icu-outcomes) in the US per year, although those have a heavy long tail.
- Lifetime chance of [30%](https://www.cdc.gov/shingles/surveillance.html) for shingles (which is a manifestation of dormant chicken pox), although that should be trending down with the chickenpox vaccine. 10%-18% of people who develop shingles will develop postherpetic neuralgia ([another source](https://www.nhs.uk/conditions/post-herpetic-neuralgia/) has lifetime chance of postherpetic neuralgia at 20%).
- [500 traumatic brain injuries](https://www.cdc.gov/traumaticbraininjury/pubs/tbi_report_to_congress.html) per 100,000 person-years (albeit concentrated among children), of which 26-30 will create a long-term disability and 17 will cause death.
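For rough comparison, the per-100,000-person-year figures Elizabeth quotes convert to small annual per-person probabilities. This is a plain unit conversion on the numbers quoted, nothing more:

```python
# Convert incidence rates quoted above (per 100,000 person-years)
# into annual per-person probabilities.
rates_per_100k_person_years = {
    "CFS diagnosis": 13.15,
    "fibromyalgia diagnosis": 430.0,
    "traumatic brain injury": 500.0,
}

annual_prob = {name: r / 100_000 for name, r in rates_per_100k_person_years.items()}

for name, p in annual_prob.items():
    print(f"{name}: {p:.4%} per year")
```

So, for instance, 430 diagnoses per 100,000 person-years corresponds to roughly a 0.43% chance per person per year, before adjusting for individual risk factors.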
Exercise Trade Offs

I don't think it's inevitable that everyone will come into contact with COVID or definitely catch COVID (which becomes more likely the more often you come into contact with it). You can still manage your exposure.

Furthermore, you can catch COVID multiple times.

Exercise Trade Offs

My gym is personal training focused with a single cardio machine, which you must schedule in advance. If I’m doing cardio there will be at most two clients doing weight training and two trainers in the room, plus me, all > 10 feet away, in a large room with filtration they claim is good. If I’m doing weight training there’s me, my trainer (fairly nearby), and potentially a farther away client and trainer pair. In theory there could be an additional person on the cardio machine but I’ve yet to see it happen.

 

For what it's worth, this seems unusually... (read more)

Long Covid Is Not Necessarily Your Biggest Problem

Thanks for writing this!

My impression is that we're much less likely to catch other infectious diseases that are nearly as severe in the long term (except maybe Lyme?), and unless your probability of catching COVID is very low, your risks from COVID seem worse than driving. This is based on a few people's separate BOTECs for long COVID and my own (vague and personal, not well-researched) impression of how common and bad other infectious diseases are.

Note that a lot of other infectious diseases have become rarer under lockdowns, too, and that's something to... (read more)

6 · Elizabeth · 5mo: This question feels like a type error to me. My claim isn't "we precisely measured a bunch of risks and covid didn't make the top 5", it's "our measures of damage are not sufficiently precise to measure the danger of breakthrough covid against the accumulated risks we take elsewhere". Additionally, which risks are worth lowering depends heavily on the individual, both what risks they were already taking and how much joy those risks bring them.

That said, I personally am focusing my energy on exercise, air quality, fixing the vaccine-induced chest congestion, and diet.
COVID/Delta advice I'm currently giving to friends

The high prevalence of neurological symptoms could be related to working in healthcare during a pandemic. Mental health also looked bad, but didn't differ significantly between cases and controls.

Can you control the past?

There could be external information you and your copy are not aware of that would distinguish the two of you, e.g. how far away different stars appear, or the time since the Big Bang. And we can still talk about things outside Hubble volumes. These are mostly relational properties that can be used to tell spacetime locations apart.

1 · TAG · 5mo: Any two identical things could be distinguished by their spacetime locations... while still being identical in their own intrinsic properties. Basically, space and time are what allow numerical non-identity in spite of qualitative identity.