Jason Gross


Comments

Is there a definitive intro to punishing non-punishers?

I think the thing you're looking for is traditionally called "third-party punishment" or "altruistic punishment", cf. https://en.wikipedia.org/wiki/Third-party_punishment . Wikipedia cites Bendor, Jonathan; Swistak, Piotr (2001). "The Evolution of Norms". American Journal of Sociology. 106 (6): 1493–1545. doi:10.1086/321298, which seems at least moderately non-technical at a glance.

 

I think I first encountered this in my Moral Psychology class at MIT (syllabus at http://web.mit.edu/holton/www/courses/moralpsych/home.html ), and I believe the citation was E. Fehr & U. Fischbacher, 'The Nature of Human Altruism', Nature 425 (2003), 785–791. The bottom of the first paragraph on page 787 in https://www.researchgate.net/publication/9042569_The_Nature_of_Human_Altruism ("In fact, it can be shown theoretically that even a minority of strong reciprocators suffices to discipline a majority of selfish individuals when direct punishment is possible.") seems related but not exactly what you're looking for.

How good are our mouse models (psychology, biology, medicine, etc.), ignoring translation into humans, just in terms of understanding mice? (Same question for drosophila.)

I think another interesting datapoint is to look at where our hard-science models are inadequate because we haven't managed to run the experiments we'd need (even when we know the theory of how to run them). The main areas I'm aware of are high-energy physics looking for things beyond the Standard Model (the LHC was an enormous undertaking, and I think the next step up in particle accelerators requires building one the size of the moon or something like that), gravitational waves (similar issues of scale), and quantum gravity (similar issues, plus how do you build an experiment to actually safely play with black holes?!).

On the other hand, astrophysics manages to do an enormous amount (star composition, expansion rate of the universe, planetary composition) with literally no ability to run experiments and very limited ability to observe. I think a particularly interesting case was the discovery of dark matter (which we actually still don't have a model for). We discovered it, iirc, by looking at a bunch of stars in the Milky Way and determining their velocity as a function of distance from the center by:

(a) looking at which wavelengths of light were missing, to determine each star's velocity toward or away from us (the elements that make up a star absorb very specific wavelengths, so we can tell a star's chemical composition from the pattern of missing wavelengths, and we can get its velocity/redshift/blueshift from how far off those wavelengths are from their values in the lab);

(b) picking out stars of colors that we know come only in very specific brightnesses, so that we can use apparent brightness to determine how far away each star is;

(c) using each star's position in the night sky to determine what vector to use, so we can position it relative to the center of the galaxy; and finally

(d) noticing that the velocity-vs-radius curve is very, very different from what it would be if the only mass causing gravitational pull were the visible star mass, and then inverting the plot to determine the spatial distribution of this newfound "dark matter".

I think it's interesting and cool that there's enough validated shared model built up in astrophysics that you can stick a fancy prism in front of a fancy eye, look at the night sky, and from what you see infer facts about how the universe is put together. Is this sort of thing happening in biology?
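To make the inversion in (d) concrete, here's a minimal sketch with toy numbers rather than real data: it compares the orbital speed the visible mass alone would predict with a roughly flat observed speed, and backs out how much enclosed mass the flat curve implies. The visible-mass figure, the made-up wavelength shift, and the ~220 km/s flat speed are rough stand-ins, and treating the visible mass as a single central point is a big simplification.

```python
# Toy sketch of the rotation-curve argument above. All numbers are rough
# stand-ins (not real survey data), and the visible mass is treated as a
# single central point, which is a big simplification.

import math

C = 2.998e8              # speed of light, m/s
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 1.5e41       # stand-in for the Milky Way's visible mass, kg (assumed)
KPC = 3.086e19           # one kiloparsec in meters

def doppler_velocity(lambda_observed, lambda_rest):
    """Step (a): line-of-sight velocity from how far a spectral line is shifted."""
    return C * (lambda_observed - lambda_rest) / lambda_rest

def keplerian_speed(r):
    """Orbital speed predicted if only the visible (central) mass were pulling."""
    return math.sqrt(G * M_VISIBLE / r)

def implied_enclosed_mass(v_observed, r):
    """Step (d), inverted: mass that must sit inside radius r to explain v_observed."""
    return v_observed ** 2 * r / G

# A hydrogen line with rest wavelength 656.28 nm observed at 656.76 nm (made-up
# shift) corresponds to a line-of-sight speed of roughly 220 km/s.
v_obs = doppler_velocity(656.76e-9, 656.28e-9)
print(f"observed speed from redshift: {v_obs / 1e3:.0f} km/s")

# Observed rotation curves stay roughly flat with radius; the Keplerian prediction falls off.
for r_kpc in (5, 10, 20, 40):
    r = r_kpc * KPC
    ratio = implied_enclosed_mass(v_obs, r) / M_VISIBLE
    print(f"r = {r_kpc:>2} kpc: predicted {keplerian_speed(r) / 1e3:4.0f} km/s, "
          f"observed ~{v_obs / 1e3:.0f} km/s, implied/visible mass = {ratio:.1f}")
```

The point is just the shape of the comparison: the predicted curve falls off with radius while the observed one stays flat, so the implied enclosed mass keeps growing with radius, which is the dark-matter inference.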

Melatonin: Much More Than You Wanted To Know

By the way,

The normal tendency to wake up feeling refreshed and alert gets exaggerated into a sudden irresistible jolt of awakeness.

I'm pretty sure this is wrong. I'll wake up feeling unable to go back to sleep, but not feeling well-rested and refreshed. I imagine it's closer to a caffeine headache? (I feel tired and headachy but not groggy.) So, at least for me, this is a body clock thing, and not a transient effect.

Melatonin: Much More Than You Wanted To Know

Van Geijlswijk makes the important point that if you take 0.3 mg seven hours before bedtime, none of it is going to be remaining in your system at bedtime, so it’s unclear how this even works. But – well, it is pretty unclear how this works. In particular, I don’t think there’s a great well-understood physiological explanation for how taking melatonin early in the day shifts your circadian rhythm seven hours later.

It seems to me there's a very obvious model for this: the body clock is a chemical clock whose current state is stored in the concentration/configuration of various chemicals in various places. The clock, like all physical systems, is temporally local. There seems to be evidence that it keeps time even in the complete absence of external cues, so most of the "what time is it" state must be encoded in the body (rather than, e.g., using the intensity of sunlight as the primary signal to set the current time). Taking melatonin seems like it's futzing directly with the state of the body clock. If high melatonin encodes the state "middle of the night", then whenever you take it, it should effectively set your clock to "it's now the middle of the night". I think this is why it makes it possible to fall asleep. I think that it's then the effects of sunlight and actually sleeping and waking up that drag your body clock later again (I also have the effect that at anything over 0.1 mg or so, I'll wake up 5h45m later, and if my dose is much more than 0.3 mg, I won't be able to fall back asleep).

I'm pretty confused about what taking it 9h after waking does in this model, though; 5–6 hours after the dose, when the "most awake" time happens in this model, is just about an hour before you want to go to bed. One plausible explanation here is that this is somehow tied to the "reset" effect you mentioned from staying up for more than 24 hours; if what really matters here is that you were awake for the entirety of your normal sleep time (or something like that), then this would predict that having melatonin any time between when you woke up and 7 hours before you went to sleep would have the "reset" effect. An alternative (or additional) plausible explanation is that this is tied to "oversleeping" (which in this model would be about confusing your body clock enough that it thinks you're supposed to keep sleeping past when you eventually wake up). If the body clock is sensitive to going back to sleep shortly after waking up (and my experience says this is the case, though I'm not sure what exactly the window is), then taking melatonin 5–6 hours before bed should induce something akin to the "oversleeping" effect (where you wake up, are fine, go back to sleep, sleep much more than 8 hours total, and then feel groggy when you eventually get up).
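For whatever it's worth, here's a minimal sketch of the clock-reset model I'm describing above. It's entirely a toy: the 24-hour phase variable, the particular "middle of the night" phase value, and the idea that a dose simply overwrites the clock state are assumptions of this model, not established physiology.

```python
# Toy model: the body clock as a phase variable (hours, mod 24) that free-runs,
# and melatonin as an event that overwrites the phase with "middle of the night".
# All specific numbers here are made up for illustration.

NIGHT_PHASE = 3.0   # assumed internal phase (in hours) meaning "middle of the night"
PERIOD = 24.0       # assumed clock period

def advance(phase: float, hours: float) -> float:
    """The clock free-runs: phase just accumulates, modulo the period."""
    return (phase + hours) % PERIOD

def take_melatonin(phase: float) -> float:
    """In this model, a dose discards the current state and sets it to 'night'."""
    return NIGHT_PHASE

# Example: internal clock currently reads 16:00 (say, 7 hours before a 23:00 bedtime).
phase = 16.0
phase = take_melatonin(phase)   # clock now reads "middle of the night" (3.0)
phase = advance(phase, 5.75)    # the "wake up 5h45m later" observation lands here
print(f"internal clock reading 5h45m after the dose: {phase:.2f} h")
```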

Raemon's Shortform

I'm wanting to label these as (1) 😃 (smile); (2) 🍪 (cookie); (3) 🌟 (star).

Dunno if this is useful at all.

Raemon's Shortform

This has been true for years. At least six, I think? I think I started using Google scholar around when I started my PhD, and I do not recall a time when it did not link to pdfs.

Raemon's Shortform

I dunno how to think about small instances of willpower depletion, but burnout is a very real thing in my experience and shows up prior to any sort of conceptualizing of it. (And pushing through it works, but then results in more extreme burnout after.)

Oh, wait, willpower depletion is a real thing in my experience: if I am sleep deprived, I have to hit the "get out of bed" button in my head harder/more times before I actually get out of bed. This is separate from feeling sleepy (it is true even when I have trouble falling back asleep). It might be mediated by distraction, but that seems like quibbling over words.

I think in general I tend to take the outside view on willpower. I notice how I tend to accomplish things, and then try to adjust incentive gradients so that I naturally do more of the things I want. As was said in some CFAR unit, IIRC, if my process involves routinely using willpower to accomplish a particular thing, I've already lost.

Raemon's Shortform

People who feel defensive have a harder time thinking in truthseeking mode rather than "keep myself safe" mode. But, it also seems plausibly-true that if you naively reinforce feelings of defensiveness they get stronger. i.e. if you make saying "I'm feeling defensive" a get out of jail free card, people will use it, intentionally or no

Emotions are information. When I feel defensive, I'm defending something. The proper question, then, is "what is it that I'm defending?" Perhaps it's my sense of self-worth, or my right to exist as a person, or my status, or my self-image as a good person. The follow-up is then "is there a way to protect that and still seek the thing we're after?" "I'm feeling defensive" isn't a "get out of jail free" card; it's an invitation to go meta before continuing on the object level. (And if people use "I'm feeling defensive" to accomplish this, that seems basically fine? "Thank you for naming your defensiveness, I'm not interested in looking at it right now and want to continue on the object level if you're willing to, or else end the conversation for now" is also a perfectly valid response to defensiveness, in my world.)

Micro feedback loops and learning

I imagine one thing that's important to learning through this app, which I think may be under-emphasised here, is that the feedback allows for mindful play as a way of engaging. I imagine I can approach the pretty graph with curiosity: "what does it look like if I do this? What about this?" I imagine that an app which replaced the pretty graphs with just the words "GOOD" and "BAD" would neither be as enjoyable nor as effective (though I have no data on this).

Fuzzy Boundaries, Real Concepts

Another counter-example for consent: being on a crowded subway with no room to avoid touching people (when someone next to you is uncomfortable with the lack of space). I like your definition, though, and want to try to make a better one (and I acknowledge this is not the point of this post). My stab at a refinement of "consent" is "respect for another's choices", where "disrespect" is "deliberately(?) doing something to undermine those choices". I think this has room for things like preconsent (you can choose to do something you disprefer) and crowded subways. It allows for pulling people out of the way of traffic (either they would choose to have you save their life, or you are knowingly being paternalistic and putting their life above their consent and choices).
