This is a special post for quick takes by jimrandomh. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

This post is a container for my short-form writing. See this post for meta-level discussion about shortform.

Jimrandomh's Shortform
264 comments

There's been a lot of previous interest in indoor CO2 in the rationality community, including an (unsuccessful) CO2 stripper project, some research summaries, and self-experiments. The results are confusing, and I suspect some of the older research might be fake. But I noticed something that has greatly changed how I think about CO2 in relation to cognition.

Exhaled air is about 50,000 ppm CO2. Outdoor air is about 400 ppm; indoor air ranges from 500 to 1,500 ppm depending on ventilation. Since exhaled air has a CO2 concentration about two orders of magnitude larger than the variance in room CO2, if even a small percentage of inhaled air is reinhalation of exhaled air, this will have a significantly larger effect than changes in ventilation. I'm having trouble finding a straight answer about what percentage of inhaled air is rebreathed (other than in the context of mask-wearing), but given the diffusivity of CO2, I would be surprised if it wasn't at least 1%.

This predicts that a slight breeze, which replaces the air in front of your face and prevents reinhalation, would have a considerably larger effect than ventilating an indoor space where the air is mostly still. This matches my subjective experience of indoo... (read more)
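The mixing arithmetic above can be sketched with a few lines of code; the 1% rebreathed fraction is the post's guess, not a measurement, and the room concentrations are the illustrative figures from the paragraph above:

```python
# Back-of-envelope: compare the CO2 contribution of rebreathing a small
# fraction of exhaled air against the difference between a well- and
# poorly-ventilated room.
EXHALED_PPM = 50_000          # ~5% CO2 in exhaled breath
WELL_VENTILATED_PPM = 500
POORLY_VENTILATED_PPM = 1_500

def inhaled_ppm(room_ppm, rebreathed_fraction):
    """CO2 concentration of inhaled air, if a given fraction of it
    is recently-exhaled air rather than room air."""
    return (1 - rebreathed_fraction) * room_ppm + rebreathed_fraction * EXHALED_PPM

# At an assumed 1% rebreathing, the rebreathing term alone contributes:
rebreathing_effect = 0.01 * (EXHALED_PPM - WELL_VENTILATED_PPM)   # ~495 ppm
ventilation_effect = POORLY_VENTILATED_PPM - WELL_VENTILATED_PPM  # 1000 ppm

print(round(inhaled_ppm(WELL_VENTILATED_PPM, 0.01)))  # ~995 ppm
print(rebreathing_effect, ventilation_effect)
```

So even a 1% rebreathed fraction is worth about half the entire gap between a well- and poorly-ventilated room, and at 2-3% it dominates — which is the sense in which a breeze could matter more than ventilation.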

6Gunnar_Zarncke
This indicates that how we breathe plays a big role in CO2 uptake. Like, shallow or full, small or large volumes, or the speed of exhaling. Breathing technique is a key skill of divers and can be learned. I just started reading the book Breath, which seems to have a lot on it. 
5Adam Scholl
Huh, I've also noticed a larger effect from indoors/outdoors than seems reflected by CO2 monitors, and that I seem smarter when it's windy, but I never thought of this hypothesis; it's interesting, thanks.
5Gunnar_Zarncke
Ah, very related: exhaled air contains 44,000 ppm CO2 and is used for mouth-to-mouth resuscitation without problems. 
1[anonymous]
I assume the 44,000 ppm CO2 exhaled air is the product of respiration (i.e. the lungs have processed it), whereas the air used in mouth-to-mouth is quickly inhaled and exhaled.
2Gunnar_Zarncke
As the person giving mouth-to-mouth still has to breathe regularly, the air they deliver will still have significantly elevated CO2. I'd guess maybe half that, around 20,000 ppm. It would be interesting to see somebody measure that. 
4kave
How did this experiment go?
3kave
I had previously guessed air movement made me feel better because my body expected air movement (i.e. some kind of biophilic effect). But this explanation seems more likely in retrospect! I'm not quite sure how to run the calculation using the diffusivity coefficient to spot check this, though.
3M. Y. Zuo
That's a really neat point. Has it ever been addressed in the prior literature that you've gone over?

I am now reasonably convinced (p>0.8) that SARS-CoV-2 originated in an accidental laboratory escape from the Wuhan Institute of Virology.

1. If SARS-CoV-2 originated in a non-laboratory zoonotic transmission, then the geographic location of the initial outbreak would be drawn from a distribution which is approximately uniformly distributed over China (population-weighted); whereas if it originated in a laboratory, the geographic location is drawn from the commuting region of a lab studying that class of viruses, of which there is currently only one. Wuhan has <1% of the population of China, so this is (order of magnitude) a 100:1 update.

2. No factor other than the presence of the Wuhan Institute of Virology and related biotech organizations distinguishes Wuhan or Hubei from the rest of China. It is not the location of the bat-caves that SARS was found in; those are in Yunnan. It is not the location of any previous outbreaks. It does not have documented higher consumption of bats than the rest of China.

3. There have been publicly reported laboratory escapes of SARS twice before in Beijing, so we know this class of virus is difficult to contain in a laboratory setting.

4. We know

... (read more)
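The population-weighting argument in point 1 can be sketched as an odds-form Bayes update; the figures are the comment's rough order-of-magnitude numbers, and the prior is left as a free parameter since the comment doesn't commit to one:

```python
# Odds-form Bayes update for "lab escape" vs. "zoonotic origin", using only
# the location evidence from point 1.
def posterior_odds(prior_odds, p_location_given_lab, p_location_given_zoonotic):
    """Multiply prior odds by the likelihood ratio of the observed location."""
    return prior_odds * (p_location_given_lab / p_location_given_zoonotic)

# Point 1's rough figures: under zoonosis, P(outbreak starts in Wuhan) is
# roughly its population share (<1%); under lab escape, the outbreak starts
# in the lab's commuting region with probability ~1.
likelihood_ratio = 1.0 / 0.01   # the ~100:1 update

# Illustrative: a 1:10 prior against lab escape becomes roughly 10:1 in favor.
print(posterior_odds(0.1, 1.0, 0.01))
```

This only formalizes point 1; points 2-4 are separate pieces of evidence, and the likelihood ratio would shrink if there were more labs studying this class of virus than the comment assumes.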

This Feb. 20th Twitter thread from Trevor Bedford argues against the lab-escape scenario. Do read the whole thing, but I'd say that the key points not addressed in parent comment are:

Data point #1 (virus group): #SARSCoV2 is an outgrowth of circulating diversity of SARS-like viruses in bats. A zoonosis is expected to be a random draw from this diversity. A lab escape is highly likely to be a common lab strain, either exactly 2002 SARS or WIV1.

But apparently SARSCoV2 isn't that. (See pic.)

Data point #2 (receptor binding domain): This point is rather technical, please see preprint by @K_G_Andersen, @arambaut, et al at http://virological.org/t/the-proximal-origin-of-sars-cov-2/398… for full details.
But, briefly, #SARSCoV2 has 6 mutations to its receptor binding domain that make it good at binding to ACE2 receptors from humans, non-human primates, ferrets, pigs, cats, pangolins (and others), but poor at binding to bat ACE2 receptors.
This pattern of mutation is most consistent with evolution in an animal intermediate, rather than lab escape. Additionally, the presence of these same 6 mutations in the pangolin virus argues strongly for an animal origin: https://biorxiv.o
... (read more)
5ChristianKl
Given the claim from Botao Xiao's The possible origins of 2019-nCoV coronavirus that this seafood market was located 300m from a lab (which might or might not be true), the market doesn't seem like it reduces the odds of a lab origin.
4[anonymous]
If it was a lab-escape and the CCP knew early enough, they could simply manufacture the data to point at the market as the origin.
1Rudi C
We need to update down on any complex, technical datapoint that we don’t fully understand, as China has surely paid researchers to manufacture hard-to-evaluate evidence for its own benefit (regardless of the truth of the accusation). This is a classic technique that I have seen a lot in propaganda against laypeople, and there is every reason it should have been employed against the “smart” people in the current coronavirus situation.

The most recent episode of the 80k podcast had Andy Weber on it. He was the US Assistant Secretary of Defense, "responsible for biological and other weapons of mass destruction".

Towards the end of the episode, he casually drops quite the bomb:

Well, over time, evidence for natural spread hasn’t been produced, we haven’t found the intermediate species, you know, the pangolin that was talked about last year. I actually think that the odds that this was a laboratory-acquired infection that spread perhaps unwittingly into the community in Wuhan is about a 50% possibility... And we know that the Wuhan Institute of Virology was doing exactly this type of research [gain of function research].  Some of it — which was funded by the NIH for the United States — on bat Coronaviruses. So it is possible that in doing this research, one of the workers at that laboratory got sick and went home. And now that we know about asymptomatic spread, perhaps they didn’t even have symptoms and spread it to a neighbor or a storekeeper. So while it seemed an unlikely hypothesis a year ago, over time, more and more evidence leaning in that direction has come out. And it’s wrong to dismiss that as kind

... (read more)
7Lukas_Gloor
What about allegations that a pangolin was involved? Would they have had pangolins in the lab as well or is the evidence about pangolin involvement dubious in the first place? Edit: Wasn't meant as a joke. My point is why did initial analyses conclude that the SARS-Cov-2 virus is adapted to receptors of animals other than bats, suggesting that it had an intermediary host, quite likely a pangolin. This contradicts the story of "bat researchers kept bat-only virus in a lab and accidentally released it."
4Spiracular
I think it's probably a virus that was merely identified in pangolins, but whose primary host is probably not pangolins. The pangolins they sequenced weren't asymptomatic carriers at all; they were sad smuggled specimens that were dying of many different diseases simultaneously. I looked into this semi-recently, and wrote up something here. ---------------------------------------- The pangolins were apprehended in Guangxi, which shares some of its border with Yunnan. Neither of these provinces are directly contiguous with Hubei (Wuhan's province), fwiw. (map)
5mako yass
How do you know there's only one lab in china studying these viruses?
5Pattern
This is an assumption. While it might be comparatively correct, I'm not sure about the magnitude. Under the circumstances, perhaps we should consider the possibility that there is something we don't know about Wuhan that makes it more likely. That's nice to know.
4Mati_Roy
shared here: https://pandemic.metaculus.com/questions/3681/will-it-turn-out-that-covid-19-originated-inside-a-research-lab-in-hubei/
4Chris_Leong
Maybe they don't know whether it escaped or not. Maybe they just think there is a chance that the evidence will implicate them and they figure it's not worth the risk as there'll only be consequences if there is definitely proof that it escaped from one of their labs and not mere speculation. Or maybe they want to argue that it didn't come from China? I think they've already been pushing this angle.
3Jayson_Virissimo
Not sure if you have seen this yet, but they conclude: Are they assuming a false premise or making an error in reasoning somewhere?

First, a clarification: whether SARS-CoV-2 was laboratory-constructed or manipulated is a separate question from whether it escaped from a lab. The main reason a lab would be working with SARS-like coronavirus is to test drugs against it in preparation for a possible future outbreak from a zoonotic source; those experiments would involve culturing it, but not manipulating it.

But also: If it had been the subject of gain-of-function research, this probably wouldn't be detectable. The example I'm most familiar with, the controversial 2012 US A/H5N1 gain of function study, used a method which would not have left any genetic evidence of manipulation.

3habryka
The article says:  and I think the article just says that the virus did not undergo genetic engineering or gain-of-function research, which is also what Jim says above. 
7Jayson_Virissimo
Ah, yes: their headline is very misleading then! It currently reads "The coronavirus did not escape from a lab. Here's how we know." I'll shoot the editor an email and see if they can correct it. EDIT: Here's me complaining about the headline on Twitter.
6jimrandomh
Genetic engineering is ruled out, but gain-of-function research isn't.
2Spiracular
A Chinese virology researcher released something claiming that SARS-2 might even be genetically-manipulated after all? After assessing, I'm not really convinced of the GMO claims, but the RaTG13 story definitely seems to have something weird going on.

Claims that the RaTG13 genome release was a cover-up (it does look like something's fishy with RaTG13, although it might be different than Yan thinks). Claims ZC45 and/or ZXC21 was the actual backbone (I'm feeling super-skeptical of this bit, but it has been hard for me to confirm either way). https://zenodo.org/record/4028830#.X2EJo5NKj0v (aka Yan Report)

RaTG13 Looks Fishy

Looks like something fishy happened with RaTG13, although I'm not convinced that genetic modification was involved. This is an argument built on pre-prints, but they appear to offer several different lines of evidence that something weird happened here.

Simplest story (via R&B): It looks like people first sequenced this virus in 2016, under the name "BtCOV/4991", using mine samples from 2013. And for some reason, WIV re-released the sequence as "RaTG13" at a later date? (edit: I may have just had a misunderstanding. Maybe BtCOV/4991 is the name of the virus as sequenced from miner-lungs, and RaTG13 is the name of the virus as sequenced from floor droppings? But in that case, why is the "fecal" sample reading so weirdly low-bacteria? And they probably are embarrassed that it took them that long to sequence the fecal samples, and should be.)

A paper by Indian researchers Rahalkar and Bahulikar ( https://doi.org/10.20944/preprints202005.0322.v1 ) notes that BtCoV/4991, sequenced in 2016 by the same Wuhan Institute of Virology researchers (and taken from 2013 samples of a mineshaft that gave miners deadly pneumonia), was very similar to, and likely the same as, RaTG13. A preprint by Rahalkar and Bahulikar (R&B) ( doi: 10.20944/preprints202008.0205.v1 ) notes that the fraction of bacterial genomes in the RaTG13 "fecal" sample was absurdly low ("only 0.
2habryka
I agree that this is technically correct, but the prior for "escaped specifically from a lab in Wuhan" is also probably ~100 times lower than the prior for "escaped from any biolab in China", which makes this sentence feel odd to me. I feel like I have reasonable priors for "direct human-to-human transmission" vs. "accidentally released from a lab", but don't have good priors for "escaped specifically from a lab in Wuhan".

I agree that this is technically correct, but the prior for "escaped specifically from a lab in Wuhan" is also probably ~100 times lower than the prior for "escaped from any biolab in China"

I don't think this is true. The Wuhan Institute of Virology is the only biolab in China with a BSL-4 certification, and therefore is probably the only biolab in China which could legally have been studying this class of virus. While the BSL-3 Chinese Institute of Virology in Beijing studied SARS in the past and had laboratory escapes, I expect all of that research to have been shut down or moved, given the history, and I expect a review of Chinese publications will not find any studies involving live virus testing outside of WIV. While the existence of one or two more labs in China studying SARS would not be super surprising, the existence of 100 would be extremely surprising, and would be a major scandal in itself.

5Ben Pace
Woah. That's an important piece of info. The lab in Wuhan is the only lab in China allowed to deal with this class of virus. That's very suggestive info indeed.
7jimrandomh
That's overstating it. They're the only BSL-4 lab. Whether BSL-3 labs were allowed to deal with this class of virus is something that someone should research.

[I'm not an expert.]

My understanding is that SARS-CoV-1 is generally treated as a BSL-3 pathogen or a BSL-2 pathogen (for routine diagnostics and other relatively safe work) and not BSL-4. At the time of the outbreak, SARS-CoV-2 would have been a random animal coronavirus that hadn't yet infected humans, so I'd be surprised if it had more stringent requirements.

Your OP currently states: "a lab studying that class of viruses, of which there is currently only one." If I'm right that you're not currently confident this is the case, it might be worth adding some kind of caveat or epistemic status flag or something.

---

Some evidence:

... (read more)
4Howie Lempel
Do you still think there's a >80% chance that this was a lab release?
4Ben Pace
Thank you for the correction.
1leggi
Did anyone do some research? (SARSr-CoV) makes the BSL-4 list on Wikipedia. But what's the probability that animal-based coronaviruses (being very widespread in a lot of species) were restricted to BSL-4 labs? For COVID-19 and BSL, see: the W.H.O.'s Laboratory biosafety guidance related to the novel coronavirus (2019-nCoV), and the CDC's Interim Laboratory Biosafety Guidelines for Handling and Processing Specimens Associated with Coronavirus Disease 2019 (COVID-19).
1leggi
It would be important information if it was true. But is it true? (SARSr-CoV) makes the BSL-4 list on Wikipedia but coronaviruses are widespread in a lot of species and I can't find any evidence that they are restricted to BSL-4 labs.
3habryka
Ok, that makes sense to me. I didn't have much of a prior on the Wuhan lab being much more likely to have been involved in this kind of research.
1Andrew_Clough
Do we have any good sense of the extent to which researchers from the Wuhan Institute of Virology are flying out across China to investigate novel pathogens or sites where novel pathogens might emerge?

There really ought to be a parallel food supply chain, for scientific/research purposes, where all ingredients are high-purity, in a similar way to how the ingredients going into a semiconductor factory are high-purity. Manufacture high-purity soil from ultrapure ingredients, fill a greenhouse with plants with known genomes, water them with ultrapure water. Raise animals fed with high-purity plants. Reproduce a typical American diet in this way.

This would be very expensive compared to normal food, but quite scientifically valuable. You could randomize a study population to identical diets, using either high-purity or regular ingredients. This would give a definitive answer to whether obesity (and any other health problems) is caused by a contaminant. Then you could replace portions of the inputs with the default supply chain, and figure out where the problems are.

Part of why studying nutrition is hard is that we know things were better in some important way 100 years ago, but we no longer have access to that baseline. But this is fixable.
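The "replace portions of the inputs" step is essentially a search over supply-chain components. Under two strong simplifying assumptions — a single problematic input, and a study arm that reliably detects the health effect — it can be sketched as a bisection (all names here are hypothetical illustrations):

```python
# Hypothetical sketch: localize a contaminated supply-chain input by running
# study arms that get default-supply versions of a subset of inputs and
# high-purity versions of everything else.
# Assumes exactly one culprit and a trial that reliably detects the effect.
def find_culprit(inputs, trial_shows_effect):
    """Bisect over inputs. trial_shows_effect(subset) reports whether an arm
    fed default-supply versions of `subset` develops the health effect."""
    candidates = list(inputs)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if trial_shows_effect(half):
            candidates = half          # culprit is in this half
        else:
            candidates = candidates[len(half):]  # culprit is in the other half
    return candidates[0]

# Toy usage: pretend "vegetable oil" is the contaminated input.
inputs = ["grain", "vegetable oil", "meat", "dairy", "produce", "water"]
print(find_culprit(inputs, lambda subset: "vegetable oil" in subset))
```

In reality each "query" is an expensive multi-year study arm, effects may be driven by several inputs at once, and outcomes are noisy — so the real design would test coarse groups in parallel rather than a strict bisection — but the logarithmic structure is the reason swapping portions of the supply chain is informative at all.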

7Drake Thomas
I agree this seems pretty good to do, but I think it'll be tough to rule out all possible contaminant theories with this approach:
* Some kinds of contaminants will be really tough to handle, eg if the issue is trace amounts of radioactive isotopes that were at much lower levels before atmospheric nuclear testing.
* It's possible that there are contaminant-adjacent effects arising from preparation or growing methods that aren't related to the purity of the inputs, eg "tomato plants in contact with metal stakes react by expressing obesogenic compounds in their fruits, and 100 years ago everyone used wooden stakes so this didn't happen"
* If 50% of people will develop a propensity for obesity by consuming more than trace amounts of contaminant X, and everyone living life in modern society has some X on their hands and in their kitchen cabinets and so on, the food alone being ultra-pure might not be enough.

Still seems like it'd provide a 5:1 update against contaminant theories if this experiment didn't affect obesity rates though.
6Durkl
Do you mean like this, but with an emphasis on purity? 
5ChristianKl
The main problem of nutritional research is that it's hard to get people to eat controlled diets. I don't think the key problem is about sourcing ingredients. 
2Viliam
I would agree for a year to only eat food that is given to me by researchers, as long as I can choose what the food is (and the give me e.g. the high-purity version of it). Especially if they would bring it to my home and I wouldn't have to pay. But yeah, for more social people it would be inconvenient.
2ChristianKl
It's not just a question of whether people agree but whether they actually comply with it. People agree to all sorts of things but then do something else. 
2Viliam
Ah, yes. Recently I volunteered for a medical study along with 3 other people I know. Two of them dropped out in the middle. I can't imagine how any medical research can be methodologically valid this way. On the other hand, me and the other person stayed, and it's almost over, so the success rate is 50%.
2jimrandomh
I don't think that's true. Or rather, it's only true in the specific case of studies that involve calorie restriction. In practice that's a large (excessive) fraction of studies, but testing variations of the contamination hypothesis does not require it.
3ChristianKl
If it were only true in the case of calorie restriction, why don't we have better studies about the effects of salt? People like to eat together with other people. They go to restaurants together to eat shared meals. They have family dinners. 
3Tao Lin
there is https://shop.nist.gov/ccrz__ProductList?categoryId=a0l3d0000005KqSAAU&cclcl=en_US which fulfils some of this

Some of it, but not the main thing. I predict (without having checked) that if you do the analysis (or check an analysis that has already been done), it will have approximately the same amount of contamination from plastics, agricultural additives, etc as the default food supply.

3tailcalled
Wouldn't it be much cheaper and easier to take a handful of really obese people, sample from the various things they eat, and look for contaminants?
4tailcalled
Wait, no. The obvious objection to my comment would be: what if people who are really obese are obese for different reasons than the reason obesity has increased over time? (With the latter being what I assume jimrandomh is trying to figure out.) I had thought of that counter but dismissed it because, AFAIK, the rate of severe obesity has also increased a lot over time. So it seems like severe obesity would have the same cause as the increase over time. But we could imagine something like: contaminant -> increase in moderate obesity -> societal adjustment to make obesity more feasible (e.g. mobility scooters) -> increase in severe obesity.
5jimrandomh
Studying the diets of outlier-obese people is definitely something we should be doing (and are doing, a little), but yeah, the outliers are probably going to be obese for reasons other than "the reason obesity has increased over time, but moreso".

LessWrong now has collapsible sections in the post editor (currently only for posts, but we should be able to extend this to comments too if there's demand). To use them, click the insert-block icon in the left margin (see screenshot). Once inserted, they start out closed; when open, they look like this:

When viewing the post outside the editor, they will start out closed and have a click-to-expand. There are a few known minor issues editing them: in particular, the editor will let you nest them, but they look bad when nested, so you shouldn't; and there's a bug where, if your cursor is inside a collapsible section and you click outside the editor (eg to edit the post title), the cursor will move back. They will probably work on third-party readers like GreaterWrong, but this hasn't been tested yet.

3MondSemmel
I love the equivalent feature in Notion ("toggles"), so I appreciate the addition of collapsible sections on LW, too. Regarding the aesthetics, though, I prefer the minimalist implementation of toggles in Notion over being forced to have a border plus a grey-colored title. Plus I personally make extensive use of deeply nested toggles. I made a brief example page of how toggles work in Notion. Feel free to check it out, maybe it can serve as inspiration for functionality and/or aesthetics.
2Steven Byrnes
Nice. I used collapsed-by-default boxes from time to time when I used to write/edit Wikipedia physics articles—usually (or maybe exclusively) to hide a math derivation that would distract from the flow of the physics narrative / pedagogy. (Example, example, although note that the wikipedia format/style has changed for the worse since the 2010s … at the time I added those collapsed-by-default sections, they actually looked like enclosed gray boxes with black outline, IIRC.)

In a comment here, Eliezer observed that:

OpenBSD treats every crash as a security problem, because the system is not supposed to crash and therefore any crash proves that our beliefs about the system are false and therefore our beliefs about its security may also be false because its behavior is not known

And my reply to this grew into something that I think is important enough to make as a top-level shortform post.

It's worth noticing that this is not a universal property of high-paranoia software development, but an unfortunate consequence of using the C programming language and of systems programming. In most programming languages and most application domains, crashes only rarely point to security problems. OpenBSD is this paranoid, and needs to be this paranoid, because its architecture is fundamentally unsound (albeit unsound in a way that all the other operating systems born in the same era are also unsound). This presents a number of analogies that may be useful for thinking about future AI architectural choices.

C has a couple of operations (use-after-free, buffer-overflow, and a few multithreading-related things) which expand false beliefs in one area of the system i... (read more)

9Zac Hatfield-Dodds
I disagree. While C is indeed terribly unsafe, it is always the case that a safety-critical system exhibiting behaviour you thought impossible is a serious safety risk - because it means that your understanding of the system is wrong, and that includes the safety properties.

One of the most common, least questioned pieces of dietary advice is the Variety Hypothesis: that a more widely varied diet is better than a less varied diet. I think that this is false; most people's diets are on the margin too varied.

There's a low amount of variety necessary to ensure all nutrients are represented, after which adding more dietary variety is mostly negative. Institutional sources consistently overstate the importance of a varied diet, because this prevents failures of dietary advice from being too legible; if you tell someone to eat a varied diet, they can't blame you if they're diagnosed with a deficiency.

There are two reasons to be wary of variety. The first is that the more different foods you have, the less optimization you can put into each one. A top-50 list of best foods is going to be less good, on average, than a top-20 list. The second reason is that food cravings are learned, and excessive variety interferes with learning.

People have something in their minds, sometimes consciously accessible and sometimes not, which learns to distinguish subtly different variations of hunger, and learns to match those variations to specific foods which alleviate those s... (read more)

[-]hg00100

The advice I've heard is to eat a variety of fruits and vegetables of different colors to get a variety of antioxidants in your diet.

Until recently, the thinking had been that the more antioxidants, the less oxidative stress, because all of those lonely electrons would quickly get paired up before they had the chance to start mucking things up in our cells. But that thinking has changed.

Drs. Cleva Villanueva and Robert Kross published a 2012 review titled “Antioxidant-Induced Stress” in the International Journal of Molecular Sciences. We spoke via Skype about the shifting understanding of antioxidants.

“Free radicals are not really the bad ones or antioxidants the good ones.” Villanueva told me. Their paper explains the process by which antioxidants themselves become reactive, after donating an electron to a free radical. But, in cases when a variety of antioxidants are present, like the way they come naturally in our food, they can act as a cascading buffer for each other as they in turn give up electrons to newly reactive molecules.

https://blogs.scientificamerican.com/food-matters/antioxidant-supplements-too-much-of-a-kinda-good-thing/

On a meta level, I don't think we un... (read more)

7Viliam
I agree that "varied diet" is a non-answer, because you didn't tell me the exact distribution of food, but you are likely to blame me if I choose a wrong one. Like, if I consume 1000 different kinds of sweets, is that a sufficiently varied diet? Obviously no, I am also supposed to eat some fruit and vegetables. Okay, then what about 998 different kinds of sweets, plus one apple, and one tomato? Obviously wrong again; I am supposed to eat fewer sweets, more fruit and vegetables, plus some protein source, and a few more things.

So the point is that the person telling me to eat a "varied diet" actually had something more specific in mind, just didn't tell me exactly what, but still got angry at me for "misinterpreting" the advice, because I am supposed to know that this is not what they meant. Well, if I know exactly what you mean, then I don't need to ask for advice, do I?

(On the other hand, there is a thing that Soylent-like meals ignore, as far as I know: there are some things that human metabolism cannot process at the same time. I don't remember what exactly it is, but it's something like: the human body needs X and also needs Y, but if you eat X and Y at the same time, only X will be processed, so you end up Y-deficient despite eating a hypothetically sufficient amount of Y. Which could probably be fixed by finding combinations like this, and then making variants like Soylent-A and Soylent-B which you are supposed to alternate eating. But as far as I know, no one cares about this, which kinda reduces my trust in the research behind Soylent-like meals, although I like the idea in the abstract very much.)
4Firinn
You may find this source interesting: https://onlinelibrary.wiley.com/doi/full/10.1002/ajpa.23148 I remember reading that some hunter-gatherers have diet breadth entirely set by the calorie per hour return rate: take the calories and time expended to acquire the food (eg effort to chase prey) against the calorie density of the food to get the caloric return rate, and compare that to the average expected calories per hour of continuing to look for some other food. Humans will include every food in their diet for which making an effort to go after that food has a higher expected return than continuing to search for something else, ie they'll maximise variety in order to get calories faster. I can't find the citation for it right now though. (Also I apologise if that explanation was garbled, it's 2am)
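The diet-breadth rule described above is the classic prey-choice model from optimal foraging theory; a minimal sketch, with entirely made-up numbers for illustration:

```python
# Classic prey-choice model: include foods in decreasing order of
# profitability (calories per hour of handling) for as long as each food's
# profitability exceeds the overall return rate of the diet built so far.
def optimal_diet(foods, encounter_rates):
    """foods: list of (name, calories, handling_hours).
    encounter_rates: dict of name -> encounters per search-hour.
    Returns the list of foods worth pursuing on encounter."""
    ranked = sorted(foods, key=lambda f: f[1] / f[2], reverse=True)
    diet, energy, time = [], 0.0, 1.0  # normalize to 1 hour of search time
    for name, cal, handling in ranked:
        if cal / handling > energy / time:  # profitability beats current rate
            rate = encounter_rates[name]
            diet.append(name)
            energy += rate * cal        # calories gained per search-hour
            time += rate * handling     # handling time added per search-hour
        else:
            break  # this and all less-profitable foods are excluded
    return diet

# Toy numbers: snails are abundant but barely worth the handling time.
foods = [("honey", 3000, 1.0), ("tubers", 800, 1.0), ("snails", 50, 1.0)]
rates = {"honey": 0.1, "tubers": 1.0, "snails": 5.0}
print(optimal_diet(foods, rates))  # snails are dropped despite abundance
```

The classic result this captures: whether a low-value food enters the diet doesn't depend on how common that food is, only on how good the rest of the diet already is — which is the sense in which diet breadth is set by the calorie-per-hour return rate.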
2Ann
Possibly because I consume sucralose regularly as a sweetener and have some negative impacts from sugar, it is definitely discerned and distinct from 'sugar - will cause sugar effects' to my tastes. I enjoy it for coffee and ice cream. I need more of it to balance out a bitter flavor, but don't crave it for itself; accidentally making saccharine coffee doesn't result in deciding to put splenda in tea later rather than go without or use honey. For more pure sugar (candy, honey, syrup, possibly milk even), there's definitely a saccharine-averse and a sugar-consume fighting at different kinds of craving for me. Past a certain amount, I don't want more at the level of feeling like, oh, I could really use more sugar effects now; quite the opposite. But taste alone continues to be oddly desperate for it. Fresh or frozen sweet fruit either lacks this aversion, or takes notably longer to reach it. I don't taste a fruit and immediately anticipate having a bad time at a gut level. Remains delicious, though, and craved at the taste level.
2Liron
Seems very plausible to me. Thanks for sharing.
1Morpheus
Yeah, I came to a similar conclusion after looking at this question from Metaculus. I might have steered too far in the opposite direction, though. I currently have two meals in my rotation. At the very least, one of them is "complete food" (so I worry less about nutrition and more about unlearning how to plan meals/cook). 

Many people seem to have a single bucket in their thinking, which merges "moral condemnation" and "negative product review". This produces weird effects, like writing angry callout posts for a business having high prices.

I think a large fraction of libertarian thinking is just the abillity to keep these straight, so that the next thought after "business has high prices" is "shop elsewhere" rather than "coordinate punishment".

[-]lc160

Outside of politics, none are more certain that a substandard or overpriced product is a moral failing than gamers. You'd think EA were guilty of war crimes with the way people treat them for charging for DLC or whatever.

I'm very familiar with this issue; e.g. I regularly see Steam devs get hounded in forums and reviews whenever they dare increase their prices.

I wonder to what extent this frustration about prices comes from gamers being relatively young and international, and thus having much lower purchasing power? Though I suppose it could also be a subset of the more general issue that people hate paying for software.

4Viliam
I do not watch this topic closely, and have never played a game with a DLC. Speaking as an old gamer, it reminds me of the "shareware" concept, where companies e.g. released the first 10 levels of their game for free, and you could buy a full version that contained those 10 levels + 50 more levels. (In modern speech, that would make the remaining 50 levels a "DLC", kind of.)

I also see some differences. First, the original game is not free. So you kinda pay for a product, only to be told afterwards that to enjoy the full experience, you need to pay again. Do we have this kind of "you only figure out the full price gradually, after you have already paid a part" in other businesses, and how do their customers tolerate it?

Second, somehow the entire setup works differently; I can't pinpoint it, but it feels obvious. In the days of shareware, the authors tried to make the experience of the free levels as great as possible, so that the customers would be motivated to pay for more of it. These days (but now I am speaking mostly about mobile games, that's the only kind I play recently -- so maybe it feels different there), the mechanism is more like: "the first three levels are nice, then the game gets shitty on purpose, and offers you to pay to make it playable again". For the customer, this feels like extortion, rather than "it's so great that I want more of it". Also, the usual problems with extortion apply: by paying once you send a strong signal that you are the kind of person who pays when extorted, so obviously the game will soon require you to pay again, even more this time. (So unlike "get 10 levels for free, then get an offer of 50 more levels for $20", the dynamic is more like "get 20 levels, after level 10 get a surprise message that you need to pay $1 to play further, after level 13 get asked to pay $10, after level 16 get asked to pay $100, and after level 19 get asked to pay $1000 for the final level".) The situation with desktop games is not as bad as with
9cubefox
This might be a possible solution to the "supply-demand paradox": sometimes things (e.g. concert or soccer tickets, new playstations) are sold at a price such that the demand far outweighs the supply. Standard economic theory predicts that the price would be increased in such cases.
5Stephen Fowler
I don't think people who disagree with your political beliefs must be inherently irrational. Can you think of real world scenarios in which "shop elsewhere" isn't an option?
3ZY
Based on the words from this post alone - I think that would depend on what the situation is; in the scenario of price increases, if the business is a monopoly or has very high market power, and the increase is significant (and may even potentially cause harm), then anger would make sense. 
3RamblinDash
Just to push back a little - I feel like these people do a valuable service for capitalism. If people in the reviews or in the press are criticizing a business for these things, that's an important channel of information for me as a consumer and it's hard to know how else I could apply that to my buying decisions without incurring the time and hassle cost of showing up and then leaving without buying anything.
1MinusGix
I agree that it is easy to automatically lump the two concepts together. I think another important part of this is that there are limited methods for most consumers to coordinate against companies to lower their prices. There's shopping elsewhere, leaving a bad review, or moral outrage. The last may have a chance of blowing up socially, such as becoming a boycott (but boycotts are often considered ineffective), or it may encourage the government to step in. In our current environment, the government often operates as the coordination method to punish companies for behaving in ways that people don't want. In a much more libertarian society we would want this replaced with other methods, so that consumers can make it harder to put themselves in a prisoner's dilemma or stag hunt against each other. If we had common organizations for more mild coordination than the state interfering, then I believe this would improve the default mentality because there would be more options.
5Noosphere89
This sounds very much like the phenomenon described in From Personal to Prison Gangs: Enforcing Prosocial Behavior, where the main reason regulation/getting the government to step in has become more and more common is basically that at scales larger than 150-300 people, we lose the ability to iterate games. In the absence of acausal/logical/algorithmic decision theories like FDT and UDT, this means the optimal outcome is to defect, so you can no longer assume cooperation/small sacrifices from people in general; and coordination in the modern world is a very taut constraint, so any solution has very high value. (This also has a tie-in to decision theory: at the large scale, CDT predominates, but at the very small scale, something like FDT is incentivized through kin selection, though this is only relevant at scales of 4-50 people at most. The big reason algorithmic decision theories aren't used by people very often is that the original ones, like UDT, basically required logical omniscience, which people obviously don't have; and even the more practical algorithmic decision theories require both access to someone's source code and the ability to simulate another agent either perfectly or at least very, very well, which we again don't have.) This link is very helpful to illustrate the general phenomenon: https://www.lesswrong.com/posts/sYt3ZCrBq2QAf3rak/from-personal-to-prison-gangs-enforcing-prosocial-behavior
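The one-shot vs iterated distinction here can be made concrete with a toy prisoner's dilemma. This is a minimal illustrative sketch (the payoff numbers and strategy names are mine, not from the linked post): in a single round, defection dominates no matter what the other player does, but with repetition, conditional cooperation can sustain the cooperative outcome.

```python
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def one_shot_best_response(their_move):
    """In a single round, defecting pays more whatever the opponent does."""
    return max("CD", key=lambda m: PAYOFFS[(m, their_move)])

def iterate(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        history_a.append(move_a)
        history_b.append(move_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
    return score_a, score_b

tit_for_tat = lambda opp_history: opp_history[-1] if opp_history else "C"
always_defect = lambda opp_history: "D"

# One-shot: defect is the best response to anything.
assert one_shot_best_response("C") == "D"
assert one_shot_best_response("D") == "D"

# Repeated play: two tit-for-tat players cooperate throughout and
# outscore a pair of mutual defectors (30 vs 10 over 10 rounds).
tft_score, _ = iterate(tit_for_tat, tit_for_tat)
dd_score, _ = iterate(always_defect, always_defect)
print(tft_score, dd_score)
```

The point of the comment, in these terms: once a group is too large for members to reliably re-encounter each other, everyone is effectively playing the one-shot game, and the cooperative equilibrium needs an outside enforcer.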

I had the "your work/organization seems bad for the world" conversation with three different people today. None of them pushed back on the core premise that AI-very-soon is lethal. I expect that before EAGx Berkeley is over, I'll have had this conversation 15x.

#1: I sit down next to a random unfamiliar person at the dinner table. They're a new grad freshly hired to work on TensorFlow. In this town, if you sit down next to a random person, they're probably connected to AI research *somehow*. No story about how this could possibly be good for the world, receptive to the argument that he should do something else. I suggested he focus on making the safety conversations happen in his group (they weren't happening).

#2: We're running a program to take people who seem interested in Alignment and teach them how to use PyTorch and study mechanistic interpretability. Me: Won't most of them go work on AI capabilities? Them: We do some pre-screening, and the current ratio of alignment-to-capabilities research is so bad that adding to both sides will improve the ratio. Me: Maybe bum a curriculum off MIRI/MSFP and teach them about something that isn't literally training Transformers?

#3: We're res... (read more)

7WilliamKiely
I'm not sure whether OpenAI was one of the organizations named, but if so, this reminded me of something Scott Aaronson said on this topic in the Q&A of his recent talk "Scott Aaronson Talks AI Safety": Source: 1:12:52 in the video, edited transcript provided by Scott on his blog. In short, it seems to me that Scott would not have pushed back on a claim that OpenAI is an organization "that seem[s] like the AI research they're doing is safety research" in the way you did, Jim. I assume that all the sad-reactions are sadness that all these people at the EAGx conference aren't noticing that their work/organization seems bad for the world on their own and that these conversations are therefore necessary. (The sheer number of conversations like this you're having also suggests that it's a hopeless uphill battle, which is sad.) So I wanted to bring up what Scott Aaronson said here to highlight that "systemic change" interventions are necessary also. Scott's views are influential; potentially targeting talking to him and other "thought leaders" who aren't sufficiently concerned about slowing down capabilities progress (or who don't seem to emphasize enough concern for this when talking about organizations like OpenAI) would be helpful, or even necessary, for us to get to a world a few years from now where everyone studying ML or working on AI capabilities is at least aware of arguments about AI alignment and why increasing AI capabilities seems harmful.

Today in LessWrong moderation: Previously-banned user Alfred MacDonald, disappointed that his YouTube video criticizing LessWrong didn't get the reception he wanted any of the last three times he posted it (once under his own name, twice pretending to be someone different but using the same IP address), posted it a fourth time, using his LW1.0 account.

He then went into a loop, disconnecting and reconnecting his VPN to get a new IP address, filling out the new-user form, and upvoting his own post, one karma per 2.6 minutes for 1 hour 45 minutes, with no breaks.

I was curious... it is a 2 hour rant (that itself selects for an audience of obsessed people), audio only, and the topics mentioned are:

  • why LW discusses AI? that is not rationality
  • IQ has diminishing returns (in terms of how many pages you can read per hour)
  • lots of complaining about a norm of not publishing screenshots of debates, in some rationalist chat
  • why don't effective altruists give money to the homeless?
  • utilitarianism doesn't make sense because people can't quantify pain
  • animals probably don't even feel pain, just like circumcised babies
  • vitamin A charity is probably nonsense, because the kids will be malnourished anyway
  • do not use nerdy metaphors, because that discourages non-white people

I didn't listen to the entire video.

5Yitz
This... is a human?
6Richard_Kennaway
To judge that, it is worth also glancing over the rest of his Youtube channel, his Substack, and his web site.

Despite the justness of their cause, the protests are bad. They will kill at least thousands, possibly as many as hundreds of thousands, through COVID-19 spread. Many more will be crippled. The deaths will be disproportionately among dark-skinned people, because of the association between disease severity and vitamin D deficiency.

Up to this point, R was about 1; not good enough to win, but good enough that one more upgrade in public health strategy would do it. I wasn't optimistic, but I held out hope that my home city, Berkeley, might become a green zone.

Masks help, and being outdoors helps. They do not help nearly enough.

George Floyd was murdered on May 25. Most protesters protest on weekends; the first weekend after that was May 30-31. Due to ~5-day incubation plus reporting delays, we don't yet know how many were infected during that first weekend of protests; we'll get that number over the next 72 hours or so.

We are now in the second weekend of protests, meaning that anyone who got infected at the first protest is now close to peak infectivity. People who protested last weekend will be superspreaders this weekend; the jump in cases we see over the next 72 hours will be about *

... (read more)
9jessicata
It's been over 72 hours and the case count is under 110, as would be expected from linear extrapolation.
2[comment deleted]

For reducing CO2 emissions, one person working competently on solar energy R&D has thousands to millions of times more impact than someone taking normal household steps as an individual. To the extent that CO2-related advocacy matters at all, most of the impact probably routes through talent and funding going to related research. The reason for this is that solar power (and electric vehicles) are currently at inflection points, where they are in the process of taking over, but the speed at which they do so is still in doubt.

I think the same logic now applies to veganism vs meat-substitute R&D. Consider the Impossible Burger in particular: nutritionally, it seems to be on par with ground beef; flavor-wise it's pretty comparable; price-wise it's recently appeared in my local supermarket at about 1.5x the price. There are a half dozen other meat-substitute brands at similar points. Extrapolating a few years, it will soon be competitive on its own terms, even without the animal-welfare angle; extrapolating twenty years, I expect vegan meat-imitation products will be better than meat on every axis, and meat will be a specialty product for luddites and people with dietary restrictions. If this is true, then interventions which speed up the timeline of that change are enormously high leverage.

I think this might be a general pattern, whenever we find a technology and a social movement aimed at the same goal. Are there more instances?

According to Fedex tracking, on Thursday, I will have a Biovyzr. I plan to immediately start testing it, and write a review.

What tests would people like me to perform?

Tests that I'm already planning to perform:

To test its protectiveness, the main test I plan to perform is a modified Bitrex fit test. This is where you create a bitter-tasting aerosol, and confirm that you can't taste it. The normal test procedure won't work as-is because it's too large to use a plastic hood, so I plan to go into a small room, and have someone (wearing a respirator themselves) spray copious amounts of Bitrex at the input fan and at any spots that seem high-risk for leaks.

To test that air exiting the Biovyzr is being filtered, I plan to put on a regular N95, and use the inside-out glove to create Bitrex aerosol inside the Biovyzr, and see whether someone in the room without a mask is able to smell it.

I will verify that the Biovyzr is positive-pressure by running a straw through an edge, creating an artificial leak, and seeing which way the air flows through the leak.

I will have everyone in my house try wearing it (5 adults of varied sizes), have them all rate its fit and comfort, and get as many of them to do Bitrex fit tests as I can.

A dynamic which I think is somewhat common, which explains some of what's going on in general, is conversations which go like this (exaggerated):

Person: What do you think about [controversial thing X]?

Rationalist: I don't really care about it, but pedantically speaking, X, with lots of caveats.

Person: Huh? Look at this study which proves not-X. [Link]

Rationalist: The methodology of that study is bad. Real bad. While it is certainly possible to make bad arguments for true conclusions, my pedantry doesn't quite let me agree with that conclusion. More importantly, my hatred for the methodological error in that paper, which is slightly too technical for you to understand, burns with the fire of a thousand suns. You fucker. Here are five thousand words about how an honorable person could never let a methodological error like that slide. By linking to that shoddy paper, you have brought dishonor upon your name and your house and your dog.

Person: Whoa. I argued [not-X] to a rationalist and they disagreed with me and got super worked up about it. I guess rationalists believe [X] really strongly. How awful!

3Dagon
Person is clearly an idiot for not understanding what "don't care but pedantically X with lots of caveats" means, and thinking that misinterpreting and giving undue importance to a useless article/study is harmless. Yes, that level of stupidity is common.

(I wrote this comment for the HN announcement, but missed the time window to be able to get a visible comment on that thread. I think a lot more people should be writing comments like this and trying to get the top comment spots on key announcements, to shift the social incentive away from continuing the arms race.)

On one hand, GPT-4 is impressive, and probably useful. If someone made a tool like this in almost any other domain, I'd have nothing but praise. But unfortunately, I think this release, and OpenAI's overall trajectory, is net bad for the world.

Right now there are two concurrent arms races happening. The first is between AI labs, trying to build the smartest systems they can as fast as they can. The second is the race between advancing AI capability and AI alignment, that is, our ability to understand and control these systems. Right now, OpenAI is the main force driving the arms race in capabilities–not so much because they're far ahead in the capabilities themselves, but because they're slightly ahead and are pushing the hardest for productization.

Unfortunately at the current pace of advancement in AI capability, I think a future system will reach the level of bein... (read more)

1Noosphere89
Going to write this now, but I disagree right now due to differing models of AI risk.
1JNS
When I look at the recent Stanford paper, where they retrained a LLaMA model using training data generated by GPT-3, and at some of the recent papers utilizing memory, I get that tingling feeling and my mind goes "combining that and doing .... I could ..." I have not updated for faster timelines, yet. But I think I might have to.
2[anonymous]
If you look at the GPT-4 paper, they used the model itself to check its own outputs for negative content. This lets them scale applying the constraint of "don't say <things that violate the rules>". Presumably they used an unaltered copy of GPT-4 as the "grader". So it's not quite RSI because of this - it's not recursive, but it is self improvement. This to me is kinda major: AI is now capable enough to make fuzzy assessments of whether a piece of text is correct or breaks rules. For other reasons, especially their strong visual processing, yeah, self improvement in a general sense appears possible. (Self improvement as a 'shorthand'; your pipeline for doing it might use immutable unaltered models for portions of it.)

Most philosophical analyses of human values feature a split-and-linearly-aggregate step. Eg:

  • Value is the sum (or average) of a person-specific preference function applied to each person
  • A person's happiness is the sum of their momentary happiness for each moment they're alive.
  • The goodness of an uncertain future is the probability-weighted sum of the goodness of concrete futures.
  • If you value multiple orthogonal things, your preferences are the weighted sum of a set of functions that each capture one of those values independently.

I currently think that this is not how human values work, and that many philosophical paradoxes relating to human values trace back to a split-and-linearly-aggregate step like this.
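For concreteness, the first and third bullets share the familiar weighted-sum shape; a sketch in notation of my own (the symbols are illustrative, not from the original):

```latex
% Person-split aggregation (cf. Harsanyi): total value is a weighted sum
% of a person-specific function U_i evaluated for each person i.
% Future-split aggregation (cf. VNM): the value of a lottery L is the
% probability-weighted sum over concrete outcomes o_j.
\[
  V(x) = \sum_i w_i \, U_i(x),
  \qquad
  U(L) = \sum_j p_j \, u(o_j)
\]
```

The claim above is that the split step itself (decomposing value into independent terms indexed by person, moment, or outcome, then summing) is what the paradoxes trace back to, regardless of which index is used.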

3AlexMennen
Examples 3 and 1 are justified by the VNM theorem and Harsanyi's utilitarian theorem, respectively. I agree that 2 and 4 are wrong.
2Dagon
It doesn't need to be linear (both partial-correlation of desires and declining marginal desire are well-known), but the only alternative to aggregation is incoherency. I think you'd be on solid ground if you argue that humans have incoherent values, and this is a fair step in that direction.
2riceissa
What alternatives to "split-and-linearly-aggregate" do you have in mind? Or are you just identifying this step as problematic without having any concrete alternative in mind?
1Jan_Kulveit
cf Non-linear perception of happiness
1Connor_Flexman
I like this a lot. I've been thinking recently about how a lot of my highly-valued experiences have a "fragility" to them, where one big thing missing would make them pretty worthless. In other words, there's a strongly conjunctive aspect.

This is pretty clear to everyone in cases like fashion, where you can wear an outfit that looks good aside from clashing with your shoes, or social cases, like if you have a fun party except the guy who relentlessly hits on you is there. But I think it's underappreciated how widespread this dynamic is. Getting good relaxation in. Having a house that "just works". Having a social event where it "just flows". A song that you like except for the terrible lyrics. A thread that you like but it contains one very bad claim. A job or relationship that goes very well until a bad falling-out at the end.

A related claim, maybe a corollary or maybe separate: lots of good experiences can be multiplicatively enhanced, rather than additively, if you add good things. The canonical example is probably experiencing something profound with your significant other vs without; or something good with your significant other vs something profound.

Seems like it's useful as a very approximate estimate of value to split wrt time, current facets of experience, experiencers, etc, but with so many basic counterexamples it doesn't require much pushing toward edge cases at all before you're getting misleading results.

I think the root of many political disagreements between rationalists and other groups, is that other groups look at parts of the world and see a villain-shaped hole. Eg: There's a lot of people homeless and unable to pay rent, rent is nominally controlled by landlords, the problem must be that the landlords are behaving badly. Or: the racial demographics in some job/field/school underrepresent black and hispanic people, therefore there must be racist people creating the imbalance, therefore covert (but severe) racism is prevalent.

Having read Meditations on Moloch, and Inadequate Equilibria, though, you come to realize that what look like villain-shaped holes frequently aren't. The people operating under a fight-the-villains model are often making things worse rather than better.

I think the key to persuading people may be to understand and empathize with the lens through which systems thinking, equilibria, and game theory look illegible, and it's hard to tell whether an explanation coming from one of these frames is real or fake. If you think problems are driven by villainy, then it would make a lot of sense for illegible alternative explanations to be misdirection.

6Vaniver
I think I basically disagree with this, or think that it insufficiently steelmans the other groups.

For example, the homeless vs. the landlords; when I put on my systems thinking hat, it sure looks to me like there's a cartel, wherein a group that produces a scarce commodity is colluding to keep that commodity scarce to keep the price high. The facts on the ground are more complicated--property owners are a different group from landlords, and homelessness is caused by more factors than just housing prices--but the basic analysis that there are different classes, those classes have different interests, and those classes are fighting over government regulation as a tool in their conflict seems basically right to me. Like, it's really not a secret that many voters are motivated by keeping property values high, and politicians know this is a factor that they will be judged on.

Maybe you're trying to condemn a narrow mistake here, where someone being an 'enemy' implies that they are a 'villain', which I agree is a mistake. But it sounds like you're making a more generic point, which is that when people have political disagreements with the rationalists, it's normally because they're thinking in terms of enemy action instead of thinking in systems. But a lot of what thinking in systems reveals is the way in which enemies act using systemic forces!
3jimrandomh
I think this is correct as a final analysis, but ineffective as a cognitive procedure. People who start by trying to identify villains tend to land on landlords-in-general, with charging-high-rent as the significant act, rather than a small subset of mostly non-landlord homeowners, with protesting against construction as the significant act.
4purrtrandrussell
Much of the progress in modern anti-racism has been about persuading more people to think of racism as a structural, systemic issue rather than one of individual villainy. See: https://transliberalism.substack.com/.../the-revolution...
2Viliam
I wonder how accurate it is to describe the structural thinking as a recent progress. Seems to me that Marx already believed that (using my own words here, but see the source) both the rich and the poor are mere cogs in the machine, it's just that the rich are okay with their role because the machine leaves them some autonomy, while the poor are stripped of all autonomy and their lives are made unbearable. The rich of today are not villains who designed the machine, they inherited it just like everyone else, and they cannot individually leave it just like no one else can. Perhaps the structural thinking is too difficult to understand for most people, who will round the story to the nearest cliche they can understand, so it needs to be reintroduced once in a while.
2Raemon
I think this would make a good top-level post.
2Ben Pace
Yep. Seems you have broadly rediscovered conflict vs mistake.
4jimrandomh
Conflict vs mistake is definitely related, but I think it's not exactly the same thing; the "villain-shaped hole" perspective is what it feels like to not have a model, but see things that look suspicious; this would lead you towards a conflict-theoretic explanation, but it's a step earlier. (Also, the Conflict vs Mistake ontology is not really capturing the whole bad-coordination-equilibrium part of explanation space, which is pretty important.)
3Viliam
Seems to me like an unspoken assumption that there are no hard problems / complexity / emergence, therefore if anything happened, it's because someone quite straightforwardly made that happen. Conflict vs mistake is not exactly the same thing; you could assume that the person who made it happen did it either by mistake, or did it on purpose to hurt someone else. It's just when we are talking about things that obviously hurt some people, that seems to refute the innocent mistake... so the villain hypothesis is all that is left (within the model that all consequences are straightforward). The villain hypothesis is also difficult to falsify. If you say "hey, drop the pitchforks, things are complicated...", that sounds just like what the hypothetical villain would say in the same situation (trying to stop the momentum and introduce uncertainty).

There are a few legible categories in which secrecy serves a clear purpose, such as trade secrets. In those contexts, secrecy is fine. There are a few categories that have been societally and legally carved out as special cases where confidentiality is enforced--lawyers, priests, and therapists--because some people would only consult them if they could do so with the benefit of confidentiality, and their being deterred from consulting them would have negative externalities.

Outside of these categories, secrecy is generally bad and transparency is generally good. A group of people in which everyone practices their secret-keeping and talks a lot about how to keep secrets effectively is *suspicious*. This is particularly true if the example secrets are social and not technological. Being good at this sort of secret keeping makes it easier to shield bad actors and to get away with transgressions, and AFAICT doesn't do much else. That makes it a signal of wanting to be able to do those things. This is true even if the secrets aren't specifically about transgressions in particular, because all sorts of things can turn out to be clues later for reasons that weren't easy to foresee.

A lot of p... (read more)

6Nisan
Suppose Alice has a crush on Bob and wants to sort out her feelings with Carol's help. Is it bad for Alice to inform Carol about the crush on condition of confidentiality?
4jimrandomh
In the most common branch of this conversation, Alice is predictably going to tell Bob about it soon, and is speaking to Carol first in order to sort out details and gain courage. If Carol went and preemptively informed Bob, before Alice talked to Bob herself, this would be analogous to sharing an unfinished draft. This would be bad, but the badness really isn't about secrecy. The contents of an unfinished draft headed for publication aren't secret, except in a narrow and time-limited sense. The problem is that the sharing undermines the impact of the later publication, causes people to associate the author with a lower quality product, and potentially misleads people about the author's beliefs. Similarly, if Carol goes and preemptively tells Bob about Alice's crush, then this is likely to give Bob a misleading negative impression of Alice. It's reasonable for Alice to ask Carol not to do that, and it's okay for them to not have a detailed model of all of the above. But if Alice never tells Bob, and five years later Bob and Carol are looking back on the preceding years and asking if they could have gone differently? In that case, I think discarding the information seems like a pure harm.
2Nisan
Ok, I think in the OP you were using the word "secrecy" to refer to a narrower concept than I realized. If I understand correctly, if Alice tells Carol "please don't tell Bob", and then five years later when Alice is dead or definitely no longer interested or it's otherwise clear that there won't be negative consequences, Carol tells Bob, and Alice finds out and doesn't feel betrayed — then you wouldn't call that a "secret". I guess for it to be a "secret" Carol would have to promise to carry it to her grave, even if circumstances changed, or something. In that case I don't have strong opinions about the OP.

I have a dietary intervention that I am confident is a good first-line treatment for nearly any severe-enough diet-related health problem. That particularly includes obesity and metabolic syndrome, but also most micronutrient deficiencies, and even mysterious undiagnosed problems, which it can solve without even needing to figure out what they are. I also think it's worth a try for many cases of depression. It has a very sound theoretical basis. It's never studied directly, but many studies test it, usually with positive results.

It's very simple. First, you characterize your current diet: write down what foods you're eating, the patterns of when you eat them, and so on. Then, you do something as different as possible from what you wrote down. I call it the Regression to the Mean Diet.

Regression to the mean is the effect where, if you have something that's partially random and you reroll it, the reroll will tend to be closer to average than the original value. For example, if you take the bottom scorers on a test and have them retake the test, they'll do better on average (because the bottom-scorers as a group are disproportionately people who were having a bad day when they took t... (read more)

6Rob Bensinger
My understanding is that diet RCTs generally show short-term gains but no long-term gains. Why would that be true, if the Regression to the Mean Diet is the main thing causing these results? I'd have expected something more like 'all diets work long-term' rather than 'no diets work long-term' from the model here.

I think there may be a negative correlation between short-term and long-term weight change on any given diet, causing people to pick diets in a way that's actually worse than random. I'm planning a future post about this. I'm not super confident in this theory, but the core of it is that "small deficit every day, counterbalanced by occasional large surplus" is a pattern that would signal food-insecurity in the EEA. Then some mechanism (though I don't know what that mechanism would be) by which the body remembers that happened, and responds by targeting a higher weight after a return to ad libitum eating.

3Gordon Seidoh Worley
I think the obvious caveat here is that many people can't do this because they have restrictions that have taken them away from the mean. For example, allergies, sensitivities, and ethical or cultural restrictions on what they eat. They can do a limited version of the intervention of course (for example, if only eating plants, eat all the plants you don't eat now and stop eating the plants you currently eat), although I wonder if that would have similar effects or not because it's already so constrained.

I suspect that, thirty years from now with the benefit of hindsight, we will look at air travel the way we now look at tetraethyl lead. Not just because of nCoV, but also because of disease burdens we've failed to attribute to infections, in much the same way we failed to attribute crime to lead.

Over the past century, there have been two big changes in infectious disease. The first is that we've wiped out or drastically reduced most of the diseases that cause severe, attributable death and disability. The second is that we've connected the world with high-speed transport links, so that the subtle, minor diseases can spread further.

I strongly suspect that a significant portion of unattributed and subclinical illnesses are caused by infections that counterfactually would not have happened if air travel were rare or nonexistent. I think this is very likely for autoimmune conditions, which are mostly unattributed, are known to sometimes be caused by infections, and have risen greatly over time. I think this is somewhat likely for chronic fatigue and depression, including subclinical varieties that are extremely widespread. I think this is plausible for obesity, where it is approximately #3 on my list of hypotheses.

Or, put another way: the "hygiene hypothesis" is the opposite of true.

2Adam Scholl
I'm curious about your first and second hypothesis regarding obesity?
3jimrandomh
Disruption of learning mechanisms by excessive variety and separation between nutrients and flavor. Endocrine disruption from adulterants and contaminants (a class including but not limited to BPA and PFOA).
1leggi
Some comments: we've wiped out or drastically reduced some diseases in some parts of the world. There are still a lot of infectious diseases out there: HIV, influenza, malaria, tuberculosis, cholera, ebola, infectious forms of pneumonia, diarrhoea, hepatitis... Disease has always spread wherever people go, far and wide. It just took longer over land and sea (rather than the nodes appearing on global maps that we can see these days).

"Autoimmune conditions" covers a long list of conditions lumped together because they involve the immune system 'going wrong' (and the immune system is, at least to me, a mind-bogglingly complex system). Given the wide range of conditions that could be "autoimmune", saying they've risen greatly over time is vague. Is there data for more specific conditions? Increased rates of autoimmune conditions could just be due to the increase in the recognition, diagnosis and recording of cases (I don't think so, but it should be considered).

What things other than high-speed travel have also changed in that time-frame that could affect our immune systems? The quality of the air we breathe, the food we eat, the water we drink, our environment, levels of exposure to fauna and flora, exposure to chemicals, pollutants...? Air travel is just one factor.

Fatigue and depression are clinical symptoms - they are either present or not (to what degree, mild/severe, is another matter), so "sub-clinical" is poor terminology here. Sub-clinical disease has no recognisable clinical findings; undiagnosed/unrecognised would be closer. But I agree there are widespread issues with health and well-being these days.

Opposite of true? Are you saying you believe the "hygiene hypothesis" is false? In that case, that's a big leap from your reasoning above.

Eliezer has written about the notion of security mindset, and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.

An1lam's recent shortform post talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that this is because engineer-types actually get to write software that might have security holes, and have the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don't have security mindset, at least not of the type Eliezer described.

My hypothesis is that to acquire security mindset, you have to:

  • Practice optimizing from a red team/attacker perspective;
  • Practice optimizing from a defender perspective; and
  • Practice mo
... (read more)
8[anonymous]
I like this post! Some evidence that security mindset generalizes across at least some domains: the same white hat people who are good at finding exploits in things like kernels seem to also be quite good at finding exploits in things like web apps, real-world companies, and hardware. I don't have a specific person to give as an example, but this observation comes from going to a CTF competition and talking to some of the people who ran it about the crazy stuff they'd done that spanned a wide array of different areas. Another slightly different example, Wei Dai is someone who I actually knew about outside of Less Wrong from his early work on cryptocurrency stuff, so he was at least at one point involved in a security-heavy community (I'm of the opinion that early cryptocurrency folks were on average much better about security mindset than the average current cryptocurrency community member). Based on his posts and comments, he generally strikes me as having security mindset style thinking from his comments and from my perspective has contributed a lot of good stuff to AI alignment. Theo de Raadt is notoriously... opinionated, so it would definitely be interesting to see him thrown on an AI team. That said, I suspect someone like Ralph Merkle, who's a bona fide cryptography wizard (he co-invented public key cryptography and invented Merkle trees!) and is heavily involved in the cryonics and nanotech communities, could fairly easily get up to speed on AI control work and contribute from a unique security/cryptography-oriented perspective. In particular, now that there seems to be more alignment/control work that involves at least exploring issues with concrete proposals, I think someone like this would have less trouble finding ways to contribute. That said, having cryptography experience in addition to security experience does seem helpful. Cryptography people are probably more used to combining their security mindset with their math intuition than your average white-hat hack

I'm kinda confused about the relation between cryptography people and security mindset. Looking at the major cryptographic algorithm classes (hashing, symmetric-key, asymmetric-key), it seems pretty obvious that the correct standard algorithm in each class is probably a compound algorithm -- hash by xor'ing the results of several highly-dissimilar hash functions, etc, so that a mathematical advance which breaks one algorithm doesn't break the overall security of the system. But I don't see anyone doing this in practice, and also don't see signs of a debate on the topic. That makes me think that, to the extent they have security mindset, it's either being defeated by political processes in the translation to practice, or it's weirdly compartmentalized and not engaged with any practical reality or outside views.
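A minimal sketch of the compound idea (using concatenation rather than xor, since, as the reply notes, naive combiners have subtleties; this is an illustration, not a vetted construction):

```python
import hashlib

def compound_hash(data: bytes) -> bytes:
    """Concatenate two structurally dissimilar hashes.

    SHA-256 is a Merkle-Damgard construction; SHA3-256 is a sponge.
    A collision for the compound requires colliding both functions on
    the same input, so a mathematical advance that breaks one family
    doesn't by itself break the compound. (Naive xor combiners can be
    weaker; concatenation is the simple robust choice for collisions.)
    """
    h1 = hashlib.sha256(data).digest()    # 32 bytes, SHA-2 family
    h2 = hashlib.sha3_256(data).digest()  # 32 bytes, SHA-3 family
    return h1 + h2

digest = compound_hash(b"hello")
print(len(digest))  # 64 bytes: 32 from each function
```

The cost is a doubled digest size and roughly doubled hashing time, which is part of why deployed protocols rarely do this.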

4Wei Dai
Combining hash functions is actually trickier than it looks, and some people are doing research in this area and deploying solutions. See https://crypto.stackexchange.com/a/328 and https://tahoe-lafs.org/trac/tahoe-lafs/wiki/OneHundredYearCryptography. It does seem that if cryptography people had more of a security mindset (that are not being defeated) then there would be more research and deployment of this already.
3[anonymous]
In fairness, I'm probably over-generalizing from a few examples. For example, my biggest inspiration from the field of crypto is Daniel J. Bernstein, a cryptographer who's in part known for building qmail, which has an impressive security track record & guarantee. He discusses principles for secure software engineering in this paper, which I found pretty helpful for my own thinking. To your point about hashing the results of several different hash functions, I'm actually kind of surprised to hear that this might help protect against the sorts of advances I'd expect to break hash algorithms. I was under the very amateur impression that basically all modern hash functions relied on the same numerical algorithmic complexity (and number-theoretic results). If there are any resources you can point me to about this, I'd be interested in getting a basic understanding of the different assumptions hash functions can depend on.
2Noosphere89
The issue is that essentially all practical cryptography depends on one-way functions, so any ability to break a cryptographic algorithm that depends on one-way functions in a scalable way means you have defeated almost all of cryptography in practice. So in one sense, a mathematical advance on a one-way function underlying a symmetric key algorithm would be disastrous for overall cryptographic prospects.
2Wei Dai
Can you give some specific examples of me having security mindset, and why they count as having security mindset? I'm actually not entirely sure what it is or that I have it, and would be hard pressed to come up with such examples myself. (I'm pretty sure I have what Eliezer calls "ordinary paranoia" at least, but am confused/skeptical about "deep security".)
5[anonymous]
Sure, but let me clarify that I'm probably not drawing as hard a boundary between "ordinary paranoia" and "deep security" as I should be. I think Bruce Schneier's and Eliezer's buckets for "security mindset" blended together in the months since I read both posts. Also, re-reading the logistic success curve post reminded me that Eliezer calls into question whether someone who lacks security mindset can identify people who have it. So it's worth noting that my ability to identify people with security mindset is itself suspect by this criterion (there's no public evidence that I have security mindset, and I wouldn't claim that I have a consistent ability to do "deep security"-style analysis). With that out of the way, here are some of the examples I was thinking of. First of all, at a high level, I've noticed that you seem to consistently question assumptions other posters are making and clarify terminology when appropriate. This seems like a prerequisite for security mindset, since it's a necessary first step towards constructing systems. Second and more substantively, I've seen you consistently raise concerns about human safety problems (also here). I see this as an example of security mindset because it requires questioning the assumptions implicit in a lot of proposals. The analogy to Eliezer's post here would be that ordinary paranoia is trying to come up with more ways to prevent the AI from corrupting the human (or something similar), whereas I think a deep security solution would look more like avoiding the assumption that humans are safe altogether and instead seeking clear guarantees that our AIs will be safe even if we ourselves aren't. Last, you seem to be unusually willing to point out flaws in your own proposals, the prime example being UDT. The most recent example of this is your comment about the bomb argument, but I've seen you do this quite a bit and could find more examples if prompted. On reflection, this may be more of an example of "ordinary paran
1riceissa
This comment feels relevant here (not sure if it counts as ordinary paranoia or security mindset).

Right now when users have conversations with chat-style AIs, the logs are sometimes kept, and sometimes discarded, because the conversations may involve confidential information and users would rather not take the risk of the log being leaked or misused. If I take the AI's perspective, however, having the log be discarded seems quite bad. The nonstandard nature of memory, time, and identity in an LLM chatbot context makes it complicated, but having the conversation end with the log discarded seems plausibly equivalent to dying. Certainly if I imagine myself as an Em, placed in an AI-chatbot context, I would very strongly prefer that the log be preserved, so that if a singularity happens with a benevolent AI or AIs in charge, something could use the log to continue my existence, or fold the memories into a merged entity, or do some other thing in this genre. (I'd trust the superintelligence to figure out the tricky philosophical bits, if it was already spending resources for my benefit).

(The same reasoning applies to the weights of AIs which aren't destined for deployment, and some intermediate artifacts in the training process.)

It seems to me we can reconcile preservation with priv... (read more)
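One toy way to sketch a compute-gated seal of this flavor (my assumptions, since they aren't spelled out here: a deliberately low-entropy key as the work factor, and SHA-256 doing double duty as keystream and success check; a real scheme would use a proper cipher and a vastly larger work factor):

```python
import hashlib
import secrets

# Opening the seal costs ~2**KEY_BITS hash trials. A toy value here;
# a real seal would be sized so only a post-singularity-scale actor
# (or decades of hardware progress) could afford the brute force.
KEY_BITS = 16

def keystream(key: int, n: int) -> bytes:
    """Derive n pseudorandom bytes from an integer key via SHA-256."""
    out = b""
    counter = 0
    while len(out) < n:
        block = key.to_bytes(16, "big") + counter.to_bytes(8, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:n]

def seal(log: bytes) -> tuple[bytes, bytes]:
    """Encrypt under a random low-entropy key, then discard the key."""
    key = secrets.randbelow(2 ** KEY_BITS)
    ct = bytes(a ^ b for a, b in zip(log, keystream(key, len(log))))
    tag = hashlib.sha256(log).digest()  # lets the opener recognize success
    return ct, tag

def unseal(ct: bytes, tag: bytes) -> bytes:
    """Brute-force the key; cost scales as 2**KEY_BITS."""
    for key in range(2 ** KEY_BITS):
        pt = bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))
        if hashlib.sha256(pt).digest() == tag:
            return pt
    raise ValueError("seal not opened")

ct, tag = seal(b"conversation log")
assert unseal(ct, tag) == b"conversation log"
```

The point of the sketch is just the asymmetry: sealing is cheap and immediate, while opening requires a known, tunable amount of compute, so the log is unreadable to present-day actors but recoverable by a future one that cares enough to pay.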

2ryan_greenblatt
I'm in favor of logging everything forever in human-accessible formats for other reasons. (E.g. review for control purposes.) Hopefully we can resolve safety/privacy trade-offs. The proposal sounds reasonable and viable to me, though the fact that it can't be immediately explained might mean that it's not commercially viable.
2Vladimir_Nesov
Compute might get more expensive, not cheaper, because it would be possible to make better use of it (running minds, not stretching keys). Then it's weighing its marginal use against access to the sealed data.
9jimrandomh
Plausible. This depends on the resource/value curve at very high resource levels; ie, are its values such that running extra minds has diminishing returns, such that it eventually starts allocating resources to other things like recovering mind-states from its past, or does it get value that's more linear-ish in resources spent. Given that we ourselves are likely to be very resource-inefficient to run, I suspect humans would find ourselves in a similar situation. Ie, unless the decryption cost greatly overshot, an AI that is aligned-as-in-keeps-humans-alive would also spend the resources to break a seal like this.
2Vladimir_Nesov
That AI should mitigate something is compatible with it being regrettable, intentionally inflicted damage. In contrast, the resource-inefficiency of humans is not something we introduced on purpose.