Jimrandomh's Shortform

by jimrandomh · 1 min read · 4th Jul 2019 · 100 comments

This post is a container for my short-form writing. See this post for meta-level discussion about shortform.


I am now reasonably convinced (p>0.8) that SARS-CoV-2 originated in an accidental laboratory escape from the Wuhan Institute of Virology.

1. If SARS-CoV-2 originated in a non-laboratory zoonotic transmission, then the geographic location of the initial outbreak would be drawn from a distribution which is approximately uniformly distributed over China (population-weighted); whereas if it originated in a laboratory, the geographic location is drawn from the commuting region of a lab studying that class of viruses, of which there is currently only one. Wuhan has <1% of the population of China, so this is (order of magnitude) a 100:1 update. (A rough numeric version of this update is sketched just below, after this comment.)

2. No factor other than the presence of the Wuhan Institute of Virology and related biotech organizations distinguishes Wuhan or Hubei from the rest of China. It is not the location of the bat-caves that SARS was found in; those are in Yunnan. It is not the location of any previous outbreaks. It does not have documented higher consumption of bats than the rest of China.

3. There have been publicly reported laboratory escapes of SARS twice before in Beijing, so we know this class of virus is difficult to contain in a laboratory setting.

4. We know

...
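
A minimal numeric sketch of the odds update described in point 1 above. The prior odds are an illustrative placeholder of my own, not a number from the comment; only the ~100:1 likelihood ratio comes from the argument itself.

```python
# Illustrative odds-form Bayes update for point 1 (prior odds are a placeholder, not the OP's).
prior_odds = 1 / 10            # lab escape : zoonosis, before considering location
p_wuhan_given_lab = 1.0        # an escape is expected to surface near the lab studying the virus
p_wuhan_given_zoonosis = 0.01  # Wuhan holds <1% of China's population-weighted probability mass
likelihood_ratio = p_wuhan_given_lab / p_wuhan_given_zoonosis  # ~100:1, the update in the comment
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds, posterior_odds / (1 + posterior_odds))   # 10.0 0.909...
```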

This Feb. 20th Twitter thread from Trevor Bedford argues against the lab-escape scenario. Do read the whole thing, but I'd say that the key points not addressed in the parent comment are:

Data point #1 (virus group): #SARSCoV2 is an outgrowth of circulating diversity of SARS-like viruses in bats. A zoonosis is expected to be a random draw from this diversity. A lab escape is highly likely to be a common lab strain, either exactly 2002 SARS or WIV1.

But apparently SARSCoV2 isn't that. (See pic.)

Data point #2 (receptor binding domain): This point is rather technical, please see preprint by @K_G_Andersen, @arambaut, et al at http://virological.org/t/the-proximal-origin-of-sars-cov-2/398… for full details.
But, briefly, #SARSCoV2 has 6 mutations to its receptor binding domain that make it good at binding to ACE2 receptors from humans, non-human primates, ferrets, pigs, cats, pangolins (and others), but poor at binding to bat ACE2 receptors.
This pattern of mutation is most consistent with evolution in an animal intermediate, rather than lab escape. Additionally, the presence of these same 6 mutations in the pangolin virus argues strongly for an animal origin: https://biorxiv.o
...
ChristianKl (1y, 3 points): Given that there's the claim from Botao Xiao's The possible origins of 2019-nCoV coronavirus, that this seafood market was located 300m from a lab (which might or might not be true), this market doesn't seem like it reduces chances.
[anonymous] (1y, 2 points): If it was a lab-escape and the CCP knew early enough, they could simply manufacture the data to point at the market as the origin.
Rudi C (10mo, 1 point): We need to update down on any complex, technical datapoint that we don’t fully understand, as China has surely paid researchers to manufacture hard-to-evaluate evidence for its own benefit (regardless of the truth of the accusation). This is a classic technique that I have seen a lot in propaganda against laypeople, and there is every reason it should have been employed against the “smart” people in the current coronavirus situation.

The most recent episode of the 80k podcast had Andy Weber on it. He was the US Assistant Secretary of Defense, "responsible for biological and other weapons of mass destruction".

Towards the end of the episode he casually drops quite the bomb:

Well, over time, evidence for natural spread hasn’t been produced, we haven’t found the intermediate species, you know, the pangolin that was talked about last year. I actually think that the odds that this was a laboratory-acquired infection that spread perhaps unwittingly into the community in Wuhan is about a 50% possibility... And we know that the Wuhan Institute of Virology was doing exactly this type of research [gain of function research].  Some of it — which was funded by the NIH for the United States — on bat Coronaviruses. So it is possible that in doing this research, one of the workers at that laboratory got sick and went home. And now that we know about asymptomatic spread, perhaps they didn’t even have symptoms and spread it to a neighbor or a storekeeper. So while it seemed an unlikely hypothesis a year ago, over time, more and more evidence leaning in that direction has come out. And it’s wrong to dismiss that as kind

...
Lukas_Gloor (1y, 7 points): What about allegations that a pangolin was involved? Would they have had pangolins in the lab as well or is the evidence about pangolin involvement dubious in the first place? Edit: Wasn't meant as a joke. My point is: why did initial analyses conclude that the SARS-Cov-2 virus is adapted to receptors of animals other than bats, suggesting that it had an intermediary host, quite likely a pangolin? This contradicts the story of "bat researchers kept bat-only virus in a lab and accidentally released it."
Spiracular (1y, 4 points): I think it's probably a virus that was merely identified in pangolins, but whose primary host is probably not pangolins. The pangolins they sequenced weren't asymptomatic carriers at all; they were sad smuggled specimens that were dying of many different diseases simultaneously. I looked into this semi-recently, and wrote up something here [https://www.lesswrong.com/posts/qapqE86xrjQkD8eZ2/april-coronavirus-open-thread?commentId=cjiGtdWR5TnWtbA6w]. The pangolins were apprehended in Guangxi, which shares some of its border with Yunnan. Neither of these provinces is directly contiguous with Hubei (Wuhan's province), fwiw. (map [https://en.wikipedia.org/wiki/Provinces_of_China#/media/File:China_administrative_alt.svg])
MakoYass (1y, 5 points): How do you know there's only one lab in China studying these viruses?
Pattern (1y, 5 points): This is an assumption. While it might be comparatively correct, I'm not sure about the magnitude. Under the circumstances, perhaps we should consider the possibility that there is something we don't know about Wuhan that makes it more likely. That's nice to know.
Mati_Roy (1y, 4 points): Shared here: https://pandemic.metaculus.com/questions/3681/will-it-turn-out-that-covid-19-originated-inside-a-research-lab-in-hubei/
Chris_Leong (1y, 4 points): Maybe they don't know whether it escaped or not. Maybe they just think there is a chance that the evidence will implicate them and they figure it's not worth the risk, as there'll only be consequences if there is definite proof that it escaped from one of their labs and not mere speculation. Or maybe they want to argue that it didn't come from China? I think they've already been pushing this angle.
Jayson_Virissimo (1y, 3 points): Not sure if you have seen this [https://www.livescience.com/coronavirus-not-human-made-in-lab.html] yet, but they conclude: Are they assuming a false premise or making an error in reasoning somewhere?

First, a clarification: whether SARS-CoV-2 was laboratory-constructed or manipulated is a separate question from whether it escaped from a lab. The main reason a lab would be working with SARS-like coronavirus is to test drugs against it in preparation for a possible future outbreak from a zoonotic source; those experiments would involve culturing it, but not manipulating it.

But also: If it had been the subject of gain-of-function research, this probably wouldn't be detectable. The example I'm most familiar with, the controversial 2012 US A/H5N1 gain of function study, used a method which would not have left any genetic evidence of manipulation.

habryka (1y, 3 points): The article says: and I think the article just says that the virus did not undergo genetic engineering or gain-of-function research, which is also what Jim says above.
Jayson_Virissimo (1y, 5 points): Ah, yes: their headline is very misleading then! It currently reads "The coronavirus did not escape from a lab. Here's how we know." I'll shoot the editor an email and see if they can correct it. EDIT: Here's [https://twitter.com/JaysonVirissimo/status/1250167277204332545] me complaining about the headline on Twitter.
jimrandomh (1y, 4 points): Genetic engineering is ruled out, but gain-of-function research isn't.
Spiracular (7mo, 2 points): Chinese virology researcher released something claiming that SARS-2 might even be genetically-manipulated after all? After assessing, I'm not really convinced of the GMO claims, but the RaTG13 story definitely seems to have something weird going on. Claims that the RaTG13 genome release was a cover-up (it does look like something's fishy with RaTG13, although it might be different than Yan thinks). Claims ZC45 and/or ZXC21 was the actual backbone (I'm feeling super-skeptical of this bit, but it has been hard for me to confirm either way). https://zenodo.org/record/4028830#.X2EJo5NKj0v (aka Yan Report) RaTG13 LOOKS FISHY: Looks like something fishy happened with RaTG13, although I'm not convinced that genetic modification was involved. This is an argument built on pre-prints, but they appear to offer several different lines of evidence that something weird happened here. Simplest story (via R&B): It looks like people first sequenced this virus in 2016, under the name "BtCOV/4991", using mine samples from 2013. And for some reason, WIV re-released the sequence as "RaTG13" at a later date? (edit: I may have just had a misunderstanding. Maybe BtCOV/4991 is the name of the virus as sequenced from miner-lungs, RaTG13 is the name of the virus as sequenced from floor droppings? But in that case, why is the "fecal" sample reading so weirdly low-bacteria? And they probably are embarrassed that it took them that long to sequence the fecal samples, and should be.) A paper by Indian researchers Rahalkar and Bahulikar (https://doi.org/10.20944/preprints202005.0322.v1) notes that BtCoV/4991, sequenced in 2016 by the same Wuhan Virology Institute researchers (and taken from 2013 samples of a mineshaft that gave miners deadly pneumonia), was very similar to, and likely the same as, RaTG13. A preprint by Rahalkar and Bahulikar (R&B) (doi: 10.20944/preprints202008.0205.v1) note
habryka (1y, 2 points): I agree that this is technically correct, but the prior for "escaped specifically from a lab in Wuhan" is also probably ~100 times lower than the prior for "escaped from any biolab in China", which makes this sentence feel odd to me. I feel like I have reasonable priors for "direct human-to-human transmission" vs. "accidentally released from a lab", but don't have good priors for "escaped specifically from a lab in Wuhan".

I agree that this is technically correct, but the prior for "escaped specifically from a lab in Wuhan" is also probably ~100 times lower than the prior for "escaped from any biolab in China"

I don't think this is true. The Wuhan Institute of Virology is the only biolab in China with a BSL-4 certification, and therefore is probably the only biolab in China which could legally have been studying this class of virus. While the BSL-3 Chinese Institute of Virology in Beijing studied SARS in the past and had laboratory escapes, I expect all of that research to have been shut down or moved, given the history, and I expect a review of Chinese publications will not find any studies involving live virus testing outside of WIV. While the existence of one or two more labs in China studying SARS would not be super surprising, the existence of 100 would be extremely surprising, and would be a major scandal in itself.

Ben Pace (1y, 5 points): Woah. That's an important piece of info. The lab in Wuhan is the only lab in China allowed to deal with this class of virus. That's very suggestive info indeed.
jimrandomh (1y, 7 points): That's overstating it. They're the only BSL-4 lab. Whether BSL-3 labs were allowed to deal with this class of virus is something that someone should research.

[I'm not an expert.]

My understanding is that SARS-CoV-1 is generally treated as a BSL-3 pathogen or a BSL-2 pathogen (for routine diagnostics and other relatively safe work) and not BSL-4. At the time of the outbreak, SARS-CoV-2 would have been a random animal coronavirus that hadn't yet infected humans, so I'd be surprised if it had more stringent requirements.

Your OP currently states: "a lab studying that class of viruses, of which there is currently only one." If I'm right that you're not currently confident this is the case, it might be worth adding some kind of caveat or epistemic status flag or something.

---

Some evidence:

...
Howie Lempel (1y, 4 points): Do you still think there's a >80% chance that this was a lab release?
Ben Pace (1y, 4 points): Thank you for the correction.
leggi (1y, 1 point): Did anyone do some research? (SARSr-CoV) makes the BSL-4 list on Wikipedia. But what's the probability that animal-based coronaviruses (being very widespread in a lot of species) were restricted to BSL-4 labs? COVID-19 and BSL according to: the W.H.O.'s Laboratory biosafety guidance related to the novel coronavirus (2019-nCoV) [https://www.who.int/docs/default-source/coronaviruse/laboratory-biosafety-novel-coronavirus-version-1-1.pdf?sfvrsn=912a9847_2] and the CDC's Interim Laboratory Biosafety Guidelines for Handling and Processing Specimens Associated with Coronavirus Disease 2019 (COVID-19) [https://www.cdc.gov/coronavirus/2019-ncov/lab/lab-biosafety-guidelines.html].
leggi (1y, 1 point): It would be important information if it was true. But is it true? (SARSr-CoV) makes the BSL-4 list on Wikipedia but coronaviruses are widespread in a lot of species and I can't find any evidence that they are restricted to BSL-4 labs.
habryka (1y, 3 points): Ok, that makes sense to me. I didn't have much of a prior on the Wuhan lab being much more likely to have been involved in this kind of research.
Andrew_Clough (1y, 1 point): Do we have any good sense of the extent to which researchers from the Wuhan Institute of Virology are flying out across China to investigate novel pathogens or sites where novel pathogens might emerge?

In a comment here, Eliezer observed that:

OpenBSD treats every crash as a security problem, because the system is not supposed to crash and therefore any crash proves that our beliefs about the system are false and therefore our beliefs about its security may also be false because its behavior is not known

And my reply to this grew into something that I think is important enough to make as a top-level shortform post.

It's worth noticing that this is not a universal property of high-paranoia software development, but an unfortunate consequence of using the C programming language and of systems programming. In most programming languages and most application domains, crashes only rarely point to security problems. OpenBSD is this paranoid, and needs to be this paranoid, because its architecture is fundamentally unsound (albeit unsound in a way that all the other operating systems born in the same era are also unsound). This presents a number of analogies that may be useful for thinking about future AI architectural choices.

C has a couple of operations (use-after-free, buffer-overflow, and a few multithreading-related things) which expand false beliefs in one area of the system i...

Zac Hatfield Dodds (16d, 7 points): I disagree. While C is indeed terribly unsafe, it is always the case that a safety-critical system exhibiting behaviour you thought impossible is a serious safety risk - because it means that your understanding of the system is wrong, and that includes the safety properties.

Despite the justness of their cause, the protests are bad. They will kill at least thousands, possibly as many as hundreds of thousands, through COVID-19 spread. Many more will be crippled. The deaths will be disproportionately among dark-skinned people, because of the association between disease severity and vitamin D deficiency.

Up to this point, R was about 1; not good enough to win, but good enough that one more upgrade in public health strategy would do it. I wasn't optimistic, but I held out hope that my home city, Berkeley, might become a green zone.

Masks help, and being outdoors helps. They do not help nearly enough.

George Floyd was murdered on May 25. Most protesters protest on weekends; the first weekend after that was May 30-31. Due to ~5-day incubation plus reporting delays, we don't yet know how many were infected during that first weekend of protests; we'll get that number over the next 72 hours or so.

We are now in the second weekend of protests, meaning that anyone who got infected at the first protest is now close to peak infectivity. People who protested last weekend will be superspreaders this weekend; the jump in cases we see over the next 72 hours will be about *

...
jessicata (10mo, 9 points): It's been over 72 hours and the case count is under 110, as would be expected from linear extrapolation.

For reducing CO2 emissions, one person working competently on solar energy R&D has thousands to millions of times more impact than someone taking normal household steps as an individual. To the extent that CO2-related advocacy matters at all, most of the impact probably routes through talent and funding going to related research. The reason for this is that solar power (and electric vehicles) are currently at inflection points, where they are in the process of taking over, but the speed at which they do so is still in doubt.

I think the same logic now applies to veganism vs meat-substitute R&D. Consider the Impossible Burger in particular. Nutritionally, it seems to be on par with ground beef; flavor-wise it's pretty comparable; price-wise it's recently appeared in my local supermarket at about 1.5x the price. There are a half dozen other meat-substitute brands at similar points. Extrapolating a few years, it will soon be competitive on its own terms, even without the animal-welfare angle; extrapolating twenty years, I expect vegan meat-imitation products will be better than meat on every axis, and meat will be a specialty product for luddites and people with dietary restrictions. If this is true, then interventions which speed up the timeline of that change are enormously high leverage.

I think this might be a general pattern, whenever we find a technology and a social movement aimed at the same goal. Are there more instances?

According to Fedex tracking, on Thursday, I will have a Biovyzr. I plan to immediately start testing it, and write a review.

What tests would people like me to perform?

Tests that I'm already planning to perform:

To test its protectiveness, the main test I plan to perform is a modified Bitrex fit test. This is where you create a bitter-tasting aerosol, and confirm that you can't taste it. The normal test procedure won't work as-is because it's too large to use a plastic hood, so I plan to go into a small room, and have someone (wearing a respirator themselves) spray copious amounts of Bitrex at the input fan and at any spots that seem high-risk for leaks.

To test that air exiting the Biovyzr is being filtered, I plan to put on a regular N95, and use the inside-out glove to create Bitrex aerosol inside the Biovyzr, and see whether someone in the room without a mask is able to smell it.

I will verify that the Biovyzr is positive-pressure by running a straw through an edge, creating an artificial leak, and seeing which way the air flows through the leak.

I will have everyone in my house try wearing it (5 adults of varied sizes), have them all rate its fit and comfort, and get as many of them to do Bitrex fit tests as I can.

I suspect that, thirty years from now with the benefit of hindsight, we will look at air travel the way we now look at tetraethyl lead. Not just because of nCoV, but also because of disease burdens we've failed to attribute to infections, in much the same way we failed to attribute crime to lead.

Over the past century, there have been two big changes in infectious disease. The first is that we've wiped out or drastically reduced most of the diseases that cause severe, attributable death and disability. The second is that we've connected the world with high-speed transport links, so that the subtle, minor diseases can spread further.

I strongly suspect that a significant portion of unattributed and subclinical illnesses are caused by infections that counterfactually would not have happened if air travel were rare or nonexistent. I think this is very likely for autoimmune conditions, which are mostly unattributed, are known to sometimes be caused by infections, and have risen greatly over time. I think this is somewhat likely for chronic fatigue and depression, including subclinical varieties that are extremely widespread. I think this is plausible for obesity, where it is approximately #3 of my hypotheses.

Or, put another way: the "hygiene hypothesis" is the opposite of true.

leggi (1y, 3 points): Some comments: we've wiped out or drastically reduced some diseases in some parts of the world. There's a lot of infectious diseases still out there: HIV, influenza, malaria, tuberculosis, cholera, ebola, infectious forms of pneumonia, diarrhoea, hepatitis .... Disease has always spread - wherever people go, far and wide. It just took longer over land and sea (rather than the nodes appearing on global maps that we can see these days). "autoimmune conditions" covers a long list of conditions lumped together because they involve the immune system 'going wrong'. (and the immune system is, at least to me, a mind-bogglingly complex system) Given the wide range of conditions that could be "auto-immune", saying they've risen greatly over time is vague. Data for more specific conditions? Increased rates of autoimmune conditions could just be due to the increase in the recognition, diagnosis and recording of cases (I don't think so but it should be considered). What things other than high speed travel have also changed in that time-frame that could affect our immune systems? The quality of air we breathe, the food we eat, the water we drink, our environment, levels of exposure to fauna and flora, exposure to chemicals, pollutants ...? Air travel is just one factor. Fatigue and depression are clinical symptoms - they are either present or not (to what degree - mild/severe is another matter) so sub-clinical is poor terminology here. Sub-clinical disease has no recognisable clinical findings - undiagnosed/unrecognised would be closer. But I agree there are widespread issues with health and well-being these days. Opposite of true? Are you saying you believe the "hygiene hypothesis" is false? In which case, that's a big leap from your reasoning above.
Adam Scholl (1y, 2 points): I'm curious about your first and second hypotheses regarding obesity?
jimrandomh (1y, 3 points): Disruption of learning mechanisms by excessive variety and separation between nutrients and flavor. Endocrine disruption from adulterants and contaminants (a class including but not limited to BPA and PFOA).

Eliezer has written about the notion of security mindset, and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.

An1lam's recent shortform post talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that the reason for this is because engineer-types get to actually write software that might have security holes, and have the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don't have security mindset, at least of the type Eliezer described.

My hypothesis is that to acquire security mindset, you have to:

  • Practice optimizing from a red team/attacker perspective,
  • Practice optimizing from a defender perspective; and
  • Practice mo
...
NaiveTortoise (2y, 8 points): I like this post! Some evidence that security mindset generalizes across at least some domains: the same white hat people who are good at finding exploits in things like kernels seem to also be quite good at finding exploits in things like web apps, real-world companies, and hardware. I don't have a specific person to give as an example, but this observation comes from going to a CTF competition and talking to some of the people who ran it about the crazy stuff they'd done that spanned a wide array of different areas. Another slightly different example: Wei Dai is someone who I actually knew about outside of Less Wrong from his early work [http://www.weidai.com/bmoney.txt] on cryptocurrency stuff, so he was at least at one point involved in a security-heavy community (I'm of the opinion that early cryptocurrency folks were on average much better about security mindset than the average current cryptocurrency community member). Based on his posts and comments, he generally strikes me as having security mindset style thinking and from my perspective has contributed a lot of good stuff to AI alignment. Theo de Raadt is notoriously... opinionated, so it would definitely be interesting to see him thrown on an AI team. That said, I suspect someone like Ralph Merkle [http://merkle.com/], who's a bona fide cryptography wizard (he invented public key cryptography and Merkle trees!) and is heavily involved in the cryonics and nanotech communities, could fairly easily get up to speed on AI control work and contribute from a unique security/cryptography-oriented perspective. In particular, now that there seems to be more alignment/control work that involves at least exploring issues with concrete proposals, I think someone like this would have less trouble finding ways to contribute. That said, having cryptography experience in addition to security experience does seem helpful. Cryptography people are probably more used to combining their security mindset w

I'm kinda confused about the relation between cryptography people and security mindset. Looking at the major cryptographic algorithm classes (hashing, symmetric-key, asymmetric-key), it seems pretty obvious that the correct standard algorithm in each class is probably a compound algorithm -- hash by xor'ing the results of several highly-dissimilar hash functions, etc, so that a mathematical advance which breaks one algorithm doesn't break the overall security of the system. But I don't see anyone doing this in practice, and also don't see signs of a debate on the topic. That makes me think that, to the extent they have security mindset, it's either being defeated by political processes in the translation to practice, or it's weirdly compartmentalized and not engaged with any practical reality or outside views.
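
For concreteness, here is a minimal sketch of the kind of compound construction being described, using concatenation of two structurally dissimilar hash functions rather than XOR (Wei_Dai's reply below notes that combiners are subtler than they look, so treat this as an illustration of the idea, not a vetted design):

```python
import hashlib

def combined_hash(data: bytes) -> bytes:
    """Toy combiner: concatenate digests from two dissimilar hash functions.
    A collision for the combined output requires a simultaneous collision in
    both functions, so collision resistance is at least that of the stronger
    one. This is only a sketch, not a recommended construction."""
    return hashlib.sha256(data).digest() + hashlib.blake2b(data).digest()

print(combined_hash(b"example").hex())
```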

Wei_Dai (2y, 4 points): Combining hash functions is actually trickier than it looks, and some people are doing research in this area and deploying solutions. See https://crypto.stackexchange.com/a/328 and https://tahoe-lafs.org/trac/tahoe-lafs/wiki/OneHundredYearCryptography. It does seem that if cryptography people had more of a security mindset (that is not being defeated) then there would be more research and deployment of this already.
NaiveTortoise (2y, 3 points): In fairness, I'm probably over-generalizing from a few examples. For example, my biggest inspiration from the field of crypto is Daniel J. Bernstein, a cryptographer who's in part known for building qmail [https://cr.yp.to/qmail/guarantee.html], which has an impressive security track record & guarantee [https://cr.yp.to/qmail/guarantee.html]. He discusses principles for secure software engineering in this paper [http://cr.yp.to/qmail/qmailsec-20071101.pdf], which I found pretty helpful for my own thinking. To your point about hashing the results of several different hash functions, I'm actually kind of surprised to hear that this might protect against the sorts of advances I'd expect to break hash algorithms. I was under the very amateur impression that basically all modern hash functions relied on the same numerical algorithmic complexity (and number-theoretic results). If there are any resources you can point me to about this, I'd be interested in getting a basic understanding of the different assumptions hash functions can depend on.
Wei_Dai (2y, 2 points): Can you give some specific examples of me having security mindset, and why they count as having security mindset? I'm actually not entirely sure what it is or that I have it, and would be hard pressed to come up with such examples myself. (I'm pretty sure I have what Eliezer calls "ordinary paranoia" at least, but am confused/skeptical about "deep security".)
NaiveTortoise (2y, 5 points): Sure, but let me clarify that I'm probably not drawing as hard a boundary between "ordinary paranoia" and "deep security" as I should be. I think Bruce Schneier's and Eliezer's buckets for "security mindset" blended together in the months since I read both posts. Also, re-reading the logistic success curve post [https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/] reminded me that Eliezer calls into question whether someone who lacks security mindset can identify people who have it. So it's worth noting that my ability to identify people with security mindset is itself suspect by this criterion (there's no public evidence that I have security mindset and I wouldn't claim that I have a consistent ability to do "deep security"-style analysis.) With that out of the way, here are some of the examples I was thinking of. First of all, at a high level, I've noticed that you seem to consistently question assumptions other posters are making and clarify terminology when appropriate. This seems like a prerequisite for security mindset, since it's a necessary first step towards constructing systems. Second and more substantively, I've seen you consistently raise concerns about human safety problems [https://www.lesswrong.com/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas#1__AI_design_as_opportunity_and_obligation_to_address_human_safety_problems] (also here [https://www.lesswrong.com/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety#xh9FweNcNDLqfTRG2]). I see this as an example of security mindset because it requires questioning the assumptions implicit in a lot of proposals. The analogy to Eliezer's post here would be that ordinary paranoia is trying to come up with more ways to prevent the AI from corrupting the human (or something similar) whereas I think a deep security solution would look more like avoiding the assumption that humans are safe altogether and instead seeking clear guarantees that our AIs will be s
riceissa (1y, 1 point): This comment [https://www.lesswrong.com/posts/z8afQRsH9wWsB4iMD/harsanyi-s-social-aggregation-theorem-and-what-it-means-for?commentId=fnRcZtzStENjamKjZ] feels relevant here (not sure if it counts as ordinary paranoia or security mindset).

I am working on a longer review of the various pieces of PPE that are available, now that manufacturers have had time to catch up to demand. That review will take some time, though, and I think it's important to say this now:

The high end of PPE that you can buy today is good enough to make social distancing unnecessary, even if you are risk averse, and is more comfortable and more practical for long-duration wear than a regular mask. I don't just mean Biovyzr (which has not yet shipped all the parts for its first batch) and the AIR Microclimate (which has not yet shipped anything), though these hold great promise and may be good budget options.

If you have a thousand dollars to spare, you can get a 3M Versaflo TR-300N+. This is a hospital-grade positive air pressure respirator with a pile of certifications; it is effective at protecting you from getting COVID from others. Most of the air leaves through filter fabric under the chin, which I expect makes it about as effective at protecting others from you as an N95. Using it does not require a fit-test, but I performed one anyways with Bitrex, and it passed (I could not pass a fit-test with a conventional face-mask except by taping the edges to my skin). The Versaflo doesn't block view of your mouth, gives good quality fresh air with no resistance, and doesn't muffle sound very much. Most importantly, Amazon has it in stock (https://www.amazon.com/dp/B07J4WCK6R) so it doesn't involve a long delay or worry about whether a small startup will come through.

Bullshit jobs are usually seen as an absence of optimization: firms don't get rid of their useless workers because that would require them to figure out who they are, and risk losing or demoralizing important people in the process. But alternatively, if bullshit jobs (and cover for bullshit jobs) are a favor to hand out, then they're more like a form of executive compensation: my useless underlings owe me, and I will get illegible favors from them in return.

What predictions does the bullshit-jobs-as-compensation model make, that differ from the bullshit-jobs-as-lack-of-optimization model?

When I tried to inner sim the "bullshit jobs as compensation" model, I expected to see a very different world than I do see. In particular, I'd expect the people in bullshit jobs to have been unusually competent, smart, or powerful before they were put in the bullshit job, and this is not in fact what I think actually happens.

The problem being that the kind of person who wants a bullshit job is not typically the kind of person you'd necessarily want a favor from. One use for bullshit jobs could be to help the friends (or more likely the family) of someone who does "play the game." This I think happens more often, but I still think the world would be very different if this was the main use case for bullshit jobs. In particular, I'd expect most bullshit jobs to be isolated from the rest of the company, such that they don't have ripple effects. This doesn't seem to be the case as many bullshit jobs exist in management.

When I inquired about the world I actually do see, I got several other potential reasons for bullshit jobs that may or may not fit the data better:

  • Bullshit jobs as pre-installed scapegoats: Lots of middle management might
...
In particular, I'd expect the people in bullshit jobs to have been unusually competent, smart, or powerful before they were put in the bullshit job, and this is not in fact what I think actually happens.

Moral Mazes claims that this is exactly what happens at the transition from object-level work to management - and then, once you're at the middle levels, the main traits relevant to advancement (and value as an ally) are the ones that make you good at coalitional politics, favor-trading, and a more feudal sort of loyalty exchange.

Matt Goldenberg (2y, 3 points): Do you think that the majority of direct management jobs are bullshit jobs? My direction is that especially the first level of management that is directly managing programmers is a highly important coordination position.

This tweet raised the question of whether masks really are more effective if placed on sick people (blocking outgoing droplets) or if placed on healthy people (blocking incoming droplets). Everyone in public or in a risky setting should have a mask, of course, but we still need to allocate the higher-quality vs lower-quality masks somehow. When sick people are few and are obvious, and masks are scarce, masks should obviously go on the sick people. However, COVID-19 transmission is often presymptomatic, and masks (especially lower-quality improvised masks) are not becoming less scarce over time.

If you have two people in a room and one mask, one infected and one healthy, which person should wear the mask? Thinking about the physics of liquid droplets, I think the answer is that the infected person should wear it.

  1. A mask on a sick person prevents the creation of fomites; masks on healthy people don't.
  2. Outgoing particles have a larger size and shrink due to evaporation, so they'll penetrate a mask less, given equal kinetic energy. (However, kinetic energies are not equal; they start out fast and slow down, which would favor putting the mask on the healthy person. I'm not sure how much th
... (read more)
MakoYass (1y, 1 point): Wearing a surgical mask, I get the sense it tends to form more of a seal when inhaling, less when exhaling (like a valve). If this is common, it would be a point in favour of having the healthy person wear them.

This was initially written in response to "Communicating effective altruism better--Jargon" by Rob Wiblin (Facebook link), but stands alone well and says something important. Rob argues that we should make more of an effort to use common language and avoid jargon, especially when communicating to audiences outside of your subculture.

I disagree.

If you're writing for a particular audience and can do an editing pass, then yes, you should cut out any jargon that your audience won't understand. A failure to communicate is a failure to communicate, and there are no excuses. For public speaking and outreach, your suggestions are good.

But I worry that people will treat your suggestions as applying in general, and try to extinguish jargon terms from their lexicon. People have only a limited ability to code-switch. Most of the time, there's no editing pass, and the processes of writing and thinking are commingled. The practical upshot is that people are navigating a tradeoff between using a vocabulary that's widely understood outside of their subculture, and using the best vocabulary for thinking clearly and communicating within their subculture.

When it comes to thinking clearly, some of t...

Viliam (6mo, 2 points): Now I would like to see an article that would review the jargon, find the nearest commonly used term for each term, and explain the difference the way you did (or possibly admit that there is no important difference).
MikkW (5mo, 1 point): Why does the link for Rationality Cardinality go through Facebook?
jimrandomh (5mo, 3 points): This comment was crossposted with Facebook, and Facebook auto-edited the link while I was editing it there. Edited now to make it a direct link.

The discussion so far on cost disease seems pretty inadequate, and I think a key piece that's missing is the concept of Hollywood Accounting. Hollywood Accounting is what happens when you have something that's extremely profitable, but which has an incentive to not be profitable on paper. The traditional example, which inspired the name, is when a movie studio signs a contract with an actor to share a percentage of profits; in that case, the studio will create subsidiaries, pay all the profits to the subsidiaries, and then declare that the studio itself (which signed the profit-sharing agreement) has no profits to give.

In the public contracting sector, you have firms signing cost-plus contracts, which are similar; the contract requires that profits don't exceed a threshold, so they get converted into payments to de-facto-but-not-de-jure subsidiaries, favors, and other concealed forms. Sometimes this involves large dead-weight losses, but the losses are not the point, and are not the cause of the high price.

In medicine, there are occasionally articles which try to figure out where all the money is going in the US medical system; they tend to look at one piece, conclud...

Elizabeth (2y, 3 points): Did you mean "allocated a share of the costs"? If not, I am confused by that sentence.
jimrandomh (2y, 3 points): I'm pretty uncertain how the arrangements actually work in practice, but one possible arrangement is: You have two organizations, one of which is a traditional pharmaceutical company with the patent for an untested drug, and one of which is a contract research organization. The pharma company pays the contract research organization to conduct a clinical trial, and reports the amount it paid as the cost of the trial. They have common knowledge of the chance of success, of the probability distribution of future revenue for the drug, how much it costs to conduct the trial, and how much it costs to insure away the risks. So the amount the first company pays to the second is the costs of the trial, plus a share of the expected profit. Pharma companies making above-market returns are subject to political attack from angry patients, but contract research organizations aren't. So if you control both of these organizations, you would choose to allocate all of the profits to the second organization, so you can defend yourself from claims of gouging by pleading poverty.
Elizabeth (2y, 1 point): Ah, that makes sense. Thanks for explaining.

Suppose LessWrong had a coauthor-matchmaking feature. There would be a section where you could see other peoples' ideas for posts they want to write, and message them to schedule a collaboration session. You'd be able to post your own ideas, to get collaborators. There would be some quality-sorting mechanism so that if you're a high-tier author, you can restrict the visibility of your seeking-collaborators message to other high-tier authors.

People who've written on LessWrong, and people who've *almost* written on LessWrong but haven't quite gotten a post out: Would you use this feature? If so, how much of a difference do you think it would make in the quantity and quality of your writing?

MakoYass (8mo, 4 points): I think it could be very helpful, if only for finding people to hold me to account and encourage me to write. Showing me that someone gets what I want to do, and would appreciate it.

Among people who haven't learned probabilistic reasoning, there's a tendency to push the (implicit) probabilities in their reasoning to the extremes; when the only categories available are "will happen", "won't happen", and "might happen", too many things end up in the will/won't buckets.

A similar, subtler thing happens to people who haven't learned the economics concept of elasticity. Some example (fallacious) claims of this type:

  • Building more highway lanes will cause more people to drive (induced demand), so building more lanes won't fix traffic.
  • Building more housing will cause more people to move into the area from far away, so additional housing won't decrease rents.
  • A company made X widgets, so there are X more widgets in the world than there would be otherwise.

This feels like it's in the same reference class as the traditional logical fallacies, and that giving it a name - "zero elasticity fallacy" - might be enough to significantly reduce the rate at which people make it. But it does require a bit more concept-knowledge than most of the traditional fallacies, so, maybe not? What happens when you point this out to someone with no prior microeconomics exposure, and does logical-fallacy branding help with the explanation?
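
A toy supply-and-demand calculation (linear curves with made-up numbers) showing the intermediate case the fallacy ignores: adding capacity raises quantity by less than the amount added and lowers the price, rather than having either zero effect or full effect:

```python
def equilibrium(d_intercept, d_slope, s_intercept, s_slope):
    """Solve for price and quantity where linear demand Qd = d_intercept + d_slope*P
    meets linear supply Qs = s_intercept + s_slope*P."""
    p = (d_intercept - s_intercept) / (s_slope - d_slope)
    return p, d_intercept + d_slope * p

p0, q0 = equilibrium(100, -2, 10, 1)   # before: P=30.0, Q=40.0
p1, q1 = equilibrium(100, -2, 40, 1)   # supply shifted out by 30 units at every price
print(p0, q0, p1, q1)                  # after: P=20.0, Q=60.0
# Quantity rises by 20, not the full 30, and price (read: congestion, or rent) falls.
# Induced demand offsets part of the new capacity; it offsets all of it only in the
# limiting case of perfectly elastic demand.
```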

Building more highway lanes will cause more people to drive (induced demand), so building more lanes won't fix traffic.

Is this really fallacious? I'm asking because while I don't know the topic personally, I have some friends who are really into city planning. They've said that this is something which is pretty much unambiguously accepted in the literature, now that we've had the time to observe lots and lots of failed attempts to fix traffic by building more road capacity.

A quick Googling seemed to support this, bringing up e.g. this article which mentions that:

In this paper from the Victoria Transport Policy Institute, author Todd Litman looks at multiple studies showing a range of induced demand effects. Over the long term (three years or more), induced traffic fills all or nearly all of the new capacity. Litman also modeled the costs and benefits for a $25 million line-widening project on a hypothetical 10-kilometer stretch of highway over time. The initial benefits from congestion relief fade within a decade.
habryka (2y, 4 points): Yeah, I do agree that for the case of traffic, elasticity is pretty close to 1, which importantly doesn't mean building more capacity is a bad idea; it's actually indicative of demand for traffic capacity being really high, meaning the marginal value of doing so is likely also really high.

Vitamin D reduces the severity of COVID-19, with a very large effect size, in an RCT.

Vitamin D has a history of weird health claims around it failing to hold up in RCTs (this SSC post has a decent overview). But, suppose the mechanism of vitamin D is primarily immunological. This has a surprising implication:

It means negative results in RCTs of vitamin D are not trustworthy.

There are many health conditions where having had a particular infection, especially a severe case of that infection, is a major risk factor. For example, 90% of cases of cervical cancer are caused by HPV infection. There are many known infection-disease pairs like this (albeit usually with smaller effect size), and presumably also many unknown infection-disease pairs like this as well.

Now suppose vitamin D makes you resistant to getting a severe case of a particular infection, which increases risk of a cancer at some delay. Researchers do an RCT of vitamin D for prevention of that kind of cancer, and their methodology is perfect. Problem: What if that infection wasn't common at the time and place the RCT was performed, but is common somewhere else? Then the study will give a negative result.

This throws a wrench into the usual epistemic strategies around vitamin D, and around every other drug and supplement where the primary mechanism of action is immune-mediated.
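
A toy simulation of this point, with all numbers invented for illustration: if vitamin D only prevents severe cases of some mediating infection, a well-run RCT on the downstream disease finds a large effect where the infection circulates and essentially nothing where it doesn't.

```python
import random

def trial(n, infection_rate, severe_multiplier_with_d=0.2,
          p_severe_if_infected=0.5, p_disease_if_severe=0.3):
    """Return downstream-disease counts for (control arm, vitamin D arm)."""
    def arm(takes_d):
        cases = 0
        mult = severe_multiplier_with_d if takes_d else 1.0
        for _ in range(n):
            infected = random.random() < infection_rate
            severe = infected and random.random() < p_severe_if_infected * mult
            if severe and random.random() < p_disease_if_severe:
                cases += 1
        return cases
    return arm(False), arm(True)

random.seed(0)
print("infection common:", trial(20_000, infection_rate=0.30))   # large gap between arms
print("infection rare:  ", trial(20_000, infection_rate=0.001))  # arms look identical
```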

capybaralet (7mo, 1 point): Sounds like a very general criticism that would apply to any effects that are very strong/consistent in circumstances where a very high variance (e.g. binary) latent variable takes on a certain value (and the effect is 0 otherwise...). I wonder how meta-analyses typically deal with that...(?) http://rationallyspeakingpodcast.org/show/rs-155-uri-simonsohn-on-detecting-fraud-in-social-science.html suggested that very large anomalous effects are usually evidence of fraud, and that meta-analyses may try to prevent a single large effect size study from dominating (IIRC).

What those drug-abuse education programs we all went though should have said:

It is a mistake to take any drug until after you've read its wikipedia page, especially the mechanism, side effects, and interactions sections, and its Erowid page, if applicable. All you children on ritalin right now, your homework is to go catch up on your required reading and reflect upon your mistake. Dismissed.

(Not a vagueblog of anything recent, but sometimes when I hear about peoples' recreational-drug or medication choices, I feel like Quirrell in HPMOR chapter 26, discussing a student who cast a high-level curse without knowing what it did.)

It's looking likely that the pandemic will de facto end on the Summer Solstice.

Biden promised vaccine availability for everyone on May 1st. May 1st plus two weeks to get appointments plus four weeks spacing between two doses of Moderna plus one week waiting for full effectiveness, is June 19. The astronomical solstice is June 20, which is a Sunday.
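
The date arithmetic checks out (2021 dates assumed):

```python
from datetime import date, timedelta

availability = date(2021, 5, 1)                        # promised general availability
protected = availability + timedelta(weeks=2 + 4 + 1)  # appointments + dose spacing + waiting
print(protected, protected.strftime("%A"))             # 2021-06-19 Saturday
print(date(2021, 6, 20).strftime("%A"))                # Sunday, the solstice
```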

Things might not go to plan, if the May 1st vaccine-availability deadline is missed, or a vaccine-evading strain means we have to wait for a booster. No one's organizing the details yet, as far as I know. But with all those caveats aside:

It's going to be a hell of a party.

Measure (25d, 3 points): My understanding was that the May 1st date was "Everyone's now allowed to sign up for an appointment, but you may be at the end of a long queue." How long after that do you think it will take to get a vaccine to everyone who wants one?
Gerald Monroe (23d, 4 points): Currently, 2.4 million shots/day. Note that it's a situation where it's always going to be limited by the rate limiting step, and there are many bottlenecks, so using the 'current' data and extrapolating only a modest increase is the most conservative estimate. 210 million adults. Only 0.7 of them need to be vaccinated for the risk to plummet for everyone else. A quick bit of napkin math says we need 294 million doses to fully vaccinate everyone, and we are at 52 million now. (294 - 52) = 242 million doses; 242/2.4 = 100.8 more days. This is why the lesser J&J vaccine is actually so useful - if we switched all the vaccine clinics and syringe supplies to J&J overnight (if there was enough supply of the vaccine itself) suddenly we only need 121 million more doses to vaccinate everyone, or 50.4 more days. The reality is that increasing efforts are probably going to help, and the J&J is helping, but sooner or later a bottleneck will be hit that can't be bypassed quickly (like a syringe shortage), so I would predict the real number of days to fall in that (50, 100) day interval. There are 94 days between now and June 19. Also, a certain percentage of the population are going to refuse the shot in order to be contrarian or because they earnestly believe their aunt's facebook rants. Moreover, the 'get an appointment' game means the tech savvy/people who read reddit get an advantage over folks who aren't. So for those of us reading this who don't yet qualify, it doesn't appear that it will be much longer.
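
Spelling out the napkin math above (all inputs are the commenter's figures):

```python
shots_per_day = 2.4e6
adults = 210e6
target_fraction = 0.7
doses_given = 52e6

doses_needed_two_dose = adults * target_fraction * 2              # ~294 million
print((doses_needed_two_dose - doses_given) / shots_per_day)      # ~100.8 days remaining
print((doses_needed_two_dose - doses_given) / 2 / shots_per_day)  # ~50.4 days if one-dose J&J
```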

Twitter is an unusually angry place. One reason is that the length limit makes people favor punchiness over tact. A less well-known reason is that in addition to notifying you when people like your own tweets, it gives a lot of notifications for people liking replies to you. So if someone replies to disagree, you will get a slow drip of reminders, which will make you feel indignant.

LessWrong is a relatively calm place, because we do the opposite: under default settings, we batch upvote/karma-change notifications together to only one notification per day, to avoid encouraging obsessive-refresh spirals.
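
A minimal sketch of the batching idea (not LessWrong's actual implementation): accumulate karma changes as they happen and emit at most one digest per user per day.

```python
from collections import defaultdict

pending = defaultdict(int)

def record_karma_change(user: str, delta: int) -> None:
    pending[user] += delta           # no notification sent here

def send_daily_digest() -> None:
    """Run once per day, e.g. from a scheduled job."""
    for user, delta in pending.items():
        if delta != 0:
            print(f"{user}: your karma changed by {delta:+d} today")
    pending.clear()

record_karma_change("alice", 5)
record_karma_change("alice", -2)
send_daily_digest()                  # one message instead of two
```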

Pattern (10mo, 2 points): I also think there's less engagement on LW.* While it might depend on the part of Twitter, there's a lot more replying going on. Sometimes it seems like there are 100 replies to a tweet, in contrast to posts with zero comments. This necessarily means replies will overlap a lot more than they do on LW. Imagine getting 3 distinct comments to a short post on LW, versus a thread of tweets, with 30 responses that mostly boil down to the same 3 responses that are being sent because people are responding without seeing other responses. (And if there's hundreds of very similar responses, asking people to read responses is asking people to read a very boring epic.) And getting one critical reply, versus the same critical reply from 10 people, even when it's the same fraction of responses, probably affects people differently - if only because it's annoying to see the same message over and over again. *This could be the case (the medium probably helps) even if that engagement was all positive.

Some software costs money. Some software is free. Some software is free, with an upsell that you might or might not pay for. And some software has a negative price: not only do you not pay for it, but someone third party is paid to try to get you to install it, often on a per-install basis. Common examples include:

  • Unrelated software that comes bundled with software you're installing, which you have to notice and opt out of
  • Software advertised in banner ads and search engine result pages
  • CDs added to the packages of non-software products

This category of

...
Viliam (1y, 4 points): I wonder what would be a non-software analogy of this. Perhaps those tiny packages with labels "throw away, do not eat" you find in some products. That is, in a parallel world where 99% of customers would actually eat them anyway. But even there it isn't obvious how the producer would profit from them eating the thing. So, no good analogy.
Matt Goldenberg (1y, 2 points): I'm trying to wrap my head around the negative price distinction. A business can't be viable if the cost of user acquisition is higher than the lifetime value of a user. Most software companies spend money on advertising, then they have to make that money back somehow. In a direct business model, they'll charge the users of the software directly. In an indirect business model, they'll charge a third party for access to the users or an asset that the user has. Facebook is more of an indirect business model, where they charge advertisers for access to the users' attention and data. In my mind, the above is totally fine. I choose to pay with my attention and data as a user, and know that it will be sold to advertisers. Viewing this as "negatively priced" feels like a convoluted way to understand the business model however. Some malware makes money by trying to hide the secondary market they're selling. For instance, by sneaking in a default browser search that sells your attention to advertisers, or selling your computer's idle time to a botnet without your permission. This is egregious in my opinion, but it's not the indirect business model that is bad here, it's the hidden costs that they lie about or obfuscate.
jimrandomh (1y, 6 points): User acquisition costs are another frame for approximately the same heuristic. If software has ads in an expected place, and is selling data you expect them to sell, then you can model that as part of the cost. If, after accounting for all the costs, it looks like the software's creator is spending more on user acquisition than they should be getting back, it implies that there's another revenue stream you aren't seeing, and the fact that it's hidden from you implies that you probably wouldn't approve of it.
Matt Goldenberg (1y, 4 points): Ahhh I see, so you're making roughly the same distinction of "hidden revenue streams".

Lack-of-adblock is a huge mistake. On top of the obvious drain on attention, slower loading times everywhere, and surveillance, ads are also one of the top mechanisms by which computers get malware.

When I look over someone's shoulder and see ads, I assume they were similarly careless in their choice of which books to read.

Said Achmiz (2mo, 8 points): Note that many people don’t know about ad blockers [https://www.gwern.net/Ads#they-just-dont-know]: (I highly recommend reading that entire section of the linked page, where gwern describes the results of several follow-up surveys he ran, and conclusions drawn from them.)
plex (2mo, 5 points): One day we will be able to wear glasses which act as adblock for real life, replacing billboards with scenic vistas.
Matt Goldenberg (2mo, 6 points): And they will also be able to do the opposite, placing ads over scenic vistas.
Viliam (2mo, 2 points): They will also send data about "what you looked at, how long" to Google servers, to prepare even better customized ads for you. But people will be more worried about giant pop-up ads suddenly covering their view while they are trying to cross the street.

Some people have a sense of humor. Some people pretend to be using humor, to give plausible deniability to their cruelty. On April 1st, the former group becomes active, and the latter group goes quiet.

This is too noisy to use for judging individuals, but it seems to work reasonably well for evaluating groups and cultures. Humor-as-humor and humor-as-cover weren't all that difficult to tell apart in the first place, but I imagine a certain sort of confused person could be pointed at this in order to make the distinction salient.

Yoav Ravid (7d, 5 points): I'm not sure that's true. I think the second kind also uses April 1st as a way to justify more cruelty than usual.

There is a rumor of RSA being broken. By which I mean something that looks like a strange hoax made it to the front on Hacker News. Someone uploaded a publicly available WIP paper on integer factorization algorithms by Claus Peter Schnorr to the Cryptology ePrint Archive, with the abstract modified to insert the text "This destroyes the RSA cryptosystem." (Misspelled.)

Today is not the Recurring Internet Security Meltdown Day. That happens once every month or two, but not today in particular.

But this is a good opportunity to point out a non-obvious best pra...

Gerald Monroe (1mo, 1 point): While this sounds cool, what sort of activities are you thinking you need to encrypt? Consider the mechanisms for how information leaks. a. Are you planning or coordinating illegal acts? The way you get caught is one of your co-conspirators reporting you. b. Are you protecting your credit card and other financial info? The way it leaks is a third party handler, not your own machine. c. Protecting trade secrets? The way it gets leaked is one of your coworkers copying the info and bringing it to a competitor. d. Protecting crypto? Use an offline wallet. Too much protection and you will have the opposite problem. Countless people - probably a substantial fraction of the entire population, maybe the majority - have had their credit and identity records leaked in various breaches. They have easily hackable webcams exposed on the internet. Skimmers trap their credit card periodically. And...nothing major happens to them.

The Diamond Princess cohort has 705 positive cases, of which 4 are dead and 36 serious or critical. In China, the reported ratio of serious/critical cases to deaths is about 10:1, so figure there will be 3.6 more deaths. From this we can estimate a case fatality rate of 7.6/705 ~= 1%. Adjust upward to account for cases that have not yet progressed from detection to serious, and downward to account for the fact that the demographics of cruise ships skew older. There are unlikely to be any undetected cases in this cohort.
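
The arithmetic in the paragraph above, spelled out (note that the replies below question the "+0.1 × serious" step):

```python
positive, dead, serious = 705, 4, 36
expected_additional_deaths = serious / 10      # using the reported ~10:1 serious:death ratio
cfr = (dead + expected_additional_deaths) / positive
print(round(cfr, 4))                           # 0.0108, i.e. about 1%
```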

Steven Byrnes (1y, 5 points): Hang on, maybe I'm being stupid, but I don't get the 3.6. Why not say 36+4=40 serious/critical cases, of which 10% = 4 have already passed away?
jimrandomh (1y, 5 points): You're right, adding deaths+.1*serious the way I did seems incorrect. But, since not all of the serious cases have recovered yet, that would seem to imply that the serious:deaths ratio is worse in the Diamond Princess than it is in China, which would be pretty strange. It's not clear to me that the number of serious cases is as up to date as the number of positive tests. So, widen the error bars some more I guess?
Dagon (1y, 4 points): How many passengers were exposed? Capacity of 2670; I haven't seen (and haven't looked that hard) how many actual passengers and crew were aboard when the quarantine started. So maybe over 1/4 of exposed became positive, 6% of those positives become serious, and 10% of that fatal. Assuming it escapes quarantine and most of us are exposed at some point, that leads to an estimate of 0.0015 (call it 1/6 of 1%) fatality. Recent annual deaths are 7.7 per 1000, so best guess is this adds 20%, assuming all deaths happen in the first year and any mitigations we come up with don't change the rate by much. I don't want to downplay 11.5 million deaths, but I also don't want to overreact (and in fact, I don't know how to overreact usefully). I'd love to know how many of the serious cases have remaining disability. Duration and impact of survival cases could easily be the difference between unpleasantness and disruption that doubles the death rate, and societal collapse that kills 10x or more as many as the disease does directly.