Jonathan_Graehl

Comments

Frame Control

I'm not seeing any difference between pressure and aggression these days.

Why don't our vaccines get updated to the delta spike protein?

Pushing out a revised vaccine costs money and isn't expected to earn any additional money. And everyone knows this. Unofficial market collusion regularly solves harder coordination problems than this; you don't need explicit communication at all.

I'll grant that we'll hear some competitive "ours works better on variant X" marketing, but a new, even faster approval track would be needed if we really wanted rapid spike-protein updates.

As evhub mentions, the antibodies you make in response to the first vaccine you're exposed to are what get manufactured every time you encounter a similar-enough provocation. It may be impossible to switch the learned immune response without some specially designed "different enough" protein that is hoped to also protect against the latest variant. I buy the 'original antigenic sin' concept: there has to be a reason we're not already naturally immune to flu and corona-colds after many previous encounters.

Paxlovid Remains Illegal: 11/24 Update

Why are you quoting without correction someone who thinks 5 billion divided by 10 million is 500,000 (it's 500)?
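Spelled out, since the numbers come straight from that comment:

$$\frac{5 \times 10^{9}}{10^{7}} = 5 \times 10^{2} = 500$$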

Insights from Modern Principles of Economics

Presumably perfect competition 'defects' from perfect price discrimination: any firm trying to charge some customers more than marginal cost gets undercut by competitors.

Is GPT-3 already sample-efficient?

In general, a language model will 'know' the sentence surrounding the single occurrence of a rare name. I don't think you learn much here if there are enough parameters available to support that memory.

Is GPT-3 already sample-efficient?

Perhaps GPT-3 has more parameters than are needed to roughly memorize its very large training data. That would be good, since the data contains some low-quality garbage, false claims, etc. (think of them as 'noise'). I believe the GPT-n are adding parameters faster than training data. Here's my summary of a talk (about a paper) that suggests this is the right move:

https://www.youtube.com/watch?v=OzGguadEHOU is Microsoft's Sebastian Bubeck talking about why seemingly overparameterized neural models are necessary for learning (due to label noise?). Key points:

  1. Validation 'early stopping' of training duration or of model-size scaling is a mistake: once you're past the initial hump that would trigger early stopping, overfitting is 'benign' (already known, dubbed 'double descent').
  2. Once you can defeat adversarial attacks, you're probably using enough parameters.
  3. He (+ an intern) proves that in order to memorize the label-noised data set such that small perturbations in the noise don't change the predicted output, you need a parameter set much larger than the data set (merely memorizing the training data should be possible within some constant factor of its size).
  4. He predicts that ImageNet (image labeling) could benefit from 10-100 billion parameters instead of the current sub-1-billion.

(Obviously the GPT-n are language models, but they can be thought of as having an output: the masked word, or the sentence-before-or-after, or whatever target they're using to train.)
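The 'benign overfitting' / double-descent behavior is easy to reproduce in miniature. Here's a minimal sketch (my own illustration, not from the talk) using minimum-norm least squares on random ReLU features of a noisy 1-D target: test error typically peaks near the interpolation threshold (number of features roughly equal to number of training points) and falls again as the model becomes heavily overparameterized.

```python
# Minimal double-descent sketch (illustrative only, not from Bubeck's talk).
# Fit min-norm least squares on random ReLU features of a noisy 1-D target and
# watch test error rise near the interpolation threshold (n_feat ~= n_train),
# then fall again in the heavily overparameterized regime.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise=0.3):
    x = rng.uniform(-1, 1, size=(n, 1))
    y = np.sin(3 * x).ravel() + noise * rng.standard_normal(n)
    return x, y

def relu_features(x, W, b):
    # Random, untrained ReLU features: the only fitted parameters are the
    # linear coefficients on top of them.
    return np.maximum(x @ W + b, 0.0)

n_train, d_max = 40, 2000
x_tr, y_tr = make_data(n_train)
x_te, y_te = make_data(500, noise=0.0)

W_all = rng.standard_normal((1, d_max))
b_all = rng.uniform(-1, 1, size=d_max)

for n_feat in [5, 10, 20, 40, 80, 200, 1000, 2000]:
    W, b = W_all[:, :n_feat], b_all[:n_feat]
    Phi_tr = relu_features(x_tr, W, b)
    Phi_te = relu_features(x_te, W, b)
    # lstsq returns the minimum-norm solution when the system is
    # underdetermined, i.e. when n_feat > n_train (overparameterized regime).
    coef, *_ = np.linalg.lstsq(Phi_tr, y_tr, rcond=None)
    test_mse = np.mean((Phi_te @ coef - y_te) ** 2)
    print(f"{n_feat:5d} features  test MSE = {test_mse:.3f}")
```

Running it, the test MSE usually blows up around 40 features (= the training-set size) and comes back down for 1000-2000 features, which is the shape of the double-descent curve.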

Covid 9/23: There Is a War

Two reasons you could recommend boosters for vulnerable only:

  1. global 'first doses first' thinking
  2. awareness that eradicating covid by rapid vaccination to herd immunity is futile given current vaccine effectiveness and adoption, plus hope of slowing the Marek's-disease-like selection of more vaccine-resistant strains, so that more of the benefit is preserved for the vulnerable

It does seem that, temporary supply shortages aside, you should advocate universal 'vaccination' (say, with Moderna) iff you also advocate ongoing doses until a real vaccine is available.

Long Covid Is Not Necessarily Your Biggest Problem

Your contrary cite notwithstanding, I predict Delta will end up less damaging on average, and that more cases will go uncounted due to its mildness. That undercounting may also drive some overestimation of its virulence. It does appear to spread well enough that it's a question of when, not if, you'll be exposed.
