Journal to myself as I read Volume III of The Feynman Lectures on Physics (as a commitment mechanism).
Chapter 1
Feynman begins by noting that physics at very small scales is nothing like everyday experience, so we will have to rely on an abstract approach. He then presents the double-slit experiment, first imagining bullets passing through two holes in a screen, then water waves, and finally the quantum behavior of electrons. I found myself checking that I could still derive the law of cosines. He emphasizes that all things, in fact, behave in the quantum way electrons do, although for large objects it is very hard to tell. I enjoyed the "practicality" of his descriptions, for example describing the electron gun as a heated tungsten wire in a box with a small hole in it. He concludes by introducing the uncertainty principle.
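To pin down the double-slit punchline for myself (these are the Chapter 1 formulas as I remember them): for bullets the probabilities through the two holes simply add, while for electrons the complex amplitudes add, and expanding the square gives exactly the law-of-cosines cross term:

```latex
\text{bullets:}\quad P_{12} = P_1 + P_2
\qquad
\text{electrons:}\quad P_{12} = |\phi_1 + \phi_2|^2
  = P_1 + P_2 + 2\sqrt{P_1 P_2}\cos\delta
```

where delta is the phase difference between the two paths.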
Chapter 2
This chapter is largely devoted to example realizations of the uncertainty principle. For example, if particles pass through a slit of width L, we know their position with an uncertainty of order L. However, the slit will give rise to diffraction, which reflects uncertainty in the particle's transverse momentum. If we narrow the slit, the diffraction pattern gets wider. The uncertainty principle is also used for a heuristic estimate of the size of a hydrogen atom. We write an energy for the electron, E = p^2/2m - q^2/r (in Gaussian units), where m and q are the mass and charge of the electron. If the momentum is of the order given by the uncertainty relation, p = ħ/r, we can substitute it into E and find the distance r that minimizes the energy. This yields the Bohr radius, about half an angstrom, which is the correct scale for atoms. The chapter concludes with a brief philosophical discussion of what is real, and of indeterminacy in quantum and classical mechanics.
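To check the estimate numerically (a minimal sketch in SI units, writing e^2 = q^2/4πε0 so that E(r) = ħ^2/2mr^2 - e^2/r and the minimum falls at r = ħ^2/me^2, the Bohr radius):

```python
import math

# CODATA constants (SI units)
hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
q    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m

e2 = q**2 / (4 * math.pi * eps0)   # Gaussian-style e^2, in J*m

# Setting dE/dr = 0 for E(r) = hbar^2/(2 m r^2) - e^2/r gives r = hbar^2/(m e^2)
r_min = hbar**2 / (m_e * e2)
E_min = hbar**2 / (2 * m_e * r_min**2) - e2 / r_min

print(f"r_min = {r_min:.3e} m (~{r_min * 1e10:.2f} angstrom)")  # ~0.53 angstrom
print(f"E_min = {E_min / q:.1f} eV")                            # ~ -13.6 eV
```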
You may be interested in inteins, which are protein domains that spontaneously excise themselves from the host protein (the N- and C-terminal pieces of the host are left stitched together).
I think minimal inteins are usually 100-200 amino acids long, and require quite specific residues positioned appropriately, so they will not meaningfully affect the 20^100 number here. Nevertheless, it is an existence proof of the kind of activity you have in mind.
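For scale, the number being discussed:

```latex
20^{100} = 10^{100 \log_{10} 20} \approx 10^{130}
```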
The option to buy SPY at $855 in January 2027 is going for $1.80 today, because most people don’t expect the price to get that high. But if in fact SPY increases in the intervening time by 50% from its present value ($582), as stipulated by kairos, then the option will ultimately be worth 1.5*582 - 855 ~ $18. I think this is where the 12x figure is coming from.
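A quick check of that arithmetic (ignoring the 100-share contract multiplier, fees, and discounting; all figures are the ones quoted above):

```python
spot, strike, premium = 582.0, 855.0, 1.80   # SPY today; Jan 2027 $855 call

s_expiry = 1.5 * spot                        # the stipulated 50% rise -> 873
payoff = max(0.0, s_expiry - strike)         # intrinsic value at expiry -> $18
print(f"payoff ~ ${payoff:.0f}/share, ~{payoff / premium:.0f}x the premium paid")
# prints ~10x with these exact numbers, in the ballpark of the 12x figure
```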
Has anyone thought about Kremer/Jones-like economic growth models (where larger populations generate more ideas, leading to superexponential growth) but where some ideas are bad? I think there's an interesting, loose analogy between these growth models and a model of the "tug of war" between passenger and driver mutations in cancer. In the absence of deleterious passengers, the tumor in this model grows superexponentially. The fact that fixation of a driver makes the whole population grow better is a bit like the non-rival nature of ideas. But the growth models seem to have no analog of the deleterious passengers: bad ideas that might still fix, stochastically, and reduce the technology prefactor "A".
Such a model might then exhibit a "critical population size" (as for lesion size in the cancer model) below which there is techno-cultural decline (ancient Tasmania?). And is there a social analog of "mutational meltdown"? In population genetics, if mutations arrive too quickly, beneficial and deleterious mutations get trapped in the same lineages (clonal interference) and cannot be independently selected. Perhaps cultural/technological change that comes too rapidly leads to memeplexes with mixtures of good and bad ideas, which are linked and so cannot be independently selected for or against…
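To make the tug of war concrete, here is a cartoon simulation (my own toy sketch; it is not the actual Kremer/Jones model or the cancer model, and all parameter values are invented for illustration). Drivers/good ideas fix at a rate that scales with population size (the standard ~2s fixation probability for beneficial mutations), passengers/bad ideas fix mainly by drift (the standard 2s/(e^{2Ns}-1) result, ~1/N for small populations and exponentially suppressed in large ones), and the growth rate is the running balance of the two:

```python
import numpy as np

rng = np.random.default_rng(0)

def tug_of_war(n0, generations=3000, r0=0.005, s_d=0.05, s_p=0.02,
               mu_d=1e-7, mu_p=1e-2):
    """Cartoon growth model. Each generation, drivers (good ideas) fix at
    rate ~ N*mu_d*2*s_d, passengers (bad ideas) fix at rate
    ~ N*mu_p*2*s_p/(exp(2*N*s_p)-1), and the population grows at
    r0 + s_d*drivers - s_p*passengers."""
    n, drivers, passengers = float(n0), 0, 0
    for _ in range(generations):
        drivers += rng.poisson(n * mu_d * 2 * s_d)
        arg = min(2 * n * s_p, 700.0)  # cap to avoid overflow in expm1
        passengers += rng.poisson(n * mu_p * 2 * s_p / np.expm1(arg))
        n *= np.exp(r0 + s_d * drivers - s_p * passengers)
        if n < 2:      # techno-cultural collapse
            return n
        if n > 1e12:   # superexponential takeoff
            return n
    return n

for n0 in (30, 100, 300, 1000):
    runs = [tug_of_war(n0) for _ in range(20)]
    print(f"N0={n0}: takeoff fraction = {np.mean([x > 1e12 for x in runs]):.2f}")
```

In this cartoon, small populations tend to melt down (passengers fix by drift faster than drivers arrive) while large ones take off, so a critical size does emerge.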
Note that in addition to any achiral antibiotics, we could also use the mirror-image versions of any chiral antibiotic. Even more powerfully, we could use mirror-image versions of toxins that act on all life (e.g. nucleoside analogs), which are normally hard to use because we share chirality with regular bacteria.
Is that TinyStories model a super-wide attention-only transformer (the topic of the mechanistic interp work and Buck’s post you cite)? I tried to figure it out briefly and couldn’t tell, but I bet it isn’t, and instead has extra stuff like an MLP block.
Regardless, in my view it would be a big advance to really understand how the TinyStories models work. Maybe they are “a bunch of heuristics” but maybe that’s all GPT-4, and our own minds, are as well…
Just want to flag that oseltamivir is not a vaccine; it is an antiviral drug.
I think in your first paragraph, you may be referring to: https://mason.gmu.edu/~gjonesb/IQandNationalProductivity.pdf
I believe the key issue here is with (i). Standard theories where the universe is infinitely large also suppose it was infinitely large at the moment of the big bang.
The discussion here may be helpful.
In the case of epigenetic memory based on freely-diffusing factors, the alternative "stable" states can probably be thought of as long-lived metastable states of the "real" stochastic system, which become stable fixed points in the limit where the number of particles N goes to infinity. In models, the switching time often grows exponentially with N. You may enjoy https://arxiv.org/abs/q-bio/0410003 or https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.208101.
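A minimal illustration of that exponential scaling (a toy self-activating gene simulated with the Gillespie algorithm; this is my own sketch with invented parameters, not the model from either paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def time_to_switch(omega, k0=0.05, k1=2.0, K=1.0, gamma=1.0):
    """Toy positive-feedback switch. Production propensity
    omega*(k0 + k1*c^2/(K^2 + c^2)) with c = x/omega; degradation gamma*x.
    Deterministically bistable (stable states near c ~ 0.06 and c ~ 1.3,
    unstable point near c ~ 0.7). Returns the first-passage time from the
    low state into the high basin."""
    x, t = 0, 0.0
    threshold = omega  # c = 1 lies past the unstable point, in the high basin
    while True:
        c = x / omega
        a_prod = omega * (k0 + k1 * c**2 / (K**2 + c**2))
        a_deg = gamma * x
        a_tot = a_prod + a_deg
        t += rng.exponential(1.0 / a_tot)
        x += 1 if rng.random() < a_prod / a_tot else -1
        if x >= threshold:
            return t

for omega in (5, 10, 20, 30):   # omega plays the role of particle number N
    times = [time_to_switch(omega) for _ in range(10)]
    print(f"N~{omega:3d}: mean low->high switching time ~ {np.mean(times):.3g}")
```

The mean first-passage time grows roughly exponentially with omega, which is the sense in which the low state is only metastable at finite N and becomes a true fixed point as N goes to infinity.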
For memory based on chemical modifications embedded along the genome, like DNA methylation, there isn't really a "large N" limit to take, and in my view things are less settled. You may enjoy https://pubmed.ncbi.nlm.nih.gov/17512413/ or (shameless plug) https://www.science.org/doi/10.1126/science.adg3053