(...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties.
- Saharon Shelah
As a true-born Dutchman I endorse Crocker's rules.
For most of my writing see my short-forms (new shortform, old shortform)
Twitter: @FellowHominid
Personal website: https://sites.google.com/view/afdago/home
I remember hearing about the paper from a friend and thinking it couldn't possibly be true in a non-trivial sense. To someone with even a modicum of experience in logic, a computable procedure assigning probabilities to arbitrary logical statements in a natural way will surely hit a no-go diagonalization barrier.
Logical Inductors get around the diagonalization barrier in a very clever way. I won't spoil how here. I recommend the interested reader watch Andrew Critch's talk on Logical Induction.
It was the main thing that convinced me that MIRI != clowns but was in fact doing substantial research.
The Logical Induction paper has a fairly thorough discussion of previous work. Relevant previous work to mention is de Finetti's on betting and probability, previous work by MIRI & associates (Herreshof, Taylor, Christiano, Yudkowsky...), the work of Shafer-Vovk on financial interpretations of probability & Shafer's work on aggregation of experts. There is also a field which doesn't have a clear name that studies various forms of expert aggregation. Overall, my best judgement is that nobody else was close before Garrabrant.
Actually, since we're on the subject of scientific discoveries...
Singular Learning Theory is another way of "talking about the breadth of optima" in the same sense that Newton's Universal Law of Gravitation is another way of "talking about Things Falling Down".
Yes, beautiful example! Van Leeuwenhoek was the one-man ASML of the 17th century. In this case, we actually have evidence of the counterfactual impact, as other lensmakers trailed van Leeuwenhoek by many decades.
It's plausible that high-precision measurement and fabrication is the key bottleneck in most technological and scientific progress - it's difficult to oversell the importance of van Leeuwenhoek.
Antonie van Leeuwenhoek made more than 500 optical lenses. He also created at least 25 single-lens microscopes, of differing types, of which only nine have survived. These microscopes were made of silver or copper frames, holding hand-made lenses. Those that have survived are capable of magnification up to 275 times. It is suspected that Van Leeuwenhoek possessed some microscopes that could magnify up to 500 times. Although he has been widely regarded as a dilettante or amateur, his scientific research was of remarkably high quality.[39]
The single-lens microscopes of Van Leeuwenhoek were relatively small devices, the largest being about 5 cm long.[40][41] They are used by placing the lens very close in front of the eye. The other side of the microscope had a pin, where the sample was attached in order to stay close to the lens. There were also three screws to move the pin and the sample along three axes: one axis to change the focus, and the two other axes to navigate through the sample.
Van Leeuwenhoek maintained throughout his life that there are aspects of microscope construction "which I only keep for myself", in particular his most critical secret of how he made the lenses.[42] For many years no one was able to reconstruct Van Leeuwenhoek's design techniques, but in 1957, C. L. Stong used thin glass thread fusing instead of polishing, and successfully created some working samples of a Van Leeuwenhoek design microscope.[43] Such a method was also discovered independently by A. Mosolov and A. Belkin at the Russian Novosibirsk State Medical Institute.[44] In May 2021 researchers in the Netherlands published a non-destructive neutron tomography study of a Leeuwenhoek microscope.[22] One image in particular shows a Stong/Mosolov-type spherical lens with a single short glass stem attached (Fig. 4). Such lenses are created by pulling an extremely thin glass filament, breaking the filament, and briefly fusing the filament end. The neutron tomography article notes this lens creation method was first devised by Robert Hooke rather than Leeuwenhoek, which is ironic given Hooke's subsequent surprise at Leeuwenhoek's findings.
Here are some reflections I wrote on the work of Grothendieck and relations with his contemporaries & predecessors.
Take it with a grain of salt - it is probably too deflationary of Grothendieck's work. It pushes back a little on the mythical narratives common in certain mathematical circles where Grothendieck is held to be a Christ-like figure. Nevertheless, it would probably not be an exaggeration to say that Grothendieck's purely scientific contributions [as opposed to real-life consequences] were comparable to those of Einstein.
Here's a document called "Upper and Lower Bounds for Alien Civilizations and Expansion Rate" that I wrote in 2016. Hanson et al.'s Grabby Aliens paper was submitted in 2021.
The draft is very rough. Claude summarizes it thusly:
The document presents a probabilistic model to estimate upper and lower bounds for the number of alien civilizations and their expansion rates in the universe. It shares some similarities with Robin Hanson's "Grabby Aliens" model, as both attempt to estimate the prevalence and expansion of alien civilizations, considering the idea of expansive civilizations that colonize resources in their vicinity.
However, there are notable differences. Hanson's model focuses on civilizations expanding at the highest possible speed and the implications of not observing their visible "bubbles," while this document's model allows for varying expansion rates and provides estimates without making strong claims about their observable absence. Hanson's model also considers the idea of a "Great Filter," which this document does not explicitly discuss.
Despite these differences, the document implicitly contains the central insight of Hanson's model – that the expansive nature of spacefaring civilizations and the lack of observable evidence for their existence imply that intelligent life is sparse and far away. The document's conclusions suggest relatively low numbers of spacefaring civilizations in the Milky Way (fewer than 20) and the Local Group (up to one million), consistent with the idea that intelligent life is rare and distant.
The document's model assumes that alien civilizations will become spacefaring and expansive, occupying increasing volumes of space over time and preventing new civilizations from forming in those regions. This aligns with the "grabby" nature of aliens in Hanson's model. Although the document does not explicitly discuss the implications of not observing "grabby" aliens, its low estimates for the number of civilizations implicitly support the idea that intelligent life is sparse and far away.
The draft was never finished as I felt the result wasn't significant enough. To be clear, the Hanson-Martin-McCarter-Paulson paper contains more detailed models and much more refined statistical analysis. I didn't pursue these ideas further.
I wasn't part of the rationality/EA/LW community. Nobody I talked to was interested in these questions.
Let this be a lesson for young people: Don't assume. Publish! Publish in journals. Publish on LessWrong. Make something public even if it's not in a journal!
Idk the Nobel prize committee thought it wasn't significant enough to give out a separate prize 🤷
I am not familiar enough with the particulars to have an informed opinion. My best guess is that, in general, statements to the effect of "yes, X also made scientific contribution A, but Y phrased it better" overestimate the actual counterfactual scientific impact of Y. They generically weight how well outsiders can understand the work too heavily vis-à-vis specialists/insiders, who have enough hands-on experience that the value-add of a simpler/neater formalism is not that high (or is even a distraction).
The reason Dick Feynman is so much more well-known than Schwinger and Tomonaga surely cannot be entirely unrelated to Feynman's magnetic charisma.
Depending on what one means by 'learn' this is provably impossible. The reason has nothing to do with the transformer architecture (which one shouldn't think of as a canonical architecture in the grand scheme of things anyway).
There is a 2-state generative HMM such that the optimal predictor of its output provably requires an infinite number of states. This holds for any model of computation and any architecture.
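A small numerical illustration of the phenomenon (not a proof, and the specific transition/emission matrices below are my own arbitrary choice, not the canonical counterexample): run the exact Bayesian filter on output sampled from a generic 2-state HMM and count the distinct belief states it passes through. An optimal predictor must track this belief state, and for generic parameters it essentially never revisits the same value, so the count keeps growing with sequence length.

```python
import numpy as np

# A generic 2-state HMM (hypothetical example parameters):
# T[i, j] = P(next hidden state j | current state i)
# E[j, o] = P(observation o | hidden state j)
T = np.array([[0.7, 0.3],
              [0.4, 0.6]])
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def belief_update(b, obs):
    """One step of the exact Bayesian filter: predict, then condition."""
    b = b @ T            # propagate belief through the transition matrix
    b = b * E[:, obs]    # weight by the likelihood of the observation
    return b / b.sum()   # renormalize to a probability distribution

rng = np.random.default_rng(0)

# Sample a trajectory from the HMM and record the filtered belief states.
state = 0
b = np.array([1.0, 0.0])
seen = set()
for _ in range(2000):
    state = rng.choice(2, p=T[state])
    obs = rng.choice(2, p=E[state])
    b = belief_update(b, obs)
    seen.add(tuple(np.round(b, 10)))

# The set of distinct belief states grows with the length of the sequence:
# no finite-state machine can track them all exactly.
print(len(seen))
```

For generic parameters the two observation-conditioned update maps generate an orbit that never closes up, which is why the count of distinct beliefs scales with the trajectory length rather than saturating at some finite number.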
Of course, that's maybe not what you intend by 'learn'. If by 'learn' you mean express the underlying function of an HMM, then the answer is yes, by the Universal Approximation Theorem (a very fancy name for a trivial application of the Stone-Weierstrass theorem).
Hope this helped. 😄
Not inconceivable, I would even say plausible, that the impact of surreal numbers & combinatorial game theory is still in the future.
Did I just say SLT is the Newtonian gravity of deep learning? Hubris of the highest order!
But also yes... I think I am saying that
EDIT: no hype about future work. Wait and see! :)