Thanks for the links! I definitely focus on the essential parts when I have limited resources, so I personally don't need the versions without comments, but I find the alternate link for the Sequences quite aesthetically appealing, which is nice.
As for the anthropic reasoning, there are all kinds of scenarios that could play out, but I would argue they can be grouped into three anthropic categories. The first is doom soon, meaning that everyone dies soon (no more souls). The second is galactic expansion with huge numbers of new conscious entities (many souls). The third is galactic expansion with only the conscious entities that have already existed (same souls). Assuming many-worlds, no-more-souls is too unlikely to happen in all the worlds, though it will surely happen in some; the same goes for many-souls. But given that we live in the current time period, one can infer that most worlds are same-soul worlds.
Hi all, I’m Hari. Funnily enough, I found LessWrong after watching a YouTube video on R***’s b*******. (I already had some grasp of the dynamics of internet virality, so no I did not see it as saying anything substantive about the community at large.)
My background spans many subjects, but I tend to focus on computer science, psychology, and statistics. I’m really interested in figuring out the most efficient way to do various things: the most efficient way to learn, the fastest way of arriving at correct beliefs, how to communicate the most information in the fewest words, etc. So I read the Sequences and LessWrong just felt like a natural fit. And as you can imagine, I don’t have much tolerance for broken, inefficient systems, so I quit college and avoid large parts of the internet.
LessWrong is like a breath of fresh air away from all the dysfunction, which I’m really grateful for. (My only problem is that I can spend hours lost in comment sections and rabbit holes!) I think it’s a good time for me to start contributing some of my own thoughts. Here are a few questions/requests I have:
Firstly, I’ve been trying to refine my information diet, but it’s harder with blogs whose older posts are valuable. For example, I see Marginal Revolution mentioned often, but they don’t have a “best of” post that I can start with. There’s also the dreaded linkrot.
Secondly, I’m wondering to what extent expert blind spot has been covered on LW? It seems really important given the varied backgrounds and number of polymaths here.
Thirdly, I wanted to get some feedback on some of my thoughts on anthropics. After scanning some prior work, much of it seems unnecessarily long and technical. But I think it does have real practical implications that are worth thinking through.
If you combine anthropics, many-worlds, timeless physics, and some decision theory, there is a consistent logic here. The simplest way I can think of to explain this is if one imagines a timeless dartboard that has the distribution of everyone's conscious experience across time. The arbitrary dart throw is more likely to land on people with the most conscious experience across time. This addresses the anthropic trilemma—you still lose because your conscious experience across time in losing worlds vastly outweighs the trillion yous in winning worlds in that thin slice of time.
This then seems to imply doom soon, along the lines of Nick Bostrom's Doomsday argument. But the probability of doom this implies seems implausibly high. So perhaps humanity decides to expand current consciousnesses rather than creating new ones. There are decision-theoretic reasons for humanity to support this: if you didn’t contribute anything to the intelligence explosion, then why should you exist?
One major implication here is that you don't need to despair because aligned ASI is practically guaranteed in at least a few worlds. (But that doesn’t mean existential risk reduction is useless! It’s more like the work that’s being done is to expand the range of worlds that make it, rather than saving only one.)
What do you think?
Two more examples of my own:
Sprinting—Usain Bolt is a world-class sprinter, but does he know the underlying physics behind sprinting? No. What he has is his genes and the muscle memory that resulted from years of training his form. The fact that he doesn't know the physics implies what I might call a ghost that's acting when Usain sprints. It's a ghost because there is no knowledge of physics in there, just neural firing patterns, remnants, that imply lots of training in the past. Now if you were to capture his sprinting with a camera and feed those pixels to a biomechanist for interpretation, only then would a deeper understanding be present. The biomechanist can look at those pixels and gain a deeper understanding that generalizes further.
Forecasting—Suppose that you have no prior knowledge of physics and you're forecasting the result of a collision between two objects inside some predefined volume. (So there's no influence from anything outside.) If all the starting examples you are given contain air, you might naively predict that there would be noise in a vacuum, but you would be wrong. In order to generalize further, you need a deeper understanding, a model of physics with more gears.[1]
All that is to say that, at least at the moment, I have longer timelines than 2028. I don't think LLMs are capable of kicking off real RSI, where they improve themselves over and over again. A hint here is that you get better results with tokens than with individual letters, which implies that they are mostly just ghosts combining tokens into interesting patterns, patterns that imply lots of training in the past. But since the output is tokens, it can be easily interpreted by a human, and that is where the real understanding lies.
See Hofstadter, Gödel, Escher, Bach, 82. ↩︎