I think enshittification is playing a role here in people perceiving things to be worse. When things like streaming services or ride-hailing get noticeably worse in a short timeframe, that has an outsized effect on human perception—see the availability heuristic. Things may in fact be much better over the long term, but you have to say “remember when you had to deal with X?” to surface that over the latest greedy moves. And people aren’t exactly wrong to point out that certain things are getting worse in the short term.
I don't think this is quite as true for high-status people (when they get recognized). Maybe this is part of the pain of being invisible—it implies that you're low-status, which is true for most people in most contexts in the modern world, so it's not actually that bad.
You also run into Dunbar's number and related phenomena with high concentrations of people, so it's presumably even harder to be salient or high-status in an urban area, for example.
It seems weird to ask for images of handwriting? So it’s not clear how much this matters.
This is a thing—in my stats class I had to scan my handwriting, turn it into a PDF, and turn it in online.
Explain?
I'd argue that this is more broadly about having a forcing function, of which status dynamics are a subset. I find that when I tie myself to the mast, I have more willpower, which probably has something to do with changes in the dopaminergic anticipation of reward. In the case of status, high-status people have a lot of sway over others in their tribal context, so most people are naturally averse to disobeying them there. You wouldn't want to get exiled from the tribe, after all.
The problem is that this doesn't work for more serious cases that might also involve, e.g., depression. My best guess at a more comprehensive theory is that willpower is intricately linked with consciousness and intelligence. More intelligent minds learn and tire out more quickly (from faster fine-tuning, maybe?), and that fatigue has to be overridden with more willpower. The less conscious you are and the less willpower you exercise, the more likely you are to be doing a rote task that's already been fine-tuned, in which case there's not much efficient cross-domain optimization going on.
Two more passes of my own:
Sprinting—Usain Bolt is a world-class sprinter, but does he know the underlying physics behind sprinting? No. What he has is his genes and the muscle memory that resulted from years of training his form. The fact that he doesn't know the physics suggests that something I might call a ghost is acting when Usain sprints. It's a ghost because there is no knowledge of physics in there, just neural firing patterns: remnants that imply lots of training in the past. Now, if you were to capture his sprinting with a camera and feed those pixels to a biomechanist to be interpreted, only then is a deeper understanding present: the biomechanist can look at those pixels and gain an understanding that generalizes further.
Forecasting—Suppose that you have no prior knowledge of physics and you're forecasting the result of a collision between two objects inside some predefined volume. (So there's no influence from anything outside.) If all the starting examples you're given contain air, you might naively predict that a collision in a vacuum would also make noise, but you would be wrong. In order to generalize further, you need a deeper understanding, a model of physics with more gears.[1]
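To make the forecasting example concrete, here's a toy sketch of my own (the data and both predictors are invented for illustration): a shallow pattern-matcher that fits the training examples perfectly but fails out of distribution, versus a gears-level model that generalizes.

```python
# Each example: (has_air, objects_collide) -> makes_noise
training_data = [
    (True, True, True),    # collision in air -> noise
    (True, True, True),
    (True, False, False),  # no collision -> no noise
]  # note: every training example happens to contain air

def ghost_predict(has_air: bool, objects_collide: bool) -> bool:
    """Shallow pattern: 'collisions make noise.' Air never varied in
    training, so the ghost never learned that it matters."""
    return objects_collide

def gears_predict(has_air: bool, objects_collide: bool) -> bool:
    """Gears-level model: sound is a pressure wave, so it needs a medium."""
    return objects_collide and has_air

# Both fit the training data perfectly...
assert all(ghost_predict(a, c) == n for a, c, n in training_data)
assert all(gears_predict(a, c) == n for a, c, n in training_data)

# ...but only the gears-level model generalizes to a collision in a vacuum.
print(ghost_predict(has_air=False, objects_collide=True))  # True  (wrong)
print(gears_predict(has_air=False, objects_collide=True))  # False (right)
```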
All that is to say, at least at the moment, I have longer timelines than 2028. I don't think LLMs are capable of kicking off real RSI, where they improve themselves over and over again. A hint here is that you get better results feeding them tokens rather than individual letters. That implies they are mostly just ghosts combining tokens into interesting patterns that imply lots of training in the past. But since the output is tokens, it can easily be interpreted by a human, and that is where the real understanding lies.
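To illustrate the token point, here's a toy sketch (a two-token vocabulary I made up, not a real tokenizer): the model never sees letters, only opaque token IDs, so letter-level structure has to be memorized rather than read off the input.

```python
toy_vocab = {"straw": 3504, "berry": 8812}

def tokenize(text: str) -> list[int]:
    """Greedy longest-match tokenization over the toy vocabulary."""
    ids, i = [], 0
    while i < len(text):
        for piece in sorted(toy_vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                ids.append(toy_vocab[piece])
                i += len(piece)
                break
        else:
            raise ValueError(f"no token covers {text[i:]!r}")
    return ids

print(tokenize("strawberry"))  # [3504, 8812]
# The model's actual input is [3504, 8812]. Answering "how many r's?"
# requires having memorized the spelling behind each ID, not reading letters.
```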
[1] See Hofstadter, Gödel, Escher, Bach, p. 82.
Thanks for the links—I definitely do focus on the essential parts when I have limited resources. So I personally don't need versions without comments, but I find the alternate link for the Sequences quite aesthetically appealing, which is nice.
As for the anthropic reasoning, there are definitely all kinds of different scenarios that can play out, but I would argue that they clump into one of three categories for anthropic purposes. One is doom soon, meaning that everyone dies soon (no more souls). The second is galactic expansion with huge numbers of new conscious entities (many souls). The third is galactic expansion that only extends the conscious entities that have already existed (same souls). Assuming many-worlds, doom is too unlikely to happen in all the worlds, though it will surely happen in some; the same goes for many souls. But given that we find ourselves living in the current time period, one can infer that most worlds are same-souls worlds.
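As a rough sketch of the inference I have in mind (all the numbers here are illustrative values I picked, not derived from anything): write $E$ for "I find myself this early in history" and compare the three hypotheses with Bayes' rule:

$$P(\text{same} \mid E) = \frac{P(E \mid \text{same})\,P(\text{same})}{P(E \mid \text{doom})\,P(\text{doom}) + P(E \mid \text{many})\,P(\text{many}) + P(E \mid \text{same})\,P(\text{same})}$$

With equal priors of $1/3$, $P(E \mid \text{doom}) \approx P(E \mid \text{same}) \approx 1$, and $P(E \mid \text{many}) \approx 10^{-9}$ (early observer-moments are a vanishing fraction of a many-souls world), the many-souls term drops out and the posterior splits between doom and same souls. If doom-in-most-worlds is independently implausible, same souls is what remains.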
Hi all, I’m Hari. Funnily enough, I found LessWrong after watching a YouTube video on R***’s b*******. (I already had some grasp of the dynamics of internet virality, so no I did not see it as saying anything substantive about the community at large.)
My background spans many subjects, but I tend to focus on computer science, psychology, and statistics. I’m really interested in figuring out the most efficient way to do various things—the most efficient way to learn, the fastest way of arriving at the correct belief, how to communicate the most information in the fewest words, etc. So I read the Sequences, and LessWrong just felt like a natural fit. And as you can imagine, I don’t have much tolerance for broken, inefficient systems, so I quit college and avoid large parts of the internet.
LessWrong is like a breath of fresh air away from all the dysfunction, which I’m really grateful for. (My only problem is that I can spend hours lost in comment sections and rabbit holes!) I think it’s a good time for me to start contributing some of my own thoughts. Here are a few questions/requests I have:
Firstly, I’ve been trying to refine my information diet further, but this seems harder with blogs whose valuable posts are older. For example, I see Marginal Revolution mentioned often, but they don’t have a “best of” post that I can start with. There’s also the dreaded linkrot.
Secondly, I’m wondering to what extent expert blind spot has been covered on LW. It seems really important given the varied backgrounds and the number of polymaths here.
Thirdly, I wanted to get feedback on some of my thoughts on anthropics. After scanning through some prior work, it looks like a lot of it is unnecessarily long and more technical than it needs to be. But I think it does have real practical implications that are important to think through.
If you combine anthropics, many-worlds, timeless physics, and some decision theory, there is a consistent logic here. The simplest way I can think of to explain it is to imagine a timeless dartboard holding the distribution of everyone's conscious experience across time. An arbitrary dart throw is more likely to land on the people with the most conscious experience across time. This addresses the anthropic trilemma—you should still expect to lose, because your conscious experience across time in the losing worlds vastly outweighs that of the trillion yous who exist in winning worlds for only a thin slice of time.
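Here's a back-of-the-envelope version of that dart throw (every number below is an assumption I picked for illustration: lottery odds, copy count, durations). Weight each outcome by branch probability times copies times subjective duration, then ask where a uniformly random dart lands.

```python
HOUR = 3600.0
YEAR = 365 * 24 * HOUR

p_win = 1e-8                           # assumed lottery odds
copies_win, t_win = 1e12, 1 * HOUR     # a trillion brief yous if you win
copies_lose, t_lose = 1.0, 40 * YEAR   # one ordinary you if you lose

measure_win = p_win * copies_win * t_win
measure_lose = (1 - p_win) * copies_lose * t_lose

p_dart_wins = measure_win / (measure_win + measure_lose)
print(f"P(dart lands on a winning you) ~ {p_dart_wins:.3f}")  # ~ 0.028
```

With these numbers, even a trillion copies can't outweigh ordinary-length losing lives once the thin time slice and the long lottery odds are priced in.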
This then implies doom soon, as Nick Bostrom points out with the Doomsday argument. But the probability of doom it implies seems way too high. So perhaps humanity decides to expand current consciousnesses rather than create new ones. There are decision-theoretic reasons for humanity to support this—if you didn’t contribute anything to the intelligence explosion, then why should you exist?
One major implication here is that you don't need to despair, because aligned ASI is practically guaranteed in at least a few worlds. (But that doesn’t mean existential risk reduction is useless! It’s more that the work expands the range of worlds that make it, rather than saving only one.)
What do you think?
Alternative hypothesis: it's all about status. Or more specifically, the markers of status, similar to how beauty is a marker of health and fertility in women. Traits like bravery and confidence are markers of high status, and they're required to ask women out, escalate, initiate sex, etc.
Evolution didn't settle on something robustly aligned to maximizing fitness, but it does seem somewhat robust. That is, when people are consciously aware that status or health is being "faked" with confidence or makeup, that seems to have some negative effect if it's taken too far, yet faking in moderation is respected in and of itself, which is interesting.
(This is of course a generalization, and there are surely many data points that don't align with the overall statistical trend as I understand it.)
Reality could of course be some combination of the hypotheses, or nonconsent could be swallowed up by status. That is, nonconsent is itself a signal of high status, because high-status men tend to be more ambitious/driven, are unfortunately more able to get away with such things, etc.