Comments

I'm unlikely to reply to further object-level explanation of this, sorry. 

No worries! I'll reply anyway for anyone else reading this, but it's fine if you don't respond further.

Giving up on transhumanism as a useful idea of what-to-aim-for or identify as, separate from how much you personally can contribute to it.

It sounds like we have different ideas of what it means to identify as something. For me, one of the important functions of identity is as a model of what I am, and as what distinguishes me from other people. For instance, I identify as Finnish because of reasons like having a Finnish citizenship, having lived in Finland for my whole life, Finnish being my native language etc.; these are facts about what I am, and they're also important for predicting my future behavior.

For me, it would feel more like rationalization if I stopped contributing to something like transhumanism but nevertheless continued identifying as a transhumanist. My identity is something that should track what I am and do, and if I don't do anything that would meaningfully set me apart from people who don't identify as transhumanists... then that would feel like the label was incorrect and imply wrong kinds of predictions. Rather, I should just update on the evidence and drop the label.

As for transhumanism as a useful idea of what to aim for, I'm not sure of what exactly you mean by that, but I haven't started thinking "transhumanism bad" or anything like that. I still think that a lot of the transhumanist ideals are good and worthy ones and that it's great if people pursue them. (But there are a lot of ideals I think are good and worthy ones without identifying with them. For example, I like that museums exist and that there are people running them. But I don't do anything about this other than occasionally visit one, so I don't identify as a museum-ologist despite approving of them.)

More directly: avoiding "pinning your hopes on AI" (which, depending on how I'm supposed to interpret this, could mean "avoiding solutions that ever lead to aligned AI occurring" or "avoiding near-term AI, period" or "believing that something other than AI is likely to be the most important near-future thing").

Hmm, none of these. I'm not sure of what the first one means but I'd gladly have a solution that led to aligned AI, I use LLMs quite a bit, and AI clearly does seem like the most important near-future thing.

"Pinning my hopes on AI" meant something like "(subconsciously) hoping to get AI here sooner so that it would fix the things that were making me anxious", and avoiding that just means "noticing that therapy and conventional things like that work better for fixing my anxieties than waiting for AI to come and fix them". This too feels to me like actually updating on the evidence (noticing that there's something better that I can do already and I don't need to wait for AI to feel better) rather than like rationalizing something.

Okay! It wasn't intended as prescriptive but I can see it as being implicitly that.

What do you think I'm rationalizing? 

That's a pseudonym Duncan used at one point, see e.g. the first line of this comment.

That makes sense to me, though I feel unclear about whether you think this post is an example of that pattern / whether your comment has some intent aimed at me?

There's something about this framing that feels off to me and makes me worry that it could be counterproductive. I think my main concerns are something like:

1) People often figure out what they want by pursuing things they think they want and then updating on the outcomes. So making them less certain about their wants might prevent them from pursuing the things that would give them the information for actually figuring it out.

2) I think that people's wants are often underdetermined and they could end up wanting many different things based on their choices. E.g. most people could probably be happy in many different kinds of careers that were almost entirely unlike each other, if they just picked one that offered decent working conditions and committed to it. I think this is true for a lot of things that people might potentially want, but to me the framing of "figure out what you want" implies that people's wants are a lot more static than this.

I think this 80K article expresses these kinds of ideas pretty well in the context of career choice:

The third problem [with the advice of "follow your passion"] is that it makes it sound like you can work out the right career for you in a flash of insight. Just think deeply about what truly most motivates you, and you’ll realise your “true calling”. However, research shows we’re bad at predicting what will make us happiest ahead of time, and where we’ll perform best. When it comes to career decisions, our gut is often unreliable. Rather than reflecting on your passions, if you want to find a great career, you need to go and try lots of things.

The fourth problem is that it can make people needlessly limit their options. If you’re interested in literature, it’s easy to think you must become a writer to have a satisfying career, and ignore other options.

But in fact, you can start a career in a new area. If your work helps others, you practice to get good at it, you work on engaging tasks, and you work with people you like, then you'll become passionate about it. The ingredients of a dream job that we've found are most supported by the evidence are all about the context of the work, not the content. Ten years ago, we would have never imagined being passionate about giving career advice, but here we are, writing this article.

Many successful people are passionate, but often their passion developed alongside their success, rather than coming first. Steve Jobs started out passionate about Zen Buddhism. He got into technology as a way to make some quick cash. But as he became successful, his passion grew, until he became the most famous advocate of "doing what you love".

Comment retracted because right after writing it, I realized that the "leastwrong" is a section on LW, not its own site. I thought there was a separate leastwrong.com or something. In this case, I have much less of a feeling that it makes a global claim.

Edit: An initial attempt: "The LeastWrong" feels a bit like a global claim of "these are the least wrong things on the internet".

This is how it feels to me. 

Whether you can find a logic in which that interpretation is not coherent doesn't seem relevant to me. You can always construct a story according to which a particular association is actually wrong, but that doesn't stop people from having that association. (And I think there are reasonable grounds for people to be suspicious about such stories, in that they enable a kind of motte-and-bailey: using a phrasing that sends the message X, while saying that of course we don't mean to send that message and here's an alternative interpretation that's compatible with that phrasing. So I think that a lot of the people who'd find the title objectionable would be unpersuaded by your alternative interpretation, even assuming that they bothered to listen to it, and they would not be unreasonable to reject it.)

[This comment is no longer endorsed by its author]

Software/internet gives us much better ability to find.

And yet...

The past few decades have seen a steep decline in the size of people's circle of friends and a growing number of people who don't have any friends whatsoever. The share of Americans across all age groups who report having "no close friends at all" now stands at around 12%, per the Survey Center on American Life.

The percentage of people who say they don't have a single close friend has quadrupled in the past 30 years, according to the Survey Center on American Life.

It's been known that friendlessness is more common among men, but the trend is nonetheless affecting everyone. The general change since 1990 is illustrated below.

Taken from "Adrift: America in 100 Charts" (2022), pg. 223. As a detail, note the drastic drop in the number of people with 10+ friends, now a small minority.

The State of American Friendship: Change, Challenges, and Loss (2021), pg. 7

Although these studies are more general estimates of the entire population, the picture looks worse when we focus exclusively on generations that are more digitally native. Polling American millennials specifically, a pre-pandemic 2019 YouGov poll found that 22% have "zero friends" and 30% have "no best friends." For those born between 1997 and 2012 (Generation Z), no widespread, credible study has yet been done on this question, but if you're adjacent to internet spaces, you already intuitively grasp that these same online catalysts are deepening for the next generation.

Still, the fact that individual companies, for instance, develop layers of bureaucracy is not an argument against having a large economy.

This is true in principle, but in practice population growth has led to the creation of larger companies. Here's what ChatGPT said when I asked it what proportion of the economy is controlled by the biggest 100 companies:

For a rough estimate, consider the market capitalization of the 100 largest public companies relative to GDP. As of early 2023, the market capitalization of the S&P 100, which includes the 100 largest U.S. companies by market cap, was several trillion USD, while the U.S. GDP was about 23 trillion USD. This suggests a significant but not dominant share, with the caveat that market cap doesn't directly translate to economic contribution.

And if the population in every country would grow, then we'd end up with larger governments even if we kept the current system and never established a world government. To avoid governments getting bigger, you'd need to actively break up countries into smaller ones as their population increased. That doesn't seem like a thing that's going to happen.

A possible countertrend would be something like diseconomies of scale in governance. I don't know the right keywords to find the actual studies on this. Still, it generally seems to me like smaller nations and companies are better run than bigger ones, as the larger ones develop more middle management and organizational layers that are mainly incentivized to perpetuate themselves rather than to do the thing they're supposedly doing. This doesn't just waste the resources of the government itself; it also damages everyone else, as the legislation the government enacts gets worse and worse. And the larger the system becomes, the harder any attempt to reform it becomes.
