Since April this year, there's been a huge growth in the number of posts about AI, while posts about rationality, world modeling, etc. have remained constant. The result is that much of the time, the LW frontpage is almost entirely AI content.

Looking at the actual numbers, we can see that during 2021, no core LessWrong tags[1] represented more than 30% of LessWrong posts. In 2022, especially starting around April, AI began massively dominating LessWrong posts.

Here are the total posts for each core tag each month for the past couple of years. In April 2022, most tags' popularity remains constant, but AI-tagged posts spike dramatically:

All posts with 0 or greater karma are included in this graph. When applying a higher threshold, the percentage of AI-tagged posts is even higher.

Even people pretty involved with AI alignment research have written to say "um, something about this feels kinda bad to me."

I'm curious to hear what various LW users think about the situation. Meanwhile, here are my own thoughts.

Is this bad?

Maybe this is fine. 

My sense of what happened was that in April, Eliezer posted MIRI announces new "Death With Dignity" strategy, and a little while later AGI Ruin: A List of Lethalities. At the same time, PaLM and DALL-E 2 came out. My impression is that this threw a brick through the overton window and got a lot of people going "holy christ AGI ruin is real and scary". Everyone started thinking a lot about it, and writing up their thoughts as they oriented.

Around the same time, a lot of alignment research recruitment projects (such as SERI MATS or Refine) started paying dividends, resulting in a new wave of people working fulltime on AGI safety.

Maybe it's just fine to have a ton of people working on the most important problem in the world?

Maybe. But it felt worrisome to Ruby and me. Some of those worries felt easier to articulate, others harder. Two major sources of concern:

There's some kind of illegible good thing that happens when you have a scene exploring a lot of different topics. It's historically been the case that LessWrong was a (relatively) diverse group of thinkers thinking about a (relatively) diverse group of things. If people show up and just see All AI, All the Time, people who might have other things to contribute may bounce off. We probably wouldn't lose this immediately.

AI needs Rationality, in particular. Maybe AI is the only thing that matters. But, the whole reason I think we have a comparative advantage at AI Alignment is our culture of rationality. A lot of AI discourse on the internet is really confused. There's such an inferential gulf about what sort of questions are even worth asking. Many AI topics deal with gnarly philosophical problems, while mainstream academia is still debating whether the world is naturalistic. Some AI topics require thinking clearly about political questions that tend to make people go funny in the head.

Rationality is for problems we don't know how to solve, and AI is still a domain we don't collectively know how to solve. 

Not everyone agrees that rationality is key, here (I know one prominent AI researcher who disagreed). But it's my current epistemic state. 

Whispering "Rationality" in your ear

Paul Graham says that different cities whisper different ambitions in your ear. New York whispers "be rich". Silicon Valley whispers "be powerful." Berkeley whispers "live well." Boston whispers "be educated."

It seems important for LessWrong to whisper "be rational" in your ear, and to give you lots of reading, exercises, and support to help you make it so.

As a sort of "emergency injection of rationality", we asked Duncan to convert the CFAR handbook from a PDF into a more polished sequence, and post it over the course of a month. But commissioning individual posts is fairly expensive, and over the past couple months the LessWrong team's focus has been to find ways to whisper "rationality" that don't rely on what people are currently posting.

Some actions we've done:

Improve Rationality Onboarding Materials

Historically, if you wanted to get up to speed on the LessWrong background reading, you had to click over to the /library page and start reading Rationality: A-Z. It required multiple clicks to even start reading, and there was no easy way to browse the entire collection and see what posts you had missed.

Meanwhile Rationality A-Z is just super long. I think anyone who's a longterm member of LessWrong or the alignment community should read the whole thing sooner or later – it covers a lot of different subtle errors and philosophical confusions that are likely to come up (both in AI alignment and in other difficult challenges). But, it's a pretty big ask for newcomers to read all ~400 posts. It seemed useful to have a "getting started" collection that people could read through in a weekend, to get the basics of the site culture.

This led us to redesign the library collection page (making it easier to browse all posts in a collection and see which ones you've already read), and to create the new Sequences Highlights collection.

Sequence Spotlights

There's a lot of other sequences that the LessWrong community has generated over the years, which seemed good to expose people to. We've had a "Curated Sequences" section of the library but never quite figured out a good way to present it on the frontpage. 

We gave curated sequences a try in 2017 but kept forgetting to rotate them. Now we've finally built an automated rotation system, and are building up a large repertoire of the best LW sequences which the site will automatically rotate through. 

More focused recommendations

We've currently filtered the randomized "from the archives" posts to show Rationality and World Modeling posts. I'm not sure whether this makes sense as a longterm solution, but it still seems useful as a counterbalancing force for the deluge of AI content, and for helping users orient to the underlying culture that generated that AI content.

Rewritten About Page

We rewrote the About page to simplify it, clarify what LessWrong is about, and contextualize all the AI content.

[Upcoming] Update Latest Tag-Filters

Ruby is planning to update the default Latest Posts tag-filters, either to show more rationality and world modeling content by default (i.e. rationality/world-modeling posts get treated as having higher karma, and thus get more screentime via our sorting algorithm), or to directly deemphasize AI content.

We're also going to try making the filters more prominent and easier to understand, so people can adjust the content they receive.
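To make the mechanism concrete, here's a minimal sketch of tag-based karma adjustment feeding a frontpage sort. The tag names mirror LessWrong's core tags, but the weights, post data, and function are hypothetical, not the site's actual algorithm:

```python
# Hypothetical sketch: boost or penalize posts by tag before sorting
# by adjusted karma. The weights below are invented for illustration.
TAG_BOOSTS = {"rationality": 25, "world-modeling": 25, "ai": -10}

def adjusted_karma(post):
    """Base karma plus a fixed-size bonus/penalty per matching tag."""
    return post["karma"] + sum(TAG_BOOSTS.get(t, 0) for t in post["tags"])

posts = [
    {"title": "An AI post", "karma": 40, "tags": ["ai"]},
    {"title": "A rationality post", "karma": 30, "tags": ["rationality"]},
]
frontpage = sorted(posts, key=adjusted_karma, reverse=True)
# The rationality post (30 + 25 = 55) now outranks the AI post (40 - 10 = 30).
```

The same structure works for per-user filters: swap in each user's own `TAG_BOOSTS` table before sorting.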

Can't we just move it to Alignment Forum?

When I've brought this up, a few people asked why we don't just put all the AI content on the Alignment Forum. This is a fairly obvious question, but:

a) It'd be a pretty big departure from what the Alignment Forum is currently used for.

b) I don't think it really changes the fundamental issue of "AI is what lots of people are currently thinking about on LessWrong." 

The Alignment Forum's current job is not to be a comprehensive list of all AI content; it's meant to showcase especially good content with a high signal/noise ratio. All Alignment Forum posts are also LessWrong posts, and LessWrong is meant to be the place where most discussion happens on them. The AF versions of posts are primarily meant to be a thing you can link to professionally without having to explain the context of a lot of weird, not-obviously-related topics that show up on LessWrong.

We created the Alignment Forum ~5 years ago, and it's plausible the world needs a new tool now. BUT, it still feels like a weird solution to try and move the AI discussion off of LessWrong. AI is one of the central topics that motivate a lot of other LessWrong interests. LessWrong is about the art of rationality, but one of the important lenses here is "how would you build a mind that was optimally rational, from scratch?". 

Content I'd like to see more of

It's not obvious I want to nudge anyone away from AI alignment work. It does sure seem like this is the most urgent and important problem in the world. I also don't know that I want the site flooded with mediocre rationality content. 

World Modeling / Optimization

Especially for newcomers who're considering posting more, I'd be interested in seeing more fact posts, which explore a topic curiously, and dig into the details of how one facet of the world works. Some good examples include Scott Alexander's "Much More Than You Wanted To Know"-type posts, or Martin Sustrik's exploration of the Swiss Political System.

I also really like to see subject matter experts write up stuff about their area of expertise that people might not know about (especially if they have reason to think this is relevant to LW readers). I liked this writeup about container logistics, which led to a discussion of whether we could have quick wins in civilizational adequacy that could snowball into something bigger.

Parts of the world that might be important, but which aren't currently on the radar of the rationalsphere, are also good topics to write about.

Rationality Content

Rationality content is a bit weird because... the content I'm most interested in is from people who've done a lot of serious thinking that's resulted in serious accomplishment. But, the people in that reference class in the LessWrong community are increasingly focused on AI.

I worry about naively incentivizing more "rationality content" – a lot of rationality content is ungrounded and insight-porn-ish. 

But, I'm interested in accomplished thinkers trying to distill out their thinking process (see: many John Wentworth posts, and Mark Xu and Paul Christiano's posts on their research process). I'm interested in people like Logan Strohl who persistently explore the micro-motions of how cognition works, while keeping it very grounded, and write up a trail for others to follow.

I think in some sense The Sequences are out of date. They were written as a reaction to a set of mistakes people were making 20 years ago. Some people are still making those mistakes, but ideas like probabilistic reasoning have now made it more into the groundwater, and the particular examples that resonate today are different, and I suspect we're making newer more exciting mistakes. I'd like to see people attempting to build a framework of rationality that feels like a more up-to-date foundation.

What are your thoughts?

I'm interested in hearing people's takes on this. I'm particularly interested in how different groups of people feel about it. What does the wave of AI content feel like to established LessWrong users? To new users just showing up? To AI alignment researchers?

Does this feel like a problem? Does the whole worry feel overblown? If not, I'm interested in people articulating exactly what feels likely to go wrong.

 

  1. ^

    Core Tags are the most common LessWrong topics: Rationality, AI, World Modeling, World Optimization, Community and Practical. 

141 comments

Here is my take: since there's so much AI content, it's not really feasible to read all of it, so in practice I read almost none of it (and consequently visit LW less frequently).

The main issue I run into is that for most posts, on a brief skim it seems like basically a thing I have thought about before. Unlike academic papers, most LW posts do not cite previous related work nor explain how what they are talking about relates to this past work. As a result, if I start to skim a post and I think it's talking about something I've seen before, I have no easy way of telling if they're (1) aware of this fact and have something new to say, (2) aware of this fact but trying to provide a better exposition, or (3) unaware of this fact and reinventing the wheel. Since I can't tell, I normally just bounce off.

I think a solution could be to have a stronger norm that posts about AI should say, and cite, what they are building on and how it relates / what is new. This would decrease the amount of content while improving its quality, and also make it easier to choose what to read. I view this as a win-win-win.

Viliam:

and consequently visit LW less frequently

Tangentially, "visiting LW less frequently" is not necessarily a bad thing. We are not in the business of selling ads; we do not need to maximize the time users spend here. Perhaps it would be better if people spent less time online (including on LW) and more time doing whatever meaningful things they might do otherwise.

But I agree that even assuming this, "the front page is full of things I do not care about" is a bad way to achieve it.

tools for citation to the existing corpus of lesswrong posts and to off-site scientific papers would be amazing; eg, rolling search for related academic papers as you type your comment via the semanticscholar api, combined with search over lesswrong for all proper nouns in your comment. or something. I have a lot of stuff I want to say that I expect and intend is mostly reference to citations, but formatting the citations for use on lesswrong is a chore, and I suspect that most folks here don't skim as many papers as I do. (that said, folks like yourself could probably give people like me lessons on how to read papers.)

also very cool would be tools for linting emotional tone. I remember running across a user study that used a large language model to encourage less toxic review comments; I believe it was in fact an intervention study to see how usable a system was. looking for that now...

Sinity:
Maybe GPT-3 could be used to find LW content related to the new post, using something like this: https://gpt-index.readthedocs.io Unfortunately, I didn't get around to doing anything with it yet. But it seems useful: https://twitter.com/s_jobs6/status/1619063620104761344
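For illustration, a related-content lookup of the kind described here typically embeds each post and ranks by vector similarity. Below is a minimal sketch, with toy three-dimensional vectors standing in for real language-model embeddings (the post titles and numbers are invented):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def related_posts(query_vec, corpus, top_k=2):
    """Rank posts by embedding similarity to the query vector."""
    ranked = sorted(corpus, key=lambda p: cosine_similarity(query_vec, p["vec"]),
                    reverse=True)
    return [p["title"] for p in ranked[:top_k]]

corpus = [
    {"title": "Post on inner alignment", "vec": [0.9, 0.1, 0.0]},
    {"title": "Post on Swiss politics", "vec": [0.0, 0.2, 0.9]},
    {"title": "Post on mesa-optimizers", "vec": [0.8, 0.3, 0.1]},
]
print(related_posts([1.0, 0.0, 0.0], corpus))
# → ['Post on inner alignment', 'Post on mesa-optimizers']
```

A real tool would get the vectors from an embedding model and index them for fast lookup; the ranking step is the same.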
Ruby:
Over the years I've thought about a "LessWrong/Alignment" journal article format the way regular papers have Abstract-Intro-Methods-Results-Discussion. Something like that, but tailored to our needs, maybe also bringing in OpenPhil-style reasoning transparency (but doing a better job of communicating models). Such a format could possibly mandate what you're wanting here. I think it's tricky. You have to believe any such format actually makes posts better rather than constraining them, and it's worth the effort of writers to confirm. It is something I'd like to experiment with though.

AI discourse triggers severe anxiety in me, and as a non-technical person in a rural area I don't feel I have anything to offer the field. I personally went so far as to fully hide the AI tag from my front page and frankly I've been on the threshold of blocking the site altogether for the amount of content that still gets through by passing reference and untagged posts. I like most non-AI content on the site, been checking regularly since the big LW2.0 launch, and I would consider it a loss of good reading material to stop browsing, but since DWD I'm taking my fate in my hands every time I browse here.

I don't know how many readers out there are like me, but I think it at least warrants consideration that the AI doomtide acts as a barrier to entry for readers who would benefit from rationality content but can't stomach the volume and tone of alignment discourse.

mingyuan:
Yeah this is a point that I failed to make in my own comment — it's not just that I'm not interested in AIS content / not technically up to speed, it's that seeing it is often actively extremely upsetting
the gears to ascension:
I'm sorry to hear that! Do you have any thoughts on ways to rephrase the ai content to make it less upsetting? would it help to have news that emphasizes successes, so that you have frequent context that it's going relatively alright and is picking up steam? in general, my view is that yudkowskian paranoia about ai safety is detrimental in large part because it's objectively wrong, and while it's great for him to be freaked out about it, his worried view shouldn't be frightening us; I'm quite excited for superintelligence and I just want us to hurry up and get the safe version working so we can solve a bunch of problems. IMO you should feel able to feel comfy that AI now is pretty much nothing but super cool. [edit to clarify: this is not to say the problem isn't hard; it's that I really do think the capabilities folks know that safety and capabilities were always the same engineering task]
Alex Flint:
Thank you for writing this comment. Just so you know, probably you can contribute to the field, if that is your desire. I would start by joining a community where you will be happy and where people are working seriously on the problem.
swarriner:
I feel like you mean this in kindness, but to me it reads as "You could risk your family's livelihood relocating and/or trying to get recruited to work remotely so that you can be anxious all the time! It might help on the margins ¯\_(ツ)_/¯ "
Alex Flint:
Why would you risk your family's livelihood? That doesn't seem like a good idea. And why would you go somewhere that you'd be anxious all the time?
swarriner:
Yes, that's my point. I'm not aware of a path to meaningful contribution to the field that doesn't involve either doing research or doing support work for a research group. Neither is accessible to me without risking the aforementioned effects.
Alex Flint:
Yeah right. It does seem like work in alignment at the moment is largely about research, and so a lot of the options come down to doing or supporting research. I would just note that there is this relatively huge amount of funding in the space at the moment -- OpenPhil and FTX are both open to injecting huge amounts of funding and largely don't have enough places to put it. It's not that it's easy to get funded -- I wouldn't say it's easy at all -- but it really does seem like the basic conditions in the space are such that one would expect to find a lot of opportunities to be funded to do good work.
coryfklein:
This reader is a software engineer with over a decade of experience. I'm paid handsomely and live in a remote rural area. I am married with three kids. The idea that my specialized experience of building SaaS products in Scala would somehow port over to AI research seems ludicrous. I am certain I'm cognitively capable enough to contribute to AI research, but I'd be leaving a career where I'm compensated based on my experience for one where I'm starting over anew. Surely OpenPhil and FTX would not match my current salary in order to start my career over, all while allowing me to remain in my current geography (instead of uprooting my kids from friends and school)? It seems unlikely I'd have such a significant leg up over a recent college graduate with a decent GPA so as to warrant matching my software engineering salary.
Alex Flint:
Right -- you probably could contribute to AI alignment, but your skills mostly wouldn't port over, and you'd very likely earn less than your current job.
Noosphere89:
I'll say one thing. I too do not like the AI doomtide/doomerism, despite thinking it's a real problem. You can take breaks from LW or hide posts for AI from your frontpage if you're upset.

I'm here pretty much just for the AI related content and discussion, and only occasionally click on other posts randomly: so I guess I'm part of the problem ;). I'm not new, I've been here since the beginning, and this debate is not new. I spend time here specifically because I like the LW format/interface/support much better than reddit, and LW tends to have a high concentration of thoughtful posters with a very different perspective (which I tend to often disagree with, but that's part of the fun). I also read /r/MachineLearning/ of course, but it has different tradeoffs.

You mention filtering for Rationality and World Modeling under More Focused Recommendations - but perhaps LW could go farther in that direction? Not necessarily full subreddits, but it could be useful to have something like per user ranking adjustments based on tags, so that people could more configure/personalize their experience. Folks more interested in Rationality than AI could uprank and then see more of the former rather than the latter, etc.

AI needs Rationality, in particular. Not everyone agrees that rationality is key, here (I know one prominent AI researcher who disagreed).

There is still a sign…

but it could be useful to have something like per user ranking adjustments based on tags, so that people could more configure/personalize their experience.

Just to be clear, this does indeed exist. You can give a penalty or boost to any tag on your frontpage, and so shift the content in the direction of topics you are most interested in.

jacob_cannell:
LOL that is exactly what I wanted! Thanks :)
habryka:
It currently gives fixed-size karma bonuses or penalties. I think we should likely change it to be multipliers instead, but either should get the basic job done.
Ben:
I can see the logic of multipliers, but in the edge case of posts with zero or negative karma they do weird stuff. If you set big multipliers for 5 topics, and there is a -1 karma post that ticks every single one of those topics, then you will never see it. But you of all people are the one who should see that post, which the addition achieves. (Not significant really though.)
Eli Tyre:
You could just not have the multipliers apply to negative karma posts.
ChristianKl:
I would expect that if someone wants to see only AI alignment posts (a wish someone mentioned), a +1000 karma boost would provide that result but also mess up the sorting, as the relative karma differences become smaller. A modifier of 100x should allow a user to actually see only one tag.
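To illustrate the additive-versus-multiplicative tradeoff being discussed with toy numbers (the weights are hypothetical; this is not the site's actual implementation):

```python
def additive(karma, boost):
    """Fixed-size bonus: what the site currently does."""
    return karma + boost

def multiplicative(karma, factor):
    """Multiplier: preserves relative ordering among positive-karma posts."""
    return karma * factor

# A -1 karma post matching a boosted tag:
# an additive bonus can still surface it...
assert additive(-1, 25) == 24
# ...while a multiplier pushes it further down, since multiplying a
# negative score by a large factor makes it more negative.
assert multiplicative(-1, 100) == -100

# One possible fix, as suggested in the thread: only apply the
# multiplier to posts with positive karma.
def safe_multiplier(karma, factor):
    return karma * factor if karma > 0 else karma

assert safe_multiplier(-1, 100) == -1
```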
Wes F:
What? How? I've found like 3 different "customize" options, and none of them are this. Side note: I've noticed that web & app developers these days try to make settings "intuitive" instead of just putting them all in one place, which I think is silly. Just put all settings under settings. Why on Earth are there multiple "customize" options?
Wes F:
Nevermind. Another comment explained it. I would greatly appreciate that option also being put under settings! I would have found it much easier.
Raemon:
Putting it under settings does sound reasonable.
coryfklein:
I only visit the site every month or so and I use All Posts grouped by Weekly to "catch up". It looks like that particular page does not have support for this kind of tag-specific penalty. :/

I would say "tragically flawed": noble in their aspirations and very well written, but overconfident in some key foundations. The sequences make some strong assumptions about how the brain works and thus the likely nature of AI, assumptions that have not aged well in the era of DL. Fortunately the sequences also instill the value of updating on new evidence.

What concretely do you have in mind here?

Back when the sequences were written in 2007/2008 you could roughly partition the field of AI based on beliefs around the efficiency and tractability of the brain. Everyone in AI looked at the brain as the obvious single example of intelligence, but in very different lights.

If brain algorithms are inefficient and intractable[1] then neuroscience has little to offer, and instead more formal math/CS approaches are preferred. One could call this the rationalist approach to AI, or perhaps the "and everything else approach". One way to end up in that attractor is by reading a bunch of ev psych; EY in 2007 was clearly heavily into Tooby and Cosmides, even if he has some quibbles with them on the source of cognitive biases.

From Evolutionary Psychology and the Emotions:

An evolutionary perspective leads one to view the mind as a crowded zoo of evolved, domain-specific programs. Each is functionally specialized for solving a different adaptive problem that arose during hominid evolutionary history, such as face recognition, foraging, mate choice, heart rate regulation, sleep management, or predator vigilance, and each is activated by a different set of cues from the environment.

From t…

lc:
I think in order to be "concrete" you need to actually point to a specific portion of the sequences that rests on these foundations you speak of, because as far as I can tell none of it does.
jacob_cannell:
I did. My comment has 8 links. The first is a link to "Adaptation-Executers, not Fitness-Maximizers", which is from the sequences ("The Simple Math of Evolution"), and it opens with a quote from Tooby and Cosmides. The second is a link to the comment section from another post in that sequence (Evolutions Are Stupid (But Work Anyway)) where EY explicitly discusses T&C. My third and fourth links are then the two T&C papers discussed, and the fifth link is a key quote from EY's paper LOGI - his reply to T&C.

On OB in 2006, in "The martial art of rationality", EY writes that his understanding of rationality comes from thinking about AGI - so the LOGI and related quotes are relevant, as that reveals the true foundation of the sequences (his thoughts about AGI) around the time when he wrote them.

These quotes - especially the LOGI quote - clearly establish that EY has an evolved-modularity-like view of the brain with all that entails. He is skeptical of neural networks (even today calls them "giant inscrutable matrices") and especially "physics envy" universal learning type explanations of the brain (bayesian brain, free energy, etc), and this subtly influences everything where the sequences discuss the brain or related topics. The overall viewpoint is that the brain is a kludgy mess riddled with cognitive biases. He not so subtly disses systems/complexity theory and neural networks.

More key to the AI risk case are posts such as "Value is Fragile", which clearly builds on his larger worldview, and of course "The Design Space of Minds in General". These views espouse the complexity and fragility of human values and supposed narrowness of human mind space vs AI mind space, which are core components of the AI risk arguments.
lc:
None of which is a concrete reply to anything Eliezer said inside "Adaption Executors, not Fitness Optimizers", just a reply to what you extrapolate Eliezer's opinion to be, because he read Tooby and Cosmides and claimed they were somewhere in the ballpark in a separate comment. So I ask again: what portion of the sequences do you have an actual problem with?
jacob_cannell:
And your reply isn't a concrete reply to any of my points. The quotes from LOGI clearly establish exactly where EY agrees with T&C, and the other quotes establish the relevance of that to the sequences. It's not like two separate brains wrote LOGI vs the sequences, and the other quotes establish the correspondence regardless. This is not a law case where I'm critiquing some super specific thing EY said. Instead I'm tracing memetic influences: establishing what high level abstract brain/AI viewpoint cluster he was roughly in when he wrote the sequences, and how that influenced them. The quotes are pretty clear enough for that.
the gears to ascension:
I think you may have flipped something here

I'm 100% with you. I don't like the current trend of LW becoming a blog about AI, much less a blog about how AGI doom is inevitable (and in my opinion there have been too many blog posts about that, with some exceptions of course). I have found myself lately downvoting AI-related posts more easily and upvoting content not related to AI more easily too.

I weakly downvoted your comment:

I think the solution to "too much AI content" is not to downvote the AI content less discriminately. If there were many posts with correct proofs in harmonic analysis being posted to LessWrong, I would not want to downvote them, after all, they are not wrong in any important sense, and maybe even important for the world!

But I would like to filter them out, at least until I've learned the basics of harmonic analysis to understand them better (if I desired to do so).

For what it's worth, I think I am actually in favor of downvoting content of which you think there is too much. The general rule for voting is "upvote this if you want to see more like this" and "downvote this if you want to see less like this". I think it's too easy to end up in a world where the site is filled with content that nobody likes, but everyone thinks someone else might like. I think it's better for people to just vote based on their preferences, and we will get it right in the aggregate.

Generally, I would want people to vote on articles they have actually read. 

If posts nobody wants to read because they seem very technical get zero votes, I think that's a good outcome. They don't need to be downvoted.

mukashi:
Sorry, I think I wasn't clear enough. I meant that my threshold to downvote an AI related post was somehow lower, not that I was downvoting them indiscriminately.
niplav:
I still think that's bad, but I was also wrong to downvote you (your comment was true and informative!). So I removed the downvote.

Meanwhile Rationality A-Z is just super long. I think anyone who's a longterm member of LessWrong or the alignment community should read the whole thing sooner or later – it covers a lot of different subtle errors and philosophical confusions that are likely to come up (both in AI alignment and in other difficult challenges)

My current guess is that the meme "every alignment person needs to read the Sequences / Rationality A-Z" is net harmful.  They seem to have been valuable for some people but I think many people can contribute to reducing AI x-risk without reading them. I think the current AI risk community overrates them because they are selected strongly to have liked them.

Some anecdotal evidence in favor of my view:

  1. To the extent you think I'm promising for reducing AI x-risk and have good epistemics, I haven't read most of the Sequences. (I have liked some of Eliezer's other writing, like Intelligence Explosion Microeconomics.)
  2. I've been moving some of my most talented friends toward work on reducing AI x-risk and similarly have found that while I think all have great epistemics, there's mixed reception to rationalist-style writing. e.g. one is trialing at a top alignment org and doesn't like HPMOR, while another likes HPMOR, ACX, etc.
Ruby:
I think the tough thing here is it's very hard to evaluate who, if anyone, is making any useful contributions. After all, no one has successfully aligned a superintelligence to date. Maybe it's all way-off track. All else equal, I trust people who've read the Sequences to be better judges of whether we're making progress in the absence of proper end-to-end feedback than those who haven't. Caveat: I am not someone who could plausibly claim to have made any potential contribution myself. :P
Joseph Bloom:
I think it's plausible that it is either harmful to perpetuate "every alignment person needs to read the Sequences / Rationality A-Z" or maybe even inefficient. For example, to the extent that alignment needs more really good machine learning engineers, it's possible they might benefit less from the sequences than a conceptual alignment researcher. However, relying on anecdotal evidence seems potentially unnecessary. We might be able to use polls, or otherwise systemically investigate the relationship between interest/engagement with the sequences and various paths to contribution with AI. A prediction market might also work for information aggregation. I'd bet that all else equal, engagement with the sequences is beneficial but that this might be less pronounced among those growing up in academically inclined cultures.

I'm in favor of subforums — from these comments it seems to me that a significant fraction of people are coming to LW either for AI content, or for explicitly non-AI content (including some people who sometimes want one and sometimes the other); if those use cases are already so separate, it seems dumb to keep all those posts in the same stream, when most people are unhappy with that. (Yeah maybe I'm projecting because I'm personally unhappy with it, but, I am very unhappy.)

I used to be fine with the amount of AI content. Maybe a year ago I set a karma penalty on AI, and then earlier this year I increased that penalty to maximum and it still wasn't enough, so a few months ago I hid all things tagged AI, and now even that is insufficient, because there are so many AI posts and not all of them are tagged correctly. I often go to Latest and see AI-related posts, and then go tag those posts with 'AI' and refresh, but this is a frustrating experience. The whole thing has made me feel annoyed and ugh-y about using LW, even though I think there are still a lot of posts here I want to see. 

I also worry that the AI stuff — which has a semi-professionalized flavor — discourages more playful and exploratory content on the rest of the site. I miss what LW2.0 was like before this :(

9Raemon2y
Ah. We'd had on our todo list "make it so post authors are prompted to tag their posts before publishing them", but it hadn't been super prioritized. This comment updates me to ship this sooner so there'll hopefully be fewer un-tagged AI posts. 
2mingyuan2y
Logging on today, I noticed that all of the posts with this problem (today) were personal blogposts; are those treated differently? Also some of these are tagged 'ML', but that makes it through the AI filter, which.... I guess is intended behavior :/
6Raemon2y
The situation is that posts show up in the moderator-queue, moderators take a few hours to get to them, and in the meanwhile they are personal blogposts. So if you're okay with hiding all personal blogposts you can solve the problem that way. This would probably also hide other posts you want to see.  I'm hoping we can ship a "authors are nudged to give their post a core-tag" feature soon, which should alleviate a lot of the problem, although might not solve it entirely.
1Algon2y
Wouldn't generating pseudo-tags which users can opt in to see/filter by mostly solve this problem? Like, I'd have thought even a pre-DL-revolution classifier or clustering algorithm or GDA or something would have worked. Let alone querying GPT-instruct (or whatever) on whether or not an article has to do with AI. The pricing is quite cheap for Goose.AI or other models.
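To make concrete how low the bar is here, even a trivial keyword scorer would catch most untagged AI posts (toy sketch; the keyword list and threshold are illustrative, not tuned):

```python
# Toy keyword-based pseudo-tagger: flags a post title/body as likely-AI.
# Keyword set and threshold are illustrative guesses, not tuned values.
AI_KEYWORDS = {"alignment", "agi", "gpt", "neural", "ml", "interpretability",
               "transformer", "rlhf", "x-risk", "mesa-optimizer"}

def looks_like_ai(text: str, threshold: int = 2) -> bool:
    # Normalize: lowercase and strip common punctuation from each word.
    words = {w.strip(".,:;!?()").lower() for w in text.split()}
    return len(words & AI_KEYWORDS) >= threshold

print(looks_like_ai("AGI Ruin: A List of Lethalities on alignment"))  # True
print(looks_like_ai("A post about gardening and epistemics"))         # False
```

Anything fancier (a real classifier, an LLM query) would only improve on this; the point is that pseudo-tagging doesn't need cutting-edge tech.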
3Ruby2y
Could work, but it's a bunch of upfront developer time.
4mingyuan2y
I don't actually know how subforums are implemented on EA Forum but I was imagining like a big thing on the frontpage that's like "Do you want to see the AI stuff or the non-AI stuff?". Does this sound clunky when I write it out?... yes
4Wes F2y
I would love an option to say "I don't want to read another word about AI alignment ever"

Feature proposal: when a new user joins the forum, they get a list of the most popular tags and a chance to choose which of these topics especially interest/disinterest them. This addresses the problem of new users being unaware of the filtering/sorting by tags feature.

Re: "Content I'd like to see more of":

Naturally paying people to write essays on specific topics is very expensive, but one can imagine more subtle ways in which LW could incentivize people to write on specific topics. To brainstorm a few ideas:

  • Some kind of feature where prospective writers can make public lists of things they could write about (in the form of Post Title + 1-paragraph summary), and a corresponding way for LW users to indicate which posts they're interested in. (E.g. I liked this post of blog post ideas by lsusr, but for our purposes that's insufficient context for prospective readers.) Maybe by voting on stuff that sounds interesting, or by doing the equivalent of subscribing to a newsletter. (In fact, there's already a LW feature to subscribe to all comments by a user, as well as all comments on a post, so this would be like subscribing to a draft post so you'd get a notification if and when it's released.) Of course one could even offer bounties here, but that might not be worth the complexity and the adverse incentives. Anyway, the main benefit here would be for prolific writers to gauge interest in their ideas and prioritize what to write about. I don't know to
... (read more)

I read via RSS, and I mostly just skip LW articles because it's almost all AI-related, and I'm not interested in that. It would be very nice if I could get a non-AI RSS feed (or even better - a customizable RSS feed where I can include or exclude certain tags). 
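In the meantime, a client-side workaround is possible: if the feed marks AI posts with a `<category>` element, a short script can drop them (a minimal sketch; the feed structure below is my guess, real feeds may mark tags differently):

```python
import xml.etree.ElementTree as ET

# Minimal sketch: drop feed items carrying a given <category>. The RSS
# structure here is a guessed example, not LW's actual feed format.
RSS = """<rss><channel>
  <item><title>AGI Ruin</title><category>AI</category></item>
  <item><title>Fact post on shipping</title><category>World Modeling</category></item>
</channel></rss>"""

def without_tag(rss_xml: str, tag: str) -> list:
    channel = ET.fromstring(rss_xml).find("channel")
    keep = []
    for item in channel.findall("item"):
        categories = {c.text for c in item.findall("category")}
        if tag not in categories:
            keep.append(item.findtext("title"))
    return keep

print(without_tag(RSS, "AI"))  # ['Fact post on shipping']
```

Of course this only works for correctly tagged posts, which loops back to the tagging problem above.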

It really does feel to me, though, that LessWrong is not for me because I'm not interested in AI alignment. LW doesn't strike me as a rationality-focused site. It's a site for talking about AI alignment that offers some rationality content as an afterthought. It sounds like you don't really want to change that, though, so it is what it is.

(Ep. vibes: I went to few EA cons, and subscribed to the forum digest.)

I blame EA. They were simply too successful.

There are the following effects at play:

  • Bad AI gonna kill us all :(
  • Preparing for emergent threats is one of the most effective ways to help others.
  • The best way to have good ideas is to have a lot of ideas; and the best way to have a lot of ideas is to have a lot of people.
  • Large funnels were built for new AI Safety researchers.
  • The largest discussions about the topic happened at LW and rat circles.
  • The general advice I heard at EA conferences in late Feb/Mar (notice the spike! it's March, before the big doompost; edit: it's really after the doompost, I misread the graphs) is that you should go to LW for AI-specific stuff.

What a coincidence that the AI-on-LW flood and the complaints about the drop in EA Forum quality happened at the same time. I think with the EA Movement growing exponentially in numbers, both sites are getting eternal septembered.

I think the solution could be to create a new frontpage for ai related discussions, like "personal blog", "LW frontpage", "AI Safety frontpage" categories. Or go through the whole subforum routes, with childboards and stuff like that.

2mruwnik2y
There's also the encouragement to post to LW as a method of getting noticed and in that way finding an alignment job/team/project

I really agree that lesswrong derives some of its richness from being a community where people do a few different things all in the same place -- rationality training, curiosity-driven research, searching for novel altruistic interventions, AI research. Providing tools for people to self-segregate into niches will "work" in the sense that people will do it and probably say that they like it, but will probably lose some of that richness.

Has there been an increase in people reading AI content, proportional to the increase in people writing AI content?

5Bask⚡️2y
Hmm, I might be misunderstanding your intention, but I don't believe this is the correct question to ask, as it assumes that 'reading an article' ~= satisfaction with LessWrong. A classic counterexample here is click-bait, where 'visits' and 'time on page' don't correlate with 'user reported value.' If we take a simple view of LessWrong as a 'product' then success is typically measured by retention: what % of users choose to return to the site each week following their first visit. If we're seeing weekly retention for existing cohorts drop over time, it suggests that long-time LessWrong readers net-net value the content less. Now, the prickly parts here are: a) Users who've joined in the past year likely value the current post landscape more than the old post landscape. Given this, a retention 'win' for older cohorts would likely be a 'loss' for newer cohorts.  b) LessWrong's "purpose" is much less clearcut than that of a typical business/product.  I think it would be fun & productive to reflect on a 'north star metric' for LW, similar to the QALY for EA, but it's beyond the scope of this reply : )
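To make 'retention' concrete, the cohort math is trivial (a toy sketch, nothing to do with LW's actual analytics):

```python
# Toy weekly-retention calc: what fraction of a signup cohort was active
# in some later week. Cohorts and activity sets are made-up examples.
def weekly_retention(cohort: set, active_in_week: set) -> float:
    return len(cohort & active_in_week) / len(cohort)

cohort_2021 = {"alice", "bob", "carol", "dan"}
active_week_12 = {"bob", "dan", "eve"}  # eve is from a different cohort
print(weekly_retention(cohort_2021, active_week_12))  # 0.5
```

The hard part isn't computing the number; it's deciding which cohorts' retention the site should optimize for, per the a)/b) tension above.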

I'm fine with everything on LW ultimately being tied to alignment. Hardcore materialism being used as a working assumption seems like a good pragmatic measure as well. But ideally there should also be room for foundational discussions like "how do we know our utility function?" and "what does it mean for something to be aligned?" Having trapped priors on foundational issues seems dangerous to me.

Currently, every AI alignment post gets frontpaged. If there are too many AI alignment posts on the frontpage it's worth thinking about whether that policy should change.

I personally have AI alignment on -25 karma and Rationality on +25. For my purposes, the current system works well, but then I understand how it works and it's likely that there are other people who don't. New users likely won't understand that they have that choice.

I think it would give the wrong impression to a new users when they see that AI alignment is by default on -25 karma, so it's better for new users to give Rationality / Worldbuilding a boost than to set negative values for AI alignment.

I would suspect that most new users to LessWrong are not in... (read more)

Even after reading this comment it took me a while to find this option, so for anyone who similarly didn't know about that option:

On the start page, below "Latest", you can add a new filter. Then, click on that filter and adjust the numbers or entirely hide a category.
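My mental model of what these filters do (a guess at the mechanism, not LW's actual code) is just a per-tag karma offset applied before the frontpage sort:

```python
# Sketch of how I picture tag filtering: each tag carries a karma offset,
# and posts are ranked by adjusted score. Hypothetical, not LW's real code.
def adjusted_score(karma: int, tags: list, offsets: dict) -> int:
    return karma + sum(offsets.get(tag, 0) for tag in tags)

offsets = {"AI": -25, "Rationality": +25}  # per the comment above
posts = [("AGI Ruin", 300, ["AI"]), ("Noticing Confusion", 120, ["Rationality"])]
ranked = sorted(posts, key=lambda p: adjusted_score(p[1], p[2], offsets),
                reverse=True)
print([title for title, *_ in ranked])  # ['AGI Ruin', 'Noticing Confusion']
```

Which also illustrates the earlier point that a -25 penalty barely dents a high-karma AI post.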

I'm one of the new readers and found this forum through a Twitter thread that was critiquing it. I have a psychology background and later switched to ML, and I've been following AI ethics for over 15 years, hoping for a long time that the discussion would leak across industries and academic fields.

Since AI (however you define it) is a permanent fixture in the world, I'm happy to find a forum focused on critical thinking either way and I enjoy seeing these discussions on front page. I hope it's SEO'd well too.

I'd think newcomers and non-technical contributors ar... (read more)

2the gears to ascension2y
Strongly agreed here. My view is that ai takeover is effectively just the scaled up version of present-day ai best practice concerns, and the teams doing good work on either end up helping both. Both "sides of the debate" have critical things to say about each other, but in my view, that's simply good scientific arguing. I'd love to hear more on your thoughts on most effective actions for shorttermist ai safety and ai bias, if you were up for writing a post! I'd especially like to hear your thoughts on how cutting edge psychology emergency-deescalation-tactics research on stuff like how to re-knit connections between humans who've lost trust for political-fighting reasons can relate to ai safety; that example might not be your favorite focus, though it's something I worry about a lot myself and have thoughts about. Or perhaps if you've encountered the socio-environmental synthesis center's work on interdisciplinary team science (see also their youtube channel), I'm curious if you have thoughts about that. or, well, more accurately, I give those examples as prompt so you can see what kind of thing I'm thinking about writing about and generalize it into giving similar references or shallow dives into research that you're familiar with and I'm not.
2_self_1y
So I didn't know this was a niche philosophy forum, with its own subculture. I'm way out of my element. My suggestions were not very relevant taking that into context, I thought it was a general forum. I'm still glad there are people thinking about it. The links you sent are awesome! - I'll follow those researchers. I think a lot of my thoughts here are outdated as things keep changing, and I'm still putting thoughts together. So, I probably won't be writing much for a few months until my brain settles down a little. Am I "shorttermism"? Long term, as in fate of humanity, I think I am not good to debate there Thanks for commenting on my weird intro!
1the gears to ascension1y
imo, shorttermism = 1 year, longtermism = 10 years. ai is already changing very rapidly. as far as I'm concerned your posts are welcome; don't waste time worrying about being out of your element, just tell it as you see it and let's debate - this forum is far too skeptical of people with your background and you should be more self-assured that you have something to contribute.

I agree this is rather a thing, and I kinda feel like the times I look at LessWrong specifically to read up on what people are saying about their latest AI thoughts feel different to me from the times I am just in a reflective / learning mood and want to read about rationality and worldview building. For me personally, I'm using LessWrong for AI content daily, and would prefer to just have a setting in my account which by-default showed nothing but that. Other stuff for me is a distracting akrasia-temptation at this point. I also agree that for a novice / ... (read more)

One major reason why there is so much AI content on LessWrong is that very few people are allowed to post on the Alignment Forum.

I analyzed some recent AI posts on LessWrong and found that only about 15% of the authors were also members of the Alignment Forum. I'm personally very interested in AI but I post all of my AI content on LessWrong and not the Alignment Forum because I'm not a member.

Anecdotally, I know several people working full-time on AI safety who are still not members of the Alignment Forum and consequently post all their work on LessWrong.

My rec... (read more)

6Thomas Larsen2y
Everything on the alignment forum gets crossposted to LW, so letting more people post on AF wouldn't decrease the amount of AI content on LW. 

I'm trying to shift my focus more toward AI, due to the likelihood that it will have big impacts over the next decade.

I'd like newbies to see some encouragement to attend a CFAR workshop. But there's not much new to say on that topic, so it's hard to direct people's attention there.

1the gears to ascension2y
I don't think a CFAR workshop is an appropriate recommendation for folks who visit semanticscholar.com for the first time without epistemic background necessary to reliably understand papers; what makes lesswrong.com different that warrants such a high-touch recommendation? is there another kind of intro that would work? perhaps interactive topic-games, like brilliant, or perhaps play money prediction markets on the site?

I'm one of those LW readers who is less interested in AI-related stuff (in spite of having a CS degree with an AI concentration; that's just not what I come here for). I would really like to be able to filter "AI Alignment Forum" cross-posts, but the current filter setup does not allow for that so far as I can see.

3habryka2y
Filtering out the AI tag should roughly do that.
2lise2y
I disagree that filtering the AI tag would accomplish this, at least for my purposes. The thing about Alignment Forum crossposts is that they're usually quite technical & focused purely on AI, containing the bulk of things I don't want to see. The rest of the AI tag however often contains ideas about the human brain, analogies with the real world, and other content that I find interesting, even though the post ultimately ties those ideas back into an AI framing. So a separate filter for this would be useful IMO.

I'm confused by the sudden upsurge in AI content. People in technical AI alignment are there because they already had strong priors that AI capabilities are growing fast. They're aware of major projects. I doubt DALL-E threw a brick through Paul Christiano's window, Eliezer Yudkowsky's window, or John Wentworth's window. Their window was shattered years ago.

Here are some possible explanations for the proliferation of AI safety content. As a note, I have no competency in AI safety and haven't read the posts. These are questions, not comments on the quality ... (read more)

5ChristianKl2y
There used to be very strong secrecy norms at MIRI. There was a strategic update on the usefulness of public debate and reducing secrecy. Everything that's in the AI alignment forum gets per default also shown on LessWrong. The AI alignment forum is a way to filter out amateur work. 
6Ben Pace2y
I don't believe there was a strategic update in favor of reducing secrecy at MIRI. My model is that everything that they said would be secret, is still secret. The increase in public writing is not because it became more promising, but because all their other work became less so.
4ChristianKl2y
Maybe saying "secrecy" is the wrong way to phrase it. The main point is that MIRI strategy shifted toward more public writing.
4Raemon2y
I think we're primarily seeing: and
3aogara2y
For some n=1 data, this describes my situation. I've posted about AI safety six times in the last six months despite having posted only once in the four years prior. I'm an undergrad who started working full-time on AI safety six months ago thanks to funding and internship opportunities that I don't think existed in years past. The developments in AI over the last year haven't dramatically changed my views. It's mainly about the growth of career opportunities in alignment for me personally.  Personally I agree with jacob_cannell and Nathan Helm-Burger that I'd prefer an AI-focused site and I'm mainly just distracted by the other stuff. It would be cool if more people could post on the Alignment Forum, but I do appreciate the value of having a site with a high bar that can be shared to outsiders without explaining all the other content on LessWrong. I didn't know you could adjust karma by tag, but I'll be using that to prioritize AI content now. I'd encourage anyone who doesn't want my random linkposts about AI to use the tags as well.  This also feels relevant. I share links with a little bit of context when I think some people would find them interesting, even when not everybody will. I don't want to crowd out other kinds of content, I think it's been well received so far but I'm open to different norms. 

I recently analyzed the past 6 months of LessWrong posts and found that about 25% were related to AI.

I liked this writeup about container logistics, which was relevant to .

Think you have a missing link here. :)

In my experience, LW and AI safety gain a big chunk of legitimacy from being the best at Rationality and among the best places on earth for self-improvement. That legitimacy goes a long way, but only in systems that are externalities to the alignment ecosystem (i.e. the externality is invisible to the 300 AI safety researchers who are already being AI safety researchers).

I don't see the need to retool rationality for alignment. If it helps directly, it helps directly. If it doesn't help much directly, then it clearly helps indirectly. No need to get territ... (read more)

we urgently need to distill huge amounts of educational content. I don't know with what weapons sequences 2 will be fought, but sequences 3 will be fought with knowledge tracing, machine teaching, online courses like brilliant, inline exercises, play money prediction markets, etc.

the first time around, it was limited to eliezer's knowledge - and he made severe mistakes because he didn't see neural networks coming. now it almost seems like we need to write an intro to epistemics for a wide variety of audiences, including AIs - it's time to actually write cl... (read more)

1ChristianKl2y
https://www.lesswrong.com/posts/ryx4WseB5bEm65DWB/six-months-of-rose is essentially something like that. 

the content I'm most interested in is from people who've done a lot of serious thinking that's resulted in serious accomplishment.

Raemon, do you selectively read posts by people you know to be seriously accomplished? Or are you saying that you think that a background of serious accomplishment by the writer just makes their writing more likely to be worthwhile?

2Raemon2y
I think people who have accomplished notable stuff are more likely to have rationality lessons worth learning from.
4DirectedEvolution2y
Does that mean you’d find it useful if people posted their CV or a list of their accomplishments with their LessWrong posts?
2DirectedEvolution2y
I am having a hard time knowing what it means when someone disagrees with my question. It’s not meant rhetorically.
4ChristianKl2y
Without being the person who voted, I would expect that it means: "No, it's not useful".
2DirectedEvolution2y
Yeah, I was hoping specifically for Raemon's model. Are we interested in stories that only a very successful person can tell (i.e. Paul Graham on being a venture capitalist)? Or are we interested in rationality posts that anyone could write in theory, but that gain their aura of credibility due to the success of the person posting it? If the former, a CV wouldn't be that useful, except as it came across in the storytelling. If the latter, it seems like you would want the person's track record of accomplishment in some form, perhaps as part of the "epistemic status" disclaimer.
6Raemon2y
I meant the former – someone who's actually accomplished something is more likely to have rationality lessons that are actually useful. But I don't think people should have an attititude of deferral – I just didn't want to try to solve the current overbalance of AI by having a bunch of college students write up some untested armchair rationality. I don't know that I'd put the bar at "Paul Graham" – if you've done any kind of reasonably hard project and learned something along the way I think that's good to share. I also don't want to discourage anyone inexperienced who is naturally drawn to write up their rationality thoughts – I just didn't want to go out of my way to especially encourage it.
2DirectedEvolution2y
I endorse this.

We've currently filtered the randomized "from the archives" posts to show Rationality and World Modeling posts. I'm not sure whether this makes sense as a longterm solution, but it still seems useful as a counterbalancing force against the deluge of AI content, helping users orient to the underlying culture that generated that AI content.

I would suggest having this filter applied 2/3rds of the time, but to function as it did previously, showing from all posts, the other 1/3rd of the time. That way, it is still biased towards Rationality and World Modelling, but not exclusively focused on it.

First some background on me, then some thoughts.

I am an alignment researcher and I read LW and AF occasionally. I tend to focus more on reading academic papers, not the alignment blogosphere. I read LW and AF mostly to find links to academic papers I might otherwise overlook, and for the occasional long-from analysis blogpost that the writer(s) put several months in to write. I am not a rationalist.

What I am seeing on LW is that numerically, many of the AI posts are from from newcomers to the alignment field, or from people who are just thinking abou... (read more)

1Noosphere891y
I absolutely agree, at least here, and I'm not a fan of this. I think a large part of the problem is dubious assumptions combined with dubious solutions. One good example is the FOOM assumption, to which MIRI assigns much higher probability mass than they should. The probability of FOOM in the first AI is more like 3%, not 60-90%. Second, their solutions are not really what is necessary here. In my view, interpretability and making sure that deceptively aligned models never arise is of paramount importance. Crucially, this will look far more empirical than past work. That doesn't mean we will make it, but it does mean we can probably deal with the problem.

How do I find more 'fact posts'? That's probably my favourite type of content here, and I clicked on the hyperlink hoping that it was a content tag and I could spend a few hours gorging!

2Raemon2y
We do have a content tag for it: https://www.lesswrong.com/tag/fact-posts 
1frankybegs1y
Thank you! How do I browse content tags in general?
2Raemon1y
See the ‘concepts’ page linked on the sidebar on the home page.

Randomise karma-boosts for each topic every day. Or let there be an "AI day", "Rationality day", "Practical day", etc. where the topic gets relatively promoted to the frontpage, but have it be luck of the draw rather than specific days. Just so writers have less of an incentive to withhold posting something to wait for the perfect day.

If readers visit the forum and see 90% AI every day, they'll probably have more of an impression that this is an AI forum, compared to if they see the same proportion of AI posts over a week, but not every day is an AI day.
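To make the luck-of-the-draw mechanism concrete, here's a toy sketch (topic names and seeding scheme are hypothetical): seed a random draw by the date, so every visitor sees the same boosted topic that day, but which topic wins is random:

```python
import datetime
import random

# Hypothetical "topic of the day" draw: deterministic per calendar day
# (everyone sees the same boost), but random across days.
TOPICS = ["AI", "Rationality", "World Modeling", "Practical"]

def boosted_topic(day: datetime.date) -> str:
    rng = random.Random(day.toordinal())  # same seed for everyone that day
    return rng.choice(TOPICS)

print(boosted_topic(datetime.date(2022, 10, 1)))
```

Writers couldn't game it, since the draw isn't announced in advance.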

3Raemon2y
Upvoted for a creative solution. Not sure whether this would work well but is definitely interesting and might be worth trying once even if not a permanent thing.

I agree that I would like to see LessWrong be a place for rationality, not just for AI. A concern: The listed ways you try to encourage rationality discussion seem to be too little dakka to me.

People are busy and writing up ideas takes a lot of time. If you want to encourage people to post, you'll probably have to provide a lot of value for posting to LessWrong. Commissioning is the straightforward approach, but as you mention it is expensive. I like the proofreading service and it's probably one of the main things that's made me post to LessWrong.

I'm not ... (read more)

As a new member and hardcore rationalist/mental optimizer who knows little about AI, I've certainly noticed the same thing in the couple weeks I've been around. The most I'd say of it is that it's a little tougher to find the content I'm really looking for, but it's not like the site has lost its way in terms of what is still being posted. It doesn't make me feel less welcome in the community, the site just seems slightly unfocused.

I don’t know, but this sounds like an obvious use case for a subforum? The solutions listed above seem hackish.

4Raemon2y
Creating subforums still leaves you with the question of "but what do you see when you go to the main page on lesswrong.com". You still somehow want the overall site to have a reasonable balance of stuff on the main list that everyone reads. I do think we're approaching the point where it might make sense to consider subforums, but IMO they don't solve the core problem here.
3DragonGod1y
My sense is that some people are here for AI and some people are here for rationality, and we shouldn't default to either. New users should be prompted to choose a main stream to follow which will populate their feed.   (I'm pretty annoyed about all the time I spent scrolling past non AI posts in the past because I wasn't aware of the filtering mechanisms.)

I'm personally rather annoyed by all the AI discussion here and have been a lot less engaged recently. I would like to see much more rationality-type content.

[-][anonymous]2y2-13

Is AI not itself rationality?

Every AI system is trying to minimize error in some way, regardless of the method. While systems have biases and get stuck in local minima, the algorithms used to develop ML models are rational. The machine doesn't believe in things it has no evidence for; it doesn't develop policies that don't improve reward if RL; training stops once the model stops getting better; the machine doesn't age or ask its friends for faulty input data, it gets the same data it always gets.

AI systems can be wrong and suboptimal but do... (read more)

2Raemon2y
I'm not sure I parse and agree with the entirety of this comment, but I do think a reason to keep AI intertwined with a rationality forum is that yes, the study of AI is importantly intertwined with the study of rationality (and I think this has been a central thing about the lesswrong discussion areas since the beginning of the site) 
1[anonymous]2y
Well like I was in San Diego yesterday when Scott Alexander answered our fan questions. Rationality is knowing what you don't know, and being very careful to hedge what you do know. It doesn't scale. If everyone in the world were rational but we didn't have the possibility of AI it would be a somewhat better world. But not hugely better. Medical advances would happen at a still glacial pace, just slightly less glacial. Probably the world would mostly look like the better parts of the EU, since someone rational analyzing what government structure works the best in practice would have no choice but to conclude theirs is the current best known.
2ChristianKl2y
A lot of what makes medical advances glacial is FDA/EMA regulation. More rationality about how to do regulation would be helpful in advancing medical progress. 
1[anonymous]2y
These regulations were written in blood. A lot of people have died from either poorly tested experimental treatments or treatments that have no effect. My implicit assumption is that advanced enough AI could fix this because people would be less likely to just mysteriously die from new side effects. This would make experimenting safer. The reasons they would be less likely to die are partly that advanced AI could solve the problems with living mockups of human bodies, so you have real preclinical testing on an actual human body (it's a mockup, so it's probably a bunch of tissues in separate life support systems). And partly that a more advanced model could react much faster and in more effective ways than human healthcare providers, who know a limited number of things to try, and if the patient doesn't respond they just write down the time of death. Like a human Go player getting a move they don't know the response to.

I just want to register the less wrong team seems like they're doing a great job here! I didn't even notice Whispering rationality in my ear as a deliberate thing, very subtle :)

3Raemon2y
Upon reflection is it a thing that successfully happened? Curious to hear about what you think actually worked when you stop to think about it. 
3Ulisse Mini2y
I don't think it had a big effect on me, but I'm trying hard to be intentional about my LW time & not randomly browsing.[1] Mostly I was expressing "wow, all these random website improvements were done for a strategic reason! that's cool." ---------------------------------------- 1. Otherwise I'd be on here all day... ↩︎

Has there also been an upsurge in posting on the alignment forum? If so, given that AF content is automatically cross posted here, that would explain the upsurge in LW AI posts.

4Raemon2y
LW and AF are secretly just the same forum. Yes, there's been an upsurge, but it doesn't really tell you anything you didn't already know. (I guess it matters somewhat for site-culture whether people are primarily orienting as "Alignment Forum user" or "LessWrong user", but either way in practice they end up commenting on LessWrong since that's where most of the discussion is)
3DirectedEvolution2y
My understanding is the AI forum has a higher quality bar and automatically crossposts to LW. I was wondering if LW is absorbing a lot of new lower-quality posts that didn't make the cut for AI forum, with AI forum being in a steady state. It sounds like there's just more AI posting across the board.

Concerning the Sequences:

I believe the main thing they lack is structure. They address lots of topics from lots of angles and I don't see the "map"; I often fail to see them in context. Introducing a tree structure[1] would not only help to orient yourself while reading, but could also make maintaining easier. A (non-restricting) progression system with prerequisites could also be implemented for better guidance. However, I am very aware of the time cost and very unsure of the efficiency. 

  1. ^

    e.g. epistemic/operative could be the first-layer category

... (read more)

Thank you for writing this.

2 · DirectedEvolution · 2y
I noticed you haven't been posting much over the last few months. I was wondering if the proliferation of AI posts had anything to do with that?
5 · lsusr · 2y
Nope. It's just due to personal life circumstances.
4 · DirectedEvolution · 2y
Hope to hear more from you in the future!

It felt to me like there's too much for my taste. My impression was that you guys were optimizing for AI content, perhaps related to the percentage of people at Lightcone coworking who are AI researchers vs. those working on other subjects.


For my part, I often find myself annoyingly scrolling past all the non-AI posts. Most of the value I get from LW is AI Safety discussion from a wider audience (e.g. I don't have AF access and many good alignment posts are from AF nonmembers).

I don't really like trying to suppress LW's AI flavour.

3 · Raemon · 1y
Have you set your tag filtering to emphasize AI stuff more? (see the top of the Latest section on the home page)
1 · DragonGod · 1y
Rationality -1000 doesn't get rid of the curated rationality posts. Is there a way I could properly suppress curated posts for tags I'm downweighting?
3 · Raemon · 1y
Ah, no, curated is handled separately. (They normally live in a separate section where this is more obvious)

AI content is boring if you're not in that field. I'd like a button to quickly filter out all AI-tagged posts.

5 · Kaj_Sotala · 2y
Hover over the "AI" tag in "Latest", click on the "Hidden" button.
1 · David Gretzschel · 2y
Thanks, I did not see it nor think to explore, as websites aren't normally so sophisticated and feature-rich.

I see only one clean solution to this problem: let anyone post to the AI Alignment Forum, stop automatically crossposting to Less Wrong, and (somehow) nudge people who post AI content to Less Wrong to instead post it to the AI Alignment Forum. There should be three separate platforms:

  • Less Wrong, for rationality
  • AI Alignment Forum, for AI
  • EA Forum, for effective altruism

Currently, only effective altruism has its own platform, while Less Wrong and the AI Alignment Forum are insufficiently separated.

This way people interested in rationality don't hav...

I enjoyed a lot of the other content, and hence am now much more inclined to read the EA Forum rather than LessWrong. These changes could mean that people like me, who are primarily interested in progress studies and in applying science and reasoning to better humanity and themselves, may miss out on relevant AI content when they move to another site. Then again, perhaps the EA Forum is more relevant to me anyway and I should spend more time reading it.

Love this post! What attracted me to LessWrong was Rationality/World Modeling content, and AI safety content has mostly been secondary for me.

[+] [comment deleted] · 2y · 73