Here is my take: since there's so much AI content, it's not really feasible to read all of it, so in practice I read almost none of it (and consequently visit LW less frequently).
The main issue I run into is that for most posts, on a brief skim it seems like basically a thing I have thought about before. Unlike academic papers, most LW posts do not cite previous related work nor explain how what they are talking about relates to this past work. As a result, if I start to skim a post and I think it's talking about something I've seen before, I have no easy way of telling if they're (1) aware of this fact and have something new to say, (2) aware of this fact but trying to provide a better exposition, or (3) unaware of this fact and reinventing the wheel. Since I can't tell, I normally just bounce off.
I think a solution could be to have a stronger norm that posts about AI should say, and cite, what they are building on and how it relates / what is new. This would decrease the amount of content while improving its quality, and also make it easier to choose what to read. I view this as a win-win-win.
and consequently visit LW less frequently
Tangentially, "visiting LW less frequently" is not necessarily a bad thing. We are not in the business of selling ads; we do not need to maximize the time users spend here. Perhaps it would be better if people spent less time online (including on LW) and more time doing whatever meaningful things they might do otherwise.
But I agree that even assuming this, "the front page is full of things I do not care about" is a bad way to achieve it.
tools for citation to the existing corpus of lesswrong posts and to off-site scientific papers would be amazing; e.g., rolling search for related academic papers as you type your comment via the semanticscholar api, combined with search over lesswrong for all proper nouns in your comment. or something. I have a lot of stuff I want to say that I expect and intend to be mostly references to citations, but formatting the citations for use on lesswrong is a chore, and I suspect that most folks here don't skim as many papers as I do. (that said, folks like yourself could probably give people like me lessons on how to read papers.)
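to make this concrete, here's a rough sketch against Semantic Scholar's public Graph API paper-search endpoint (the endpoint and fields exist; the surrounding glue is hypothetical and obviously not an actual LW feature):

```python
import requests

def related_papers(snippet: str, limit: int = 5):
    """Look up papers related to a chunk of draft-comment text via Semantic Scholar."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": snippet,                      # e.g. the sentence or proper noun being typed
            "limit": limit,
            "fields": "title,year,authors,url",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

for paper in related_papers("iterated amplification alignment"):
    authors = ", ".join(a["name"] for a in paper.get("authors", []))
    print(f"{paper.get('title')} ({paper.get('year')}) by {authors}\n  {paper.get('url')}")
```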
also very cool would be tools for linting emotional tone. I remember running across a user study that used a large language model to encourage less toxic review comments; I believe it was in fact an intervention study to see how usable a system was. looking for that now...
AI discourse triggers severe anxiety in me, and as a non-technical person in a rural area I don't feel I have anything to offer the field. I personally went so far as to fully hide the AI tag from my front page and frankly I've been on the threshold of blocking the site altogether for the amount of content that still gets through by passing reference and untagged posts. I like most non-AI content on the site, been checking regularly since the big LW2.0 launch, and I would consider it a loss of good reading material to stop browsing, but since DWD I'm taking my fate in my hands every time I browse here.
I don't know how many readers out there are like me, but I think it at least warrants consideration that the AI doomtide acts as a barrier to entry for readers who would benefit from rationality content but can't stomach the volume and tone of alignment discourse.
I'm here pretty much just for the AI-related content and discussion, and only occasionally click on other posts randomly: so I guess I'm part of the problem ;). I'm not new, I've been here since the beginning, and this debate is not new. I spend time here specifically because I like the LW format/interface/support much better than reddit, and LW tends to have a high concentration of thoughtful posters with a very different perspective (which I tend to often disagree with, but that's part of the fun). I also read /r/MachineLearning/ of course, but it has different tradeoffs.
You mention filtering for Rationality and World Modeling under More Focused Recommendations - but perhaps LW could go farther in that direction? Not necessarily full subreddits, but it could be useful to have something like per-user ranking adjustments based on tags, so that people could more fully configure/personalize their experience. Folks more interested in Rationality than AI could uprank the former and then see more of it than the latter, etc.
AI needs Rationality, in particular. Not everyone agrees that rationality is key, here (I know one prominent AI researcher who disagreed).
There is still a sign...
but it could be useful to have something like per-user ranking adjustments based on tags, so that people could more fully configure/personalize their experience.
Just to be clear, this does indeed exist. You can give a penalty or boost to any tag on your frontpage, and so shift the content in the direction of topics you are most interested in.
I would say "tragically flawed": noble in their aspirations and very well written, but overconfident in some key foundations. The sequences make some strong assumptions about how the brain works and thus the likely nature of AI, assumptions that have not aged well in the era of DL. Fortunately the sequences also instill the value of updating on new evidence.
What concretely do you have in mind here?
Back when the sequences were written in 2007/2008 you could roughly partition the field of AI based on beliefs around the efficiency and tractability of the brain. Everyone in AI looked at the brain as the obvious single example of intelligence, but in very different lights.
If brain algorithms are inefficient and intractable[1] then neuroscience has little to offer, and instead more formal math/CS approaches are preferred. One could call this the rationalist approach to AI, or perhaps the "and everything else approach". One way to end up in that attractor is by reading a bunch of ev psych; EY in 2007 was clearly heavily into Tooby and Cosmides, even if he has some quibbles with them on the source of cognitive biases.
From Evolutionary Psychology and the Emotions:
An evolutionary perspective leads one to view the mind as a crowded zoo of evolved, domain-specific programs. Each is functionally specialized for solving a different adaptive problem that arose during hominid evolutionary history, such as face recognition, foraging, mate choice, heart rate regulation, sleep management, or predator vigilance, and each is activated by a different set of cues from the environment.
From t...
Meanwhile Rationality A-Z is just super long. I think anyone who's a longterm member of LessWrong or the alignment community should read the whole thing sooner or later – it covers a lot of different subtle errors and philosophical confusions that are likely to come up (both in AI alignment and in other difficult challenges)
My current guess is that the meme "every alignment person needs to read the Sequences / Rationality A-Z" is net harmful. They seem to have been valuable for some people, but I think many people can contribute to reducing AI x-risk without reading them. I think the current AI risk community overrates them because its members are strongly selected for having liked them.
Some anecdotal evidence in favor of my view:
I'm 100% with you. I don't like the current trend of LW becoming a blog about AI, much less a blog about how AGI doom is inevitable (and in my opinion there have been too many blog posts about that, with some exceptions of course). I have found myself lately downvoting AI-related posts more easily and upvoting content not related to AI more easily too.
I weakly downvoted your comment:
I think the solution to "too much AI content" is not to downvote the AI content less discriminately. If there were many posts with correct proofs in harmonic analysis being posted to LessWrong, I would not want to downvote them; after all, they are not wrong in any important sense, and may even be important for the world!
But I would like to filter them out, at least until I've learned the basics of harmonic analysis to understand them better (if I desired to do so).
For what it's worth, I think I am actually in favor of downvoting content of which you think there is too much. The general rule for voting is "upvote this if you want to see more like this" and "downvote this if you want to see less like this". I think it's too easy to end up in a world where the site is filled with content that nobody likes, but everyone thinks someone else might like. I think it's better for people to just vote based on their preferences, and we will get it right in the aggregate.
Generally, I would want people to vote on articles they have actually read.
If posts nobody wants to read because they seem very technical get zero votes, I think that's a good outcome. They don't need to be downvoted.
I'm in favor of subforums — from these comments it seems to me that a significant fraction of people are coming to LW either for AI content, or for explicitly non-AI content (including some people who sometimes want one and sometimes the other); if those use cases are already so separate, it seems dumb to keep all those posts in the same stream, when most people are unhappy with that. (Yeah maybe I'm projecting because I'm personally unhappy with it, but, I am very unhappy.)
I used to be fine with the amount of AI content. Maybe a year ago I set a karma penalty on AI, and then earlier this year I increased that penalty to maximum and it still wasn't enough, so a few months ago I hid all things tagged AI, and now even that is insufficient, because there are so many AI posts and not all of them are tagged correctly. I often go to Latest and see AI-related posts, and then go tag those posts with 'AI' and refresh, but this is a frustrating experience. The whole thing has made me feel annoyed and ugh-y about using LW, even though I think there are still a lot of posts here I want to see.
I also worry that the AI stuff — which has a semi-professionalized flavor — discourages more playful and exploratory content on the rest of the site. I miss what LW2.0 was like before this :(
Feature proposal: when a new user joins the forum, they get a list of the most popular tags and a chance to choose which of these topics especially interest/disinterest them. This addresses the problem of new users being unaware of the filtering/sorting by tags feature.
Re: "Content I'd like to see more of":
Naturally paying people to write essays on specific topics is very expensive, but one can imagine more subtle ways in which LW could incentivize people to write on specific topics. To brainstorm a few ideas:
I read via RSS, and I mostly just skip LW articles because it's almost all AI-related, and I'm not interested in that. It would be very nice if I could get a non-AI RSS feed (or even better - a customizable RSS feed where I can include or exclude certain tags).
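For what it's worth, client-side filtering is easy if the feed exposes tags. Here's a minimal sketch with feedparser; it assumes a feed URL and that entries carry category tags, neither of which I've verified for LW's actual feed:

```python
import feedparser

FEED_URL = "https://www.lesswrong.com/feed.xml"   # assumed URL; substitute the real feed
EXCLUDED_TAGS = {"AI"}

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # feedparser exposes RSS <category> elements as entry.tags, when the feed includes them
    tags = {t.term for t in getattr(entry, "tags", [])}
    if tags & EXCLUDED_TAGS:
        continue
    print(entry.title, entry.link)
```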
It really does feel to me, though, that LessWrong is not for me because I'm not interested in AI alignment. LW doesn't strike me as a rationality-focused site. It's a site for talking about AI alignment that offers some rationality content as an afterthought. It sounds like you don't really want to change that, though, so it is what it is.
(Epistemic status: vibes; I went to a few EA cons and subscribed to the forum digest.)
I blame EA. They were simply too successful.
There are the following effects at play:
What a coincidence that the AI-on-LW flood and the cries about the drop in EA Forum quality happened at the same time. I think with the EA Movement growing exponentially in numbers, both sites are getting eternal Septembered.
I think the solution could be to create a new frontpage for AI-related discussions, like "personal blog", "LW frontpage", "AI Safety frontpage" categories. Or go the whole subforum route, with childboards and stuff like that.
I really agree that lesswrong derives some of its richness from being a community where people do a few different things all in the same place -- rationality training, curiosity-driven research, searching for novel altruistic interventions, AI research. Providing tools for people to self-segregate into niches will "work" in the sense that people will do it and probably say that they like it, but will probably lose some of that richness.
Has there been an increase in people reading AI content, proportional to the increase in people writing AI content?
I'm fine with everything on LW ultimately being tied to alignment. Hardcore materialism being used as a working assumption seems like a good pragmatic measure as well. But ideally there should also be room for foundational discussions like "how do we know our utility function?" and "what does it mean for something to be aligned?" Having trapped priors on foundational issues seems dangerous to me.
Currently, every AI alignment post gets frontpaged. If there are too many AI alignment posts on the frontpage it's worth thinking about whether that policy should change.
I personally have AI alignment on -25 karma and Rationality on +25. For my purposes, the current system works well, but then I understand how it works and it's likely that there are other people who don't. New users likely won't understand that they have that choice.
I think it would give the wrong impression to new users when they see that AI alignment is by default on -25 karma, so it's better for new users to give Rationality / Worldbuilding a boost than to set negative values for AI alignment.
I would suspect that most new users to LessWrong are not in...
Even after reading this comment it took me a while to find this option, so for anyone who similarly didn't know about that option:
On the start page, below "Latest", you can add a new filter. Then, click on that filter and adjust the numbers or entirely hide a category.
I'm one of the new readers and found this forum through a Twitter thread that was critiquing it. I have a psychology background and later switched to ML, and I've been following AI ethics for over 15 years, hoping for a long time that the discussion would leak across industries and academic fields.
Since AI (however you define it) is a permanent fixture in the world, I'm happy to find a forum focused on critical thinking either way and I enjoy seeing these discussions on front page. I hope it's SEO'd well too.
I'd think newcomers and non-technical contributors ar...
I agree this is rather a thing, and I kinda feel like the times I look at LessWrong specifically to read up on what people are saying about their latest AI thoughts feel different to me from the times I am just in a reflective / learning mood and want to read about rationality and worldview building. For me personally, I'm using LessWrong for AI content daily, and would prefer to just have a setting in my account which by-default showed nothing but that. Other stuff for me is a distracting akrasia-temptation at this point. I also agree that for a novice / ...
One major reason why there is so much AI content on LessWrong is that very few people are allowed to post on the Alignment Forum.
I analyzed some recent AI posts on LessWrong and found that only about 15% of the authors were also members of the Alignment Forum. I'm personally very interested in AI but I post all of my AI content on LessWrong and not the Alignment Forum because I'm not a member.
Anecdotally, I know several people working full-time on AI safety who are still not members of the Alignment Forum and consequently post all their work on LessWrong.
My rec...
I'm trying to shift my focus more toward AI, due to the likelihood that it will have big impacts over the next decade.
I'd like newbies to see some encouragement to attend a CFAR workshop. But there's not much new to say on that topic, so it's hard to direct people's attention there.
I'm one of those LW readers who is less interested in AI-related stuff (in spite of having a CS degree with an AI concentration; that's just not what I come here for). I would really like to be able to filter "AI Alignment Forum" cross-posts, but the current filter setup does not allow for that so far as I can see.
I'm confused by the sudden upsurge in AI content. People in technical AI alignment are there because they already had strong priors that AI capabilities are growing fast. They're aware of major projects. I doubt DALL-E threw a brick through Paul Christiano's window, Eliezer Yudkowsky's window, or John Wentworth's window. Their window was shattered years ago.
Here are some possible explanations for the proliferation of AI safety content. As a note, I have no competency in AI safety and haven't read the posts. These are questions, not comments on the quality ...
I recently analyzed the past 6 months of LessWrong posts and found that about 25% were related to AI.
I liked this writeup about container logistics, which was relevant to .
Think you have a missing link here. :)
In my experience, LW and AI safety gain a big chunk of legitimacy from being the best at Rationality and among the best places on earth for self-improvement. That legitimacy goes a long way, but only in systems that are externalities to the alignment ecosystem (i.e. the externality is invisible to the 300 AI safety researchers who are already being AI safety researchers).
I don't see the need to retool rationality for alignment. If it helps directly, it helps directly. If it doesn't help much directly, then it clearly helps indirectly. No need to get territ...
we urgently need to distill huge amounts of educational content. I don't know with what weapons sequences 2 will be fought, but sequences 3 will be fought with knowledge tracing, machine teaching, online courses like brilliant, inline exercises, play money prediction markets, etc.
the first time around, it was limited to eliezer's knowledge - and he made severe mistakes because he didn't see neural networks coming. now it almost seems like we need to write an intro to epistemics for a wide variety of audiences, including AIs - it's time to actually write cl...
the content I'm most interested in is from people who've done a lot of serious thinking that's resulted in serious accomplishment.
Raemon, do you selectively read posts by people you know to be seriously accomplished? Or are you saying that you think that a background of serious accomplishment by the writer just makes their writing more likely to be worthwhile?
We're currently filtering the randomized "from the archives" posts to show Rationality and World Modeling posts. I'm not sure whether this makes sense as a longterm solution, but it still seems useful as a counterbalancing force against the deluge of AI content, and for helping users orient to the underlying culture that generated that AI content.
I would suggest having this filter applied 2/3rds of the time, but having it function as it did previously, drawing from all posts, the other 1/3rd of the time. That way, it is still biased towards Rationality and World Modelling, but not exclusively focused on them.
First some background on me, then some thoughts.
I am an alignment researcher and I read LW and AF occasionally. I tend to focus more on reading academic papers, not the alignment blogosphere. I read LW and AF mostly to find links to academic papers I might otherwise overlook, and for the occasional long-form analysis blogpost that the writer(s) put several months into writing. I am not a rationalist.
What I am seeing on LW is that numerically, many of the AI posts are from newcomers to the alignment field, or from people who are just thinking abou...
How do I find more 'fact posts'? That's probably my favourite type of content here, and I clicked on the hyperlink hoping that it was a content tag and I could spend a few hours gorging!
Randomise karma-boosts for each topic every day. Or let there be an "AI day", "Rationality day", "Practical day", etc. where the topic gets relatively promoted to the frontpage, but have it be luck of the draw rather than specific days. Just so writers have less of an incentive to withhold posting something to wait for the perfect day.
If readers visit the forum and see 90% AI every day, they'll probably have more of an impression that this is an AI forum, compared to if they see the same proportion of AI posts over a week, but not every day is an AI day.
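A minimal sketch of the luck-of-the-draw version, with made-up boost values, just to make the proposal concrete:

```python
import datetime
import random

CORE_TAGS = ["AI", "Rationality", "World Modeling", "World Optimization", "Practical", "Community"]

def todays_promoted_tag(day: datetime.date | None = None) -> str:
    """Deterministically pick one tag per calendar day, the same for every visitor."""
    day = day or datetime.date.today()
    return random.Random(day.isoformat()).choice(CORE_TAGS)

def effective_karma(karma: int, tags: list[str], boost: int = 25) -> int:
    """Rank a post as if it had extra karma whenever it carries today's promoted tag."""
    return karma + (boost if todays_promoted_tag() in tags else 0)
```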
I agree that I would like to see LessWrong be a place for rationality, not just for AI. A concern: The listed ways you try to encourage rationality discussion seem to be too little dakka to me.
People are busy and writing up ideas takes a lot of time. If you want to encourage people to post, you'll probably have to provide a lot of value for posting to LessWrong. Commissioning is the straightforward approach, but as you mention it is expensive. I like the proofreading service and it's probably one of the main things that's made me post to LessWrong.
I'm not ...
As a new member and hardcore rationalist/mental optimizer who knows little about AI, I've certainly noticed the same thing in the couple weeks I've been around. The most I'd say of it is that it's a little tougher to find the content I'm really looking for, but it's not like the site has lost its way in terms of what is still being posted. It doesn't make me feel less welcome in the community, the site just seems slightly unfocused.
I don't know, but it sounds like an obvious use case for a subforum? The solutions listed above seem hackish.
I'm personally rather annoyed by all the AI discussion here and have been a lot less engaged recently. I would like to see much more rationality-type content.
Is AI not itself rationality?
Every AI system is trying to minimize error in some way, regardless of the method. While systems have biases and get stuck in local minima, the algorithms used to develop ML models are rational. The machine doesn't believe in things it has no evidence for; it doesn't develop policies that don't improve reward if it's trained with RL; training stops once the model stops getting better; and the machine doesn't age or ask its friends for faulty input data, it gets the same data it always gets.
AI systems can be wrong and suboptimal but do...
I just want to register that the LessWrong team seems like they're doing a great job here! I didn't even notice whispering rationality in my ear as a deliberate thing, very subtle :)
Has there also been an upsurge in posting on the alignment forum? If so, given that AF content is automatically cross posted here, that would explain the upsurge in LW AI posts.
Concerning the Sequences:
I believe the main thing they lack is structure. They address lots of topics from lots of angles and I don't see the "map"; I often fail to see them in context. Introducing a tree structure[1] would not only help you orient yourself while reading, but could also make maintenance easier. A (non-restricting) progression system with prerequisites could also be implemented for better guidance. However, I am very aware of the time cost and very unsure of the efficiency.
e.g. epistemic/operative could be the first-layer category
It felt to me like there's too much for my taste. My impression was that you guys were optimizing for it being about AI content, somewhat related to the % of people involved at Lightcone coworking being AI researchers vs other subjects.
...My sense of what happened was that in April, Eliezer posted MIRI announces new "Death With Dignity" strategy, and a little while later AGI Ruin: A List of Lethalities. At the same time, PaLM and DALL-E 2 came out. My impression is that this threw a brick through the Overton window and got a lot of people going "holy christ AGI ruin is real and scary". Everyone started thinking a lot about it, and writing up their thoughts as they oriented.
Around the same time, a lot of alignment research recruitment projects (such as SERI MATS or Refine) started paying div
For my part, I often find myself annoyingly scrolling past all the non-AI posts. Most of the value I get from LW is AI Safety discussion from a wider audience (e.g. I don't have AF access and many good alignment posts are from AF nonmembers).
I don't really like trying to suppress LW's AI flavour.
AI content is boring, if you're not in that field. I'd like a button to quickly filter out all AI-tagged posts.
I see only one clean solution to this problem: Let anyone post at the AI Alignment Forum, do no longer automatically crosspost to Less Wrong, and (somehow) nudge people who post AI content to Less Wrong to instead post it to the AI Alignment Forum. There should be three separate platforms:
Currently, only effective altruism has its own platform, while Less Wrong and the AI Alignment Forum are insufficiently separated.
This way people interested in rationality don't hav...
I enjoyed a lot of the other content and hence am now much more inclined to read the EA forums rather than lesswrong. These changes could mean that people like me, who are primarily interested in progress studies and applying science and reasoning to better humanity and themselves, may miss out on relevant AI content when they move to another site. Then again perhaps the EA forums are more relevant to me anyway and I should spend more time reading these.
Love this post! What attracted me to Lesswrong was Rationality/World Modeling content, and AI safety content has mostly been secondary for me.
Since April this year, there's been a huge growth in the number of posts about AI, while posts about rationality, world modeling, etc. have remained constant. The result is that much of the time, the LW frontpage is almost entirely AI content.
Looking at the actual numbers, we can see that during 2021, no core LessWrong tags[1] represented more than 30% of LessWrong posts. In 2022, especially starting around April, AI has started massively dominating the LW posts.
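(As a rough illustration of how such monthly tag shares might be tallied, here is a sketch assuming a hypothetical CSV export of posts with their core tags; the filename and column names are made up.)

```python
import pandas as pd

# Hypothetical export: one row per post, with a posting date and semicolon-separated core tags.
posts = pd.read_csv("lw_posts.csv", parse_dates=["posted_at"])
posts["month"] = posts["posted_at"].dt.to_period("M")
posts["tags"] = posts["tags"].fillna("").str.split(";")

tag_counts = posts.explode("tags").groupby(["month", "tags"]).size().unstack(fill_value=0)
posts_per_month = posts.groupby("month").size()
tag_share = tag_counts.div(posts_per_month, axis=0)   # fraction of that month's posts carrying each tag

print(tag_share["AI"].tail(12))   # check whether AI exceeds ~30% of posts in recent months
```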
Here are the total posts for each core tag each month for the past couple of years. In April 2022, most tags' popularity remains constant, but AI-tagged posts spike dramatically:
Even people pretty involved with AI alignment research have written to say "um, something about this feels kinda bad to me."
I'm curious to hear what various LW users think about the situation. Meanwhile, here are my own thoughts.
Is this bad?
Maybe this is fine.
My sense of what happened was that in April, Eliezer posted MIRI announces new "Death With Dignity" strategy, and a little while later AGI Ruin: A List of Lethalities. At the same time, PaLM and DALL-E 2 came out. My impression is that this threw a brick through the Overton window and got a lot of people going "holy christ AGI ruin is real and scary". Everyone started thinking a lot about it, and writing up their thoughts as they oriented.
Around the same time, a lot of alignment research recruitment projects (such as SERI MATS or Refine) started paying dividends, resulting in a new wave of people working fulltime on AGI safety.
Maybe it's just fine to have a ton of people working on the most important problem in the world?
Maybe. But it felt worrisome to Ruby and me. Some of those worries felt easier to articulate, others harder. Two major sources of concern:
There's some kind of illegible good thing that happens when you have a scene exploring a lot of different topics. It's historically been the case that LessWrong was a (relatively) diverse group of thinkers thinking about a (relatively) diverse group of things. If people show up and just see the All AI All the Time, people who might have other things to contribute may bounce off. We probably wouldn't lose this immediately
AI needs Rationality, in particular. Maybe AI is the only thing that matters. But, the whole reason I think we have a comparative advantage at AI Alignment is our culture of rationality. A lot of AI discourse on the internet is really confused. There's such an inferential gulf about what sort of questions are even worth asking. Many AI topics deal with gnarly philosophical problems, while mainstream academia is still debating whether the world is naturalistic. Some AI topics require thinking clearly about political questions that tend to make people go funny in the head.
Rationality is for problems we don't know how to solve, and AI is still a domain we don't collectively know how to solve.
Not everyone agrees that rationality is key, here (I know one prominent AI researcher who disagreed). But it's my current epistemic state.
Whispering "Rationality" in your ear
Paul Graham says that different cities whisper different ambitions in your ear. New York whispers "be rich". Silicon Valley whispers "be powerful." Berkeley whispers "live well." Boston whispers "be educated."
It seems important for LessWrong to whisper "be rational" in your ear, and to give you lots of reading, exercises, and support to help you make it so.
As a sort of "emergency injection of rationality", we asked Duncan to convert the CFAR handbook from a PDF into a more polished sequence, and post it over the course of a month. But commissioning individual posts is fairly expensive, and over the past couple months the LessWrong team's focus has been to find ways to whisper "rationality" that don't rely on what people are currently posting.
Some actions we've taken:
Improve Rationality Onboarding Materials
Historically, if you wanted to get up to speed on the LessWrong background reading, you had to click over to the /library page and start reading Rationality: A-Z. It required multiple clicks to even start reading, and there was no easy way to browse the entire collection and see what posts you had missed.
Meanwhile Rationality A-Z is just super long. I think anyone who's a longterm member of LessWrong or the alignment community should read the whole thing sooner or later – it covers a lot of different subtle errors and philosophical confusions that are likely to come up (both in AI alignment and in other difficult challenges). But, it's a pretty big ask for newcomers to read all ~400 posts. It seemed useful to have a "getting started" collection that people could read through in a weekend, to get the basics of the site culture.
This led us to redesign the library collection page (making it easier to browse all posts in a collection and see which ones you've already read), and to create the new Sequences Highlights collection.
Sequence Spotlights
There are a lot of other sequences that the LessWrong community has generated over the years, which seemed good to expose people to. We've had a "Curated Sequences" section of the library but never quite figured out a good way to present it on the frontpage.
We gave curated sequences a try in 2017 but kept forgetting to rotate them. Now we've finally built an automated rotation system, and are building up a large repertoire of the best LW sequences which the site will automatically rotate through.
More focused recommendations
We're currently filtering the randomized "from the archives" posts to show Rationality and World Modeling posts. I'm not sure whether this makes sense as a longterm solution, but it still seems useful as a counterbalancing force against the deluge of AI content, and for helping users orient to the underlying culture that generated that AI content.
Rewritten About Page
We rewrote the About page to simplify it, clarify what LessWrong is about, and contextualize all the AI content.
[Upcoming] Update Latest Tag-Filters
Ruby is planning to update the default Latest Posts tag-filters to either show more rationality and worldbuilding content by default (i.e. rationality/world-modeling posts get treated as having higher karma, and thus get more screentime via our sorting algorithm), or to directly deemphasize AI content.
We're also going to try making the filters more prominent and easier to understand, so people can adjust the content they receive.
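For intuition, here's a minimal sketch of how a tag-based karma adjustment like this could work; the modifier values and post data are made up for illustration and this isn't our actual sorting code:

```python
# Per-user (or default) tag modifiers, in the spirit of the -25/+25 adjustments described above.
TAG_MODIFIERS = {"AI": -25, "Rationality": +25, "World Modeling": +10}

def adjusted_karma(post: dict) -> int:
    """Score a post as its karma plus the sum of modifiers for any matching tags."""
    return post["karma"] + sum(TAG_MODIFIERS.get(tag, 0) for tag in post["tags"])

posts = [
    {"title": "A new alignment agenda", "karma": 60, "tags": ["AI"]},
    {"title": "Noticing confusion in the wild", "karma": 45, "tags": ["Rationality"]},
]
latest = sorted(posts, key=adjusted_karma, reverse=True)   # the Rationality post now sorts first
```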
Can't we just move it to Alignment Forum?
When I've brought this up, a few people asked why we don't just put all the AI content on the Alignment Forum. This is a fairly obvious question, but:
a) It'd be a pretty big departure from what the Alignment Forum is currently used for.
b) I don't think it really changes the fundamental issue of "AI is what lots of people are currently thinking about on LessWrong."
The Alignment Forum's current job is not to be a comprehensive list of all AI content; it's meant to host especially good content with a high signal/noise ratio. All Alignment Forum posts are also LessWrong posts, and LessWrong is meant to be the place where most discussion happens on them. The AF versions of posts are primarily meant to be a thing you can link to professionally without having to explain the context of a lot of weird, not-obviously-related topics that show up on LessWrong.
We created the Alignment Forum ~5 years ago, and it's plausible the world needs a new tool now. BUT, it still feels like a weird solution to try and move the AI discussion off of LessWrong. AI is one of the central topics that motivate a lot of other LessWrong interests. LessWrong is about the art of rationality, but one of the important lenses here is "how would you build a mind that was optimally rational, from scratch?".
Content I'd like to see more of
It's not obvious I want to nudge anyone away from AI alignment work. It does sure seem like this is the most urgent and important problem in the world. I also don't know that I want the site flooded with mediocre rationality content.
World Modeling / Optimization
Especially for newcomers who're considering posting more, I'd be interested in seeing more fact posts, which explore a topic curiously and dig into the details of how one facet of the world works. Some good examples include Scott Alexander's "Much More Than You Wanted To Know" type posts, or Martin Sustrik's exploration of the Swiss Political System.
I also really like to see subject matter experts write up stuff about their area of expertise that people might not know about (especially if they have reason to think this is relevant to LW readers). I liked this writeup about container logistics, which was relevant to discussion of whether we could have quick wins in civilizational adequacy that could snowball into something bigger.
Parts of the world that might be important, but which aren't currently on the radar of the rationalsphere, are also good topics to write about.
Rationality Content
Rationality content is a bit weird because... the content I'm most interested in is from people who've done a lot of serious thinking that's resulted in serious accomplishment. But, the people in that reference class in the LessWrong community are increasingly focused on AI.
I worry about naively incentivizing more "rationality content" – a lot of rationality content is ungrounded and insight-porn-ish.
But, I'm interested in accomplished thinkers trying to distill out their thinking process (see: many John Wentworth posts, and Mark Xu and Paul Christiano's posts on their research process). I'm interested in people like Logan Strohl who persistently explore the micro-motions of how cognition works, while keeping it very grounded, and write up a trail for others to follow.
I think in some sense The Sequences are out of date. They were written as a reaction to a set of mistakes people were making 20 years ago. Some people are still making those mistakes, but ideas like probabilistic reasoning have now made it more into the groundwater, and the particular examples that resonate today are different, and I suspect we're making newer more exciting mistakes. I'd like to see people attempting to build a framework of rationality that feels like a more up-to-date foundation.
What are your thoughts?
I'm interested in hearing people's takes on this. I'm particularly interested in how different groups of people feel about it. What does the wave of AI content feel like to established LessWrong users? To new users just showing up? To AI alignment researchers?
Does this feel like a problem? Does the whole worry feel overblown? If not, I'm interested in people articulating exactly what feels likely to go wrong.
Core Tags are the most common LessWrong topics: Rationality, AI, World Modeling, World Optimization, Community and Practical.