In case you missed it, you can customize your Latest Posts feed on the frontpage by tag. That is, you can promote or reduce the prominence of posts based on their tags, such as AI, Rationality, Practical, etc.

LessWrong's Core Tags[1] are suggested as filters by default, but any tag can be used to limit or promote the visibility of posts. Core Tags are special both in being high-level and in having at least one applied to almost every post (moderators ensure this), meaning they're especially useful for tailoring your Latest Posts.

Improved UI

We've just updated the UI for this feature to make it both more visible, so people notice it, and (hopefully) more intuitive, so it's more immediately obvious how to use it.

Below the Latest Posts heading is a row of Tag Filter buttons. By default, buttons are displayed for the Core Tags, but you can use the '+' button to add any existing tag. Hovering over a button brings up a description of the tag (rewritten to be clearer) and options for changing the visibility:

  • Hidden: posts with <selected tag> will not display in Latest Posts
  • Reduced: posts with <selected tag> will be displayed as though they had 50% of their actual karma
  • Default: posts with <selected tag> are displayed normally
  • Promoted: posts with <selected tag> are displayed as though they had +25 karma. This has a larger effect on low-karma posts
  • Other: select your own modifier. Values less than 1 are applied multiplicatively; values greater than 1 are added to the post's karma

(I think it'd be better if modifiers were multiplicative both for promoting and reducing visibility but things ended up this way for historical reasons and it's not the right resource allocation to change that right now.)
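
To make the arithmetic concrete, here's a minimal sketch of how such a modifier might be applied when computing a post's position in Latest Posts. This is illustrative only, not LessWrong's actual code; the function, constant, and value names are made up, and the semantics simply follow the list above.

```python
# Illustrative sketch only (not LessWrong's implementation) of how tag filter
# modifiers could adjust a post's effective karma in the Latest Posts sort.
# Semantics follow the list above: values < 1 multiply karma, values > 1 are
# added to it, and "Hidden" excludes the post entirely.

HIDDEN = "hidden"  # hypothetical sentinel for the Hidden setting

def effective_karma(karma, modifiers):
    """Return the karma used for sorting, or None if the post is hidden.

    `modifiers` holds one value per matching tag filter, e.g. 0.5 for
    Reduced, 25 for Promoted, 10 for the new default boost on
    Rationality / World Modeling, or HIDDEN.
    """
    adjusted = karma
    for m in modifiers:
        if m == HIDDEN:
            return None       # filtered out of Latest Posts entirely
        elif m < 1:
            adjusted *= m     # Reduced: multiplicative, e.g. 0.5 halves karma
        else:
            adjusted += m     # Promoted (+25) or default boost (+10): additive
    return adjusted

# A 12-karma post with the +10 default boost sorts as if it had 22 karma;
# a 40-karma post whose tag is set to Reduced sorts as if it had 20.
print(effective_karma(12, [10]))   # -> 22
print(effective_karma(40, [0.5]))  # -> 20.0
```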

Rationality and World Modeling are promoted by default

We've just made a change so that posts tagged Rationality and World Modeling are by default given a +10 karma boost in the Latest Posts list (i.e. treated as though they have 10 more karma than they actually do).

This only applies to users who have not already configured their Frontpage Tag Filter settings. If you had already adjusted your filters, they will not be changed.

If you don't like these defaults, you can easily change them to whatever you'd like.
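
For illustration, the "only if you haven't configured anything" behaviour could look roughly like the sketch below. Again, the names and structure are hypothetical, not the site's actual code.

```python
# Hypothetical sketch of the new default tag filters, applied only to accounts
# that have never customized their Frontpage Tag Filter settings.

DEFAULT_TAG_FILTERS = {
    "Rationality": 10,      # +10 karma boost by default
    "World Modeling": 10,   # +10 karma boost by default
}

def filters_for_user(saved_filters):
    """Use the user's saved filters if any exist; otherwise fall back to the defaults."""
    if saved_filters:  # user already adjusted their filters: leave them untouched
        return saved_filters
    return dict(DEFAULT_TAG_FILTERS)
```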

Why did we boost Rationality and World Modeling?

As elaborated in So, geez there's a lot of AI content these days, there's been a real uptick in AI- and Alignment-related content on LessWrong in the last year. This is good! My impression is that much of the "deluge" of AI/Alignment content is valuable progress on Alignment, arguably the most important problem in the world right now. It's pretty cool that LessWrong is host to that discussion, and I think all the time about how the LessWrong mod/dev team can make that discussion go better.

(All posts with 0 or greater karma are included in this graph.)

At the same time, it feels important for new and old users to remember that LessWrong is not defined by its interest in AI. LessWrong's core mission is perfecting the Art of Human Rationality and making intellectual progress, particularly on the world's most important problems. It happens that AI is a very important problem right now, but in worlds where something else was the problem, I hope LessWrong's attention would be focused there.

It's also the case that I expect (though I'm not the most qualified to opine) that successful Alignment solutions will come from those who have studied broadly, rather than those who've narrowly focused on "AI", making it more appropriate that LessWrong not limit itself.

This past year, we've taken multiple actions to reinforce LessWrong's identity as having the generalized values of rationality/truth-seeking/growth-mindset/improving-the-world at its core. For example, we made the Sequences Highlights as a less intimidating way to start reading The Sequences, which are intimidating in length. We also made the Spotlight item, allowing moderators to promote great content about Alignment and other topics alike (unfortunately hidden right now to make space for the Annual Review widget).

Giving Rationality and World Modeling a small boost by default is intended as a signal and reminder of our core values on LessWrong.

(An example Spotlight item that appears at the top of the frontpage.)

Auto-Applied Tags by LLM

This works behind the scenes and isn't something users have to worry about. But it's pretty neat (and a bit scary)! Kudos to jimrandomh for making this.

Regrettably, a major shortcoming of the tag filter system to date has been that tags would often not get applied to posts until many hours after they were published. For example, if you had "AI" set to Hidden, a new AI post could still show up in your feed for several hours until someone tagged it.

To reduce the delay between posting and being tagged, we're now using an LLM to tag posts. I'm impressed with its usefulness and accuracy so far.
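
As a purely illustrative sketch (not the actual pipeline), the auto-tagging step could look something like the code below, where `complete` stands in for whatever LLM API is being called and `CORE_TAGS` lists the tags it's allowed to apply.

```python
# Rough, hypothetical sketch of LLM-based auto-tagging; not the site's actual code.

CORE_TAGS = [
    "Rationality", "AI", "World Modeling", "World Optimization",
    "Practical", "Community", "Site Meta",
]

def complete(prompt):
    """Placeholder for a real LLM API call."""
    raise NotImplementedError

def suggest_core_tags(title, body):
    """Ask the model which core tags apply, keeping only recognized tag names."""
    prompt = (
        "Which of these LessWrong core tags apply to the post below? "
        f"Tags: {', '.join(CORE_TAGS)}.\n"
        "Answer with a comma-separated list of tag names only.\n\n"
        f"Title: {title}\n\n{body}"
    )
    raw = complete(prompt)
    suggested = [t.strip() for t in raw.split(",")]
    return [t for t in suggested if t in CORE_TAGS]  # drop anything unrecognized
```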

In the tag history, the tagging action will be attributed to the special account TagWrong (named by a Dank EA Memes poll, narrowly beating Classifiezer Classifikowsky).

We may expand this to a larger set of tags in the future. The main bottleneck is training data: most tags haven't been reliably applied to applicable historical posts, and we don't have a good way to distinguish between someone evaluating a tag and deciding it doesn't apply vs no one having evaluated the tag, so we don't have a ton of training data. If there are particular tags you'd like to be able to reliably filter on, let us know and we will prioritize adding them to TagWrong's repertoire! 

 

 

  1. ^

    The core tags attempt to be a pretty good orthogonalization of LessWrong's content at a high level. Unfortunately, we didn't find a way to do this successfully that was also perfectly intuitive to everyone.

    It's not important to absorb their exact meanings, as a combination of the moderators and LLM auto-tagging should handle applying these if you're not sure.

    The core tags and their very brief descriptions are:

    Rationality: The Rationality tag is for posts about how to think in ways that more reliably result in you having true beliefs and make decisions that result in attainment of your goals.

    AI: The AI tag is for anything related to AI and ML, including technical alignment, strategy/governance, and timelines forecasting.

    World Modeling: The World Modeling tag is for posts that simply describe how the world is, e.g. physics, math, history, etc.

    Often this tag is superseded by a more specific one like Rationality, AI, or World Optimization.

    World Optimization: The World Optimization tag is for posts about how to make the world better at scale, e.g. altruistic cause areas, society-wide interventions, moral philosophy, etc.

    Practical: The Practical tag is for posts about things you can use to make your life locally better, e.g. health, productivity, relationships, DIY, etc.

    Community: The Community tag is for LessWrong/Rationality community events, analysis of community health, norms and directions of the community, and posts about understanding communities in general.

    Site Meta: Site Meta is for posts about the site itself, including bugs, feature requests, and site policy.

    Common Confusions about Core Tags

    1. Rationality != Optimality. That is, the Rationality tag isn't for everything related to improving things or doing better. Instead, our intended meaning is in line with Rationality: Appreciating Cognitive Algorithms, which states that the word 'rational' is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement.

    2. While many posts are technically "world modeling", the World Modeling tag is intended to point to something like the cluster of standard academic disciplines: math, history, philosophy, economics, etc. If a post is specifically about thinking well or about AI, it should only get those tags and not World Modeling as well. World Modeling is a bit of a tag of last resort.

    3. The World Optimization and Practical tags are both about "making things better", one at a larger scale and one locally. So increasing your own productivity should get the Practical tag, while increasing the GDP of a country is World Optimization.

    4. The difference between World Modeling and World Optimization is subtle. One way to put it is that the former is for posts answering the question how do things work? or simply how are things?, whereas the latter is for posts about how do we make things better?

Comments

I said similar things before, but I don't think karma boosts will be enough in the long run. AI alignment by itself is such a big and important topic that it increasingly dominates everything else. It seems likely that the interest in alignment will only grow further as AI becomes more salient in the world, drowning out all the other content. That is, Less Wrong will become more and more a second AI Alignment Forum. The only clean long-term solution I can see is to handle AI in a separate forum, similar to the Effective Altruism Forum.

A way this could be achieved is to allow everyone to post on the Alignment Forum (currently this is not the case) and to encourage people who post AI content on Less Wrong to at least cross-post to the AI Alignment Forum. Over time this would probably lead more and more AI-interested people to just post to the Alignment Forum. Currently it is not really possible for AI-interested people to just read the Alignment Forum, since many AI posts can only be found on Less Wrong. And the AI Alignment Forum could keep its currently more "exclusive" approach by promoting certain high-quality posts.

I think many people are against this because they see AI / AI alignment as a core part of Less Wrong, and in some ways Less Wrong can even be viewed as the cradle of alignment. Historically this is true! But alignment has grown up now, and it needs more space to stand on its own feet. It will get this space one way or another. The question is whether this will occur at the cost of the rest of Less Wrong or not.

I agree they won't be enough in the long run. I've previously discussed your suggestion of letting everyone post on the Alignment Forum with the team; it doesn't yet seem like the right strategy, but we'll see. At least for now, a little indication via the tag defaults seems better than nothing.

We may expand this to a larger set of tags in the future. The main bottleneck is training data: most tags haven't been reliably applied to applicable historical posts, and we don't have a good way to distinguish between someone evaluating a tag and deciding it doesn't apply vs no one having evaluated the tag, so we don't have a ton of training data. If there are particular tags you'd like to be able to reliably filter on, let us know and we will prioritize adding them to TagWrong's repertoire!

Wait, what set of tags does Classifiezer Classifikowsky apply?

The core tags, which are Rationality, AI, World Modeling, World Optimization, Practical, and Community.