You are viewing revision 1.17.0, last edited by jimrandomh

World Modeling is building an understanding of how the world works. Examples of World Modeling content include math [e.g. 1, 2], physics [1], biology [1, 23], history [1, 2], economics [12, 3], sociology [1, 2], and other scientific fields. World Modeling here typically does not include the parts of artificial intelligence classified as AI Alignment or the parts of psychology classified as Rationality.

From The Twelve Virtues of Rationality:

The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains.

A definition by elimination

Properly considered, the overwhelming majority of content on LessWrong is about modeling how the world is, including almost all posts on Rationality and all practical advice. The intended usage of World Modeling is to capture all content describing how the world is that is not captured by the more specific major tags of Rationality, World Optimization, and AI.

  • The Rationality tag is for content about how the world is in relation to how minds work and what one ought to do in order to reach true beliefs. The question for that category is: does this relate to how I ought to think?
  • The World Optimization tag is for content about how the world is that is relevant to choosing actions in a relatively immediate way. By this definition, it encompasses most posts discussing altruistic methods and targets, as well as practical personal advice. The question for that category is: is this content motivated by the desire to optimize the world?
  • The AI tag is for content about how the world is that is relevant to questions of how advanced artificial intelligence will affect the world and how to ensure outcomes are good. The question for that category is: does this help me make predictions about AI or ensure AI will have good outcomes?

If the answer to all of the above questions is no, then the content is likely both relatively pure world modeling (not about optimizing in any direct way) and not already covered by an existing major category. It is then a good fit for the World Modeling category: content like math, science, and history.

Some more examples

A study of how people historically exercised is World Modeling. Advice on the optimal way to exercise in the present day is World Optimization. A study of the Fall of Rome would be World Modeling. A review of current policies being discussed by people who want to change a present government should be classified as World Optimization; it would also be World Modeling only if it is expected to interest people with no immediate plans to try to alter government, for example a review of the effects of marijuana on productivity, driving, and IQ.