Wiki-Tags in Need of Work

Mind Crime occurs when a computational process which has moral value is mistreated. For example, an advanced AI trying to predict human behavior might create simulations of humans so detailed as to be conscious observers, which would then suffer through whatever hypothetical scenarios the AI wanted to test and then be discarded...

dath ilan is a fictional world and civilization invented by Eliezer Yudkowsky. It is a parallel version of Earth where society is much better at coordination and various good policies have been implemented, though its technology is only about as advanced as Earth's, if not slightly less so. It can be thought of as a more practical and realistic form of Economist's Paradise. Eliezer first introduced it in his April Fools' Day post 'My April Fools Day Confession', where he claimed that he actually came from that world and none of his ideas were original...

Lightcone Infrastructure is the parent organization of LessWrong. Its mission is to build infrastructure for people working to improve the long-term future of humanity. It was announced on October 1st, 2021...

A Formal Proof is a finite sequence of steps from axioms or previously derived results, each of which strictly follows the allowed rules of inference of the mathematical system in which it exists. Formal proofs are used to establish statements as true within a mathematical framework in a way that can be independently verified with extremely high certainty. The most reliable flavor is the machine-checked proof generated with a proof assistant, since it leaves even less room for human error.
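For illustration, here is a minimal sketch of what such a machine-checked proof can look like in Lean 4. The theorem names are made up for the example; the lemmas `Nat.add_comm` and `Nat.succ_pos` come from Lean's standard library, and the proof assistant's kernel verifies every inference step mechanically.

```lean
-- Two tiny machine-checked proofs in Lean 4. The kernel validates
-- each step against the system's inference rules, leaving almost
-- no room for human error.

-- Commutativity of addition on the natural numbers, proved by
-- appealing to a standard-library lemma:
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Every natural number's successor is positive:
theorem succ_pos_example (n : Nat) : 0 < n + 1 :=
  Nat.succ_pos n
```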

Language Models are a class of AI trained on text, usually to predict the next word or a word which has been obscured. They can generate novel prose or code from an initial prompt, which gives rise to a kind of natural language programming called prompt engineering. The most popular architecture for very large language models is the transformer, which follows consistent scaling laws with respect to model size, data, and training compute, meaning that a larger model will produce results that are better by a predictable amount (when measured by 'perplexity', or how surprised the model is by a test set of human-generated text)...
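To make the perplexity measure concrete: it is the exponential of the average negative log-probability the model assigns to each actual next token in held-out text. A minimal sketch in Python (the `perplexity` helper is hypothetical, written just for this example):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(average negative log-likelihood).

    token_probs: the probability the model assigned to each actual
    next token in a held-out text. Lower perplexity means the model
    was less 'surprised' by the text.
    """
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# A model that assigns probability 0.25 to every correct next token
# is exactly as 'surprised' as a uniform guess over 4 alternatives:
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # 4.0
```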

Autonomous Weapons are military systems designed to operate without a human in the loop. Some have argued that they may present a destabilizing influence on geopolitics, or even a potential existential risk...

Extraterrestrial Life...

stub

Recent Tag & Wiki Activity

("The Correspondence Bias", paper by Gilbert and Malone(PDF)

Scaling Laws

It is possible; you just paste the image, apparently. Thanks Yoav Ravid for the tip.

Agency

There seems to be a minor autocorrect issue in this article. It seems to me the example might have been intended to read "or ability" instead of "of ability".

dath ilan

"God of the EA community"? The majority of my city's EA community doesn't even know who Yudkowsky is, and of the few who do most have ambivalent opinions of him.

Related Pages

Some examples of dath ilan policies:

  • Standard education includes rationality training
  • Land Value Tax, positional-goods tax, status-goods tax, marketing tax - and no income tax.
  • Movable Homes
  • Autonomous electric cars in tunnels instead of ICE cars on roads.
  • No streetlights at night, except for red lights along walkways, which also turn off for 45 minutes each night so people can see the sky. And on the winter solstice (the night of stars) the lights stay off the whole night, giving a perfect view of the night sky.

Mind Crime occurs when a computational process which has moral value is mistreated. For example, an advanced AI trying to predict human behavior might create simulations of humans so detailed as to be conscious observers, which would then suffer through whatever hypothetical scenarios the AI wanted to test and then be discarded.

Mind crime on a large scale constitutes a risk of astronomical suffering.

Mind crime is different from other AI risks in that the AI need not even affect anything outside its box for the catastrophe to occur.

The term was coined by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies.

Not the same as thoughtcrime, a term for having beliefs considered unacceptable by society.

See also: Risks of Astronomical Suffering

Mind Crime

"Mind Crime" was the term Bostrom used in Superintelligence. I don't know of a better term that covers the same things.

Usually when people talk about mind crime they're talking about torture simulations or something similar, which is different than the usual use of "thought crime". My sense is that if you really believed that thinking certain thoughts was immoral, thought crime would be a type of mind crime, but I'm not sure if anyone has used the term in that way.

Edit: https://www.lesswrong.com/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models says:

Many computations may produce entities that are morally relevant because, for example, they constitute sentient beings that experience pain or pleasure. Bostrom calls improper treatment of such entities “mind crime”. 

so maybe the accepted meaning is narrower than I thought and this wiki page should be updated accordingly.

Edit x2:

I reread the relevant section of Superintelligence, which is in line with that, and have rewritten the page.

Mind Crime

This is different from a thought crime, right? I would distinguish in the page description. Otherwise, if it's not already an accepted term, I would consider changing it to avoid confusion.


Mind Crime is anything an agent can do using its own computational resources that is morally wrong. The most commonly used example is an AI creating a conscious simulation of a human being tortured.

Mind Crime is anything an agent can do just by thinking that is morally wrong. The most commonly used example is an AI creating a conscious simulation of a human being tortured.


The Nonlinear Library is a project that uses high-quality text-to-speech software to read posts from LessWrong and the EA Forum out loud. You can read their announcement post here, or listen to it here.