Mindcrime

Mindcrime occurs when a computational process which has moral value is mistreated. For example, an advanced AI trying to predict human behavior might create simulations of humans so detailed as to be conscious observers, which would suffer through whatever hypothetical scenarios the AI wanted to test and then be discarded.

Mindcrime on a large scale constitutes a risk of astronomical suffering.

Mindcrime is different from other AI risks in that the AI need not even affect anything outside its box for the catastrophe to occur.

The term was coined by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies.

Not the same as thoughtcrime, a term for having beliefs considered unacceptable by society.

See also: Risks of Astronomical Suffering

"Mind Crime" was the term Bostrom used in Superintelligence. I don't know of a better term that covers the same things.

Usually when people talk about mind crime they're talking about torture simulations or something similar, which is different from the usual use of "thought crime". My sense is that if you really believed that thinking certain thoughts was immoral, thought crime would be a type of mind crime, but I'm not sure if anyone has used the term in that way.

Edit: https://www.lesswrong.com/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models says:

Many computations may produce entities that are morally relevant because, for example, they constitute sentient beings that experience pain or pleasure. Bostrom calls improper treatment of such entities “mind crime”. 

so maybe the accepted meaning is narrower than I thought and this wiki page should be updated accordingly.

Edit x2:

I reread the relevant section of Superintelligence, which is in line with that, and have rewritten the page.

This is different from a thought crime, right? I would distinguish between the two in the page description. Otherwise, if it's not already an accepted term, I would consider changing it to avoid confusion.
