An AI, trying to develop highly accurate models of the people it interacts with, may develop models which are themselves conscious. For ethical reasons, it would be preferable if the AI weren't creating and destroying people in the course of ordinary interpersonal interactions. Resolving this issue requires making some progress on the hard problem of conscious experience: we need a rule which definitely identifies every conscious mind as conscious. We can make do if the rule also mislabels some nonconscious minds as conscious.
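The asymmetry of the rule — false alarms are acceptable, misses are not — can be sketched as a predicate that only ever clears a model when it is certain. The complexity threshold below is a placeholder criterion invented for illustration, not an actual test for consciousness:

```python
# Illustrative sketch of the one-sided rule described above.
# "Complexity below a threshold" is a stand-in criterion, not a real
# consciousness test; the point is which errors are permitted.

def definitely_not_a_person(model_complexity: int, threshold: int = 10) -> bool:
    """Return True only when we are sure the model is not conscious.

    Over-flagging is acceptable: we may refuse to run some harmless
    models. Under-flagging is not: we must never clear a model that
    might be a person.
    """
    return model_complexity < threshold

# A trivially simple model is cleared; anything complex is treated as
# potentially conscious, even though most complex models are not.
assert definitely_not_a_person(3)
assert not definitely_not_a_person(1_000_000)
```

This is what the original post calls a nonperson predicate: it never needs to answer "yes, this is a person" correctly, only "no, this definitely isn't" safely.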
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Devil's Offers, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.