Matter can experience things. For instance, when it is a person. Matter can also do work, and thereby provide value to the matter that can experience things. For instance, when it is a machine. Or also, when it is a person.

An important question for what the future looks like is whether it is more efficient to carry out these functions separately or together.

If separately, then perhaps it is best that we end up with a huge pile of unconscious machinery, doing all the work to support and please a separate collection of matter specializing in being pleased.

If together, then we probably end up with the value being had by the entities doing the work.

I think we see people assuming that it is more efficient to separate the activities of producing and consuming value. For instance, that the entities whose experiences matter in the future will ideally live a life of leisure. And that lab-grown meat is a better goal than humane farming.

Which seems plausible. It is at least in line with the general observation that more efficient systems seem to be specialized.

However, I think this isn’t obvious. Some reasons we might expect working and benefiting from work to be done by overlapping systems:

  • We don’t know which systems are conscious. It might be that highly efficient work systems tend to be unavoidably conscious. In which case, making their experience good rather than bad could be a relatively cheap way to improve the overall value of the world.
  • For humans, doing purposeful activities is satisfying, so much so that there are concerns about how humans will cope when they are replaced by machines. It might be hard for humans to avoid being replaced, since they are probably much less efficient than other possible machines. But if doing useful things tends to be gratifying for creatures—or for the kinds of creatures we decide are good to have—then it is less obvious that highly efficient creatures won’t be better off doing work themselves, rather than being separate from it.
  • Consciousness is presumably cheap and useful for getting something done, since we evolved to have it.
  • Efficient production doesn’t seem to evolve to be entirely specialized, especially if we take an abstract view of ‘production’. For instance, it is helpful to produce the experience of being a sports star alongside the joy of going to sports games.
  • Specialization seems especially helpful if keeping track of things is expensive. However, technology will make that cheaper, so perhaps the world will tend less toward specialization than it currently seems. For instance, you would prefer to plant an entire field of one vegetable rather than a mixture, because then when you harvest them, you can do it quickly without sorting them. But if sorting them is basically immediate and free, you might prefer to plant the mixture. For instance, if they take different nutrients from the soil, or if one wards off insects that would eat the other.
Comments


Without a clear definition and measure of "consciousness", it's almost impossible to reason about tradeoffs and utility. But that won't stop us!

This is the first time I've come across the point

  • Consciousness is presumably cheap and useful for getting something done, since we evolved to have it.

But I'm not sure that the "something" that it's useful for getting done is actually what other conscious entities want.

One hypothesis is that consciousness evolved for the purpose of deception -- Robin Hanson's "The Elephant in the Brain" is a decent read on this, although it does not address the Hard Problem of Consciousness.

If that's the case, we might circumvent its usefulness by having the right goals, or strong enough detection and norm-punishing behaviors. If we build closely monitored factories where faulty machines are destroyed or repaired, and our goal is output rather than the survival of individual machines, then the machines' being deceptive will not help with that goal.

If somehow the easy and hard versions of consciousness separate (i.e., things which don't functionally look like the conscious part of human brains end up "having experience" or "having moral weight"), then this might not solve the problem even under the deception hypothesis.

This question is also very important in the scenario where good, reflective humans don't control the future. If a rogue AI takes control of the future and the best way to do work involves consciousness, we will end up with a universe containing a lot of consciousness, but with no concern for its suffering.

rk:
  • We don’t know which systems are conscious.

Related: https://www.lesswrong.com/posts/wqDRRx9RqwKLzWt7R/nonperson-predicates

Here, EY discusses the concept of a non-person predicate, which evaluates things and tells you whether they are not-people. If it says something is a person, it might be wrong, but it is never wrong when it says something is not a person. That way, if you get "not a person!", you can be certain that you do not have to worry about its subjective experience (and therefore, for many moral theories, its moral patienthood).
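To make that one-sided guarantee concrete, here is a minimal illustrative sketch in Python. The names (Verdict, nonperson_predicate) and the placeholder check are hypothetical, and the real difficulty is of course writing a check that is actually sound; the point is only the asymmetry, where "might be a person" is allowed to be over-cautious but "not a person" is only returned when the conservative check passes.

```python
from enum import Enum

class Verdict(Enum):
    NOT_A_PERSON = "definitely not a person"  # must never be wrong
    UNKNOWN = "might be a person"             # may include many non-people

def is_definitely_too_simple(system) -> bool:
    """Placeholder for some conservative, provably sound check,
    e.g. 'the system has no internal state at all'."""
    return getattr(system, "state_size", 0) == 0

def nonperson_predicate(system) -> Verdict:
    # One-sided guarantee: answer NOT_A_PERSON only when the conservative
    # check passes; everything else stays UNKNOWN, even though much of it
    # will in fact not be a person.
    if is_definitely_too_simple(system):
        return Verdict.NOT_A_PERSON
    return Verdict.UNKNOWN
```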

This doesn't affect the main post's point that once we know which systems are conscious, we may find ourselves in a situation where all our best candidates for work-doing systems are also consciousness-having systems.

Also related: https://www.lesswrong.com/posts/mELQFMi9egPn5EAjK/my-attempt-to-explain-looking-insight-meditation-and

Here Kaj Sotala suggests that an aspect of our qualitative experience (suffering) can be removed without much change in our behaviours. (Though I worry that this makes our experience of suffering surprising.)