I have an idea that is no contender with the serious formal reasoning that people who know a great deal about AI are able to do, but which I nonetheless think could be useful for them to hear. The idea is that for a mind (in a broad sense of the word) to have any aim, it must simultaneously aim to preserve itself long enough to undertake the actions that serve that aim, and that an inbuilt cooperative foundation for all minds follows from this.

To try to concretize this, I would say that, for example, a human being is preserving their own being from one moment to the next, and that each of these moments could be viewed in objective reality as an "empty individual" or completed physical thing. Whatever the fundamental physical reality of a moment of experience I'm suggesting that that reality changes as little as it can. Because of this human beings are really just keeping track of themselves as models of objective reality, and their ultimate aim is in fact to know and embody the entirety of objective reality (not that any of them will succeed). This sort of thinking becomes a next to nothing, but not quite nothing, requirement for any mind, regardless of how vastly removed from another mind it is, to have altruistic concern for any other mind in the absolute longest term (because their fully ultimate aim would have to be the exact same).

1 comment

I don't actually understand this, and I feel like it needs to be explained a lot more clearly.

"Whatever the fundamental physical reality of a moment of experience I'm suggesting that that reality changes as little as it can." - What does this mean? Using the word "can" here implies some sort of intelligence "choosing" something. Was that intended? If so, what is doing the choosing? If not, what is causing this property of reality?

"Because of this human beings are really just keeping track of themselves as models of objective reality, and their ultimate aim is in fact to know and embody the entirety of objective reality (not that any of them will succeed)." - Human beings don't seem to act in the way I would expect them to act if this were their goal. For instance, why do I choose to eat foods I know are tasty and take routes to work I've already taken, instead of seeking out new experiences every time and widening my understanding of objective reality? What difference would I expect to see in human behaviour if this ultimate aim were false?

"This sort of thinking becomes a next to nothing, but not quite nothing, requirement for any mind, regardless of how vastly removed from another mind it is, to have altruistic concern for any other mind in the absolute longest term (because their fully ultimate aim would have to be the exact same)." - I don't understand how this point follows from your previous ones, or even what the point actually is. Are you saying "All minds have the same fundamental aim, therefore we should be altruistic to each other"?