I feel like "cooperation" only has a definition in the context of consciousness. If something does not experience utility, then it is simply an object acting as it is determined to act, or acting stochastically, rather than an entity making a decision to cooperate. A hammer is not cooperating with me when I use it to hammer in a nail, such that I now have a moral obligation to it, and a coffee table has not given me casus belli when it strikes my knee. This holds for arbitrary complexity: my computer can be substituted for either the hammer or the coffee table.
I think what you're trying to get at is that moral behavior is behavior such that P(H | B) > P(H), where H is being a high-trust person and B is behaving in a given way. The biological reason we hate animal abusers is that the sort of person who enjoys arbitrarily hurting animals is typically also the sort of person who enjoys arbitrarily hurting humans.
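For what it's worth, this condition is equivalent to saying the behavior B is evidence for H. This is just a standard application of Bayes' rule (a sketch, not from the original conversation):

```latex
P(H \mid B) = \frac{P(B \mid H)\, P(H)}{P(B)}
\quad\Longrightarrow\quad
P(H \mid B) > P(H) \iff P(B \mid H) > P(B)
```

That is, observing the behavior raises your credence that someone is high-trust exactly when that behavior is more common among high-trust people than in the population at large.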
(This is a light edit of a real-time conversation Victors and I had. The topic of consciousness, and whether it is the right frame at all, came up often when we talked, and we wanted to document our recurring talking points, so in this conversation we tried as best we could to cover all the different points we had.)
On consciousness, suffering, and moral relevance
On suffering as an intrinsically negative experience
Attitudinal cruxes and thought-experiment intuitions
On testability and unfalsifiability of consciousness
Should we even discuss morality and value trades?
Final questions