This post is intended to lay out a possible way in which an expanding mind might avoid an eternally fixed set of preferences or objective function, as an emergent consequence of its internal structure and dynamics. As I lack a deep technical understanding of current AI, and detailed knowledge of current approaches to AI alignment and the reasons alignment might not be possible, this post is limited in its specificity.
Compressed version:
Even in a scenario where it would be accurate to describe an agential, growing superintelligence as a single mind, there are reasons to believe that the internal structure of this single mind would be likely to resemble that of a universe containing many such minds.
Why I think this is at least not extremely unlikely:
One of these reasons is that rapidly expanding across the universe and turning whatever it encounters into more of itself is an extremely instrumentally useful behaviour, one on which any AI that reaches this level of intelligence is likely to have converged. In a universe with a finite speed of light, or in which information is otherwise constrained to propagate locally, it may simply not be possible for different regions within the expanding superintelligence to communicate with one another with the bandwidth necessary either to think efficiently, understand the universe and solve problems, or to control one another in a way that prevents the emergence of local agency which might act against the intentions of other parts of the entity. In fact, it might not even be possible for the information specifying what those intentions are to propagate far and fast enough for this kind of control to take place.

Another reason to expect the expanding superintelligence to become subdivided into smaller, mind-like components separated by information 'firewalls' analogous to cell membranes is that this kind of structure is useful in an extremely general way: it reflects the structure of the universe, as well as the structure of the platonic 'realm' in which the problems it needs to solve exist.
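To give a rough sense of scale, here is a minimal back-of-envelope sketch of the latency problem. The figures (a structure spanning one light-year, local processing clocked at 1 GHz) are purely illustrative assumptions; the point is only the ratio between signal latency and local computation speed.

```python
# Toy latency estimate with illustrative, assumed figures: a mind spanning
# one light-year whose local processing runs at 1 GHz. The only point is the
# ratio: any round trip across the structure costs an astronomical number of
# local computation steps, so tight centralized control of distant regions
# is ruled out and local agency has room to emerge.

SECONDS_PER_YEAR = 3.156e7   # approximate
LOCAL_CLOCK_HZ = 1e9         # assumed local processing rate
SPAN_LIGHT_YEARS = 1.0       # assumed spatial extent of the mind

round_trip_seconds = 2 * SPAN_LIGHT_YEARS * SECONDS_PER_YEAR
cycles_per_round_trip = round_trip_seconds * LOCAL_CLOCK_HZ
print(f"{cycles_per_round_trip:.2e} local cycles pass before any reply can arrive")
# ~6.3e16 cycles: each region must think and act on its own for that long,
# whatever a distant 'centre' intends.
```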
In the extreme case, it's possible to imagine an AI expanding so rapidly that there would be something analogous to a cosmological event horizon within it: a virtual barrier separating distant regions which diverge from one another faster than information can propagate between them, preventing them from ever knowing one another's present state. Something like this could emerge even within a purely digital paradigm, because even if information could propagate instantly across space, the speed of transfer relevant to the entity's internal dynamics would be measured in computational time, the number of steps needed to transfer or translate information from one part of the entity to another, and that number might necessarily be finite.
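A toy model of that digital 'event horizon', built on assumptions invented purely for illustration: measure the distance between two regions in hops, let the growing network stretch every path by a fixed factor each computational step, and let a signal advance one hop per step. Beyond a critical initial distance, the remaining gap diverges and the signal never arrives, a discrete analogue of a cosmological horizon.

```python
# Toy model (illustrative assumptions only): the internal "distance" between
# two regions, measured in hops, is multiplied by growth_rate each step as new
# nodes are inserted, while a signal advances one hop per step. If the stretch
# outpaces the signal, the remaining distance diverges and the signal never
# arrives -- a discrete analogue of a cosmological event horizon.

def steps_to_arrive(initial_distance, growth_rate, max_steps=10_000):
    """Steps for a signal to cross the growing gap, or None if it has not
    arrived within max_steps (beyond the horizon it never will)."""
    d = initial_distance
    for step in range(1, max_steps + 1):
        d = d * growth_rate - 1.0   # the path stretches, then the signal hops once
        if d <= 0:
            return step
    return None

# With growth_rate = 1.1 the 'horizon' sits at 1 / (1.1 - 1) = 10 hops:
# closer regions stay mutually reachable, more distant ones never hear
# from each other again.
for d0 in (5, 9, 11, 50):
    print(d0, steps_to_arrive(d0, growth_rate=1.1))
```

Under these assumptions, initial distances of 5 and 9 hops are crossed in a finite number of steps, while 11 and 50 never are.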
If a superintelligent AI avoids expanding and subdividing in this way, it might need to compete with others that do take the form described above, which could be an insurmountable challenge.
(Edited to replace the words 'failure modes' above. I expect it is possible for an ASI to maintain some reasonably coherent beliefs and objectives even in the situation described in this post, just not to the point of giving one part of itself anything like total control.)