There are two points of view. Either such an AI will be immortal: it will find a way to overcome the possible end of the Universe and will perform an infinite number of computations. Tipler's Omega Point, for example, is immortal in this sense.
Or the superintelligent AI will eventually die, a few billion years from now, and will thus perform only a finite number of computations (this idea underlies Bostrom's "astronomical waste" argument).
The difference has important consequences for the AI's final goals and for our utilitarian calculations. In the first case (the possibility of AI immortality), the AI's main instrumental goal is to find a way to survive the end of the universe.
In the second case, the AI's goal is to create as much utility as possible before it dies.
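One minimal way to make the contrast explicit (a sketch only; the utility rate $u(t)$ and horizon $T$ are assumed notation, not part of the original argument) is to write total utility as

$$U_{\text{total}} = \int_{0}^{T} u(t)\,dt .$$

In the immortal case $T \to \infty$, so $U_{\text{total}}$ can grow without bound and almost all expected value hinges on whether the AI survives the end of the universe; in the mortal case $T$ is a few billion years, and the calculation reduces to maximizing the integral over that finite window.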