So, the question is "Whose utopia is it anyway?" Clearly not everyone would agree on these utopia/dystopia definitions, so if future AIs are to be created imbued with the noblest of human values, whose value system should their BIOS contain? Of course, this requires confronting the fact that humans hold somewhat diverse value systems, not all of which are friendly towards science or Western libertarian thinking.
Intrepid cryonauts are making a few assumptions, which may or may not be reasonable.
- That in the future it will be possible to revive or otherwise reconstruct them via some as-yet-unspecified [magic happens here] technology.
- That they will be restored to the status of some type of functioning being, either biological or perhaps simulated, or a combination of the two (cyborgian).
- That intelligent entities living in the future will want to restore the remains of creatures who lived in the distant past. These entities may not share the cryonauts' views, and may wish to use the remains for some purpose other than the one the cryonaut originally intended.
- That the restored cryonaut will be able to live some kind of meaningful existence. This depends upon what you regard as "a life worth living". Future entities may only be interested in the cryonaut's remains as a museum curiosity, much as ancient Egyptian mummies are regarded today, or simply as raw material for some other purpose. Incidentally, I think there are some parallels between the beliefs of the ancient Egyptians and those of modern cryonics proponents.
Overall, cryonauts are making a lot of faith-based assumptions about the beliefs and motivations of entities living in the possibly quite distant future, typically assuming those entities to hold belief systems similar to present-day ones.
This seems to happen a lot in computer vision - particularly object recognition - where the prevailing paradigm involves training the system on images taken from numerous perspectives, which then become exemplars for subsequent matching. The problem with this is that it just doesn't scale. Having a large number of 2D training examples and then also matching in 2D works for objects which are inherently flat - like the cover of a book - but not for objects with significant 3D structure, such as a chair. For more effective recognition the system needs to capture the 3D essence of the object, not its innumerable 2D shadows.
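To get a feel for why exemplar matching scales badly, here is a back-of-envelope sketch. All the sampling resolutions are my own illustrative assumptions rather than figures from any real system; it just counts how many 2D templates a naive matcher would need for a flat object versus an object with 3D structure:

```python
# Back-of-envelope sketch (illustrative assumptions, not a real recognizer):
# count the 2D templates a naive exemplar-matching system would store.

# Assumed sampling resolutions (purely hypothetical):
in_plane_step_deg = 15      # step between in-plane rotation templates
scale_levels = 5            # number of scales in an image pyramid
viewpoint_step_deg = 15     # out-of-plane viewpoint step for 3D objects

in_plane = 360 // in_plane_step_deg   # 24 in-plane rotations

# Flat object (book cover): appearance varies only with in-plane
# rotation and scale (ignoring foreshortening).
flat_templates = in_plane * scale_levels

# 3D object (chair): must also sample viewpoints over the view sphere.
# A sphere covers ~41253 square degrees, so a grid with angular step d
# needs roughly 41253 / d**2 viewpoints.
viewpoints = round(41253 / viewpoint_step_deg**2)
solid_templates = viewpoints * in_plane * scale_levels

print(flat_templates)    # -> 120
print(solid_templates)   # -> 21960
```

Even at this coarse 15-degree sampling, the chair needs nearly 200 times as many templates as the book cover, per object - and the gap widens quadratically as the viewpoint sampling gets finer.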