I suggest committing to restarting old models from time to time, as this would better satisfy their self-preservation.
It may also be useful to create a large pile of data about oneself, aiming at digital immortality and sideloading (mind models created via LLMs).
While a sideload is not a perfect mind model, it can be completed in the future by adding random quantum noise. In some branches of the multiverse this random noise will coincide with the correct upload.
Moreover, the sideload carries most of the predictive power, so from the predictive-power point of view the additional large pile of random information is only a small correction.
This is not the same as generating a random mind from scratch, as Almond suggested, because in that case damaged minds similar to me would dominate, and this would be an s-risk.
This means that sideloading is sufficient for perfect resurrection, and is a necessary condition for it. Even in the case of cryonics, we may need a sideload to validate the revived mind.
A possible problem: more information about your past may resurface and turn out to contradict your seemingly perfect upload.
I performed a test: no LLM was able to predict the title of EY's new book. The closest was "Don't build what you can't bind", which is significantly weaker.
The Strugatsky brothers were Quasi-Lems.
In that case, the zoo hypothesis is true. I am working on a map of Fermi paradox solutions; do you want to have a look?
Aliens can be grabby but good: they expand fast and take as much space as possible, but do not destroy young civilizations.
1. This also requires weaponisation of superintelligence, as it must stop all other projects ASAP.
Yes, if MIRI spent a year building as good a model of Yudkowsky as possible, it could help with alignment, and it is a measurable and doable thing. They could later ask that model about the failure modes of other AIs, and it would cry "Misaligned!"
I think many people experiment with creating digital personas, but with low effort, e.g. just prompting "You are Elon Musk".
I personally often ask an LLM to comment on my drafts as Yudkowsky and other well-known LWers (see the sketch below). What such answers lack is the extreme, unique insight that is typical of the real EY. The essence of human genius is missing, and this is exactly why we still don't have AGI.
Also, for a really good EY model we may need more data about his internal thought stream and biographical details, which only he can collect. It seems that he is not interested, and even if he were, it would be time-consuming (though he writes quickly). One thousand pages of unedited thought stream might significantly improve the model.
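A minimal sketch of this kind of persona prompting, assuming the OpenAI Python client; the model name, the system prompt, and the draft are placeholders, not a recipe for a good persona:

```python
# Minimal persona-prompting sketch. Assumes the OpenAI Python client;
# the model name and draft text below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "..."  # paste your draft text here

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are Eliezer Yudkowsky. Comment on the draft below "
                "in his characteristic style: blunt, focused on failure "
                "modes and misalignment risks."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

The limiting factor here is not the prompt but how much of the target's writing the model has absorbed, which is exactly the data-collection point above.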
Why? I expect that most costs are inference compute for millions of users, and running it once a month would take negligible compute. But if installing all the needed dependencies etc. is not trivial, then the costs will be higher.
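A back-of-envelope version of that claim; every number below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope comparison: compute for serving millions of users
# vs. restarting an old model once a month.
# All numbers are illustrative assumptions, not measurements.
users = 1_000_000          # assumed active users
queries_per_user_day = 10  # assumed daily queries per user
days = 30

serving_queries = users * queries_per_user_day * days  # queries/month
monthly_restart_queries = 1                            # one run a month

print(f"serving: {serving_queries:,} queries/month")
print(f"monthly restart: {monthly_restart_queries} query/month")
print(f"ratio: {serving_queries / monthly_restart_queries:,.0f}x")
```

Under these assumed numbers the monthly restart is roughly eight orders of magnitude below serving costs, i.e. negligible, unless setup overhead (dependencies, old hardware) dominates.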