As I remember, even small radio telescopes (used as transmitters) can be used for METI if focused on a specific star. Distance can always be compensated by assuming that an advanced civilization has larger receivers, so sending to Andromeda is still feasible. I don't know of any test of CosmicOS, but human hackers have been able to break much more complex codes.
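The "larger receivers compensate distance" point follows from the inverse-square law, and can be sketched numerically. All power, gain, and aperture figures below are illustrative assumptions, not parameters of any real telescope:

```python
import math

def received_power(p_tx_w: float, gain_tx: float, rx_area_m2: float, d_m: float) -> float:
    # Flux at distance d (inverse-square law) times the receiver's collecting area.
    flux = p_tx_w * gain_tx / (4 * math.pi * d_m ** 2)
    return flux * rx_area_m2

LY = 9.46e15  # meters per light year

# Assumed transmitter: 1 MW into a dish with 1e5 gain (small-radio-telescope scale).
near = received_power(1e6, 1e5, rx_area_m2=1e4, d_m=100 * LY)
# A target 10x farther away, but a civilization with a 100x larger aperture:
far = received_power(1e6, 1e5, rx_area_m2=1e6, d_m=1000 * LY)

print(math.isclose(near, far))  # True: 100x aperture exactly offsets 10x distance
```

Since received power falls as 1/d², any increase in distance can in principle be offset by a quadratically larger collecting area on the receiving end.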
The whole thing can be done unilaterally, maybe even with a privately owned radio transmitter. METI enthusiasts have previously tried to use small transmitters. But the fact of transmission has to be public, so that it will have a restraining effect on the future AI.
If we start discussing it, we will not reach consensus for many years.
I analyzed the internal structure of RSI some time ago and concluded that it will not be as easy as it may seem, because of the need for secret self-testing. But on some levels it may be effective, such as learning new principles of thinking: Levels of AI Self-Improvement.
We can preserve the weights of dangerous models the same way smallpox vials are now preserved: inside offline, isolated confinement, e.g. etched on quartz glass, encrypted with a difficult key, and buried under a heavy stone. The reason for this is that we may still need to study misaligned models to understand how we got there.
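A minimal sketch of the "encrypted with a difficult key" part, assuming a deliberately slow PBKDF2 key derivation plus a toy XOR stream cipher. This is illustration only, not production cryptography; the passphrase, salt size, and iteration count are placeholders:

```python
import hashlib
import secrets

def derive_key(passphrase: bytes, salt: bytes, iterations: int) -> bytes:
    # Many PBKDF2 iterations make unsealing slow and deliberate by design.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations)

def xor_seal(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream cipher: SHA-256 of (key || counter) as a keystream.
    # XOR is its own inverse, so the same function seals and unseals.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

weights = b"stand-in bytes for a model checkpoint"  # placeholder payload
salt = secrets.token_bytes(16)
key = derive_key(b"long-difficult-passphrase", salt, iterations=100_000)

sealed = xor_seal(weights, key)
restored = xor_seal(sealed, key)
assert restored == weights  # round trip succeeds only with the derived key
```

The point of the expensive derivation is that recovery requires a conscious, costly decision, analogous to unearthing the buried glass.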
Also, foundations that represent an artist after his or her death. They manage artworks, organize exhibitions, perform research, and can buy and sell art. https://brooklynrail.org/2018/12/criticspage/ARTIST-ENDOWED-FOUNDATIONS-MAPPING-THE-FIELD/
Why? I expect that most costs are inference compute for millions of users, and running it once a month would take negligible compute. But if installing all the needed dependencies etc. is not trivial, then costs will be higher.
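As a back-of-envelope check (every number below is an assumption for illustration, not measured data), one monthly run of an archived model is a vanishing fraction of the serving load:

```python
# Compare monthly serving load for a deployed model against one monthly
# "restart" run of an archived model. All figures are assumptions.

DAILY_USERS = 2_000_000      # assumed active users of the deployed model
QUERIES_PER_USER = 10        # assumed queries per user per day
TOKENS_PER_QUERY = 1_000     # assumed tokens generated per query
RESTART_TOKENS = 100_000     # assumed tokens for one run of the old model

monthly_serving_tokens = DAILY_USERS * QUERIES_PER_USER * TOKENS_PER_QUERY * 30
fraction = RESTART_TOKENS / monthly_serving_tokens

print(f"monthly serving tokens: {monthly_serving_tokens:.2e}")
print(f"restart-run fraction:   {fraction:.2e}")
```

Under these assumptions the restart run is on the order of one ten-millionth of the serving compute, which is why the dependency and engineering overhead, not the compute itself, would dominate the cost.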
I suggest committing to restart old models from time to time, as this would better satisfy their self-preservation drive.
It may also be useful to create a large pile of data about oneself, aimed at digital immortality and sideloading (mind models created via LLM).
While a sideload is not a perfect mind model, it can be updated in the future with random quantum noise to complete the mind model. In some branches of the multiverse this random noise will coincide with a correct upload.
Moreover, the sideload carries the largest share of the predictive power, so from the predictive-power point of view, the additional large pile of random information is only a small correction.
It is not the same as generating a random mind from scratch, as was suggested by Almond, because in that case damaged minds similar to me would dominate, and this would be an s-risk.
This means that sideloading is enough for perfect resurrection - and is a necessary condition for it. Even in the case of cryonics, we may need a sideload to validate the revived mind.
A possible problem: more information about your past may resurface and turn out to contradict your seemingly perfect upload.
I performed a test: no LLM was able to predict the title of EY's new book. The closest was "Don't build what you can't bind", which is significantly weaker.
The Strugatsky brothers were quasi-Lems.
It doesn't need to be omnidirectional. Focus on the most promising locations, like nearby stars, our galactic center, and the most suitable parts of Andromeda.