Performing homomorphic operations requires the public key, which also lets anyone encrypt new values and carry out further hidden computations under that key. The private key is what allows decryption of the values.
I suppose you could argue that the homomorphically encrypted mind exists à la mathematical realism even if the public key is destroyed, but it would be something "outside reality" computing future states of the encrypted mind after the public key is no longer available.
It's possible to alter a homomorphic computation in arbitrary ways without knowing the decryption key.
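To make the "public key computes, private key decrypts" point concrete, here's a minimal sketch using the additively homomorphic Paillier scheme as a stand-in for full FHE (toy parameters, purely illustrative and not secure): anyone holding only the public key can encrypt new values and combine or perturb existing ciphertexts, but only the holder of the private key can decrypt the results.

```python
# Toy Paillier: additively homomorphic, illustrating that the public key alone
# suffices for encryption and for (blind) manipulation of ciphertexts, while
# the private key is needed to decrypt. Not secure: tiny primes.
import math
import random

def keygen(p=1009, q=1013):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                      # valid because we fix g = n + 1
    return (n,), (lam, mu, n)                 # (public key, private key)

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    ell = (pow(c, lam, n2) - 1) // n          # the "L" function
    return (ell * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 17), encrypt(pk, 25)

# With only the public key, a third party can combine or alter the hidden values:
n2 = pk[0] ** 2
c_sum = (c1 * c2) % n2                        # decrypts to 17 + 25
c_scaled = pow(c1, 3, n2)                     # decrypts to 3 * 17

print(decrypt(sk, c_sum), decrypt(sk, c_scaled))   # 42 51
```

Full FHE schemes additionally support multiplying ciphertexts (and hence evaluating arbitrary circuits), usually via a separate evaluation key, but the asymmetry is the same: encryption and evaluation need only public material, while decryption needs the secret key.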
An omniscient observer could homomorphically encrypt a copy of itself under the same key as the encrypted mind and run that copy as a computation that examines every aspect of the subject's internal mental states, since the two share the same key.
If there are N homomorphically encrypted minds in reality, then the omniscient observer has to create N nested layers of homomorphic computation for the innermost computation to yield observations of all N minds' internal states, each mind passed in turn to a sub-computation; this relies on the premise that homomorphically encrypted minds are conscious in order for the inner observer to be conscious.
The question is whether encoding all of reality and homomorphically encrypting it necessarily causes a loss of fidelity. If yes, one horn of the trilemma still holds. Otherwise there's no trilemma, and the innermost omniscient observer sees all of reality and all internal mental states. I'd argue that for a meaningful omniscient observer to exist, the encoding of reality (into the mind of the observer) must not result in a loss of fidelity. There could be edge cases where a polynomial amount of fidelity is lost to the homomorphic encryption that wouldn't be lost to the "natural" omniscient observer's encoding of reality, but I think that stretches the practical definition of omniscience for an observer.
I think the argument extends to physics, but the polynomial loss of fidelity is more likely to cause problems in a universe heavily populated with homomorphically encrypted minds.
If the argument is that 1e9 very smart humans at 100x speed yield safe superintelligent outcomes "soon", how is that very different from "pause everything now and let N very smart humans figure out safe, aligned superintelligent outcomes over an extended timeframe, on the order of 1e11/N days/years"? It's just time-shifting safe human work.
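(Rough arithmetic behind that equivalence, assuming the work parallelizes and serializes cleanly and that only total human-equivalent effort matters; the value of N below is an arbitrary illustrative choice.)

```python
# Total human-equivalent work: "fast" scenario vs. "paused" scenario.
fast_humans = 1e9                # very smart humans
speedup = 100                    # each running at 100x
work_per_year = fast_humans * speedup     # ~1e11 human-years of work per calendar year

N = 1e4                          # hypothetical smaller pool of very smart humans at 1x
years_to_match = work_per_year / N        # ~1e7 years to match one year of the fast scenario
print(f"{work_per_year:.0e} human-years/year; {years_to_match:.0e} years for N={N:.0e}")
```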
I also worry that billions of very smart, super-fast humans might decide to try building superintelligence directly, as fast as they can, so that we get doom in months instead of years.
I didn't know Corona had a beach vibe, but I have seen a number of Corona ads. Does this mean advertising doesn't have much effect on me (beyond name-brand recognition)? I think I associate Corona more with tacos than anything else.
Go is in that weird spot that chess was in for ~decades[0], where the best humans could beat some of the best engines but it was getting harder, until Rybka, Stockfish, and others closed the door and continued far beyond human ability (measured by Elo). AlphaGo is barely a decade old, and it does seem like progress on games has taken a decade or more to go from the first challenges to human world champions to fully superhuman.
I think that when the deep-learning approach Stockfish adopted became superhuman, it very quickly became dramatically superhuman, within a few months to years, despite years of earlier work and slow growth. There seem to be explosive gains in capability at roughly years-long intervals.
Similarly, most capability gains in math, essay writing, and writing code have had periods of explosive growth and periods of slow growth. So far none of the trends in these three at human level has more than ~5 years of history; earlier systems could provide rudimentary functionality but were significantly constrained by the specially designed harnesses or environments they operated within, as opposed to the generality of LLMs.
So I think the phrase "do X at all" really applies to the general way deep learning has allowed ML to do X with significantly fewer harnesses, or none. Constraint search and expert systems have been around for decades with slow improvements, but deep learning is not a direct offshoot of those approaches, so it's not quite the same "AI" doing X when comparing progress over time.
[0] https://www.reddit.com/r/chess/comments/xtjstq/the_strongest_engines_over_time/
It's the level of detail that's the real risk. Sora or Veo would generate motion video and audio, breathing even more false life into the counterfactual. People get emotionally attached to characters in movies; imagine trying not to form attachments to interactive videos of your own counterfactual children who call you "Mom" or "Dad". Your dead friend or relative could, believably and with emotional weight, talk to you from beyond the grave.
In the past, that's the kind of thing only the ultra-rich could have conceived of having someone fabricate for them, and it would have come with at least some checks and balances. Now kids in elementary school can necromance their dead parent or whatever.
Realistically, I think it will become "normal" to have your counterfactual worlds easily accessible in this way, and new generations will simply adapt and develop internal safeguards against getting exploited by it, much like we learn to deal with realistic dreams. I honestly don't know about the rest of us who hit it later in adulthood.
I'm curious what happens if you try on a different suspension of disbelief: imagine people's lives if they lack only growth mindset and not any other moral or agentic abilities.
I find quite a bit of difference in behavior between smart people who believe things about what and who they are, and the people who believe things about how they have acted, can change, and may act in the future.
Smart people without growth mindset often rabbit-hole into things like legalistic religions and overcoming what they perceive as unalterable weaknesses inherent to their nature, or try to maximize their perceived inherent strengths, ignoring the development of new skills and abilities. Introspection becomes a checklist of how well they've done against a Platonic ideal, with maybe some planning to avoid unwinnable situations.
Smart people with growth mindset usually focus on therapy (e.g. understanding their patterns of behavior and the outcomes and how they might alter those patterns in a persistent way), learning new skills and behaviors, possibly some mind-altering substances, and interacting with a lot of diverse other people to understand how they might change or acquire new beliefs and behaviors. Introspection is an exploration of possibilities and personal history and values and how to begin winning in previously unwinnable situations.
Less smart people tend to follow similar patterns, but more slowly or with more guidance needed to proceed.
I think we have both the bitter lesson, that transformers will continue to gain capabilities with scale, and also optimizations that will apply to intelligent models generally, orthogonally to compute scale. The latter details seem dangerous to publicize widely in case we happen to be in a hardware-overhang world that allows AGI or RSI (which I think could be achieved more easily and sooner by a "narrower" coding agent, then leading rapidly to AGI) on smaller-than-datacenter clusters of machines today.
There's quite a difference between a couple of frontier labs achieving AGI internally and the whole internet being able to achieve AGI on a Llama/DeepSeek base model, for example.
After doing some more research I am not sure that it's always possible to derive a public key knowing only the evaluation key; it seems to depend on the actual FHE scheme.
So the trilemma may be unaffected by this hypothetical. There's also the question of duplication vs. unification for an observer that has the option to stay at base-level reality or enter a homomorphically encrypted computation, and whether those should be considered equivalent (enough).