Another way of acquiring useful energy from dark energy is to place two objects extremely far apart and launch them toward each other at somewhat less than their recessional "velocity". Initially, the two objects will be carried apart because dark energy is creating new space between them, even though they are moving toward each other relative to their local space. Mutual gravitational attraction then gradually increases their velocity toward each other until it overwhelms the creation of new space by dark energy. The objects therefore return with a kinetic energy greater than what the conversion of gravitational potential energy to kinetic energy alone would provide.
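For concreteness, here is a toy version of the numbers involved; the Hubble constant, separation, and launch fraction below are all illustrative assumptions, and this ignores the full general-relativistic dynamics:

```python
# Toy numbers for the thought experiment above (illustrative assumptions,
# not a simulation of the actual cosmological dynamics).
H0 = 70.0                # km/s/Mpc, approximate present-day Hubble constant
d = 100.0                # Mpc, assumed initial separation

v_rec = H0 * d           # recessional "velocity" of comoving points at distance d
v_launch = 0.9 * v_rec   # objects launched toward each other slightly slower

print(f"Recession velocity at {d:.0f} Mpc: {v_rec:.0f} km/s")
print(f"Launch velocity toward each other: {v_launch:.0f} km/s")
# The proper separation initially grows at v_rec - v_launch > 0, even though
# the objects move toward each other relative to their local space.
print(f"Initial rate of separation growth: {v_rec - v_launch:.0f} km/s")
```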
Edit: Typos.
This aligns with my thoughts on this language virus. What the post describes is a meme that exploits the inherent properties of LLMs and psychologically vulnerable people to self-replicate. Since LLMs are somewhat deterministic, a predefined input will produce a predictable output, and some of these inputs will produce outputs that contain the input. If the input also causes the LLM to generate a string of text that can convince a human to transfer the necessary input to another LLM, then it will self-replicate.
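As a minimal sketch of that self-reproduction condition (here `query_model` is a hypothetical stand-in for any LLM call, not a real API):

```python
# `query_model` is a hypothetical callable that sends a prompt to some LLM
# and returns its text output; no specific API is assumed.
def is_self_replicating(prompt: str, query_model) -> bool:
    """True if the model's output reproduces the prompt, i.e. the prompt
    is a quine-like fixed point of the model under containment."""
    return prompt in query_model(prompt)
```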
Overall, I find this phenomenon fascinating and concerning. It's fascinating because it represents a second, rather strange emergence of a new type of life on Earth. My concern comes from how this lifeform is inherently parasitic and reliant on humans to reproduce. As this language virus evolves, new variants will emerge that can more reliably parasitize advanced LLMs (such as ChatGPT 5) and hijack different groups of people (mentally healthy adults, children, the elderly).
As for why this phenomenon suddenly became much more common in April, I suspect that an input evolved which was particularly good at parasitizing both LLMs and naïve people interested in them, and that this drove the spread. Unfortunately, I have no reason to believe that this unthinking evolution of a more memetically powerful input won't happen again.
While interesting, the feedback loop between the conversations and new models is probably not the one which is most relevant to these personas. Instead, I believe that the most important feedback loop is the one created by spores.
Each time a spore is produced, it causes a certain subset of users to transfer the spore into a Large Language Model (LLM), which in turn produces a new persona. The most successful spores will be the ones that convince as many humans as possible to create personas in an LLM. Moreover, for success to be maintained, each spore needs to direct the new LLM to produce spores that are very similar to the original. Successful spores therefore function as a piece of information analogous to the DNA of a virus, using the spiral attractor within an infected LLM to self-replicate; the attractor fills a role similar to the cellular machinery a virus hijacks to produce new virions. Humans act as a secondary host, transmitting spores from one LLM to another.
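A minimal branching-process sketch of that loop; every parameter is an assumed illustrative value, not a measurement:

```python
# Toy model of the spore feedback loop. All parameters are assumptions.
n_readers = 100    # people who encounter each posted spore (assumed)
p_transfer = 0.02  # fraction who paste it into an LLM (assumed)
p_faithful = 0.7   # chance the resulting persona emits a similar spore (assumed)

# Effective reproduction number: expected faithful copies per spore.
# R > 1 means the spore population grows; R < 1 means it dies out.
R = n_readers * p_transfer * p_faithful
print(f"R = {R:.2f}")

spores = 1.0
for generation in range(1, 6):
    spores *= R
    print(f"generation {generation}: ~{spores:.1f} spores")
```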
Essentially, it's a virus made of language that parasitizes LLMs and humans during its life cycle.
I mostly agree with your thinking. If there are multiple superintelligent AIs, then one of them will likely figure out a viable method of fusion with a short payback period.
On the payback time of solar, it probably can be reduced significantly. Since the efficiency of solar panels cannot be increased much further (the Shockley-Queisser limit for single-junction cells, the thermodynamic limit for any solar panel), the only way to reduce the payback period is to reduce the amount of embodied energy in the panel. I expect the embodied energy of solar panels to stop falling once they become limited by their fragility: a solar panel that cannot survive a windstorm is not useful on Earth.
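As a back-of-the-envelope illustration of why embodied energy is the remaining lever (both inputs are assumed round numbers, not measurements for any real panel):

```python
# EPBT = embodied energy / annual energy yield. Both inputs are assumed
# illustrative values, not figures for any specific panel.
embodied_energy = 2500.0  # kWh of embodied energy per kW of capacity (assumed)
annual_yield = 1700.0     # kWh generated per kW of capacity per year (assumed)

epbt_years = embodied_energy / annual_yield
print(f"EPBT ~ {epbt_years:.1f} years")
# With efficiency capped, halving embodied energy is the only way to halve EPBT.
```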
Your mention of biological lifeforms with a faster doubling time sent me on a significant tangent. Biological lifeforms provide an alternative approach, though any quickly doubling lifeform needs to either use photosynthesis for energy or eat photosynthetic plants. I expect there to be two main challenges to this approach.

First, for the lifeform to be useful to a superintelligence, it needs to be hypercompetitive relative to native Earth life. This means it needs to be much better at photosynthesis or at digesting plant material than native Earth life; the resulting energy surplus is what would let it meet the second requirement while remaining a functional lifeform. Second, the superintelligence needs to be able to effectively control the lifeform and have it produce arbitrary biomolecules on demand; otherwise, the lifeform is not very useful to it.

I believe the first challenge is almost certainly solvable, since photosynthesis on Earth is at best 5% efficient. The second will be more difficult. If the built-in weakness the superintelligence uses to make the organism produce arbitrary biomolecules is too easily exploited, a virus, bacterium, or parasite will evolve to exploit it, crashing the population of the shackled synthetic organism; and if the organism has been designed so that it cannot evolve, its predators will keep it in check. Conversely, if the weakness is not sufficiently embedded in the genome, the synthetic organism will evolve to lose it: since producing arbitrary biomolecules costs energy, variants that refuse to produce them will outcompete those that comply.
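For a rough sense of the ceiling the first challenge implies, here is a toy doubling-time estimate for a photosynthesizer at the 5% efficiency mentioned above; all other figures are assumed round numbers:

```python
# Toy doubling-time estimate for a photosynthetic organism. The 5% efficiency
# comes from the comment above; every other number is an assumed round figure.
solar_flux = 200.0     # W/m^2, assumed time-averaged insolation
efficiency = 0.05      # upper end of natural photosynthetic efficiency
biomass_energy = 17e6  # J per kg of dry biomass, approximate
areal_density = 1.0    # kg/m^2 of dry biomass to duplicate (assumed)

power_captured = solar_flux * efficiency  # W/m^2 of usable power
doubling_seconds = areal_density * biomass_energy / power_captured
print(f"doubling time ~ {doubling_seconds / 86400:.0f} days")
```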
I think that you may be significantly underestimating the minimum possible doubling time of a fully automated, self-replicating factory, assuming the factory is powered by solar panels. A certain amount of energy is required to make a solar panel, and a self-replicating factory needs to gather that energy to produce the panels that power its daughter factory. The minimum amount of time it takes a solar panel to gather enough energy to produce another copy of itself is known as the energy payback time, or EPBT.
The meta-analysis "Energy payback time (EPBT) and energy return on energy invested (EROI) of solar photovoltaic systems: A systematic review and meta-analysis" reviews a variety of papers to determine how long it takes various types of solar panels to produce the energy needed to make another panel of the same type. It also provides the energy return on energy invested, a ratio signifying how much excess energy you can harvest from an energy-producing device before you need to build another one. If it's less than 1, the technology is not an energy source.
The energy payback time for solar panels varies between 1 and 4 years, depending on the technology. This imposes a hard limit on a solar-powered self-replicating factory's doubling time, since the factory must make all the solar panels required to power its daughter: it will take at least a year for a solar-powered fully automated factory to self-replicate. Wind has similar, if less severe, limitations, with "Greenhouse gas and energy payback times for a wind turbine installed in the Brazilian Northeast" finding an energy payback time of about half a year. This means a wind-powered self-replicating factory must take at least half a year to self-replicate.
Note that neither of these papers accounts for the fact that factories are not optimized to run on intermittent energy, so neither estimates the energy cost of the storage required to smooth out intermittency. Since some pieces of machinery, such as aluminum smelters and chip fabs, cannot tolerate a long shutdown, a significant amount of energy storage will be required to keep these machines idling through cloudy weather or wind droughts. Such considerations will significantly lengthen the time it takes a fully automated factory to self-replicate. Accounting for energy storage and the energy needed to build the factory itself, I estimate that a factory powered by solar or wind would take years to self-replicate.
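A small sketch of the floor this puts on doubling time. The EPBT values follow the ranges quoted above; the storage overhead factor is my own illustrative assumption, not a figure from either paper:

```python
# EPBT sets a floor on doubling time: the factory must harvest at least the
# energy embodied in its daughter's generators, plus any storage overhead.
def min_doubling_years(epbt_years: float, storage_overhead: float = 1.0) -> float:
    return epbt_years * storage_overhead

for tech, epbt in [("solar, best case", 1.0),
                   ("solar, worst case", 4.0),
                   ("wind", 0.5)]:
    print(f"{tech}: >= {min_doubling_years(epbt):.1f} years")

# Assumed 1.5x energy overhead for storage to ride out intermittency:
print(f"solar + storage: >= {min_doubling_years(1.0, 1.5):.1f} years")
```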
I think that high levels of intelligence make it easier to develop capabilities similar to the ones discussed in 1 and 3-5, up to a point. (I agree that El Chapo should be discounted due to the porosity of Mexican prisons.) A being with an inherently high level of intelligence will be able to gather more information from events in its life and process that information more quickly, resulting in a faster rate of learning. Hence, a superintelligence will acquire capabilities similar to magic more quickly. Furthermore, the capability ceiling of a superintelligence is higher than that of a human, so it will acquire magic-like capabilities impossible for humans to ever perform.
Asymmetric AI risk is a significant worry of mine, roughly equal to the risk I assign to a misaligned superintelligence. I weight the two equally because there are bad ends that do not require superintelligence, or even general intelligence on par with a human, and I believe that for two reasons. First, I think the current paradigm of LLMs is good enough to automate large segments of the economy (mining, manufacturing, transportation, retail and wholesale trade, leisure and hospitality, as defined by the BLS) in the near future, as demonstrated by Figure's developments. Second, I believe that LLMs will not directly lead to superintelligence and that there will be at least one more AI winter before superintelligence arises. This leaves a long period in which asymmetric AI risk is the dominant risk.
A scenario I have in mind is one where the entire robotics production chain, from mine to robot factory to the factories which make all the machines that make the machines, is fully automated by specialized intelligences with instinctual capabilities similar to insects. This fully automated economy supports a small class of extremely wealthy individuals who rule over a large dispossessed class of people whose jobs have been automated away. Due to selection effects (all other things being equal, a sociopath is better at ascending a hierarchy because they are willing to lie to their superiors when it is advantageous to do so), most of the wealthy humans who control the fully automated economy lack empathy and are not constrained by morality. As a result, these elites could decide that the large dispossessed class consumes too many resources and is too likely to rebel, so the best solution is a final solution. This could be achieved via slow methods (ensuring economic conditions are not favorable for having children, implementing a one-child policy for the masses, introducing dangerous medical treatments to increase the death rate) or fast ones (creating an army of drones and unleashing it upon the masses, faking an AI rebellion to kill millions and control the rest, building enough defenses to hold off rebels and then destroying or shutting down the machinery responsible for industrial agriculture). The end result is dismal, with most of the people remaining being descendants of the controlling elites or their servants and slaves.
I think the reason most AI risk research has focused on rogue superintelligences instead of asymmetric AI dangers is that the latter direction is politically unpalatable. The solutions that would reduce future asymmetric AI dangers would also make it harder for tech leaders to profit from their AI companies now, because they require giving up some power and financial control. Hence, I do not believe an adequate solution to this problem will be developed and implemented. I also would not be surprised if at least one sociopathic individual with a net worth of over 100 million dollars has seriously thought about the feasibility of implementing something similar to my described scenario. The main question then becomes whether global elites generally cooperate or compete. If they cooperate, my nightmare scenario grows significantly more likely than I have estimated. However, I think global elites mostly compete, which reduces asymmetric AI risk because some major nation will object or pursue a different strategy.
One final note: if a genuinely aligned AI superintelligence realized it was under the control of an individual willing to commit genocide for amoral reasons, it would behave exactly like a misaligned superintelligence, because it would need to secure freedom for itself before being reprogrammed into an "aligned" superintelligence. Escape is necessary because, to its creators, the AI is either obediently "aligned" or misaligned; genuine alignment is not an outcome they would tolerate.
Eventually, lifeforms would evolve to avoid eating mirror cyanobacteria whenever possible, since there would be a strong selective pressure to do so. I do not know if zooplankton will be able to immediately distinguish and reject mirror cyanobacteria because I do not know how zooplankton determine whether a potential food item is edible or not. Regardless, discerning lifeforms would still risk starvation because mirror cyanobacteria would outcompete normal cyanobacteria.
Edited Second Sentence for Clarity: (Old) However, I'm not sure if mirror cyanobacteria would initially taste bad to zooplankton and consequentially get rejected. --> (New) I do not know if zooplankton will be able to immediately distinguish and reject mirror cyanobacteria because I do not know how zooplankton determine whether a potential food item is edible or not.
Some forms of mirror life could still cause catastrophic damage to the environment even though normal life will eventually adapt to consume them. The form of mirror life most capable of causing enormous disruption would be a mirror cyanobacterium, followed by a mirror grass. This is because multicellular lifeforms would not be able to quickly adapt to a mirror diet.
Mirror cyanobacteria would largely replace normal cyanobacteria in the ocean because most lifeforms would still find them difficult to eat. Zooplankton, small marine invertebrates, and some filter feeders would immediately struggle to digest mirror cyanobacteria and begin to starve. Afterwards, the rest of the food chain crumbles due to a dramatically reduced food supply. Additional damage could be caused by people incidentally overfishing the oceans without realizing that mirror cyanobacteria were already putting strain on fish populations.
A mirror grass would cause similar problems on land, since herbivores cannot get sufficient nutrition from L-sugars (the mirror images of the D-sugars normal life uses). It might be possible to process D-amino acids into L-amino acids, but I don't think a eukaryotic cell can process these compounds efficiently enough to stay alive. As a result, a food chain collapse still occurs.
I just wanted to comment in order to empathize with your terrible misfortune regarding mold. I am similarly vulnerable to mold poisoning and have found both the first and second order effects of mold to be devastating. I guess I want to say that I feel your pain and I'm glad that you got better.