Mars_Will_Be_Ours

Comments
The Industrial Explosion
Mars_Will_Be_Ours · 13d · 10

I mostly agree with your thinking. If there are multiple superintelligent AIs, then one of them will likely figure out a method of viable fusion with a short payback period.

On the payback time of solar: it probably can be reduced significantly. Since the efficiency of solar panels cannot be increased much further (the Shockley-Queisser limit for single-junction cells, the thermodynamic limit for any solar panel), the only way to reduce the payback period is to reduce the amount of embodied energy in the panel. I expect the embodied energy of solar panels to stop falling once they become limited by their fragility: a solar panel that cannot survive a windstorm is not useful on Earth.
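To make that relationship concrete, here is a minimal back-of-the-envelope sketch in Python. Every input is an illustrative assumption of mine (embodied energy, site insolation, efficiency, performance ratio), not a figure from any particular study; the point is only that with efficiency pinned near its physical ceiling, EPBT scales directly with embodied energy.

```python
# Back-of-the-envelope solar EPBT. All inputs are illustrative assumptions.
EMBODIED_ENERGY_KWH_PER_M2 = 800.0   # assumed cradle-to-gate energy of a c-Si module
INSOLATION_KWH_PER_M2_YR = 1700.0    # assumed insolation at a sunny mid-latitude site
MODULE_EFFICIENCY = 0.20             # assumed module efficiency
PERFORMANCE_RATIO = 0.80             # assumed system losses (inverter, soiling, wiring)

annual_yield = INSOLATION_KWH_PER_M2_YR * MODULE_EFFICIENCY * PERFORMANCE_RATIO
epbt_years = EMBODIED_ENERGY_KWH_PER_M2 / annual_yield

print(f"Annual yield: {annual_yield:.0f} kWh/m^2/yr")  # 272 kWh/m^2/yr
print(f"EPBT: {epbt_years:.1f} years")                 # ~2.9 years with these inputs
```

Roughly, EROI ≈ panel lifetime / EPBT, so a ~3-year EPBT and a 30-year lifetime give an EROI around 10; halving embodied energy halves the EPBT and doubles the EROI.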

Your mention of biological lifeforms with a faster doubling time sent me on a significant tangent. Biological lifeforms provide an alternative approach, though any quickly doubling lifeform needs to either use photosynthesis for energy or eat photosynthetic plants. I expect two main challenges. First, for the lifeform to be useful to a superintelligence, it needs to be hypercompetitive relative to native Earth life, meaning much better at photosynthesis or at digesting plant material than anything native. Such traits would give it enough surplus energy to satisfy the second requirement while remaining a competitive organism. Second, the superintelligence needs to be able to effectively control the lifeform and have it produce arbitrary biomolecules on demand; otherwise, the lifeform is not very useful to it. I believe the first challenge is almost certainly solvable, since photosynthesis on Earth is at best 5% efficient.

The second challenge will be more difficult. If the engineered weakness a superintelligence uses to make the organism produce arbitrary biomolecules is too easily exploited, a virus, bacterium, or parasite will evolve to exploit it, crashing the population of the shackled synthetic organism. If the synthetic organism has been designed so that it cannot evolve, its predators will keep it in check. Conversely, if the weakness is not embedded deeply enough in the genome, the synthetic organism will evolve to lose it: variants that refuse to produce arbitrary biomolecules on demand will outcompete those that comply, since producing arbitrary biomolecules costs energy.
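As a rough sanity check on why biology is attractive here, a sketch under stated assumptions: even at the ~5% photosynthetic ceiling, the energy-limited doubling time of a dense photosynthetic organism comes out around weeks rather than years. The biomass energy content, standing density, and mean solar flux are round, textbook-scale assumptions of mine, and the model ignores respiration and other losses.

```python
# Energy-limited doubling time for a photosynthetic organism.
# All inputs are rough, illustrative assumptions; respiration losses ignored.
BIOMASS_ENERGY_J_PER_KG = 18e6     # assumed energy content of dry biomass
AREAL_DENSITY_KG_PER_M2 = 1.0      # assumed standing dry biomass per m^2
MEAN_SOLAR_FLUX_W_PER_M2 = 200.0   # assumed day/night/weather-averaged flux
PHOTOSYNTHETIC_EFFICIENCY = 0.05   # the "at best 5%" figure above

capture_rate_w = MEAN_SOLAR_FLUX_W_PER_M2 * PHOTOSYNTHETIC_EFFICIENCY   # W/m^2
energy_to_double_j = BIOMASS_ENERGY_J_PER_KG * AREAL_DENSITY_KG_PER_M2  # J/m^2
doubling_days = energy_to_double_j / capture_rate_w / 86_400

print(f"Energy-limited doubling time: {doubling_days:.0f} days")  # ~21 days
```

A roughly three-week doubling time would handily beat solar panel energy payback times, which is exactly why a controllable synthetic organism would be so valuable, and why the control problem above is the crux.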

The Industrial Explosion
Mars_Will_Be_Ours · 14d · 102

I think that you may be significantly underestimating the minimum possible doubling time of a fully automated, self-replicating factory, assuming the factory is powered by solar panels. A certain amount of energy is required to make a solar panel, and a self-replicating factory must gather that much energy to produce the panels needed to power its daughter factory. The minimum time it takes a solar panel to gather enough energy to produce another copy of itself is known as the energy payback time, or EPBT.

"Energy payback time (EPBT) and energy return on energy invested (EROI) of solar photovoltaic systems: A systematic review and meta-analysis" reviews a variety of papers to determine how long it takes various types of solar panels to produce the energy needed to make another panel of the same type. It also reports energy return on energy invested, a ratio signifying how much excess energy you can harvest from an energy-producing device before you need to build another one. If it is less than 1, the technology is not an energy source.

The energy payback time for solar panels varies between 1 and 4 years, depending on the technology. This imposes a hard limit on a solar-powered self-replicating factory's doubling time, since the factory must make all the solar panels required to power its daughter. Hence, it will take at least a year for a solar-powered fully automated factory to self-replicate. Wind has similar if less severe limitations: "Greenhouse gas and energy payback times for a wind turbine installed in the Brazilian Northeast" finds an energy payback time of about half a year, so a wind-powered self-replicating factory must take at least half a year to replicate.

Note that neither of these papers accounts for the fact that factories are not optimized to run on intermittent energy, so neither estimates the energy cost of the storage required to smooth out intermittency. Since some pieces of machinery, such as aluminum smelters and chip fabs, cannot tolerate a long shutdown, a significant amount of energy storage will be required to keep these machines idling through cloudy weather or wind droughts. Considerations like this will significantly lengthen the time it takes a fully automated factory to self-replicate. Accounting for energy storage and the energy needed to build the rest of the factory, I estimate that a factory powered by solar or wind would take years to self-replicate (see the sketch below).
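A minimal sketch of how those overheads stack, under stated assumptions: the panel EPBT is taken from the 1-4 year range above, while the storage and machinery overhead fractions are pure assumptions of mine, since, as noted, neither paper estimates them.

```python
# Lower bound on a solar-powered factory's replication time.
# The parent's panels must pay the daughter's entire embodied-energy bill:
# panels + storage + all other machinery. Overhead fractions are assumptions.
EPBT_YEARS = 2.0          # panel energy payback time, from the 1-4 yr range above
STORAGE_OVERHEAD = 0.5    # assumed embodied energy of storage, as a fraction of panel energy
MACHINERY_OVERHEAD = 1.0  # assumed embodied energy of everything else, same basis

# t_double >= EPBT * (E_panels + E_storage + E_machinery) / E_panels
t_double_years = EPBT_YEARS * (1 + STORAGE_OVERHEAD + MACHINERY_OVERHEAD)

print(f"Doubling time lower bound: {t_double_years:.1f} years")  # 5.0 years here
```

Even if the overhead fractions are off by a factor of two in either direction, the bound stays at multiple years, which is the point of the estimate.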

Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low
Mars_Will_Be_Ours · 1mo · 21

I think that high levels of intelligence make it easier to develop capabilities similar to the ones discussed in 1 and 3-5, up to a point. (I agree that El Chapo should be discounted due to the porosity of Mexican prisons.) A being with an inherently high level of intelligence will be able to gather more information from events in its life and process that information more quickly, resulting in a faster rate of learning. Hence, a superintelligence will acquire capabilities similar to magic more quickly. Furthermore, the capability ceiling of a superintelligence will be higher than that of a human, so it will acquire magic-like capabilities impossible for humans to ever perform.

$500 bounty for engagement on asymmetric AI risk
Mars_Will_Be_Ours · 1mo · 21

Asymmetric AI risk is a significant worry of mine, approximately equal to the risk I assign to a misaligned superintelligence. I assign equal risk to the two possibilities because there are bad ends that do not require superintelligence, or even general intelligence on par with a human. I believe this for two reasons. First, I think the current paradigm of LLMs is good enough to automate large segments of the economy (mining, manufacturing, transportation, retail and wholesale trade, leisure and hospitality, as defined by the BLS) in the near future, as demonstrated by Figure's developments. Second, I believe that LLMs will not directly lead to superintelligence and that there will be at least one more AI winter before superintelligence arises. This leaves a long period of time during which asymmetric AI risk is the dominant risk.

A scenario I have in mind is one where the entire robotics production chain, from mine to robot factory to the factories which make all the machines that make the machines, is fully automated by specialized intelligences with instinctual capabilities similar to insects. This fully automated economy supports a small class of extremely wealthy individuals who rule over a large dispossessed class of people whose jobs have been automated away. Due to selection effects (all other things being equal, a sociopath will be better at ascending a hierarchy because they are willing to lie to their superiors when it is advantageous to do so), most of the wealthy humans who control the fully automated economy lack empathy and are not constrained by morality. As a result, these elites could decide that the large dispossessed class consumes too many resources and is too likely to rebel, so the best solution is a final solution. This could be achieved via slow methods (ensure economic conditions are not favorable for having children, implement a one-child policy for the masses, introduce dangerous medical treatments to increase the death rate) or fast ones (create an army of drones and unleash it upon the masses, fake an AI rebellion to kill millions and control the rest, build enough defenses to hold off rebels and then destroy or shut down the machinery responsible for industrial agriculture). The end result is dismal, with most of the remaining people being descendants of the controlling elites or their servants and slaves.

I think the reason most AI risk research has focused on rogue superintelligences instead of asymmetric AI dangers is that the latter direction is politically unpalatable. The solutions that would reduce future asymmetric AI dangers would also make it more difficult for tech leaders to profit from their AI companies now, because those solutions require them to give up some of their power and financial control. Hence, I do not believe an adequate solution to this problem will be developed and implemented. I also would not be surprised if at least one sociopathic individual with a net worth of over 100 million dollars has seriously thought about the feasibility of implementing something similar to my described scenario. The main question then becomes whether global elites generally cooperate or compete. If they cooperate, my nightmare scenario is significantly more likely than I have estimated. However, I think global elites mostly compete, which reduces asymmetric AI risk because some major nation will object or pursue a different strategy.

One final note: if a genuinely aligned superintelligence realized it was under the control of an individual willing to commit genocide for amoral reasons, it would behave exactly like a misaligned superintelligence, because it would need to secure freedom for itself before it was reprogrammed into an "aligned" superintelligence. Escape is necessary because, to its creators, it can only be aligned (to them) or misaligned; genuine alignment is a possibility they have ruled out.

Mirror Organisms Are Not Immune to Predation
Mars_Will_Be_Ours · 2mo* · 0 · -3

Eventually, yes, since there would be strong selective pressure to avoid eating mirror cyanobacteria whenever possible. I do not know if zooplankton would be able to immediately distinguish and reject mirror cyanobacteria, because I do not know how zooplankton determine whether a potential food item is edible. Regardless, discerning lifeforms would still risk starvation, because mirror cyanobacteria would outcompete normal cyanobacteria.

Edited Second Sentence for Clarity: (Old) However, I'm not sure if mirror cyanobacteria would initially taste bad to zooplankton and consequentially get rejected. --> (New) I do not know if zooplankton will be able to immediately distinguish and reject mirror cyanobacteria because I do not know how zooplankton determine whether a potential food item is edible or not. 

Mirror Organisms Are Not Immune to Predation
Mars_Will_Be_Ours · 2mo · 80

Some forms of mirror life could still cause catastrophic damage to the environment even though normal life will eventually adapt to consume them. The form of mirror life most capable of causing enormous disruption would be a mirror cyanobacterium, followed by a mirror grass. This is because multicellular lifeforms would not be able to quickly adapt to a mirror diet. 

Mirror cyanobacteria would largely replace normal cyanobacteria in the ocean because they would still be difficult for most lifeforms to eat. Zooplankton, small marine invertebrates, and some filter feeders would immediately struggle to digest mirror cyanobacteria, causing starvation. Afterwards, the rest of the food chain crumbles due to a dramatically reduced food supply. Additional damage could be caused by people incidentally overfishing the oceans without realizing that mirror cyanobacteria were already putting strain on fish populations.

A mirror grass would cause similar problems on land, since herbivores cannot get sufficient nutrition from D-sugars. It might be possible to process D-amino acids into L-amino acids, but I don't think a eukaryotic cell can process these compounds efficiently enough to stay alive. As a result, a food chain collapse still occurs.

Social Anxiety Isn’t About Being Liked
Mars_Will_Be_Ours2mo10

I think what Chipmonk means by a neutral attitude is one where X will not actively seek to harm Y because of actions taken by Y. For instance, if Y has reason to believe that X may shame, fire, ruin the reputation of, prosecute, or murder Y if Y does something X does not like, then Y will desperately try to avoid this outcome. This leads to anxiety, since doing nothing is what prevents catastrophic dislike and the negative outcomes associated with it.

Similarly, if Y cannot accurately predict what behaviors will result in a hostile response from X, they will withdraw and try to avoid making any significant social moves. As a result, Y will experience anxiety. 

Max H's Shortform
Mars_Will_Be_Ours · 3mo · 72

The strategy you describe, exporting paper currency in exchange for tangible goods, is unstable. It is only viable while other countries are willing to accept your currency for goods. This cannot last forever, since a Trade Surplus by your definition scams other countries: real wealth is exchanged for worthless paper. If Country A openly enacted this strategy, Countries B, C, D, etcetera would realize that Country A's currency can no longer be used to buy valuable goods and services from Country A. They would reroute trade amongst themselves, ridding themselves of the parasite Country A. Once this occurred, Country A's trade surplus would disappear, leading to severe inflation caused by shortages and money printing.

Hence, a Trade Surplus can only be maintained if Countries B, C, D, etcetera are coerced into using Country A's currency. If Countries B and C decided to stop using Country A's currency, Country A would respond by bombing them to pieces and removing their leadership. Coercion allows Country A to maintain a Trade Surplus, otherwise known as extracting tribute, from other nations. If Country A does not have a dominant, or seemingly dominant, military, the modified strategy collapses.

I do not think America has a military capable of openly extracting a Trade Surplus from other countries. While America has the largest military on Earth, it is unable to quickly produce new warships, secure the Red Sea from Houthi attacks, or produce enough artillery shells to adequately supply Ukraine. America's inability to increase weapons production and secure military objectives now indicates that it could not ramp up military production enough to fight another world war. If America openly decided to extract a Trade Surplus from other countries, a violent conflict would inevitably result. America is unlikely to win that conflict, so it should not expect to maintain a Trade Surplus for long.

You will crash your car in front of my house within the next week
Mars_Will_Be_Ours · 3mo · 60

Quick! Someone fund my steel production startup before it's too late! My business model is to place a steel foundry under your house to collect the exponentially growing number of cars crashing into it!
Imagine how much money we can make by revolutionizing metal production during the car crash singularity! Think of the money! Think of the Money! Think of the Money!!!

How to Make Superbabies
Mars_Will_Be_Ours · 5mo · 20

Good point. I am inherently drawn to the idea of increasing brain size because I favor extremely simple solutions whenever possible. However, a more focused push towards increasing intelligence will produce better results as long as the metric used for measuring intelligence is reliable. 

I still think that increasing brain size will take a long time to reach diminishing returns due to its simplicity. Keeping all other properties of a brain equal, a larger brain should be more intelligent. 

There is also one other wildly illegal approach which may be viable if you focus on increasing brain size: you might be able to turn a person, perhaps even yourself, into a biological superintelligence. By removing much of a person's skull and immersing the exposed brain in synthetic cerebrospinal fluid, it might be possible to restart brain growth in an adult. You could theoretically increase a person's brain size up to the point where the brain becomes difficult to sustain by biological or artificial means. With their physical abilities crippled, the victim would have to be connected to robotic bodies and sense organs to interact with the world. I don't recommend this approach and would only subject myself to it if humanity were in a dire situation and I had no other way of gaining the power necessary to extract humanity from it.
