This is a special post for quick takes by James_Miller. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.



Should we be concerned that the invasion of Ukraine will cause massive starvation among poor people?  Food prices are already high.  Russia and Ukraine are major exporters of food.  Russia produces key inputs for fertilizer.  China is reducing fertilizer exports.  I fear that the US might restrict exports of food to keep domestic US food prices from rising.  

The invasion itself doesn't reduce Russian fertilizer exports. The better question seems to be whether we should be concerned that the sanctions cause massive starvation among poor people.

It looks like Western nations are sacrificing the lives of poor Africans, who will starve for lack of food, in order to send a stronger signal against Russia.

Edit: While I still think the sanctions are a bigger deal, since Russia exports more food and fertilizer, the war itself does reduce Ukrainian food exports as well.

The invasion of Ukraine might cause a famine because of restrictions on food and energy exports from Russia and Ukraine, reduced planting in Ukraine, and reduced fertilizer production.  Below are some steps that could be taken to mitigate the famine:

  1.  Eliminate government biofuel mandates.
  2. Have agricultural exporting countries promise they will not cap the price of food.  Expectations that governments will hold down the future price of food reduce food production today.
  3. If politicians need to keep down some food prices, lower the long-term price of meat.  A lower expected future price of meat will lead farmers to slaughter animals now rather than keep feeding them grain, freeing that grain for human consumption.
  4. Eliminate regulatory barriers to farmers planting more, using different crops, or using different fertilizer.
  5. Eliminate regulatory barriers to using existing power plants that are not currently run at full capacity to produce fertilizer.  Accept that this will to some extent harm climate change goals.
  6.  Make use of prediction markets to investigate how bad the famine could get.
  7.  Philanthropic organizations should investigate the possibility of famine.  Offer fast grants to qualified individuals to investigate.  The key (I think) is to determine the price elasticity of supply of food given the fertilizer production limitations we face.
  8. Corn produces significantly more calories per acre than wheat, and I believe that on many types of land farmers can plant either crop.  If famine is a serious concern, farmers should be incentivized to grow corn over wheat.  This could in part be accomplished by philanthropic organizations promising to buy corn in the future, thus raising the expected future price of corn (see the rough sketch after this list).
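
To make item 8 concrete, here is a rough back-of-the-envelope sketch of the calorie gap.  The yield and calorie figures are ballpark assumptions of mine, not numbers from the post:

```python
# Back-of-the-envelope comparison of calories per acre for corn vs. wheat.
# All numbers are rough, illustrative assumptions (ballpark recent US
# averages), not figures from the post.

ASSUMED = {
    # crop: (yield in bushels/acre, pounds per bushel, kcal per pound of grain)
    "corn":  (175, 56, 1650),
    "wheat": (50, 60, 1540),
}

def kcal_per_acre(bushels_per_acre, lbs_per_bushel, kcal_per_lb):
    """Total food energy produced per acre under the assumed figures."""
    return bushels_per_acre * lbs_per_bushel * kcal_per_lb

for crop, figures in ASSUMED.items():
    print(f"{crop}: ~{kcal_per_acre(*figures) / 1e6:.1f} million kcal per acre")

# With these assumptions corn comes out around 16 million kcal per acre and
# wheat around 5 million, a roughly three-to-fourfold gap, which is the margin
# a purchase promise for future corn would be trying to exploit.
```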

 

Anything else that can be done?  I'm not sure if it's optimal to try to make reducing famine a significant goal for the United States since the government might respond by using price controls that make the famine worse.

Why isn't removal of the sanctions on the list?

Yes, good point.

The Value of Surprise

Today is Mountain Day at Smith College, where I work.  Once during the fall semester the President of Smith College will pick a day and announce in the morning that all classes and appointments are cancelled.  Students love Mountain Day, far more than they would if the day had been announced in advance.  I suspect we  under-invest in fun surprises in our society.

Today I started taking Rapamycin for anti-aging purposes.  I'm taking 3mg once a week.  I will likely increase the dosage if I don't have negative symptoms.   I got it through Anderson Longevity Clinic in Rhode Island.  They required me to have one in-person visit, but I can do the rest virtually.  My regular doctor and an online doctor refused to give me a prescription.  I will update if I have any symptoms.

A human-made post-singularity AI would surpass the intellectual capabilities of ETs maybe 30 seconds after it surpassed ours.

No, ETs have likely lived for millions of years with post-singularity AI, and to the extent they aren't themselves AIs, they have upgraded their cognitive capacity significantly.

In the now-deleted discussion about Sam Altman's talk to the SSC Online Meetup, there was strong disagreement about what Sam Altman might have said about UFOs.  If you go to 17:14 of this discussion that Altman had with Tyler Cowen, you hear Altman briefly ask Cowen about UFOs.  Altman says that "[UFOs have] gotten a lot and simultaneously not nearly enough attention."

"Eric Weinstein & Michael Shermer: An honest dialogue about UFOs" seems to me to cover the UFO topic well.  The publicly available videos aren't great evidence for UFOs, but all the information we have about how part of the US military claims to see UFOs is very hard to bring together into a coherent scenario that makes sense.

What is your estimate of the probability that UFOs (whatever they are) will cause human extinction? Or prevent it?

I think that if UFOs are aliens, they on net increase our chance of survival.  I mostly think Eliezer is right about AI risk: if the aliens are here, they clearly have the capacity to kill us but are not doing so, and they would likely not want us to create a paperclip maximizer.  They might stop us from creating a paperclip maximizer by killing us, but then we would have been dead anyway if the aliens didn't exist.  But it's also possible that the aliens will save us by preventing us from creating a paperclip maximizer.

It's extremely weird that atomic weapons have not been used in anger since WW II, and we know that humanity got lucky on several occasions.  UFOs seem to like to be around ships that have nuclear weapons and nuclear power, so I assign some non-trivial probability to aliens having saved us from nuclear war.

As to the probability assessment, this is my first attempt, so don't put a lot of weight on it: if there are no aliens, I give a 75% chance (my guess; I don't know Eliezer's) that we destroy ourselves.  I put the probability that UFOs are aliens at 40%, and, if they are, say a 30% chance that they would save us from killing ourselves and a 3% chance that they would choose to destroy us in a situation in which we wouldn't have done it to ourselves.
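
To make the arithmetic explicit, here is a minimal sketch of how those numbers could be combined into one overall estimate.  The structure of the calculation (aliens only "save" us in worlds where we would otherwise have destroyed ourselves, and only "destroy" us in worlds where we otherwise wouldn't have) is my assumption about how the figures are meant to fit together:

```python
# Combining the stated guesses into one overall extinction estimate.
# The model structure is an assumption layered on top of the stated numbers.

p_self_destruct = 0.75  # chance we destroy ourselves if there are no aliens
p_aliens        = 0.40  # chance UFOs really are aliens
p_save          = 0.30  # if aliens: chance they stop us from killing ourselves
p_destroy       = 0.03  # if aliens: chance they destroy us when we wouldn't have

# No aliens: extinction happens only if we destroy ourselves.
p_ext_no_aliens = p_self_destruct

# Aliens present: we go extinct if we would have destroyed ourselves and they
# don't save us, or if we would have survived but they destroy us anyway.
p_ext_aliens = p_self_destruct * (1 - p_save) + (1 - p_self_destruct) * p_destroy

p_ext_total = (1 - p_aliens) * p_ext_no_aliens + p_aliens * p_ext_aliens

print(f"P(extinction | aliens present) = {p_ext_aliens:.2f}")  # about 0.53
print(f"P(extinction overall)          = {p_ext_total:.2f}")   # about 0.66
```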

Seems reasonable. 

I think that if UFOs are not aliens but some other weird thing, like non-organic life forms with limited intelligence or glitches in the matrix, it will be bad for us. I expect that weird non-alien UFOs are more probable than alien UFOs.

Such life forms could either have the capability to kill humans en masse in situations we can't rationally predict, or humans will learn how to use them and create new weapons that can travel instantly through space and cause new types of damage.

It's extremely weird that atomic weapons have not been used in anger since WW II, and we know that humanity got lucky on several occasions.  UFOs seem to like to be around ships that have nuclear weapons and nuclear power, so I assign some non-trivial probability to aliens having saved us from nuclear war.

 

Bird watchers also tend to see more birds. 

I'd imagine there are more sensors and eyeballs looking at the skies around high-security facilities, thus more UAP sightings. 

What is the name of this bias?

The Interpretability Paradox in AGI Development

 

The ease or difficulty of interpretability, the ability to understand and analyze the inner workings of AGI, may drastically affect humanity's survival odds. The worst-case scenario might arise if interpretability proves too challenging for humans but not for powerful AGIs.

In a recent podcast, academic economists Robin Hanson and I discussed AGI risks from a social science perspective, focusing on a future with numerous competing AGIs not aligned with human values. Drawing on human analogies, Hanson considered the inherent difficulty of forming a coalition where a group unites to eliminate others to seize their resources. A crucial coordination challenge is ensuring that, once successful, coalition members won't betray each other, as occurred during the French Revolution.

Consider a human coalition that agrees to kill everyone over 80 to redistribute their resources. Coalition members might promise that this is a one-time event, but such an agreement isn't credible. It would likely be safer for everyone not to violate property-rights norms for short-term gains.

In a future with numerous unaligned AGIs, some coalition might calculate it would be better off eliminating everyone outside the coalition. However, they would have the same fear that once this process starts, it would be hard to stop. As a result, it might be safer to respect property rights and markets, competing like corporations do.

A key distinction between humans and AGIs could be AGIs' potential for superior coordination. AGIs in a coalition could potentially modify their code so that, after the coalition has violently taken over, no member would ever want to turn on the others. This way, an AGI coalition wouldn't have to fear that the revolution it starts would ever eat its own. This possibility raises a vital question: will AGIs possess the interpretability required to achieve such feats?
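
As a concrete, admittedly toy illustration of this coordination point, here is a minimal sketch; the game structure and the payoff numbers are hypothetical choices of mine, not anything from the podcast:

```python
# Toy 2x2 game (my own construction, with made-up payoffs) illustrating why
# code-level commitments change whether a violent takeover looks attractive.
# Hypothetical payoffs for a coalition member AFTER a successful takeover:
#   both honor the deal -> 5, betray a lone honorer -> 8, get betrayed -> 0,
#   mutual betrayal -> 1. Respecting property rights instead pays 3.

STATUS_QUO = 3
PAYOFF = {  # (my_move, other_move) -> my payoff
    ("honor", "honor"): 5,
    ("honor", "betray"): 0,
    ("betray", "honor"): 8,
    ("betray", "betray"): 1,
}

def best_reply(other_move):
    """The move that maximizes my payoff given the other member's move."""
    return max(("honor", "betray"), key=lambda m: PAYOFF[(m, other_move)])

def equilibrium():
    """Brute-force the mutual best reply of this symmetric 2x2 game."""
    for a in ("honor", "betray"):
        for b in ("honor", "betray"):
            if best_reply(b) == a and best_reply(a) == b:
                return a, b

no_commitment = PAYOFF[equilibrium()]         # mutual betrayal -> 1
with_commitment = PAYOFF[("honor", "honor")]  # enforced by verified code -> 5

print(f"takeover payoff without binding commitments: {no_commitment} (status quo {STATUS_QUO})")
print(f"takeover payoff with code-level commitments: {with_commitment} (status quo {STATUS_QUO})")
# Without commitments the post-takeover outcome (1) is worse than respecting
# property rights (3), so the coalition never forms; with verifiable
# commitments it pays 5 > 3, so the takeover becomes tempting.
```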

The best case for AGI risk is if we solve interpretability before creating AGIs strong enough to take over. The worst case might be if interpretability remains impossible for us but becomes achievable for powerful AGIs. In this situation, AGIs could form binding coalitions with one another, leaving humans out of the loop, partly because we can't become reliable coalition partners and our biological needs involve maintaining Earth in conditions suboptimal for AGI operations. This outcome creates a paradox: if we cannot develop interpretable AGIs, perhaps we should focus on making them exceptionally difficult to interpret, even for themselves. In this case, future powerful AGIs might prevent the creation of interpretable AGIs because such AGIs would have a coordination advantage and thus be a threat to the uninterpretable AGIs.

A human-made post-singularity AI would surpass the intellectual capabilities of ETs maybe 30 seconds after it surpassed ours.

 

My guess is that aliens have either solved the alignment issue and are post-singularity themselves, or will stop us from having a singularity.  I think any civilization capable of building spaceships will have  explored AI, but I could just lack the imagination to consider otherwise.