Sounds like an excellent idea. The Journal of Existential Risk of AI.
The mere fear that the entire human race will be exterminated in their sleep through some intricate causality we are too dumb to understand will seriously diminish our quality of life.
That doesn’t make sense to me. If someone wants to fool me into thinking I’m looking at a tree, he has to paint a tree in every detail. However closely I examine this tree, he has to match my scrutiny down to the finest detail. In the end, his rendering of a tree will be indistinguishable from an actual tree even at the molecular level.
The actual peace deal will be something for Ukraine to agree to. It is not up to Trump to dictate the terms. All Trump should do is stop financing the war, and we will have peace.
Having said that, if it is somehow possible for Trump to pressure Ukraine into agreeing to become a US colony, my support for Trump was a mistake. The war would be preferable to the peace.
Good post! We will soon have very powerful quantum computers that could probably simulate what happens when mirror bacteria are confronted with the human immune system. Maybe there is no risk at all, or maybe there is an existential risk to humanity. Finding out should be a prioritized task for our first powerful quantum computer.
Because he says so.
I have also noticed that when you read the word "metaethics" on LessWrong, it can mean anything that is in some way related to morality.
Maybe I should take it upon myself to write a short essay on metaethics: how it differs from normative ethics and why it may be of importance to AI alignment.