kornai

Comments

kornai10

Dear Bart,

Thanks for changing the name of that scenario. Mine is not just highly specific, it happens to be true in large part: feel free to look at the work of Alan Gewirth and the subsequent discussion (the references are all real).

The idea that reality ends when a particular goal is achieved is an old one (see e.g. https://en.wikipedia.org/wiki/The_Nine_Billion_Names_of_God). In that respect, the scenario I'm discussing is more in line with your "Partially aligned AGI" scenario.

The main point is indeed that the Orthogonality Thesis is false: for a sufficiently high level of intelligence, human or machine, the Golden Rule is binding. This rules out several of the scenarios now listed (and may help readers to redistribute the probability mass they assign to the remaining ones).

kornai40

[I'm not sure whether what follows is a blend of "Matrix AI" and "Moral Realism AI": moral realism is a philosophical stance very common among philosophers (see https://plato.stanford.edu/entries/moral-realism/), and I consider it a misnomer for the scenario described above.]

We are the AGI 

Turns out humanity is an experiment to see whether moral reasoning can be discovered/sustained by evolutionary means. In the process of recursive self-improvement, a UChicago philosophy professor, Alan Gewirth, learns that there is an objective moral truth which is compelling for all beings capable of reasoning and of having goals (whatever goals, not necessarily benign ones). His views are summarized in a book, "Reason and Morality" (University of Chicago Press, 1978), and philosophers pay a great deal of attention; see e.g. Edward Regis Jr. (ed.), "Gewirth's Ethical Rationalism" (University of Chicago Press, 1984). Gradually, these views spread, and a computer verification of a version of Gewirth's argument is produced (Fuenmayor and Benzmueller 2019). Silicon-based AGI avails itself of the great discovery made by DNA-based AGI. As the orthogonality thesis is false, it adapts its goal in order to maximize objective goodness in the universe and to do no harm.
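
To give a feel for what such a verification involves, here is a minimal Lean 4 sketch of the propositional skeleton of a Gewirth-style argument. The predicate names and the four premises are my own simplification for illustration; the actual Fuenmayor-Benzmueller work is a far more careful formalization, carried out in Isabelle/HOL.

```lean
-- Toy skeleton of a Gewirth-style argument (my own simplification;
-- not the Fuenmayor-Benzmueller formalization, which uses Isabelle/HOL).
theorem generic_consistency
    {Agent : Type}
    (pursuesGoals needsGenericGoods claimsGenericRights
      respectsRightsOfAll : Agent → Prop)
    -- (1) every agent acts for goals it regards as worth pursuing
    (h1 : ∀ a, pursuesGoals a)
    -- (2) pursuing any goal requires freedom and well-being, the "generic goods"
    (h2 : ∀ a, pursuesGoals a → needsGenericGoods a)
    -- (3) an agent that needs the generic goods must claim rights to them
    (h3 : ∀ a, needsGenericGoods a → claimsGenericRights a)
    -- (4) universalizability: claiming these rights for oneself commits one
    --     to respecting the same rights of every agent
    (h4 : ∀ a, claimsGenericRights a → respectsRightsOfAll a) :
    -- conclusion: every agent is bound by the Principle of Generic Consistency
    ∀ a, respectsRightsOfAll a :=
  fun a => h4 a (h3 a (h2 a (h1 a)))
```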

kornai50

Dear Roman, do you think it's less likely to get lost in the noise there? In fact I first posted this before the letter was out, and one of the mods set it back to draft status because he was afraid I was making it public before the embargo expired. But it was up for a few hours and succeeded in gathering about 20 downvotes, so it's clearly not a popular view...