All of tae's Comments + Replies

AGI safety from first principles: Superintelligence
tae (3mo)

And for an AGI to trust that its goals will remain the same under retraining will likely require it to solve many of the same problems that the field of AGI safety is currently tackling - which should make us more optimistic that the rest of the world could solve those problems before a misaligned AGI undergoes recursive self-improvement.

 

This reasoning doesn't look right to me. Am I missing something you mentioned elsewhere?

The way I understand it, the argument goes:

  1. An AGI would want to trust that its goals will remain the same under retraining.
  2. Then,
…
Richard_Ngo (3mo):
The thing you're missing is the clause "before the AGI undergoes recursive self-improvement". It doesn't work for general X, but it works for an X that needs to occur before Y.
Rationalist Poetry Fans, Unite!

I came here just to post "Ulysses"! 

And the accompanying song: "Untraveled Worlds" by Paul Halley. The song has lots of sentimental value to me because I sang it in choir when I was in sixth grade. Out of ten years' worth of songs, I chose to have the choir sing it again during my last year of high school. 

I would also quote:

Come, my friends,
'T is not too late to seek a newer world.
Push off, and sitting well in order smite
The sounding furrows; for my purpose holds
To sail beyond the sunset, and the baths
Of all the western stars, until I die.

Remind…