
jonperry
Karma: 8020

Comments
Thoughts on the Singularity Institute (SI)
jonperry · 13y · 20

Yes, you can create risk by rushing things. But you still have to be fast enough to outrun the creation of UFAI by someone else. So you have to be fast, but not too fast. It's a balancing act.

Thoughts on the Singularity Institute (SI)
jonperry · 13y · 60

Let's say that the tool/agent distinction exists, and that tools are demonstrably safer. What then? What course of action follows?

Should we ban the development of agents? All of human history suggests that bans of this kind do not hold.

With existential stakes, only one person needs to disobey the ban and we are all screwed.

Which means the only safe route is to make a friendly agent before anyone else can. Which is pretty much SI's goal, right?

So I don't understand how, practically speaking, this tool/agent argument changes anything.
