Comments

Kayden · 2y

It doesn't have to. Specialized deployments will lead to better performance: you can design custom processors for specific tasks and write software optimized for that particular task. That's different from having the flexibility to generalize. A deep neural network trained on chess can't suddenly start performing well on image classification without losing significant chess ability.
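
As a concrete toy illustration of the specialization point (a minimal sketch of my own, using a linear model and synthetic data as stand-ins rather than an actual chess network), sequentially training on a second, unrelated task erodes performance on the first:

```python
# Toy sketch of specialization vs. generality (my own illustration, not from the
# comment above): one model trained sequentially on two unrelated synthetic tasks.
# Task names, sizes, and hyperparameters are arbitrary assumptions.
import numpy as np

def make_task(seed, n=500, d=20):
    """Synthetic binary classification task with its own random decision boundary."""
    r = np.random.default_rng(seed)
    X = r.normal(size=(n, d))
    w_true = r.normal(size=d)
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(w, X, y, steps=1000, lr=0.1):
    """Plain logistic-regression gradient descent, continuing from the weights w."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)     # gradient step on cross-entropy loss
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

X_a, y_a = make_task(seed=1)   # "task A": stand-in for chess
X_b, y_b = make_task(seed=2)   # "task B": stand-in for image classification

w = train(np.zeros(X_a.shape[1]), X_a, y_a)
print("task A accuracy after training on A:", accuracy(w, X_a, y_a))

w = train(w, X_b, y_b)         # keep specializing, now on task B only
print("task A accuracy after training on B:", accuracy(w, X_a, y_a))  # degrades
print("task B accuracy after training on B:", accuracy(w, X_b, y_b))
```

Running this, task A accuracy should be high after the first phase and fall back toward chance after the second, since the second phase only ever sees task B data; that is the trade-off between specializing and retaining general competence.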

Kayden · 2y
  1. An AGI doesn't have to kill humans directly for our civilization to be disrupted.
  2. Why would the AGI lack the capabilities to pursue this if needed?
Kayden · 2y

What do you think of when you say an AGI? To me, it is a general intelligence of some form, able to specialize in tasks as it sees fit.

Humans are general-intelligence organisms, and we're constrained by biological needs (e.g., sleeping, eating) because we arrived here via the evolution algorithm. A general intelligence running on silicon would be roughly a million times faster than us, and becoming smarter is an instrumental goal for it, since a smarter system can act and arrive at conclusions with less data and evidence.

Thus, a general intelligence that specializes in removing its own bottlenecks, is not constrained the way we are, and is faster than us at processing, at sequential tasks, at parallel tasks, and so on, would be far superior at planning. Even if it starts out stupider than us, that probably would not stay true for long.
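
A rough back-of-the-envelope figure behind the speed claim above (my own numbers, and the comparison is loose, since a clock cycle and a neuron spike are not equivalent units of work): biological neurons fire on the order of $10^2$ Hz, while silicon clocks run on the order of $10^9$ Hz, giving

$$\frac{f_{\text{silicon}}}{f_{\text{neuron}}} \approx \frac{10^9\,\text{Hz}}{10^2\,\text{Hz}} = 10^7,$$

i.e. a raw serial-speed gap of several orders of magnitude, before any difference in algorithms or parallelism is considered.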

Kayden · 2y

Two cases are possible: either a singleton is established and is able to remain a singleton due to strategic interests (of either the AGI or the group), or the singleton loses its lead and we have a multipolar situation with more than one group having AGI.

In case 1, if the established lead is, say, six months or more, it might not be possible for the second-place group to catch up, since the leader's work during that period would be driven by an intelligence explosion and proceed far faster than the second group's. This only incentivizes racing forward as fast as possible, which is not a good safety mindset.

In case 2, multiple projects are developing AGI, so the risk of something going wrong also increases. Even if group 1 is able to implement safety measures, some other group might fail, and the outcome would be disastrous, unless group 1's AGI specifically solves the control problem for us.