VonBrownie


Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities

Which raises another issue: is there a powerful disincentive to reveal the emergence of an artificial superintelligence? Either for the entity itself (because we might consider pulling the plug), or for its creators, who might lose some strategic advantage (say, a financial institution that has gained a market trading edge) if their creation were taken away?

Do you think, then, that it's a dangerous strategy for an entity such as Google, which may be using its enormous and growing accumulation of "the existing corpus of human knowledge" as a suitably large data set, to pursue the development of AGI?

Are there any ongoing efforts to model the intelligent behaviour of other organisms besides the human model?

I found interesting the idea that great leaps forward toward the creation of AGI might not be a question of greater resources or technological complexity; rather, we might be overlooking something relatively simple that could describe human intelligence, much as the Copernican system displaced the more elaborate Ptolemaic one.

If an artificial superintelligence had access to all the prior steps that led to its current state, I think Good's argument is correct: the entity would make exponential progress in boosting its intelligence still further. I just finished James Barrat's AI book Our Final Invention and found it interesting that Good, toward the end of his life, came to see his prediction as more danger than promise for continued human existence.