Should an AGI build a telescope to spot intergalactic Segways?

by Michaël Trazzi · 28th Apr 2018



Inspired by Yudkowsky's original sequences, today (28/04/2018) I am starting my own series of daily articles.

This first article is a summary of ideas expressed at an AI Safety Meetup I organized in Paris about two weeks ago. The theme of the discussion was "The Kinetics of an Intelligence Explosion", and the material was, of course, the chapter of Superintelligence with the same title.

We essentially discussed recalcitrance, and in particular the three factors of recalcitrance mentioned in Superintelligence (Chapter 4): algorithms, hardware, and content.

Algorithm-recalcitrance

Last year I read Le Mythe de la Singularité (The Myth of the Singularity) by Jean-Gabriel Ganascia (who happened to be my teacher for a course on knowledge representation a few months ago). In his book he expresses some elementary thoughts about the limits of pure hardware improvements without improvements in algorithms (here is the LessWrong wiki about it).

At the same time as this class on knowledge representation, I was also taking a course on Algorithmic Complexity. We obviously discussed the P vs NP problem, but I also discovered a bunch of other complexity classes (ZPP, RP and BPP are complexity classes for probabilistic Turing machines, and PSPACE, for instance, allows a polynomial amount of space).

The question is (obviously) whether some problems are intractable (for example NP-complete problems, assuming P is not equal to NP), in which case algorithm-recalcitrance would be high, or whether this question does not matter at all because every hard problem admits a tractable approximation algorithm.

This reminds me of a YouTube comment I saw a few weeks ago on a Robert Miles video (which dealt with Superintelligence, one way or another). The comment was (approximately) as follows: "But aren't there some problems impossible to optimize, even for a Superintelligence? Aren't sorting algorithms doomed to run with O(n log n) complexity?"

A nice counter-argument to this comment (thank you, YouTube comment section) was that the answer depends on how the problem is formulated. What are the hypotheses on the data structure of the incoming input? Isn't there some way to maintain a nice data structure at all times inside the Superintelligence's hardware?
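As a minimal sketch of "it depends on the formulation" (with an assumption of my own: the keys are integers known to lie in [0, k)), counting sort never compares elements at all, so the Ω(n log n) lower bound for comparison sorts simply does not apply to it:

```c
#include <stdlib.h>

/* Counting sort: O(n + k) for n integer keys known to lie in [0, k).
   No element is ever compared to another, so the Omega(n log n)
   lower bound for comparison sorts does not bind here. */
void counting_sort(int *a, size_t n, int k) {
    int *count = calloc((size_t)k, sizeof *count);
    if (count == NULL) return;          /* allocation failed: give up */
    for (size_t i = 0; i < n; i++)
        count[a[i]]++;                  /* histogram of key frequencies */
    size_t out = 0;
    for (int v = 0; v < k; v++)         /* emit keys back in sorted order */
        for (int c = count[v]; c > 0; c--)
            a[out++] = v;
    free(count);
}
```

The O(n log n) bound only constrains algorithms restricted to pairwise comparisons; change the model of the problem, and the bound dissolves.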

Another interesting counter-argument is the Fast Inverse Square Root example. Although some problems seem computationally expensive, clever hacks (e.g. introducing ad hoc mathematical constants that fit in memory) can make them way faster.
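The routine in question comes from Quake III Arena (reproduced here lightly modernized, with memcpy instead of pointer casts to stay within defined behaviour). It approximates 1/√x with no division at all, thanks to the magic constant 0x5f3759df:

```c
#include <stdint.h>
#include <string.h>

/* Quake III Arena's fast inverse square root.
   Approximates 1/sqrt(x) using integer bit tricks and a single
   Newton-Raphson refinement, avoiding any division. */
float q_rsqrt(float x) {
    const float threehalfs = 1.5f;
    float x2 = x * 0.5f;
    float y  = x;
    uint32_t i;
    memcpy(&i, &y, sizeof i);            /* read the float's raw bits   */
    i = 0x5f3759df - (i >> 1);           /* the famous magic constant   */
    memcpy(&y, &i, sizeof y);            /* back to float: first guess  */
    y = y * (threehalfs - x2 * y * y);   /* one Newton-Raphson step     */
    return y;
}
```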

But for some problems an approximate solution is not allowed, and this might be a problem even for a Superintelligence: for instance, inverting the SHA-256 hash function (assuming the one-way hypothesis).
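A rough back-of-the-envelope estimate (my own numbers, assuming brute force is the best available attack): a preimage search over SHA-256 requires on the order of

$$2^{256} \approx 1.2 \times 10^{77} \text{ evaluations},$$

so even at a fanciful $10^{20}$ hashes per second it would take about $1.2 \times 10^{57}$ seconds, i.e. on the order of $10^{49}$ years.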

Hardware-recalcitrance

Physical limits are self-evident restrictions on hardware improvements. Straightforward constraints are limits on speed (the speed of light) or limits on computational power (the universe might be finite). Some limits may also be found in the infinitely small, because of the Planck length.
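To put a rough number on the speed limit (my own illustration): light travels at $c \approx 3 \times 10^{8}$ m/s, so during one cycle of a 10 GHz clock ($10^{-10}$ s) a signal can cross at most

$$3 \times 10^{8}\,\text{m/s} \times 10^{-10}\,\text{s} = 3\,\text{cm},$$

which already bounds the size of any synchronously clocked processor at that frequency.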

Content-recalcitrance

With the modern Deep Learning paradigm, one could think that more content (i.e. more data) is the solution to all problems.

Here are two counter-arguments (or factors of content-recalcitrance, if you prefer):

1. More content does not necessarily imply an increase in the algorithm's performance
2. Some content (data) might prove particularly difficult to obtain, and a "perception winter" may arise

Since the first point is largely developed in Superintelligence, I will develop the second one (and thus explain the title of this post a bit more).

The impossibility of Segway deduction

Imagine tremendous progress in Artificial Intelligence: one-shot learning is a thing, and algorithms now need only very small data inputs to generalize knowledge.

Would an AGI be capable of imagining the existence of Segways (a human invention) if it had never seen one before?

I believe it would not be capable of doing so.

And I think that for some physical properties of the universe, the only way to get the data is to go there, or to build a telescope to spot some "intergalactic Segways" wandering around.

You could argue that exploring the universe takes a goddamn long time, and that a Superintelligence might just as well generate thousands of simulations to gather data about what might exist at the edge of the universe.

But to generate those so-called simulations you need laws of physics, and some prior hypotheses about the universe.

And the only way to obtain them is to explore the universe (or just build the fu***** telescope).
