...has finally been published.
Contents:
- Uziel Awret - Introduction
- Susan Blackmore - She Won’t Be Me
- Damien Broderick - Terrible Angels: The Singularity and Science Fiction
- Barry Dainton - On Singularities and Simulations
- Daniel Dennett - The Mystery of David Chalmers
- Ben Goertzel - Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood?
- Susan Greenfield - The Singularity: Commentary on David Chalmers
- Robin Hanson - Meet the New Conflict, Same as the Old Conflict
- Francis Heylighen - Brain in a Vat Cannot Break Out
- Marcus Hutter - Can Intelligence Explode?
- Drew McDermott - Response to ‘The Singularity’ by David Chalmers [this link is a McDermott-corrected version, and therefore preferred to the version that was published in JCS]
- Jurgen Schmidhuber - Philosophers & Futurists, Catch Up!
- Frank Tipler - Inevitable Existence and Inevitable Goodness of the Singularity
- Roman Yampolskiy - Leakproofing the Singularity: Artificial Intelligence Confinement Problem
The issue consists of responses to Chalmers (2010). Future volumes will contain additional articles from Shulman & Bostrom, Igor Aleksander, Richard Brown, Ray Kurzweil, Pamela McCorduck, Chris Nunn, Arkady Plotnitsky, Jesse Prinz, Susan Schneider, Murray Shanahan, Burt Voorhees, and a response from Chalmers.
McDermott's chapter should be supplemented with this, which he says he didn't have space for in his JCS article.
Tipler paper
Wow, that's all kinds of crazy. I'm not sure how much, as I'm not a mathematical physicist (MWI and quantum mechanics implied by Newton? Really?), but one big red flag for me is pp. 187-188, where he doggedly insists that the universe is closed, even though as far as I know the current cosmological consensus is the opposite - and I trust the cosmologists a heck of a lot more than a fellow who tries to prove his Christianity with his physics.
(This is actually convenient for me: a few weeks ago I was wondering on IRC what the current status of Tipler's theories was, given that he had clearly stated they were valid only if the universe were closed and if the Higgs boson mass fell within a certain range, IIRC, but I was feeling too lazy to look it all up.)
And the extraction of a transcendent system of ethics from a Feynman quote...
The way I look at it, it's: 'if such a paper can survive peer review, what do people make of things whose authors either did not try to pass peer review or could not pass it? They probably think pretty poorly of them.'
I'm very grateful to the undergraduate professor of mine that introduced me to Penrose and Tipler as a freshman. I think at that time I was on the cusp of falling into a similar failure state, and reading Shadows of the Mind and The Physics of Immortality shocked me out of what would have been a very long dogmatic slumber indeed.
Daniel Dennett's "The Mystery of David Chalmers" quickly dismissed the Singularity without really saying why:
and then spent the rest of his paper trying to figure out why Chalmers isn't a type-A materialist.
By the way, procrastinating on the internet may be the #1 factor delaying the Singularity. Before we build the first machine capable of programming better machines, we may build a dozen machines capable of distracting us so much that we never accomplish anything beyond that point.
People need cool names to take ideas seriously, so let's call this apex of human invention "Procrastinarity". Formally: the better tools people can make, the more distraction those tools provide, so there is a limit for a human civilization where there is so much distraction that no one is able to focus on making better tools. (More precisely: even if some individuals can still focus at that point, they will not find enough support, friends, mentors, etc., so without the necessary scientific infrastructure they cannot meaningfully contribute to human progress.) This point is called Procrastinarity, and all real human progress stops there. A natural disaster may eventually reduce humanity to a pre-Procrastinarity level, but if humans overcome it, they will just climb back to another Procrastinarity phase. We will reach the first Procrastinarity within the next 30 years with probability 50%.
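For fun, here's a toy simulation of the idea - all the parameters are made up, it's only meant to show the kind of ceiling the argument implies:

```python
# Toy model of "Procrastinarity" (purely illustrative; parameters are made up).
# Tool quality improves at a rate proportional to how much focused effort
# survives distraction, and distraction grows with tool quality.

def focus(tool_quality, distraction_per_unit=0.02):
    """Fraction of effort that survives distraction from existing tools."""
    return max(0.0, 1.0 - distraction_per_unit * tool_quality)

def simulate(years=300, base_progress=1.0):
    T = 0.0
    history = []
    for _ in range(years):
        T += base_progress * focus(T)   # progress is throttled by distraction
        history.append(T)
    return history

h = simulate()
print(f"tool quality after 50y: {h[49]:.1f}, after 300y: {h[-1]:.1f}")
# Tool quality asymptotes near 1/0.02 = 50: the "Procrastinarity" ceiling,
# where new tools distract exactly as much as they help.
```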
There's another such curve, incidentally - I've been reading up on scientific careers, and there's solid-looking evidence that a modern scientist makes his best discoveries about a decade later in life than scientists did in the early 1900s. This is a problem because productivity drops off in one's 40s and is pretty small in the 50s and later, and that cutoff has remained constant (despite the small improvements in longevity over the 20th century).
So if your discoveries only really begin in your late 20s and you face a deadline in your 40s, and each century we lose a decade, this suggests that within 2 centuries most of a scientist's career will be spent being trained, learning, helping out on other experiments, and in general just catching up!
We might call this the PhDalarity - the ever-growing amount of graduate and post-graduate training needed before one can make a major discovery.
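Rough arithmetic for the claim, with assumed numbers (real productivity starting around 28 today, tailing off around 45, and the starting age slipping ~10 years per century, per the studies mentioned above):

```python
# Back-of-the-envelope for the "PhDalarity" claim (illustrative numbers only).

start_age_now = 28      # assumed age when real discoveries begin today
cutoff_age = 45         # assumed age past which major discoveries are rare
drift_per_century = 10  # years the starting age slips per century

for centuries in range(0, 4):
    start = start_age_now + drift_per_century * centuries
    window = max(0, cutoff_age - start)
    print(f"+{centuries*100:>3} years: discoveries start ~{start}, "
          f"productive window ~{window} years")
# By ~2 centuries the window shrinks to zero: the whole career is catch-up.
```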
Sue's article is here: She won’t be me.
Robin's article is here: Meet the New Conflict, Same as the Old Conflict - see also O.B. blog post
Francis's article is here: A brain in a vat cannot break out: why the singularity must be extended, embedded and embodied.
Marcus Hutter: Can Intelligence Explode?.
I thought the idea that machine intelligence would be developed in virtual worlds on safety grounds was pretty daft. I explained this at the time:
However, Francis's objections to virtual worlds seem even more silly to me. I've been hearing that simulations aren't real for decades now - and I still don't really understand why people get into a muddle over this issue.
Schmidhuber paper
Brief overview of Goedel machines; sort of a rebuke of other authors for ignoring the optimality results for them and AIXI etc.
On falsified predictions of AI progress:
Similar theme from Hutter's paper:
If AIXI had the option of creating an AIXI (which by definition has the goal of maximizing its own rewards), or creating a different AI (non-AIXI) that had the goal of serving the goals of its creator instead, surely it would choose the latter option. If AIXI is the pinnacle of intelligence (as Hutter claims), and an AIXI wouldn't build another AIXI, why should we? Because we're just too dumb?
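To make the comparison explicit, here's a trivial sketch of the decision (not actual AIXI, which is incomputable; the payoff numbers are purely hypothetical):

```python
# Toy sketch of the argument above. A reward-maximizing creator evaluates two
# successor designs by the expected reward *the creator* receives, and so
# prefers the aligned one. Payoffs are hypothetical placeholders.

def creator_payoff(successor):
    # An aligned successor devotes its optimization power to the creator's
    # reward; a selfish (AIXI-like) successor devotes it to its own reward
    # channel and helps the creator only incidentally.
    payoffs = {"aligned_non_AIXI": 1.0, "selfish_AIXI": 0.1}
    return payoffs[successor]

choice = max(["aligned_non_AIXI", "selfish_AIXI"], key=creator_payoff)
print("a reward-maximizing creator builds:", choice)  # -> aligned_non_AIXI
```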
Roman V Yampolskiy paper
Pretty good overview of the AI boxing problem with respect to covert channels; possibly the first time I've seen Eliezer's experiments cited, or Stuart Armstrong's Dr. Evil anthropic attack.
Given the length of the paper, I kind of expected there to be no mention of homomorphic encryption, the boxing proposal that seems most viable, but to my surprise I read:
Important modules? Er, why not just the whole thing? If you have homomorphic encryption working and proven correct, the other measures may add a little security, but not a whole lot.
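For anyone unfamiliar with why homomorphic encryption matters for boxing, here's a toy illustration: textbook RSA happens to be multiplicatively homomorphic, so a host can combine ciphertexts without ever seeing the plaintexts. Real boxing would need a fully homomorphic (Gentry-style) scheme, not this insecure toy:

```python
# Toy illustration of the homomorphic idea (textbook RSA is multiplicatively
# homomorphic). NOT secure and NOT fully homomorphic -- just a demo that a
# host can compute on ciphertexts without ever seeing the plaintexts.

p, q = 61, 53                 # toy primes (far too small for real use)
n = p * q                     # 3233
e = 17                        # public exponent
d = 2753                      # private exponent (e*d = 1 mod lcm(p-1, q-1))

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

a, b = 7, 6                   # the boxed AI's secret values
ca, cb = encrypt(a), encrypt(b)

# The untrusted host multiplies ciphertexts; it never learns a or b.
c_product = (ca * cb) % n

assert decrypt(c_product) == (a * b) % n
print("E(a)*E(b) decrypts to a*b:", decrypt(c_product))   # -> 42
```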
Really? I was unaware that Moore's law was an actual physical law. Our state of the art has already hit the absolute physical limit of transistor design - we have single-atom transistors in the lab. So, if you'll forgive me, I'll be taking the claim that "Moore's law ensures that today's fastest supercomputer speed will be the standard laptop computer speed in 20 years" with a grain of salt.
Now, perhaps we'll have some other technology that allows laptops twenty years hence to be as powerful as today's supercomputers. But to just handwave that enormous engineering problem away by saying "Moore's law will take care of it" is fuzzy thinking of the worst sort.
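A rough sanity check with assumed circa-2012 numbers (top supercomputer ~10 petaflops, decent laptop ~50 gigaflops) suggests the 20-year figure is optimistic even if the scaling held:

```python
# Rough sanity check of the "supercomputer -> laptop in 20 years" claim.
# The FLOPS figures are assumptions, roughly circa 2012.
import math

supercomputer_flops = 1e16
laptop_flops = 5e10
gap = supercomputer_flops / laptop_flops          # ~200,000x

for doubling_years in (1.5, 2.0):
    years_needed = math.log2(gap) * doubling_years
    print(f"doubling every {doubling_years}y -> "
          f"~{years_needed:.0f} years to close a {gap:,.0f}x gap")
# ~26 years at 18-month doublings, ~35 years at 2-year doublings -- and that
# assumes the scaling continues despite transistors nearing atomic limits.
```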
I like Goertzel's succinct explanation of the idea behind Moore's Law of Mad Science:
Also, his succinct explanation of why Friendly AI is so hard:
Another choice quote that succinctly makes a key point I find myself making all the time:
His proposal for Nanny AI, however, appears to be FAI-complete.
Also, it is strange that despite paragraphs like this:
...he does not anywhere cite Bostrom (2004).
A quote from Dennett's article, on the topic of consciousness:
Damien Broderick paper
In "Leakproofing..."
"To reiterate, only safe questions with two possible answers of even likelihood which are independently computable by people should be submitted to the AI."
Oh come ON. I can see 'independently computable', but requiring single bit responses that have been carefully balanced so we have no information to distinguish one from the other? You could always construct multiple questions to extract multiple bits, so that's no real loss; and with awareness of Bayes' theorem, getting an exact probability balance is essentially impossible on any question we'd actually care about.
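The multi-bit point is trivial to make concrete - here with a stand-in oracle and hypothetical question strings:

```python
# Sketch of the point above: if the boxed AI can only answer yes/no questions,
# you can still extract an arbitrary N-bit answer by asking N of them.
# 'oracle_answers_yes' stands in for the boxed AI; question text is hypothetical.

def oracle_answers_yes(question: str, secret: int) -> bool:
    """Stand-in for the boxed AI answering a single yes/no question."""
    bit_index = int(question.split("#")[1].split()[0])
    return bool((secret >> bit_index) & 1)

def extract_value(num_bits: int, secret: int) -> int:
    value = 0
    for i in range(num_bits):
        if oracle_answers_yes(f"is bit #{i} of your answer 1?", secret):
            value |= 1 << i
    return value

assert extract_value(16, secret=0b1011001110001101) == 0b1011001110001101
print("recovered a full 16-bit answer from 16 one-bit questions")
```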
In my opinion, the most relevant article was from Drew McDermott, and I'm surprised that such an emphasis on analyzing the computational complexity of approaches to 'friendliness' and self-improving AI has not been more common. For that matter, I think computational complexity has more to tell us about cognition, intelligence, and friendliness in general, not just in the special case of self-improving optimization/learning algorithms, and could completely modify the foundational assumptions underlying ideas about intelligence/cognition and the singularity...
I wish I could read the Dennett article online. If Chalmers has a philosophical nemesis it has to be Dennett. Though he probably sees it otherwise, I contend that Dennett's hard materialism is losing ground daily in the academic and philosophical mainstream even as Chalmers' non-reductive functionalism gains in appreciation. (Look at Giulio Tononi's celebrated IIT theory of consciousness with its attendant panpsychism for just one example. And that's in the hard sciences, not philosophy.)
I'm gathering from the comments here that Dennett is no fan of t...
Many of those people are believers who are already completely sold on the idea of a technological singularity. I hope some sort of critical examination is forthcoming as well.
Schmidhuber, Hutter and Goertzel might be called experts. But I would argue that statements like "progress towards self-improving AIs is already substantially beyond what many futurists and philosophers are aware of" are almost certainly bullshit.