Journal of Consciousness Studies issue on the Singularity

by lukeprog · 1 min read · 2nd Mar 2012 · 86 comments


...has finally been published.

Contents:

The issue consists of responses to Chalmers (2010). Future volumes will contain additional articles from Shulman & Bostrom, Igor Aleksander, Richard Brown, Ray Kurzweil, Pamela McCorduck, Chris Nunn, Arkady Plotnitsky, Jesse Prinz, Susan Schneider, Murray Shanahan, Burt Voorhees, and a response from Chalmers.

McDermott's chapter should be supplemented with this, which he says he didn't have space for in his JCS article.


Tipler paper

Wow, that's all kinds of crazy. I'm not sure how crazy, as I'm not a mathematical physicist - MWI and quantum mechanics implied by Newton? Really? - but one big red flag for me is pp. 187-188, where he doggedly insists that the universe is closed, although as far as I know the current cosmological consensus is the opposite, and I trust the cosmologists a heck of a lot more than a fellow who tries to prove his Christianity with his physics.

(This is actually convenient for me: a few weeks ago I was wondering on IRC what the current status of Tipler's theories was, given that he had clearly stated they were valid only if the universe were closed and the Higgs boson was within certain values, IIRC, but I was feeling too lazy to look it all up.)

And the extraction of a transcendent system of ethics from a Feynman quote...

A moment’s thought will convince the reader that Feynman has described not only the process of science, but the process of rationality itself. Notice that the bold-faced words are all moral imperatives. Science, in other words, is fundamentally based on ethics. More generally, rational thought itself is based on ethics. It is based on a particular ethical system. A true hu

[…]
quanticle (+9, 9y): The quote that stood out for me was the following: Now, all that's well and good, except for one tiny, teensy little flaw: there is no such thing as aether. Michelson and Morley proved that quite conclusively in 1887. Tipler, in this case, appears to be basing his argument on a theory that was discredited over a century ago. Yes, some of the conclusions of aetheric theory are superficially similar to the conclusions of relativity. That, however, doesn't make the aetheric theory any less wrong.
TetrahedronOmega (0, 6y): Hi, Quanticle. You state that "there is no such thing as aether. Michelson and Morley proved that quite conclusively in 1887." For the details on how General Relativity is inherently an æther theory, see physicist and mathematician Prof. Frank J. Tipler and mathematician Maurice J. Dupré's paper: Maurice J. Dupré and Frank J. Tipler, "General Relativity as an Æther Theory", International Journal of Modern Physics D, Vol. 21, No. 2 (Feb. 2012), Art. No. 1250011, 16 pp., doi:10.1142/S0218271812500113, bibcode: 2012IJMPD..2150011D, http://webcitation.org/6FEvt2NZ8 . Also at arXiv:1007.4572, July 26, 2010, http://arxiv.org/abs/1007.4572 .
Pfft (+7, 9y): Argh. Also, this makes me wonder whether the SIAI's intention to publish in philosophy journals is such a good idea. Presumably part of the point was for them to gain status by being associated with respected academic thinkers. But this isn't really the kind of thinking anyone would want to be associated with...

The way I look at it: if such material can survive peer review, what do people make of things whose authors either did not try to pass peer review or could not pass it? They probably think pretty poorly of them.

JohnD (+7, 9y): I can't speak to this particular article, but oftentimes special editions of journals like this one (i.e., effectively a symposium on the work of another) are not subjected to rigorous peer review. The responses are often solicited by the editors and there is minimal correction or critique of the content of the papers, certainly nothing like you'd normally get for an unsolicited article in a top philosophy journal. But, to reiterate, I can't say whether or not the Journal of Consciousness Studies did that in this instance.
[anonymous] (+1, 9y): On the one hand, this is the cached defense that I have for the Sokal hoax, so now I have an internal conflict on my hands. If I believe that Tipler's paper shouldn't have been published, then it's unclear why Sokal's should have been. Oh dear, oh dear. How to resolve this conflict? Perhaps rum...
Bruno_Coelho (0, 9y): Does anyone think that visibility among philosophers has practical impact on the solution of technical problems? Apparently the people who could cause harm in the near term are AI researchers, but many of them are chasing Internet traffic or working on their own projects. Gaining visibility is a good thing when what's needed is social acceptance, or when more people are needed to solve a problem. Publishing in peer-reviewed (philosophical) journals can bring more scholars to the cause, but more people caring about AI is not a good thing per se.
[anonymous] (+3, 9y): Some things even peer review can't cure. I looked through a few of their back issues and was far from impressed. On the other hand, this ranking [http://www.scimagojr.com/journalrank.php?area=1200&category=1211&country=all&year=2011&order=sjr&min=0&min_type=cd] puts them above Topoi, Nous, and Ethics. I'm not even sure what that means -- maybe their scale is broken?
gwern (+3, 9y): Maybe there's some confounding factor - like sudden recent interest in Singularity/transhumanist topics forcing the cite count up?
Jesper_Ostman (0, 9y): Unlikely - they have been highly ranked for a long time, and singularity/transhumanist topics are only a very small part of what JCS covers.
shminux (+3, 9y): Tipler did some excellent work in mathematical relativity before going off the rails shortly thereafter.
[anonymous] (+15, 9y):

I'm very grateful to the undergraduate professor of mine who introduced me to Penrose and Tipler when I was a freshman. I think at that time I was on the cusp of falling into a similar failure state, and reading Shadows of the Mind and The Physics of Immortality shocked me out of what would have been a very long dogmatic slumber indeed.

Incorrect (+2, 9y): And yet humans kill each other. His only possible retort is that some humans are not rational. Better hope that nobody builds an "irrational" AI.
TetrahedronOmega (0, 6y): Hi, Gwern. You asked, "... MWI and quantum mechanics implied by Newton? Really?" Yes: the Hamilton-Jacobi Equation, which is the most powerful formulation of Newtonian mechanics, is, like the Schrödinger Equation, a multiverse equation. Quantum Mechanics is the unique specialization of the Hamilton-Jacobi Equation with the specification imposed that determinism is maintained, since the Hamilton-Jacobi Equation is itself indeterministic: when particle trajectories cross paths a singularity is produced (i.e., the values in the equations become infinite), and so it is not possible to predict (even in principle) what happens after that. On the inherent multiverse nature of Quantum Mechanics, see physicist and mathematician Prof. Frank J. Tipler's paper: Frank J. Tipler, "Quantum nonlocality does not exist", Proceedings of the National Academy of Sciences of the United States of America, Vol. 111, No. 31 (Aug. 5, 2014), pp. 11281-11286, doi:10.1073/pnas.1324238111, http://www.pnas.org/content/111/31/11281.full.pdf , http://webcitation.org/6WeupHQoM . Regarding the universe necessarily being temporally closed according to the known laws of physics: all the proposed solutions to the black hole information issue except for Prof. Tipler's Omega Point cosmology share the common feature of using proposed new laws of physics that have never been experimentally confirmed - and indeed which violate the known laws of physics - such as Prof. Stephen Hawking's paper on the black hole information issue, which depends on the conjectured String Theory-based anti-de Sitter space/conformal field theory correspondence (AdS/CFT correspondence). (See S. W. Hawking, "Information loss in black holes", Physical Review D, Vol. 72, No. 8 [Oct. 15, 2005], Art. No. 084013, 4 pp.) Hence, the end of the universe in finite proper time via collapse before a black hole completel…
torekp (0, 9y): Not to rescue Tipler, but: none of these possibilities seem to exclude being also a series of imperative sentences.
gwern (0, 9y): In much the same way rhetorically asking 'After all, what is a computer program but a proof in an intuitionistic logic?' doesn't rule out 'a series of imperative sentences'.
Will_Newsome (0, 9y): The "AIXI equation" is not an AI in the relevant sense.
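For context, the "AIXI equation" under discussion is Hutter's action-selection rule, sketched here in the standard form from Hutter's papers (reproduced from memory, so check the details against the original):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where $U$ is a universal (monotone) Turing machine, $q$ ranges over environment programs consistent with the interaction history, and $\ell(q)$ is the length of $q$, giving the Solomonoff-style prior $2^{-\ell(q)}$. As written this involves an uncomputable search over all programs, which is exactly why it is an optimality definition rather than an AI one could run.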
gwern (0, 9y): Fine - 'show me this morality in a computable implementation of AIXI using the speed prior or GTFO' (what was it called, AIXI-tl?).
Will_Newsome (+2, 9y): That also isn't an AI in the relevant sense, as it doesn't actually exist. Tipler would simply deny that such an AI would be able to do anything, for Searlian reasons. You can't prove that an AIXI-style AI will ever work, and it's presumably part of Tipler's argument that it won't work, so simply asserting that it will work is sort of pointless. I'm just saying that if you want to engage with his argument you'll have to get closer to it, 'cuz you're not yet in bowshot range. If your intention was to repeat the standard counterargument rather than show why it's correct, then I misinterpreted your intention; apologies if so.
gwern (+1, 9y): The AIXI proofs seem pretty adequate to me. They may not be useful, but that's different from not working. More to the point, nothing in Tipler's paper gave me the impression he had so much as heard of AIXI, and it's not clear to me that he does accept Searlian reasons - what is that, by the way? It can't be Chinese room stuff, since Tipler has been gung ho on uploading for decades now.
Will_Newsome (+4, 9y): It's really not obvious that if you run an AIXI-like AI it will actually do anything other than self-destruct, no matter how much juice you give it. There have been various papers on this theme recently, and it's a common LW meme ("AIXI drops an anvil on its head"). By "Searlian reasons" I mean something like emphasizing the difference between syntax and semantics, and the difficulty of the grounding problem, as representative of this important dichotomy between narrow and general intelligence that philosophers of mind get angry with non-philosophers of mind for ignoring. I don't think Tipler's not having heard of AIXI is particularly damning, even if true.
gwern (+1, 9y): I don't think it's obvious it would self-destruct - any more than it's obvious humans will not self-destruct. (And that anvil phrase is common to Eliezer.) The papers you allude to apply just as well to humans. I believe you are the one who is claiming AIXI will never work, and suggesting Tipler might think like you.
[anonymous] (0, 9y): You might enjoy reading this [http://theophysics.host56.com/tipler-omega-point-and-christianity.html] for more context.
timtyler (-1, 9y): Yes: nonsense.

Daniel Dennett's "The Mystery of David Chalmers" quickly dismissed the Singularity without really saying why:

My reactions to the first thirty-odd pages did not change my mind about the topic, aside from provoking the following judgment, perhaps worth passing along: thinking about the Singularity is a singularly imprudent pastime, in spite of its air of cautious foresight, since it deflects our attention away from a much, much more serious threat, which is already upon us, and shows no sign of being an idle fantasy: we are becoming, or have become, enslaved by something much less wonderful than the Singularity: the internet.

and then spent the rest of his paper trying to figure out why Chalmers isn't a type-A materialist.

By the way, procrastinating on the internet may be the #1 factor delaying the Singularity. Before we make the first machine capable of programming better machines, we may make a dozen machines capable of distracting us so much that we never accomplish anything beyond that point.

People need cool names to treat ideas seriously, so let's call this apex of human invention "Procrastinarity". Formally: the better tools people can make, the more distraction they provide, so there is a limit for a human civilization where there is so much distraction that no one is able to focus on making better tools. (More precisely: even if some individuals can focus at this point, they will not find enough support, friends, mentors, etc., so without the necessary scientific infrastructure they cannot meaningfully contribute to human progress.) This point is called Procrastinarity, and all real human progress stops there. A natural disaster may eventually reduce humanity to pre-Procrastinarity levels, but if humans overcome those problems, they will just reach another Procrastinarity phase. I give 50% odds that we reach the first Procrastinarity within the next 30 years.

There's another such curve, incidentally - I've been reading up on scientific careers, and there's solid-looking evidence that a modern scientist makes his best discoveries about a decade later than his counterpart did in the early 1900s. This is a problem because productivity drops off in one's 40s and is pretty small in one's 50s and later, and this has remained constant (despite the small improvements in longevity over the 20th century).

So if your discoveries only really begin in your late 20s and you face a deadline of your 40s, and each century we lose a decade, this suggests within 2 centuries, most of a scientist's career will be spent being trained, learning, helping out on other experiments, and in general just catching up!

We might call this the PhDalarity - the rate at which graduate and post-graduate experience is needed before one can make a major discovery.

Viliam_Bur (+5, 9y): As a former teacher I have noticed some unlucky trends in education (it may differ between countries), namely that it seems to be slowing down. On one end there is public pressure to make schools easier for small children, like not giving them grades in the first year. On the other end there is pressure to send everyone to university, for signalling (by having more people in universities we can pretend to be smart, even if the price is dumbing down university education) and for reducing unemployment (more people in school, fewer people in the unemployment registry). While I generally approve of a friendlier environment for small children and more opportunities for higher education, the result seems to be shifting education to a later age. Students learn less in high school (some people claim otherwise, but e.g. the math curriculum has been reduced in recent decades) and many people think that's OK, because they can still learn the necessary things at university, can't they? So the result is a few "child prodigies" and a majority of students who are kept in school only for legal or financial reasons. Yes, people live longer and prolong their childhoods, but their peak productivity does not shift accordingly. We feel there is enough time, but that's because most people underestimate how much there is to learn.
Thomas (0, 9y): OTOH there is a saying - just learn where and how to get the information you need. And there is a big truth in that. It is easier every day to learn something (anything) when you need it. The market value of knowledge could easily be grossly overestimated.
Viliam_Bur (+9, 9y): It's easy to learn something when you need it... if the inferential distance [http://wiki.lesswrong.com/wiki/Inferential_distance] is short. Problem is, it often isn't. Second problem: it is easy to find information, but it is more difficult to separate right from wrong information if the person has no background knowledge. Third problem: the usefulness of some things becomes obvious only after a person learns them. I have seen smart people try to jump across a large informational gap and fail. For example, there are many people who taught themselves programming from internet tutorials and experiments. They can do many impressive things, yet fail at something rather easy later, because they have no concept of "state automata" or "context-free grammar" or "halting problem" -- the things that may seem like useless academic knowledge at university, but which allow one to quickly classify groups of problems into categories with already-known, rather easy solutions (or, in the last case, known to be generally unsolvable). Lack of proper abstractions slows their learning; they invent their own bad analogies. In theory, there are enough materials online to allow them to learn everything properly, but that would take a lot of time and someone's guidance. And that's exactly what schools are for: they select materials, offer guidance, and connect you with other people studying the same topic. In my opinion, a good "general education" is one that makes inferential distances shorter on average. Mathematics is very important, because it takes good basic knowledge to understand statistics, and without statistics you can't understand scientific results in many fields. A recent example: in a local Mensa group there was a discussion on the web about whether IQ tests are really necessary, because most people know what their IQ is. I dropped them a link to an article [http://onlinelibrary.wiley.com/doi/10.1111/1467-6494.00023/abstract] saying that the correlation between self-repo…
Thomas (-1, 9y): You are advocating a strategically devised network of knowledge which would always offer you support from the nearest base when you are wandering in a previously unknown land. "Here come the marines" - you can always count on that. Well, in science you can't. You must sometimes fight the marines as enemies, and you are often so far out that nobody even knows where you are. You are on your own, and all the heavy equipment is both useless and too expensive to carry. This is the situation when the stakes are high, when it really matters. When it doesn't, it doesn't anyway.
John_Maxwell (+4, 9y): I think we can plausibly fight this by improving education to compress the time necessary to teach concepts. Hardly any modern education uses the Socratic method to teach, which in my experience is much faster than conventional methods, and could in theory be executed by semi-intelligent computer programs (the Stanford machine learning class embedding questions partway through their videos is just the first step). Also, SENS.
gwern (+8, 9y): Even better would be http://en.wikipedia.org/wiki/Bloom%27s_2_Sigma_Problem incidentally, and my own idée fixe, spaced repetition. Like Moore's Law, at any point proponents have a stable of solutions for tackling the growth; they (or enough of them) have been successful for Moore's Law, and it has indeed continued pretty smoothly, so if they were to propose some SENS-style intervention, I'd give them decent credit for it. But in this case, the overall stylized evidence says that nothing has reversed the changes up until, I guess, the '80s, at which point one could begin arguing that there's underestimation involved (especially for the Nobel prizes). SENS and online education are great, but reversing this trend any time soon? It doesn't seem terribly likely. (I also wonder how big a gap between the standard courses and the 'cutting edge' there will be - if we make substantial gains in teaching the core courses, but there's a 'no man's land' of long-tail topics too niche to program and maintain a course on, which extends all the way out to the actual cutting edge, then the results might be more like a one-time improvement.)
John_Maxwell (+1, 9y): Thanks for the two sigma problem link.
John_Maxwell (0, 9y): http://arstechnica.com/web/news/2009/04/study-surfing-the-internet-at-work-boosts-productivity.ars
Viliam_Bur (+4, 9y): The article says that internet use boosts productivity only if it takes less than 20% of one's time. How is this relevant to real life? :D Also, the article suggests that the productivity improvement is not caused by the internet per se, but by having short breaks during work. So I think many people are beyond the point where internet use could boost their productivity.

Sue's article is here: She won’t be me.

Robin's article is here: Meet the New Conflict, Same as the Old Conflict - see also O.B. blog post

Francis's article is here: A brain in a vat cannot break out: why the singularity must be extended, embedded and embodied.

Marcus Hutter: Can Intelligence Explode?

I thought the idea that machine intelligence would be developed in virtual worlds on safety grounds was pretty daft. I explained this at the time:

IMO, people want machine intelligence to help them to attain their goals. Machines can't do that if they are isolated off in virtual worlds. Sure there will be test harnesses - but it seems rather unlikely that we will keep these things under extensive restraint on grounds of sheer paranoia - that would stop us from taking advantage of them.

However, Francis's objections to virtual worlds seem even more silly to me. I've been hearing that simulations aren't real for decades now - and I still don't really understand why people get into a muddle over this issue.

gwern (0, 9y): Hanson link doesn't seem to work.
timtyler (+2, 9y): It seems to be back now.

Schmidhuber paper

Brief overview of Gödel machines; sort of a rebuke of other authors for ignoring the optimality results for them, AIXI, etc.

Simultaneously, our non-universal but still rather general fast deep/recurrent neural networks have already started to outperform traditional pre-programmed methods: they recently collected a string of 1st ranks in many important visual pattern recognition benchmarks, e.g. Graves & Schmidhuber (2009); Ciresan et al. (2011): IJCNN traffic sign competition, NORB, CIFAR10, MNIST, three ICDAR handwriting competitions. Here we greatly profit from recent advances in computing hardware, using GPUs (mini-supercomputers normally used for video games) 100 times faster than today’s CPU cores, and a million times faster than PCs of 20 years ago, complementing the recent above-mentioned progress in the theory of mathematically optimal universal problem solvers.

On falsified predictions of AI progress:

I feel that after 10,000 years of civilization there is no need to justify pessimism through comparatively recent over-optimistic and self-serving predictions (1960s: ‘only 10 instead of 100 years needed to build AIs’) by a few early AI enthusiast

[…]
Wei_Dai (+8, 9y): A Gödel machine, if one were to exist, surely wouldn't do something so blatantly stupid as posting to the Internet a "recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions". Why can't humanity aspire to this rather minimal standard of intelligence and rationality?

Similar theme from Hutter's paper:

Will AIXI replicate itself or procreate? Likely yes, if AIXI believes that clones or descendants are useful for its own goals.

If AIXI had the option of creating an AIXI (which by definition has the goal of maximizing its own rewards), or creating a different AI (non-AIXI) that had the goal of serving the goals of its creator instead, surely it would choose the latter option. If AIXI is the pinnacle of intelligence (as Hutter claims), and an AIXI wouldn't build another AIXI, why should we? Because we're just too dumb?

Will_Newsome (+4, 9y): I like lines of inquiry like this one and would like it if they showed up more.
Wei_Dai (0, 9y): I'm not sure what you mean by "lines of inquiry like this one". Can you explain?
Will_Newsome (+8, 9y): I guess it's not a natural kind, it just had a few things I like all jammed together compactly:

* Decompartmentalizes knowledge between domains, in this case between AIXI AI programmers and human AI programmers.
* Talks about creation qua creation rather than creation as some implicit kind of self-modification.
* Uses common sense to carve up the questionspace naturally in a way that suggests lines of investigation.
Luke_A_Somers (+2, 9y): An AIXI might create another AIXI if it could determine that the rewards would coincide sufficiently, and it couldn't figure out how to get as good a result with another design (under real constraints).
gwern (+6, 9y): I'm sure you can come up with several reasons for that.
Wei_Dai (+5, 9y): That was meant to be rhetorical... I'm hoping that the hypothetical person who's planning to publish the Gödel machine recipe might see my comment (ETA: or something like it, if such an attitude were to become common) and think, "Hmm, a Gödel machine is supposed to be smart and it wouldn't publish its own recipe. Maybe I should give this a second thought."
timtyler (+2, 9y): If someone in IT is behaving monopolistically, a possible defense by the rest of the world is to obtain and publish their source code, thus reducing the original owner's power and levelling things a little. Such an act may not be irrational - if it is a form of self-defense.
Wei_Dai (+3, 9y): Suppose someone has built a self-improving AI, and it's the only one in existence (hence they have a "monopoly"). Then there might be two possibilities: either it's Friendly, or not. In the former case, how would it be rational to publish the source code and thereby allow others to build UFAIs? In the latter case, a reasonable defense might be to forcibly shut down the UFAI if it's not too late. What would publishing its source code accomplish? Edit: Is the idea that the UFAI hasn't taken over the world yet, but for some technical or political reason it can't be shut down, and the source code is published because many UFAIs are for some reason better than a single UFAI?
timtyler (+1, 9y): I don't think the FAI/UFAI distinction is particularly helpful in this case. That framework implies that this is a property of the machine itself. Here we are talking about the widespread release of a machine with a programmable utility function. Its effects will depend on the nature and structure of the society into which it is released (and the utility functions that are used with it), rather than being solely attributes of the machine itself. If you are dealing with a secretive monopolist, nobody on the outside is going to know what kind of machine they have built. The fact that they are a secretive monopolist doesn't bode well, though. Failing to share is surely one of the most reliable ways to signal that you don't have the interests of others at heart. Industrial espionage or reverse engineering can't shut organisations down - but it may be able to liberate their technology for the benefit of everyone.
Vladimir_Nesov (+4, 9y): So we estimate based on what we anticipate about the possible state of society. If it's expected that sharing an AGI design results in everyone dying, not sharing it can't signal bad intentions.
timtyler (-4, 9y): The expectations and intentions of secretive organisations are usually unknown. From outside, it will likely seem pretty clear that only a secretive elite having the technology is more likely to result in massive wealth and power inequalities than what would happen if everyone had access. Large wealth and power inequalities seem undesirable. Secretive prospective monopolists might claim all kinds of nonsense in the hope of defending their interests. The rest of society can be expected to ignore such material.
timtyler (+2, 9y): That seems more likely than a secretive monopolistic agent keeping the technology for themselves from the beginning - and obliterating all potential rivals. Keeping the technology of general-purpose inductive inference secret seems unlikely to happen in practice. It is going to go into embedded devices - from which it will inevitably be reverse engineered and made publicly accessible. Also, it's likely to arise from a public collaborative development effort in the first place. I am inclined to doubt whether anyone can win while keeping their technology on a secure server - try to do that and you will just be overtaken; or rather, you will never be in the lead in the first place. Not pessimism, but realism, is my assessment. You have to apply your efforts where they will actually make a difference.

Roman V. Yampolskiy paper

Pretty good overview of the AI boxing problem with respect to covert channels; possibly the first time I've seen Eliezer's experiments cited, or Stuart Armstrong's Dr. Evil anthropic attack.

While the outlined informational hazards comprise over a dozen categories and are beyond the scope of this paper, it is easy to see how mental state of a person could be stressed to an unstable state. For example a religious guard could be informed of all the (unknown to him) contradictions in the main text of his religion causing him to question his beliefs and the purpose of life.

Given the length of the paper, I kind of expected there to be no mention of homomorphic encryption, as the boxing proposal that seems most viable, but to my surprise I read

The source code and hardware configuration of the system needs to be obfuscated (Yampolskiy & Govindaraju, 2007a) and important modules of the program should be provided only in the homomorphicly encrypted (Gentry, 2009) form, meaning that it could be used for computation or self-improvement (Hall, 2007), but not for self-analysis.

Important modules? Er, why not just the whole thing? If you have homomorphic encryption working and proven correct, the other measures may add a little security, but not a whole lot.
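To make the boxing idea concrete, here is a toy sketch of homomorphic computation. This uses the Paillier cryptosystem, which is only additively homomorphic (Gentry's 2009 scheme, the one the paper cites, is fully homomorphic and would permit arbitrary computation); the parameters are deliberately tiny and insecure. The point is that the host can combine ciphertexts it cannot read, and the result decrypts correctly:

```python
import math
import random

def keygen(p=313, q=331):
    # Toy primes for illustration only; real Paillier uses large random primes.
    n = p * q
    lam = math.lcm(p - 1, q - 1)          # Carmichael function of n
    g = n + 1                             # standard simple choice of generator
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)         # inverse of L(g^lam mod n^2), L(x) = (x-1)//n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:            # blinding factor must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n

pub, priv = keygen()
a, b = 42, 58
# The "host" multiplies ciphertexts it cannot read...
c_sum = (encrypt(pub, a) * encrypt(pub, b)) % (pub[0] ** 2)
# ...and the product decrypts to the sum of the plaintexts.
assert decrypt(pub, priv, c_sum) == a + b
```

A fully homomorphic scheme extends this from addition to arbitrary circuits, which is what would let the whole system - not just "important modules" - run encrypted.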

timtyler (+5, 9y): It says:
gwern (+8, 9y): Well, weren't they? That was the whole point, I had the impression on SL4...

Our reason for placing the Singularity within the lifetimes of practically everyone now living who is not already retired, is the fact that our supercomputers already have sufficient power to run a Singularity-level program (Tipler, 2007). We lack not the hardware, but the software. Moore’s Law insures that today’s fastest supercomputer speed will be standard laptop computer speed in roughly twenty years (Tipler, 1994).

Really? I was unaware that Moore's law was an actual physical law. Our state of the art has already hit the absolute physical limit of transistor design - we have single-atom transistors in the lab. So, if you'll forgive me, I'll be taking the claim that "Moore's law ensures that today's fastest supercomputer speed will be the standard laptop computer speed in 20 years" with a grain of salt.

Now, perhaps we'll have some other technology that allows laptops twenty years hence to be as powerful as supercomputers today. But to just handwave that enormous engineering problem away by saying "Moore's law will take care of it" is fuzzy thinking of the worst sort.
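The arithmetic alone supports the skepticism. A back-of-the-envelope check (the 2012-era FLOPS figures below are rough assumptions of mine, not from the paper):

```python
import math

# Moore's-law-style doubling every ~2 years over 20 years:
doublings = 20 / 2
factor = 2 ** doublings             # 1024.0 - about a thousandfold

# Assumed rough 2012-era figures: top supercomputer ~10 PFLOPS (1e16),
# ordinary laptop ~10 GFLOPS (1e10) - a gap of about a million.
gap = 1e16 / 1e10
years_needed = 2 * math.log2(gap)   # ~40 years at the 2-year doubling rate
```

So even granting uninterrupted doubling, closing a millionfold gap takes roughly forty years, twice the paper's figure, before any physical limits enter the picture.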

DanielVarga (+7, 9y): True. But this one would not make the top 20 list of most problematic statements in the Tipler paper.
gwern (+2, 9y): Indeed. For example, I raised my eyebrows when I came across the 2007 claim that we already have enough hardware. But that was far from the most questionable claim in the paper, and I didn't feel like reading Tipler 2007 to see what lurked within.

I like Goertzel's succinct explanation of the idea behind Moore's Law of Mad Science:

...as technology advances, it is possible for people to create more and more destruction using less and less money, education and intelligence.

Also, his succinct explanation of why Friendly AI is so hard:

The practical realization of [Friendly AI] seems likely to require astounding breakthroughs in mathematics and science — whereas it seems plausible that human-level AI, molecular assemblers and the synthesis of novel organisms can be achieved via a series of moderate-level breakthroughs alternating with ‘normal science and engineering.’

Another choice quote that succinctly makes a key point I find myself making all the time:

if the US stopped developing AI, synthetic biology and nanotech next year, China and Russia would most likely interpret this as a fantastic economic and political opportunity, rather than as an example to be imitated.

His proposal for Nanny AI, however, appears to be FAI-complete.

Also, it is strange that despite paragraphs like this:

we haven’t needed an AI Nanny so far, because we haven’t had sufficiently powerful and destructive technologies. And now, these same technologies that may necessitate the creation of an AI Nanny, also may provide the means of creating it.

...he does not anywhere cite Bostrom (2004).

0timtyler9yIt's a very different idea from Yudkowsky's "CEV" proposal. It's reasonable to think that a nanny-like machine might be easier to build than other kinds - because a nanny's job description is rather limited.

A quote from Dennett's article, on the topic of consciousness:

‘One central problem,’ Chalmers tells us, ‘is that consciousness seems to be a further fact about conscious systems’ (p. 43) over and above all the facts about their structure, internal processes and hence behavioral competences and weaknesses. He is right, so long as we put the emphasis on ‘seems’. There does seem to be a further fact to be determined, one way or another, about whether or not anybody is actually conscious or a perfect (philosopher’s) zombie. This is what I have called the Zom

... (read more)

Damien Broderick paper

"What if, as Vernor Vinge proposed, exponentially accelerating science and technology are rushing us into a Singularity (Vinge, 1986; 1993), what I have called the Spike? Technological time will be neither an arrow nor a cycle (in Stephen Jay Gould’s phrase), but a series of upwardly accelerating logistical S-curves, each supplanting the one before it as it flattens out. Then there’s no pattern of reasoned expectation to be mapped, no knowable Chernobyl or Fukushima Daiichi to deplore in advance. Merely - opacity."

...G. H

... (read more)

In "Leakproofing..."

"To reiterate, only safe questions with two possible answers of even likelihood which are independently computable by people should be submitted to the AI."

Oh come ON. I can see 'independently computable', but requiring single bit responses that have been carefully balanced so we have no information to distinguish one from the other? You could always construct multiple questions to extract multiple bits, so that's no real loss; and with awareness of Bayes' theorem, getting an exact probability balance is essentially impossible on any question we'd actually care about.
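The "no real loss" point is just information theory: k yes/no answers carry at most k bits, so any bounded answer can be extracted one binary question at a time. A toy sketch (the bitwise question protocol here is my illustration, not anything from the paper):

```python
# Toy illustration: any integer answer in [0, 2**k) can be recovered
# with k single-bit questions of the form "is bit i of your answer 1?".
def ask_bits(oracle_answer, k):
    bits = [(oracle_answer >> i) & 1 for i in range(k)]  # k one-bit replies
    return sum(b << i for i, b in enumerate(bits))       # reassemble the answer

assert ask_bits(42, 8) == 42  # eight yes/no questions recover one byte
```

So restricting the channel to one bit per question costs nothing but round trips; the genuinely hard requirement is the "even likelihood" condition, for the Bayesian reason given above.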

In my opinion, the most relevant article was from Drew McDermott, and I'm surprised that such an emphasis on analyzing the computational complexity of approaches to 'friendliness' and self-improving AI has not been more common. For that matter, I think computational complexity has more to tell us about cognition, intelligence, and friendliness in general, not just in the special case of self-improving optimization/learning algorithms, and could completely modify the foundational assumptions underlying ideas about intelligence/cognition and the singulari... (read more)

I wish I could read the Dennett article online. If Chalmers has a philosophical nemesis it has to be Dennett. Though he probably sees it otherwise, I contend that Dennett's hard materialism is losing ground daily in the academic and philosophical mainstream even as Chalmers' non-reductive functionalism gains in appreciation. (Look at Giulio Tononi's celebrated IIT theory of consciousness with its attendant panpsychism for just one example. And that's in the hard sciences, not philosophy.)

I'm ascertaining from the comments here that Dennett is no fan of t... (read more)

Many of those people are believers who are already completely sold on the idea of a technological singularity. I hope some sort of critical examination is forthcoming as well.

Schmidhuber, Hutter and Goertzel might be called experts. But I dare to argue that statements like "progress towards self-improving AIs is already substantially beyond what many futurists and philosophers are aware of" are almost certainly bullshit.

3Thomas9yYou can be certain if you wish. I am not. As I am not sure that there isn't a supervirus somewhere, I can't be certain that there isn't a decent self-improver somewhere. Probably not, but ... Both ARE possible, according to my best knowledge, so it wouldn't be wise to be too sure in any direction. As you are.
2XiXiDu9yAccording to the technically correct, but completely useless, lesswrong style rationality you are right that it is not wise to say that it is "almost certainly bullshit". What I meant to say is that given what I know it is unlikely enough to be true to be ignored and that any attempt at calculating the expected utility of being wrong will be a waste of time, or even result in spectacular failure. I currently feel that the whole business of using numerical probability estimates and calculating expected utilities is incredibly naive in most situations and at best gives your beliefs a veneer of respectability that is completely unjustified. If you think something is almost certainly bullshit then say it and don't try to make up some number. Because the number won't resemble the reflective equilibrium of various kinds of evidence, your preferences and intuition that is compressed into calling something almost certainly bullshit.
0Thomas9yWell, given what you think you know. It is always the case, with just everyone, that (s)he estimates from the premises of what (s)he thinks (s)he knows. It just can't be any different. Somewhere in the chain of logical conclusions might be an error. Or might not be. And there might be an error in the premises. Or might not be. Saying - oh, I know you are wrong based on everything I stand for - is not good enough. You should explain to us why a breakthrough in self-optimization is as unlikely as you claim. Just as the next guy, who thinks that it is quite likely, should explain why as well. They do so. P.S. I don't consider myself a "lesswronger" at all. I disagree too often and have no "site patriotism".
1XiXiDu9yMy comment was specifically aimed at the kind of optimism that people like Jürgen Schmidhuber and Ben Goertzel seem to be displaying. I asked other AI researchers about their work, even some who worked with them, and they disagree. There are mainly two possibilities here. That it takes a single breakthrough or that it takes a few breakthroughs, i.e. that it is a somewhat gradual development that can be extrapolated. In the case that the development of self-improving AIs is stepwise I doubt that their optimism is justified simply because they are unable to show any achievements. All achievements in AI so far are either a result of an increase in computational resources or, in the case of e.g. IBM Watson or the Netflix algorithm, the result of throwing everything we have at a problem to brute-force a solution. None of those achievements are based on a single principle like an approximation of AIXI. Therefore, if people like Schmidhuber and Goertzel made stepwise progress and extrapolate it to conclude that more progress will amount to general intelligence, then where are the results? They should be able to market even partial achievements. In the case that the development of self-improving AIs demands a single breakthrough or new mathematical insights I simply doubt their optimism based on the fact that such predictions amount to pure guesswork and that nobody knows when such a breakthrough will be achieved or at what point new mathematical insights will be discovered. And regarding the proponents of a technological Singularity: 99% of their arguments consist of handwaving and claims that physical possibility implies feasibility. In other words, bullshit.
0Thomas9yEverybody on all sides of this discussion is suspect as a bullshit trader or a bullshit producer. That includes me, you, Vinge, Kurzweil, Jürgen S., Ben Goertzel - everybody is a suspect. Including the investigators from any side. Now, I'll clear my position. The whole AI business is an Edisonian, not an Einsteinian project. I don't see a need for some enormous scientific breakthroughs before it can be done. No, to me it looks like - we have had Maxwell's equations for some time now, can we build an electric lamp? Edison is just one among many who is claiming it is almost done in his lab. It is not certain what the real situation in Menlo Park is. The fact that an apprentice who left Edison is saying that there is no hope for a light bulb is not very informative. Nor is it that another apprentice still working there is euphoric. It doesn't matter even what the Royal Society back in old England has to say. Or a simple peasant. You just can't meta-judge very productively. But you can judge: is it possible to have an object such as an electric-driven lamp? Or can you build a nuclear fusion reactor? Or can you build an intelligent program? If it is possible, how hard is it to actually build one of those? It may take a long time, even if it is possible. It may take a short time. The only real question is - can it be done and if yes - how? If no, also good. It just isn't. But you have to stay on topic, not meta-topic, I think.
5XiXiDu9yTo me it looks like AGI researchers are simply rubbing amber with fur while claiming that they are on the verge of building a full-scale electricity-producing fusion power plant. It is possible to create a Matrix-style virtual reality. It is possible to create antimatter weapons. That doesn't mean that either is feasible. It also says nothing about timeframes. The real question is if we should bother to worry about possibilities that could as well be 500, 5000 or 5 million years into the future, or never even come about the way we think.
1Thomas9yIt has been done in 2500 years. (Providing that the fusion is still outsourced to the Sun.) What are the guarantees that in this case we will CERTAINLY NOT be 100 times faster? It does not automatically mean that it is either unfeasible or far, far in the future. If it were sure that it's far, far away - but it isn't that sure at all - even then it would be a very important topic.
0XiXiDu9yI am aware of that line of reasoning and reject it. Each person has about a 1 in 12000 chance of having an unruptured aneurysm in the brain that could be detected and then treated after a virtually risk-free magnetic resonance angiography. Given the utility you likely assign to your own life it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don't do it, do you? There are literally thousands of activities that are rational given their associated utilities. But that line of reasoning, although technically correct, is completely useless because 1) you can't really calculate shit 2) it's impossible to do for any agent that isn't computationally unbounded 3) you'll just end up sprinkling enough mathematics and logic over your fantasies to give them a veneer of respectability. Expected utility maximization in combination with consequentialism is the ultimate recipe for extreme and absurd decisions and actions. People on lesswrong are fooling themselves by using formalized methods to evaluate informal evidence and pushing the use of intuition onto a lower level. The right thing to do is to use the absurdity heuristic and discount crazy ideas that are merely possible but can't be evaluated due to a lack of data.
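The screening example is a straightforward expected-value comparison, and sketching it in numbers actually illustrates the objection: only the 1-in-12000 base rate comes from the comment above; every other figure below is a made-up placeholder, which is exactly the "you can't really calculate shit" point.

```python
# Expected-value sketch of the screening argument. Illustrative only:
# p_aneurysm is the base rate quoted above; everything else is a
# hypothetical placeholder with no empirical backing.
p_aneurysm = 1 / 12000
p_detection_prevents_death = 0.5    # hypothetical
value_of_life_saved = 10_000_000    # hypothetical dollars
cost_of_scan = 400                  # hypothetical dollars

expected_benefit = p_aneurysm * p_detection_prevents_death * value_of_life_saved
print(expected_benefit)                  # ~417, just above the assumed scan cost
print(expected_benefit > cost_of_scan)   # True -- but flips with small input changes
```

Note how the verdict sits within a few percent of the decision boundary: nudge any placeholder slightly and it flips, which is the fragility being complained about.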
4timtyler9yDoes this make sense? How much does the scan cost? How long does it take? What are the costs and risks of the treatment? Essentially, are the facts as you state them? I don't think so. Are you thinking of utilitarianism? If so, expected utility maximization != utilitarianism.
0[anonymous]9yOk what's the difference here? By "utilitarianism" do you mean the old straw-man version of utilitarianism with bad utility function and no ethical injunctions? I usually take utilitarianism to be consequentialism + max(E(U)) + sane human-value metaethics. Am I confused?
0timtyler9yThe term "utilitarianism [http://en.wikipedia.org/wiki/Utilitarianism]" refers to maximising the combined happiness of all people. The page says: So: that's a particular class of utility functions. "Expected utility maximization" is a more general framework from decision theory. You can use any utility function with it - and you can use it to model practically any agent. Utilitarianism is a pretty nutty personal moral philosophy, IMO. It is certainly very unnatural - due partly to its selflessness and lack of nepotism [http://en.wikipedia.org/wiki/Nepotism]. It may have some merits as a political philosophy (but even then...).
0[anonymous]9yThanks. Is there a name for expected utility maximisation over a consequentialist utility function built from human value? Does "consequentialism" usually imply normal human value, or is it usually a general term?
0timtyler9ySee http://en.wikipedia.org/wiki/Consequentialism [http://en.wikipedia.org/wiki/Consequentialism] for your last question (it's a general term). The answer to your "Is there a name..." question is "no" - AFAIK.
0[anonymous]9yI get the impression that most people around here approach morality from that perspective, it seems like something that ought to have a name.
0gwern9yMy understanding from long-past reading of elective whole-body MRIs was that they were basically the perfect example of iatrogenics & how knowing about something can harm you / the danger of testing. What makes your example different? (Note there is no such possible danger from cryonics: you're already 'dead'.)
0timtyler9yReally? Some have been known to exaggerate to stimulate funding. However, many people (including some non-engineers) don't put machine intelligence that far off. Do you have your own estimates yet, perhaps?
2timtyler9yThat's one of those statements-so-vague-they-are-bound-to-be-true. "Substantially" is one problem, and "many" is another.