By singularity I mean a recursively self-improving intelligence created by man that will help us solve the world's problems.

Please explain the downvotes... sorry I didn't write an essay or link to lots of in-jokes about the sky being green. It's just a simple question, so I didn't want to embellish it.

On the one hand, no. I suspect building a better-than-human intelligence will be much harder than we imagine. It has been much harder than we imagined in the past, and I don't see any reason to think we have passed some inflection point where our understanding is so good that progress from now on will be qualitatively better than it has been.

On the other hand, we already have a self-improving intelligence, but it is only partially artificial. It seems clear to me that the intelligence of the human race as a whole is taking off quickly as our technology for tying brains together to work cooperatively improves. Probably the last "natural" improvement was the evolution of spoken language, which increased the bandwidth of inter-brain communication by orders of magnitude. Since then the artificial improvements include recording devices (starting with writing) and communication devices (starting with books on boats). With the internet and the fantastic interfaces to it, we now have a fantastically complex inter-brain communication network. Amazingly, a big part of it is still intermediated through our fingers, which seems likely to me to change soon for the better.

What is the IQ of the planet as a whole? Whatever the answer, the planet with books was smarter than the planet without them, and the planet with the internet is pretty remarkable indeed.

On the one hand, no. I suspect building a better-than-human intelligence will be much harder than we imagine.

While I understand your desire to correct for Optimism Bias, aren't you making a fully general counterargument?

Am I arguing? I thought I was just saying what I think for what that is worth or not worth.

that will probably kill us all.

FTFY

Please don't use "man" as a synonym for "people". We're supposed to be past this sexism.

At the risk of sounding cynical, "we're" not past premeditated acquaintance rape (the most common kind of rape), the gender pay gap and glass ceiling in employment, gender-essentialist stereotyping, domestic violence with a significant gender skew, and transphobia...

So why would "we" be past sexism?

(This is not to shut down your request for gender-neutral language, mind! It just seems like the tip of the iceberg where sexism is concerned.)

I just meant the Less Wrong community ought to hold itself to higher standards than the general public.

It depends. What do you mean by lifetime?

I'm not sure that your definition of a Singularity is a good one. By that definition you are only asking about a subclass of best-case Singularity scenarios. An extremely self-improving intelligence that doesn't help humans and takes over should probably be considered a Singularity-type event. In fact, by your definition it would constitute a Singularity if we created an entity about as smart as a cat that was able to self-improve to being as intelligent as a raven. This seems not to fit what you want to ask.

I will therefore answer two questions. First, will a singularity occur under your definition? I don't know, but I wouldn't be surprised. One serious problem with this sort of thing is what one means by self-improving. For example, neural nets are self-improving, as are a number of automated learning systems, and much of what they do is in some sense recursive. Presuming, therefore, that you mean some form of recursive self-improvement that allows much more fundamental changes to the architecture of the entity in question, I assign this sort of event a decent chance of happening in my lifetime (say 10-20%).
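To make the distinction concrete, here is a toy sketch in Python of the weak sense of "self-improving": a loop that improves its own performance on a fixed task by tuning a parameter, while the improvement procedure itself never changes. Every value in it is made up purely for illustration.

```python
# Toy illustration of "weak" self-improvement: performance on a fixed task
# improves through parameter tuning, but the improver itself (the update
# rule, the architecture) is never rewritten. All numbers are invented.

def loss(w):
    # Made-up objective: squared distance from an unknown target value.
    target = 3.7
    return (w - target) ** 2

def improve(w, lr=0.1):
    # Finite-difference gradient step: the "self-improvement" is confined
    # to adjusting w; nothing about improve() itself changes.
    eps = 1e-5
    grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
    return w - lr * grad

w = 0.0
for _ in range(100):
    w = improve(w)

print(f"final parameter: {w:.3f}, final loss: {loss(w):.6f}")
```

Recursive self-improvement in the stronger sense would be a system that could rewrite improve() itself, or the architecture it runs on, which is a qualitatively different capability.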

Now, I will answer what I think you wanted to ask, where by Singularity you mean the creation of a recursively self-improving AI which improves itself so much and so fast that it quickly becomes a dominant force in its lightcone. This possibility I assign a very low probability of happening in my lifetime, around 1-2%. Most of that uncertainty is due to uncertainty about how much purely algorithmic improvement is possible (e.g. issues like whether P=NP and the relationship between NP and BPP).

No. At the level at which I think an AI needs to emulate humans, I don't think it'd have any great advantages for self-improvement. I do think it might be possible to highly automate innovation and scientific discovery without intelligence (and all the risks that come with it), though: a system that applies the same "dumb" algorithms over and over again and spits out answers, without the possibility of it ever having a goal or desire or making a decision (confusion would be the input, clarity the output). So I still think we might be able to solve all the world's problems in my lifetime.
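Something like this, as a toy sketch in Python: a loop that blindly searches candidate hypotheses against noisy observations and reports the best fit, with no goal or decision beyond "lower error wins". The hidden law and every number here are invented purely for illustration.

```python
import random

# Toy "discovery machine": noisy observations in (confusion), best-fitting
# hypothesis out (clarity). It applies the same dumb step over and over and
# never wants anything. The hidden law it recovers is invented.

random.seed(0)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(10)]

def error(a, b):
    # Mean squared error of the hypothesis y = a*x + b on the observations.
    return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

best = None
for _ in range(100000):
    # Blind random guessing over linear hypotheses, repeated mechanically.
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    e = error(a, b)
    if best is None or e < best[0]:
        best = (e, a, b)

print(f"best hypothesis: y = {best[1]:.2f}*x + {best[2]:.2f} (error {best[0]:.4f})")
```

Real automated discovery systems are vastly more sophisticated than this, but the point stands: nothing in the loop has a goal, it just grinds.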

There's a whole cluster of things: a recursively self-improving process? A runaway optimizer? A general intelligence? Brain emulation?

But yes to most of these things. It helps that I expect that in 60 years, when I am 80, medical technology will be effective enough to give me another 60 years.

But yes to most of these things. It helps that I expect that in 60 years, when I am 80, medical technology will be effective enough to give me another 60 years.

I remember thinking that 20 years ago. We don't seem much closer. If you assume that you will be fantastically rich, then it sounds a little more plausible that you would get access to such care.

I remember thinking that 20 years ago. We don't seem much closer.

Twenty years ago we didn't have the Methuselah or SENS foundations; nor was anti-aging research considered viable for mainstream medicine. Today these things are.

No, sadly. If we make a brain by simulating neurons, then we will end up with a brain that can't understand itself well enough to make improvements, just like ours. Writing a program that is intelligent at a level similar to a human's seems to be something we've made no progress on in 50 years (all the impressive progress has been in very niche areas). It seems highly likely to me that we won't know how to do this 20 to 30 years from now either. And then, of course, it would still need to be self-improving.

Then over this period of time (and likely much less) we still have the major challenge of keeping civilization ticking along at all. Resource production rates are falling, and will fall at increasing rates. There don't seem to be any magical or technological solutions to declining power production. Economies all over the world are already suffering. Computers don't work well without electricity.

So close, and yet so far.

If we make a brain by simulating neurons then making it superhuman is a matter of throwing hardware at it, and it could certainly help design better hardware. In terms of subjective time, no, it would not be a lot faster than a human, but that doesn't matter to the outside view.

I actually think, though, that apart from just more cycles, the patchwork simulated brain would enjoy immense advantages in terms of access to information and computing facilities. These are similar to the advantages we would anticipate from "wiring in" an organic brain, but even better, since it can benefit from much higher bandwidth.

If we make a brain by simulating neurons then making it superhuman is a matter of throwing hardware at it,

Not really. No matter what you do, if your simulation speed is already optimized to the best of your current know-how, adding hardware once you reach full simulation will only waste resources. And then you run into lightspeed constraints and bus/bandwidth limitations.

I.e., if we don't have the hardware, you can't "just throw more hardware at it".

and it could certainly help design better hardware.

There is an underlying assumption here that a simulated brain would operate faster than a human's. That's by no means necessarily true. Anyone familiar with virtualization performance knows you take a performance hit as compared to bare-metal when you do full virtualization.

The speed at which it operates will have nothing to do with the speed at which human brains operate. It could be much, much slower or much, much faster, depending on the hardware available. Today, it would be much, much slower. Yes, there are theoretical limits, but neural simulation also looks like a problem well suited to parallel processing, just as our brains are. The extent to which that is practical is really the governor here, since parallel scalability would mean that we can keep throwing more hardware at it until the cost of communicating between nodes exceeds the benefit of having one more node.
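A back-of-the-envelope sketch of that tradeoff in Python; the cost model and every number in it are made up, it only shows where adding one more node stops paying for itself.

```python
# Toy scaling model: per-step time = compute share (shrinks as nodes are
# added) + communication overhead (grows with node count). Numbers invented.

def step_time(nodes, work=1000.0, comm_per_node=0.5):
    compute = work / nodes                  # perfectly parallel share of the work
    communication = comm_per_node * nodes   # overhead grows with every extra node
    return compute + communication

best_nodes = min(range(1, 201), key=step_time)
print(f"fastest at {best_nodes} nodes: {step_time(best_nodes):.1f} time units per step")
print(f"with one more node: {step_time(best_nodes + 1):.1f} time units per step")
```

Past that point, every extra node makes each simulated step slower, which is exactly the governor described above.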

The extent to which that is practical is really the governor here, since parallel scalability would mean that we can keep throwing more hardware at it until the cost of communicating between nodes exceeds the benefit of having one more node.

I don't think "nodes" is an entirely accurate term here, however. But either way, the point I was making is that the hardware itself is going to be the biggest constraint in early-period devices. Furthermore, it's not enough to simply add more nodes; those nodes need to have an actual role. Otherwise, bigger brains would result in smarter individuals, yet elephants are not smarter than humans.

Yes. I'm having third thoughts about the likely speed of self-improvement, but I definitely expect smarter-than-human intelligence doing all sorts of interesting things before I die.

I'm sorry, I don't want to diss the very real idea of the singularity, but I had to laugh at this one. It's just the way you asked it... its wording is strongly reminiscent of Rapture/Apocalypse fanaticism, merely translated into the local dialect.

But no, I don't think it'll happen within my lifetime, barring cryonics, life extension, or a sudden and dramatic increase in world sanity.

Yes. I am 24 right now, and most intelligent people (SIAI folks) I've talked to think the singularity will happen within the next 30-70 years (individual claims fall mostly somewhere within this range).

Edit: As for the question of whether the first recursively self-improving AGI will be friendly, I don't have a good estimate, so I'll just leave it at 50-50.

Notice the clarification (probably added in an update), which makes the question about a positive singularity.

90% that we'll see a partly superhuman general intelligence within my lifetime. What I am really skeptical about is the extent to which such an intelligence will be superhuman:

Why?