So far, we only have one known example of the development of intelligent life; and that example is us. Humanity. That means that we have only one mechanism that is known to be able to produce intelligent life; and that is evolution. But by far the majority of life that is produced by evolution is not intelligent. (In fact, by far the majority of life produced by evolution appears to be bacteria, as far as I can tell. There are also a lot of beetles.)

Why did evolution produce such a steep climb in intelligence in humans, but not in other creatures? That, I suspect, is at least partially because we humans are no longer competing against other creatures. We are competing against each other.

Also, once we managed to start writing things down and sharing knowledge, we shifted off the slow, evolutionary timescale and onto the faster, technological timescale. As technology improves, we find ourselves being more right, less wrong; our ability to affect the environment continually increases. Our intellectual development, as a species, speeds up dramatically.

And I believe that there is a hack that can be applied to this process; a mechanism by which the total intelligence of humanity as a whole can be rather dramatically increased. (It will take time.) The process is simple enough in concept.

These thoughts were triggered by an article on some Ethiopian children who were given tablets by OLPC. They were chosen specifically on the basis of illiteracy (throughout the whole village) and were given no teaching, aside from the teaching apps on the tablets (some instruction on how to use the solar chargers was also given to the adults). In fairly short order, they taught themselves basic literacy, and had also modified the operating system to customise it and re-enable the camera.

My first thought was that this gives an upper bound on the minimum cost of world literacy: it is limited to the cost of one tablet per child (plus a bit for transportation).

In short, we need world literacy. World literacy will allow anyone and everyone to read up on that which interests them. It will allow a vastly larger number of people to start thinking about certain hard problems (such as any hard problem you care to name). It will allow more eyes to look at science; more experiments to be done and published; more armour-piercing questions which no-one has yet thought to ask because there simply are not enough scientists to ask them.

World literacy would improve the technological progress of humanity; and probably, after enough generations, result in a humanity that we would, by today's standards, consider superhumanly intelligent. (This may or may not necessitate direct brain-computer interfaces.)

The aim, therefore, is to allow humanity, and not some human-made AI, to go *foom*. It will take some significant amount of time - following this plan means that our generation will do no more than continue a process that began some millions of years ago - but it does have this advantage: if it is humanity that goes *foom*, then the resulting superintelligences are practically guaranteed to be human-Friendly, since they will be human. (For the moment, I discard the possibility of a suicidal superintelligence.)

It also has this advantage: the process is likely to be slow enough that a significant fraction of humanity will be enhanced at the same time, or close enough to the same time that none will be able to stop any of the others' enhancements. This drastically reduces the probability of being trapped by a single Unfriendly enhanced human.

The main disadvantage is the time taken; this will take centuries at the least, perhaps millennia. It is likely that, along the way, a more traditional AI will be created.

Comments

According to the CIA world factbook, the world literacy rate is 83.7%, so increasing this to 100% is only a 20% increase in the number of literate people. That's equivalent to about 18 years of population growth at the current rate of 1.1%/yr. World literacy is a good and desirable thing, but we already got most of the way there (and collected the benefits) in the 20th century; the remaining benefits are humanitarian, not technological.
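(A rough back-of-the-envelope check of that arithmetic, using the 83.7% and 1.1%/yr figures quoted above; the simple and compounded growth variants bracket the "about 18 years" estimate.)

```python
import math

literacy_rate = 0.837  # world literacy rate quoted above
growth_rate = 0.011    # annual population growth rate quoted above

# Relative increase in the number of literate people if literacy went to 100%
increase = 1 / literacy_rate - 1
print(f"Increase in literate population: {increase:.1%}")  # ~19.5%, i.e. roughly 20%

# Years of population growth that would produce the same increase
print(f"Equivalent years of growth (simple): {increase / growth_rate:.1f}")  # ~18
years_compound = math.log(1 + increase) / math.log(1 + growth_rate)
print(f"Equivalent years of growth (compounded): {years_compound:.1f}")  # ~16
```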

I think the breakpoint is access to computers, not literacy.

CCC:

There's another effect as well. Humans compete with each other; at the moment, all literate people can claim a legitimate advantage over the illiterate people (and, in the case of some, this may be an excuse to stop self-improving). Once there are no illiterates, that excuse falls away.

Most potential scientists don't view illiterate children in Third World countries as their competitors.

Humans aren't Friendly. Whatever gave you that idea?

Don't we mean 'friendly to humans and their goals' when we say 'Friendly' in the context of AI? I'm pretty sure that would make us at least moderately Friendly (or, at least, more so than an Unfriendly AI would be.)

We are Friendlier than a paperclip maximizer, but we're not just-plain-Friendly. We can be led to do nasty things for all kinds of reasons in all kinds of ways, we are subject to goal distortion and various interfering biases even insofar as our goals are correct, and our goals aren't fully transparent to us to allow explicit unambiguous pursuit anyway.

I think most of us are Friendly “enough”, but those who aren't tend to have a disproportionate impact on world history (Hitler would be one of the most extreme examples).

An Unfriendly AI would only be bad because it becomes ridiculously hard for us to stop, and it doesn't care about us. If an ufAI is exactly as powerful and smart as an average human, and cannot ever get better, it's not all that much of a threat, and is really only as dangerous as your average socio/psycho/something-path.*

May I point at the various instances of systematic slavery in human history, or even right now across the world? Imagine if the slavers had double or triple the intelligence they had/have. What makes you think that these superintelligent slaver humans would be "Friendly" even at the basic level, let alone would be the Safe kind of Friendly under self-modification? (supposing they manage to modify or enhance themselves in some way)

The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificant difference) rate, AND (Remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default... is a very far-fetched combined assumption to be making here, IMO.

* Yes, that's anthropomorphizing it a bit, but I'm assuming that it would need its own set of heuristics to replace humans' biases and heuristics, otherwise it'd probably be thinking very slowly and pose even less of a threat. If those heuristics aren't particularly better optimized than our own, then it's still only so much of a threat, probably equivalent to a particularly unpredictable psychopath.

CCC:

The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificant difference) rate, AND (Remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default... is a very far-fetched combined assumption to be making here, IMO.

The assumptions that I make are that the humanity-fooming would be both very slow and generally available in some way (I'm not sure entirely how, but brain-computer interfaces are a possibility). That all humans foom, at more-or-less the same time and at more-or-less the same rate, then follows (especially in the case of the brain-computer interface, in which case the speed of the foom is controlled by the speed of technological development).

I don't think that all of the fooming people would be Friendly, but I do think that under those circumstances, any Unfriendly ones would be outnumbered by roughly-equivalently-intelligent Friendly ones, resulting in a by-and-large Friendly consensus.

I believe OP was referring to a single FOOM of humanity collectively.

I believe OP was referring to a single FOOM of humanity collectively.

Yes, so was I:

The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificant difference) rate, AND (Remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default... is a very far-fetched combined assumption to be making here, IMO.

Hmm, I see from OP's response that he is thinking that EACH human will have a doubling of IQ per decade once we can all read. I certainly can't see where he'd get that from. It seems most likely that high-literacy, high-wealth countries would be near the limits of individual IQ achievable from good nutrition, education, and pervasive literacy.

I thought, incorrectly apparently, he was referring to a collective intelligence of humanity.

It seems clear enough to me that humanity functions as a Searlian "Chinese room" style intelligence at least. In that sense, the infrastructure, the technology available to that room to integrate the individuals in the room, as well as the total number of individuals available to be installed in the room, limits the effective intelligence of that room.

If you don't like the metaphor of the Searlian "Chinese room," think of a multiprocessor where each core is a human, and the communications and shared memory and other linkages are internet, written documents, and so on.

Then turning the last 1/6 of humanity literate (world literacy rate currently about 5/6) might give a 16ish % boost in total intelligence, plus a bit more since excess capacity over what is available for pure survival is what we get to contribute to the total, and presumably illiterate people are working at close to breakeven (just effectively smart enough to stay alive).
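(A minimal sketch of that estimate, assuming, as above, that illiterate people contribute roughly nothing beyond their own survival: the "16ish %" matches the newly added share of the full total, while measured against the current total the boost comes out closer to 20%.)

```python
literate_fraction = 5 / 6  # current world literacy rate, as estimated above

# Assume only literate people contribute surplus capacity beyond survival
current_total = literate_fraction  # contribution now
new_total = 1.0                    # contribution if everyone were literate

print(f"Newly added share of the full total: {1 - literate_fraction:.1%}")          # ~16.7%
print(f"Boost relative to the current total: {new_total / current_total - 1:.1%}")  # 20.0%
```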

But the idea that individual intelligence will change because the literacy rate goes from 84% to 99+%, I don't get that at all.

CCC:

Yes, exactly. A slow foom, one in which we take maybe a decade or longer for each doubling of IQ, so that there's time for everyone to keep up.

CCC:

Don't we mean 'friendly to humans and their goals' when we say 'Friendly' in the context of AI?

That is how I was using the term, yes.

$50 smartphones are accomplishing more than OLPC ever did.

On OLPC: I became slightly less enthusiastic about it after reading this; there was also some discussion of this here.

It also has this advantage: the process is likely to be slow enough that a significant fraction of humanity will be enhanced at the same time, or close enough to the same time that none will be able to stop any of the others' enhancements. This drastically reduces the probability of being trapped by a single Unfriendly enhanced human.

Well, our offensive technology is far ahead of our defensive technology. Despite being on the whole human-friendly, people keep running around trying to kill each other; sometimes we stop them and sometimes we do not.

One of the reasons people give for trying to develop AGI quickly is that, if done right, it would protect us from all the other things that are trying to kill us.

TrE:

Considering that every new human more or less starts off as a blank slate (or rather, that which we do start with doesn't improve with each generation) and there is only so much a human can learn within a lifetime (unless we consider genetic engineering or control over death/prolonging life), I'd expect that progress becomes slower and slower over time. I don't see how a takeoff (where progress speeds up again, and drastically so) could be achieved without either mind uploads, anti-death measures which greatly prolong life, or genetic engineering/breeding of humans. Or major breakthroughs in education, to the point where you don't need brains to observe and pattern-match, but can teach them directly.

A data point: Even today, in highly explored areas such as mathematics, it takes a significant fraction of a normal human life to acquire the skillset needed to tackle the hard problems.

CCC:

Ancillary devices (like computers) do improve, however. With time, education will likely shift away from memorising facts, and people will rely more heavily on handheld computing systems, which can include, for example, automated theorem-proving software (which already exists).

Brain-computer direct interfaces will take time to develop, but are a continuation of this trend.

Some people would argue that we don't want to reach the superintelligent level the really slow way because more and more people are dying and there is more tragedy the longer we take.

And, more importantly, we'd all be dead by that time so it wouldn't help us very much.

[anonymous]:

Humans will not go about fooming until we can doctor our brains well enough that we might as well run the entire civilization in upload space. Even then I and four associates suspect that modifying your own brain is dangerous and can lead to UFAI.

World literacy will allow anyone and everyone to read up on that which interests them.

There are plenty of people who know perfectly well how to read but can't be arsed to, so long as they can eat every day and watch TV on their couch.

No doubt, but what's important isn't so much the people who don't do much (and I don't think there's any superiority from reading ordinary stuff vs. watching ordinary stuff), as the people who could be doing something interesting or important but are blocked because of lack of literacy.