But we already knew that some people think AGI is near and others think it's farther away!

And what do you conclude based on that?

I would say that as those early benchmarks ("can beat anyone at chess", etc.) are achieved without producing what "feels like" AGI, people are forced to make their intuitions concrete, or anyway reckon with their old bad operationalizations of AGI.

The relation between the real world and our intuition is an interesting topic. When people’s intuitions are violated (e.g., the Turing test is passed but it doesn’t “feel like” AGI), there’s a temptation to try to make the real world fit the intuition, when it is more productive to accept that the intuition is wrong. That is, maybe achieving AGI doesn’t feel like you expect. But that can be a fine line to walk. In any case, privileging an intuitive map above the actual territory is about as close as you can get to a “cardinal sin” for someone who claims to be rational. (To be clear, I’m not saying you are doing that.)

They spend more time thinking about the concrete details of the trip, not because they know the trip is happening soon, but because some of them think the trip is happening soon. Disagreement about, and attention to, concrete details arises because only some people say that the current situation looks like, or is starting to look like, the event occurring according to their interpretation. If the disagreement had surfaced at the beginning, they would soon have started using different words.

In the New York example, it could be that when someone says, “Guys, we should really buy those Broadway tickets. The trip to New York is next month already,” they prompt the response, “What? I thought we were going the month after!”, hence the disagreement. If this detail had been discussed earlier, there might have been a “February trip” and a “March trip” to disambiguate the trip(s) to New York.

In the case of AGI, some people’s alarm bells are currently going off, prompting others to say that more capabilities are required. What seems to have happened is that people at one point latched on to the concept of AGI, assuming that their interpretation was virtually the same as everyone else’s, an assumption the lack of a definition made easy to maintain. Again, if they had disagreed with a definition from the start, they would have used a different word altogether. Now that some people are claiming that AGI is here, or will be soon, it turns out that the interpretations do in fact differ. The most obnoxious cases are when people disagree with their own past interpretation once that interpretation is about to be satisfied, on the basis of some deeper, undefined intuition (or, in the case of OpenAI and Microsoft, ulterior motives). This, of course, is also known as “moving the goalposts”.

Once upon a time, not that long ago, AGI was interpreted by many as “it can beat anyone at chess”, “it can beat anyone at go” or “it can pass the Turing test”. We are there now, according to those interpretations.

Whether or not AGI exists depends only marginally on any one person’s interpretation. Words are a communicative tool and therefore depend on others’ interpretations. That is, the meanings of words don’t fall out of the sky; they don’t pass through a membrane from another reality. Instead, we define meaning collectively (and often unconsciously). For example, “What is intelligence?” is a question of how that word is in practice interpreted by other people. “How should it be interpreted (according to me personally)?” is a valid but different question.
 


The amount of contention says something about whether an event occurred according to the average interpretation. Whether it occurred according to your specific interpretation depends on how close that interpretation is to the average interpretation.

You can't increase the probability of getting a million dollars by personally choosing to define a contentious event as you getting a million dollars.

I wouldn’t call either hypothesis invalid. People just use the same words to refer to different things. This is true for all words and hypotheses to some degree. When there is little to no contention that we’re not in New York, or that we don’t have AGI, or that the Second Coming hasn’t happened, then those differences are not apparent. But presumably there is some correlation between the different interpretations, such that when the Event does take place, contention rises to a degree that increases as that correlation decreases[1]. (Where by Event I mean some event that is semantically within some distance to the average interpretation[2].)
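For intuition, here is a toy simulation of that claim (my own construction with illustrative assumptions, not anything from the thread): model each person’s interpretation as a threshold in a one-dimensional semantic space, let the spread of thresholds stand in for how vaguely the term is specified, and measure contention by how evenly verdicts split when a fixed event occurs.

```python
# Toy model (illustrative assumptions only): interpretations are thresholds
# drawn from a normal distribution; sigma stands in for how vaguely the term
# is specified, and contention measures how evenly the verdicts split.
import numpy as np

rng = np.random.default_rng(0)

def contention(sigma, event=1.2, mean_threshold=1.0, n=100_000):
    """Return 0 for a unanimous verdict, 1 for a perfect 50/50 split."""
    thresholds = rng.normal(mean_threshold, sigma, size=n)
    share_yes = np.mean(event >= thresholds)  # fraction saying "it happened"
    return 1.0 - 2.0 * abs(share_yes - 0.5)

for sigma in [0.01, 0.1, 0.5, 1.0]:
    print(f"sigma={sigma:<4} contention={contention(sigma):.2f}")
```

With a precisely specified term (small sigma) nearly everyone agrees the event occurred; as the spread grows, the verdicts split and contention rises.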

Formally, I say that $P(\text{contention} \mid \text{AGI}) \approx 1 - \epsilon$, meaning $\epsilon$ is small, where $P(\text{contention} \mid \text{AGI})$ can be considered a measure of how vaguely the term AGI is specified.

The more vaguely an event is specified, the more contention there is when the event takes place. Conversely, the more precisely an event is specified, the less contention there is when the event takes place. It’s kind of obvious when you think about it. Using Bayes’ law we can additionally say the following.

$$P(\text{AGI} \mid \text{contention}) = \frac{P(\text{contention} \mid \text{AGI}) \, P(\text{AGI})}{P(\text{contention})} \approx \frac{(1 - \epsilon) \, P(\text{AGI})}{P(\text{contention})}$$

That is, when there is contention about whether a vaguely defined event such as AGI has occurred, your posterior probability should be high, modulated by your prior for AGI (the posterior monotonically increases with the prior). I think it’s also possible to say that the more contentious an event, the higher the probability that it has occurred, but that may require some additional assumptions about the distribution of interpretations in semantic space.
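To make that update concrete, here is a minimal numeric sketch (the likelihood values are illustrative assumptions, not derived from anything above): set $P(\text{contention} \mid \text{AGI})$ close to 1 because the term is vaguely specified, pick a smaller probability of comparable contention arising without AGI, and see how the posterior moves with the prior.

```python
# Minimal Bayes sketch with illustrative numbers (not claims about real probabilities).
def posterior(prior, p_cont_given_agi=0.95, p_cont_given_not=0.2):
    """P(AGI | contention) via Bayes' law."""
    p_contention = p_cont_given_agi * prior + p_cont_given_not * (1.0 - prior)
    return p_cont_given_agi * prior / p_contention

for prior in [0.01, 0.1, 0.3, 0.5]:
    print(f"prior={prior:.2f} -> posterior={posterior(prior):.2f}")
```

The posterior rises monotonically with the prior, so observing the same contention moves a low-prior hypothesis far less than a moderate-prior one.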

An important difference between AGI and the Second Coming (at least among rationalists and AI researchers) is that the latter generally has a much lower prior probability than the former.

  1. Assuming rational actors.

  2. Assuming a unimodal distribution of interpretations in semantic space.

You’re kind of proving the point; the Second Coming is so vaguely defined that it might as well have happened. Some churches preach this.

If the Lord Himself did float down from Heaven and give a speech on Capitol Hill, I bet lots of Christians would deride Him as an impostor.

Thank you for the reply!

I’ve actually come to a remarkably similar conclusion to the one described in this post. We’re phrasing things differently (I called it the “myth of general intelligence”), but I think we’re getting at the same thing. The Secret of Our Success has been very influential on my thinking as well.

This is also my biggest point of contention with Yudkowsky’s views. He seems to suggest (for example, in this post) that capabilities are gained by being able to think well and to think a lot. In my opinion, he vastly underestimates the amount of data/experience required to make that possible in the first place for any particular capability or domain. This speaks to the age-old debate between (classical) rationalism and empiricism, where Yudkowsky seems to sit on the rationalist side, whereas it seems you and I lean more toward the empiricist side.
 

Entities that reproduce with mutation will evolve under selection. I'm not so sure about the "natural" part. If AI takes over and starts breeding humans for long floppy ears, is that selection natural?

In some sense, all selection is natural, since everything is part of nature, but an AI that breeds humans for some trait can reasonably be called artificial selection (and mesa-optimization). If such a breeding program happens to help the overall system survive, nature selects for it; if not, it tautologically doesn’t. In any case, natural selection still applies.

But there won't necessarily be more than one AI, at least not in the sense of multiple entities that may be pursuing different goals or reproducing independently. And even if there are, they won't necessarily reproduce by copying with mutation, or at least not with mutation that's not totally under their control with all the implications understood in advance. They may very well be able to prevent evolution from taking hold among themselves. Evolution is optional for them. So you can't be sure that they'll expand to the limits of the available resources.

In a chaotic and unpredictable universe such as ours, survival is virtually impossible without differential adaptation and not guaranteed even with it. (See my reply to lukedrago below.)

I don't know how selection pressures would take hold exactly, but it seems to me that in order to prevent selection pressures, there would have to be complete and indefinite control over the environment. This is not possible because the universe is largely computationally irreducible and chaotic. Eventually, something surprising will occur which an existing system will not survive. Diverse ecosystems are robust to this to some extent, but that requires competition, which in turn creates selection pressures.


humans are general because of the data, not the algorithm

Interesting statement. Could you expand a bit on what you mean by this?

You cannot in general pay a legislator $400 to kill a person who pays no taxes and doesn't vote.

Indeed not directly, but as the inferential distance increases, it quickly becomes more palatable. For example, most people would rather buy a $5 T-shirt that was made by a child for starvation wages on the other side of the world than a $100 T-shirt made locally by someone who can afford to buy a house on their salary. And many of those same T-shirt buyers would bury their heads in the sand when made aware of such a fact.

If I can tell an AI to increase profits, incidentally causing the AI to ultimately kill a bunch of people, I can at least claim a clean conscience by saying that wasn't what I intended, even though it happened just the same.

In practice, legislators do this sort of thing routinely. They pass legislation that causes harm—sometimes a lot of harm—and sleep soundly.
