I think that, in your New York example, the increasing disagreement is driven by people spending more time thinking about the concrete details of the trip. They do so because it is obviously more urgent: they know the trip is happening soon. The disagreements were presumably already there in the form of differing expectations/preferences, and were only surfaced later on as they started discussing things more concretely. So the increasing disagreements are driven by increasing attention to concrete details.
It seems likely to me that the increasing disagreement around AGI is also driven by people spending more time thinking about the concrete details of what constitutes AGI. But whereas in the New York example we can assume people pay more attention to the details because they know the trip is upcoming, with AGI people don't know when it will happen, so there must be some other reason.
One reason could be "a bunch of people think/feel AGI is near", but we already knew that before noticing disagreement around AGI. Another reason could be that there's currently a lot of hype and activity around AI and AGI. But the fact that there's lots of hype around AI/AGI doesn't seem like much evidence that AGI is near; or if it is, that evidence can be stated more directly than through a detour via disagreements.
They spend more time thinking about the concrete details of the trip not because they know the trip is happening soon, but because some think the trip is happening soon. Disagreement on, and attention to, concrete details are driven by only some people saying that the current situation looks like, or is starting to look like, the event occurring according to their interpretation. If the disagreement had happened at the beginning, they would soon have started using different words.
In the New York example, it could be that when someone says “Guys, we should really buy those Broadway tickets. The trip to New York is next month already,” they prompt the response “What? I thought we were going the month after!”, hence the disagreement. If this detail had been discussed earlier, there might have been the “February trip” and the “March trip” in order to disambiguate the trip(s) to New York.
In the case of AGI, some people’s alarm bells are currently going off, prompting others to say that more capabilities are required. What seems to have happened is that people at one point latched on to the concept of AGI, thinking that their interpretation was virtually the same as that of others because of its lack of definition. Again, if they had disagreed with the definition to begin with, they would have used a different word altogether. Now that some people are claiming that AGI is here or nearly here, it turns out that the interpretations do in fact differ. The most obnoxious cases are when people disagree with their own past interpretation once that interpretation threatens to be satisfied, on the basis of some deeper, undefined intuition (or, in the case of OpenAI and Microsoft, ulterior motives). This of course is also known as “moving the goalposts”.
Once upon a time, not that long ago, AGI was interpreted by many as “it can beat anyone at chess”, “it can beat anyone at Go” or “it can pass the Turing test”. We are there now, according to those interpretations.
Whether or not AGI exists depends only marginally on any one person’s interpretation. Words are a communicative tool and therefore depend on others’ interpretations. That is, the meanings of words don’t fall out of the sky; they don’t pass through a membrane from another reality. Instead, we define meaning collectively (and often unconsciously). For example, “What is intelligence?” is a question of how that word is in practice interpreted by other people. “How should it be interpreted (according to me personally)?” is a valid but different question.
In the New York example, it could be that when someone says “Guys, we should really buy those Broadway tickets. The trip to New York is next month already,” they prompt the response “What? I thought we were going the month after!”, hence the disagreement. If this detail had been discussed earlier, there might have been the “February trip” and the “March trip” in order to disambiguate the trip(s) to New York.
I guess I don't understand what focusing on disagreements adds. Sure, in this situation, the disagreement stems from some people thinking the trip is near (and others thinking it's farther away). But we already knew that some people think AGI is near and others think it's farther away! What does observing that people disagree about that stuff add?
What seems to have happened is that people at one point latched on to the concept of AGI, thinking that their interpretation was virtually the same as that of others because of its lack of definition. Again, if they had disagreed with the definition to begin with, they would have used a different word altogether. Now that some people are claiming that AGI is here or nearly here, it turns out that the interpretations do in fact differ.
Yeah, I would say that as those early benchmarks ("can beat anyone at chess", etc.) are achieved without producing what "feels like" AGI, people are forced to make their intuitions concrete, or anyway reckon with their old bad operationalizations of AGI. And that naturally leads to lots of discussion around what actually constitutes AGI. But again, all this is evidence of is that those early benchmarks have been achieved without producing what "feels like" AGI. But we already knew that.
But we already knew that some people think AGI is near and others think it's farther away!
And what do you conclude based on that?
I would say that as those early benchmarks ("can beat anyone at chess", etc.) are achieved without producing what "feels like" AGI, people are forced to make their intuitions concrete, or anyway reckon with their old bad operationalizations of AGI.
The relation between the real world and our intuition is an interesting topic. When people’s intuitions are violated (e.g., the Turing test is passed but it doesn’t “feel like” AGI), there’s a temptation to try to make the real world fit the intuition, when it is more productive to accept that the intuition is wrong. That is, maybe achieving AGI doesn’t feel like you expect. But that can be a fine line to walk. In any case, privileging an intuitive map above the actual territory is about as close as you can get to a “cardinal sin” for someone who claims to be rational. (To be clear, I’m not saying you are doing that.)
people disagree heavily on what the second coming will look like. this, of course, means that the second coming must be upon us
You’re kind of proving the point; the Second Coming is so vaguely defined that it might as well have happened. Some churches preach this.
If the Lord Himself did float down from Heaven and gave a speech on Capitol Hill, I bet lots of Christians would deride Him as an impostor.
Specifically, as an antichrist, as the Gospels specifically warn that "false messiahs and false prophets will appear and produce great signs and omens", among other things. (And the position that the second coming has already happened - completely, not merely partially - is hyperpreterism.)
suppose I believe the second coming involves the Lord giving a speech on capitol hill. one thing I might care about is how long until that happens. the fact that lots of people disagree about when the second coming is doesn't mean the Lord will give His speech soon.
similarly, the thing that I define as AGI involves AIs building Dyson spheres. the fact that other people disagree about when AGI is doesn't mean I should expect Dyson spheres soon.
The amount of contention says something about whether an event occurred according to the average interpretation. Whether it occurred according to your specific interpretation depends on how close that interpretation is to the average interpretation.
You can't increase the probability of getting a million dollars by personally choosing to define a contentious event as you getting a million dollars.
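One toy way to picture this claim (a sketch built on assumptions of my own, not anything stated in the thread): treat each person's interpretation as a threshold on a single capability scale, with thresholds scattered around an average interpretation. Contention peaks when the world sits near the average threshold, while whether your own threshold is crossed depends only on how far it lies from that average; redefining it for yourself changes nothing about the underlying capability level.

```python
import random

# Toy model (all modelling choices are illustrative assumptions):
# each person's interpretation is a threshold on a 0-100 "capability" scale,
# and that person says "the event has occurred" once capability >= threshold.
random.seed(0)
thresholds = [random.gauss(50, 10) for _ in range(1000)]  # average interpretation ~50
my_threshold = 90  # e.g. "AGI means Dyson spheres": far above the average interpretation

def contention(capability):
    """Fraction disagreeing with the majority: 0 = consensus, 0.5 = maximal contention."""
    yes = sum(t <= capability for t in thresholds) / len(thresholds)
    return min(yes, 1 - yes)

for capability in (30, 50, 70, 90):
    print(capability, round(contention(capability), 2), my_threshold <= capability)
# Contention is highest near the average interpretation (capability ~50),
# where the personal 90-threshold interpretation is still far from satisfied.
```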
My response to this is to focus on when a Dyson Swarm is being built, not AGI, because it's easier to define the term less controversially.
And a large portion of the disagreement here fundamentally revolves around being unable to coordinate on what a given word means, which doesn't matter at all from an epistemic perspective, but does matter from a utility/coordination perspective, since coordination is required for a lot of human feats.
What is the definition of a Dyson Swarm? Is it really easier to define, or just easier to see that we are not there because we are not close yet?
Unfortunately, I fear this applies to basically everything I could in principle make a benchmark around, mostly because of my own limited abilities.
The actual Bayesian response, for both the AGI case and the Second Coming case, is that both hypotheses are invalid from the start due to underspecification, so any probability estimates or utility-based decision-making for these hypotheses are also invalid.
I wouldn’t call either hypothesis invalid. People just use the same words to refer to different things. This is true for all words and hypotheses to some degree. When there is little to no contention that we’re not in New York, or that we don’t have AGI, or that the Second Coming hasn’t happened, then those differences are not apparent. But presumably there is some correlation between the different interpretations, such that when the Event does take place, contention rises to a degree that increases as that correlation decreases[1]. (Where by Event I mean some event that is semantically within some distance to the average interpretation[2].)
Formally, I say that $P(\text{contention} \mid \text{AGI}) = 1 - \epsilon$, meaning $\epsilon$ is small, where $1 - \epsilon$ can be considered a measure of how vaguely the term AGI is specified.
The more vaguely an event is specified, the more contention there is when the event takes place. Conversely, the more precisely an event is specified, the less contention there is when the event takes place. It’s kind of obvious when you think about it. Using Bayes’ law we can additionally say the following.

$$P(\text{AGI} \mid \text{contention}) = \frac{P(\text{contention} \mid \text{AGI})\,P(\text{AGI})}{P(\text{contention})} = \frac{(1 - \epsilon)\,P(\text{AGI})}{P(\text{contention})}$$

That is, when there is contention about whether a vaguely defined event such as AGI has occurred, your posterior probability that it has occurred should be high, modulated by your prior for AGI (the posterior monotonically increases with the prior). I think it's also possible to say that the more contentious an event, the higher the probability that it has occurred, but that may require some additional assumptions about the distribution of interpretations in semantic space.
An important difference between AGI and the Second Coming (at least among rationalists and AI researchers) is that the latter generally has a much lower prior probability than the former.
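A minimal numeric sketch of the update described above (every probability below is an illustrative assumption, not an estimate): it plugs a vague-event likelihood of the form $1 - \epsilon$ into Bayes’ law and shows the posterior rising well above an AGI-like prior, while staying low for a Second-Coming-like prior under the same observed contention.

```python
# Toy Bayes update: P(event | contention) for a vaguely specified event.
# All numbers are illustrative assumptions, not estimates.

def posterior(prior, p_cont_given_event, p_cont_given_no_event):
    """P(event | contention) via Bayes' law."""
    p_cont = p_cont_given_event * prior + p_cont_given_no_event * (1 - prior)
    return p_cont_given_event * prior / p_cont

likelihood = 0.95   # P(contention | event) = 1 - epsilon, high because the event is vaguely specified
false_alarm = 0.30  # assumed baseline contention even when the event has not occurred

print(posterior(0.50, likelihood, false_alarm))  # ~0.76: AGI-like prior, posterior rises sharply
print(posterior(0.01, likelihood, false_alarm))  # ~0.03: Second-Coming-like prior stays low
```

In both cases observing contention pushes the posterior above the prior, and the posterior increases monotonically with the prior, which is the sense in which the prior still does most of the work in the Second Coming comparison.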
Agree. This post captures the fact that, time and again, historical AI benchmarks once perceived as insurmountable have been surpassed. Those not fully cognizant of the situation have been repeatedly surprised. People, for reasons I cannot fully work out, will continue to engage in motivated reasoning against current and expected near-term AI capabilities and/or economic value, with part of that evidence-downplaying consisting of shifting the goalposts for what defines AGI or what capability threshold would impress them (see “moving the goalposts”). On a related note, your post also brings to mind the apologue of the boiling frog with respect to recent scaling curves.
If I’m planning a holiday to New York (and I live pretty far from New York), it’s quite straightforward to get fellow travellers to agree that we need to buy plane tickets to New York. Which airport? Eh, whichever is more convenient, I guess. Alternatively, some may prefer a road trip to New York, but the general direction is obvious for everyone.
However, as the holiday gets closer in time or space, the question of what we actually mean by a holiday in New York becomes more and more contentious. Did we mean New York State or New York City? Did we mean Brooklyn or Broadway? Which Broadway theater? Which show? Which seats?
By my estimation, the fact that the question of whether AGI has been achieved is so broadly contentious shows that we are so close to it that the term has lost its meaning, in the same way that “Let’s go to New York!” loses its meaning when you’re already standing in Times Square.
It’s time for more precise definitions of the space of possible minds that we are now exploring. I have my own ideas, but I’ll leave those for another post…