If I understand correctly:

  • You approve of the direct impact your employer has by delivering value to its customers, and you agree that AI could increase this value.
  • You're concerned about the indirect effect on increasing the pace of AI progress generally, because you consider AI progress to be harmful. (You use the word "direct", but "accelerating competitive dynamics between major research laboratories" certainly has only an indirect effect on AI progress, if it has any at all.)

I think the resolution here is quite simple: if you're happy with the direct effects, don't worry about the indirect ones. To quote Zeynep Tufekci:

Until there is substantial and repeated evidence otherwise, assume counterintuitive findings to be false, and second-order effects to be dwarfed by first-order ones in magnitude.

The indirect effects are probably smaller than you fear, and they may not exist at all.

I'm curious about this too. The retrospective covers weaknesses in each milestone, but a collection of weak milestones doesn't necessarily aggregate to a guaranteed loss, since performance ought to be correlated (due to an underlying general factor of AI progress).

Maybe I should have said "is continuing without hitting a wall".

I like that way of putting it. I definitely agree that performance hasn't plateaued yet, which is notable, and that claim doesn't depend much on metric.

I think if I'm honest with myself, I made that statement based on the very non-rigorous metric "how many years do I feel like we have left until AGI", and my estimate of that has continued to decrease rapidly.

Interesting, so that way of looking at it is essentially "did it outperform or underperform expectations". For me, after the yearly progression in 2019 and 2020, I was surprised that GPT-4 didn't come out in 2021, so in that sense it underperformed my expectations. But it's pretty close to what I expected in the days before release (informed by Barnett's thread). I suppose the exception is the multi-modality, although I'm not sure what to make of it since it's not available to me yet.

This got me curious how it impacted Metaculus. I looked at a few selected questions and tried my best to read off the before/after probabilities from the graphs.

(Edit: The original version of this table typoed the dates for "turing test". Edit 2: The color-coding for the percentage is flipped, but I can't be bothered to fix it.)

How would you measure this more objectively?

It's tricky because different ways to interpret the statement can give different answers. Even if we restrict ourselves to metrics that are monotone transformations of each other, such transformations don't generally preserve derivatives.

Your example is good. As an additional example, if someone were particularly interested in the Uniform Bar Exam (where GPT-3.5 scores 10th percentile and GPT-4 scores 90th percentile), they would justifiably perceive an acceleration in capabilities.

So ultimately the measurement is always going to involve at least a subjective choice of which metric to choose.
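As a toy illustration of that metric-dependence (with made-up percentile numbers, not real benchmark scores), the very same sequence of scores can look like deceleration on the raw percentile scale but acceleration after a monotone log-odds transform:

```python
import math

def logit(p):
    """Monotone transform: percentile in (0, 1) -> log-odds."""
    return math.log(p / (1 - p))

# Made-up percentile scores for three successive model generations.
scores = [0.50, 0.80, 0.95]

raw_steps = [b - a for a, b in zip(scores, scores[1:])]
logit_steps = [logit(b) - logit(a) for a, b in zip(scores, scores[1:])]

# On the raw scale the second step is smaller (0.30 -> 0.15): deceleration.
# On the log-odds scale it is larger (~1.39 -> ~1.56): acceleration.
print(raw_steps, logit_steps)
```

Both scales rank the models identically, yet they disagree about whether progress sped up, which is exactly the sense in which monotone transformations fail to preserve derivatives.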

  • Capabilities progress is continuing without slowing.

I disagree. For reasonable ways to interpret this statement, capabilities progress has slowed. Consider the timeline:

  • 2018: GPT-1 paper
  • 2019: GPT-2 release
  • 2020: GPT-3 release
  • 2023: GPT-4 release

Notice the one-year gap from GPT-2 to GPT-3 and the three-year gap from GPT-3 to GPT-4. If capabilities progress had not slowed, the latter capabilities improvement should be ~3x the former.

How do those capability steps actually compare? It's hard to say with the available information. In December 2022, Matthew Barnett estimated that the 3->4 improvement would be about as large as the 2->3 improvement. Unfortunately, there's not enough information to say whether that prediction was correct. However, my subjective impression is that the steps are of comparable size, or that the 3->4 step is even smaller.

If we do accept that the 3->4 step is about as big as the 2->3 step, that means that progress went ~33% as fast from 3 to 4 as it did from 2 to 3.
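The arithmetic behind that ~33% figure can be spelled out, using the release years from the timeline above and the (assumed, per Barnett's estimate) equal capability gain per step:

```python
# Release years from the timeline above.
release_year = {"GPT-2": 2019, "GPT-3": 2020, "GPT-4": 2023}

gap_2_to_3 = release_year["GPT-3"] - release_year["GPT-2"]  # 1 year
gap_3_to_4 = release_year["GPT-4"] - release_year["GPT-3"]  # 3 years

# If the capability gain per step is assumed equal, pace = gain / elapsed
# time, so the 3->4 pace relative to the 2->3 pace is the ratio of the gaps.
relative_pace = gap_2_to_3 / gap_3_to_4
print(f"{relative_pace:.0%}")  # prints "33%"
```

The conclusion is only as strong as the equal-step assumption; if the 3->4 step were actually larger than the 2->3 step, the implied slowdown would shrink accordingly.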

Worriers often invoke a Pascal’s wager sort of calculus, wherein any tiny risk of this nightmare scenario could justify large cuts in AI progress. But that seems to assume that it is relatively easy to assure the same total future progress, just spread out over a longer time period. I instead fear that overall economic growth and technical progress is more fragile than this assumes. Consider how regulations inspired by nuclear power nightmare scenarios have for seventy years prevented most of its potential from being realized. I have also seen progress on many other promising techs mostly stopped, not merely slowed, via regulation inspired by vague fears. In fact, progress seems to me to be slowing down worldwide due to excess fear-induced regulation.

This to me is the key paragraph. If people's worries about AI x-risk drive them in a positive direction, such as doing safety research, there's nothing wrong with that, even if they're mistaken. But if the response is to strangle technology in the crib via regulation, now you're doing a lot of harm based on your unproven philosophical speculation, likely more than you realize. (In fact, it's quite easy to imagine ways that attempting to regulate AI to death could actually increase long-term AI x-risk, though that's far from the only possible harm.)

Having LessWrong (etc.) in the corpus might actually be helpful if the chatbot is instructed to roleplay as an aligned AI (not simply an AI without any qualifiers). Then it'll naturally imitate the behavior of an aligned AI as described in the corpus. As far as I can tell, ChatGPT is told that it's an AI, but not that it's an aligned AI, which seems like a missed opportunity.

(That said, for the reason of user confusion that I described in the post, I still think that it's better to avoid the "AI" category altogether.)

Indeed, the benefit for already-born people is harder to foresee. That depends on more-distant biotech innovations. It could be that they come quickly (making embryo interventions less relevant) or slowly (making embryo interventions very important).

Thanks. (The alternative I was thinking of is that the prompt might look okay but cause the model to output a continuation that's surprising and undesirable.)

An interesting aspect of this "race" is that it's as much about alignment as it is about capabilities. It seems like the main topic on everyone's mind right now is the (lack of) correctness of the generated information. The goal "model consistently answers queries truthfully" is clearly highly relevant to alignment.

Although I find this interesting, I don't find it surprising. Productization naturally forces solving the problem "how do I get this system to consistently do what users want it to do" in a way that research incentives alone don't.
