I found this 2015 panel with Terence Tao and some other eminent mathematicians interesting. The panel covered various topics but got into the question of when computers will be able to do research-level mathematics. Most interestingly, Maxim Kontsevich was alone in predicting that HLAI in math was plausible in our lifetime -- but also, that developing such an AI might not be a good idea. He also mentioned a BioAnchors-style AI forecast by Kolmogorov that I had never heard of before (and cannot find a reference to -- does anyone know of such a thing?). Excerpts below:

INTERVIEWER: Do you imagine that, maybe in 100 years, or 1000 years, that, like it happened in chess -- humans still play tournaments but everyone knows computers are better -- is it conceivable that this could happen in mathematics?

TAO: I think computers will be able to do things much more efficiently with the right computer tools. Search engines, for example: often you'll type a query into Google and it will come back with "did you mean this?", and often you did. One could imagine that if you had a really good computer assistant working on some math problem, it would keep suggesting "should you do this? have you considered looking at this paper?" You could imagine this would really speed up the way we do research. Sometimes you're stuck for months because you just don't know some key trick that is buried in some other field of expertise. Some sort of advanced Google could suggest this to you.

So I think we will use computers to do things much more efficiently than we do currently, but it will still be humans driving the show, I'm pretty sure.

INTERVIEWER: Maxim, do you think anything like this [HLAI, I assume] is possible?

MAXIM: I think it's perfectly possible, maybe in our lifetime.

INTERVIEWER: Why do you think so?

MAXIM: I don't think artificial intelligence is very hard. It will happen pretty soon, I suppose.

[...]

INTERVIEWER: You are a contrarian here, saying it will happen so quickly. So what makes you so optimistic?

MAXIM: Optimistic? No, it's actually pessimistic. I thought about it myself a little bit, I don't think there are fundamental difficulties here.

INTERVIEWER: So why don't you just work on that instead?

MAXIM: I think it would be immoral to work on it.

MILNER: I'm no expert, but isn't the way the computer played chess not really very intelligent? It's a huge combinatorial check. Inventing the sort of mathematics you've invented, that's not combinatorial checking, it's entirely conceptual.

MAXIM: Yeah OK, sure.

MILNER: Is there any case we know of computers doing anything like that?

MAXIM: We don't know any examples, but it's not inconceivable.

MILNER: It's not inconceivable...but I would be very surprised if we saw a computer win a Fields medal in our lifetime.

TAO: One could imagine that a computer could discover just by brute force a connection between two fields of mathematics that wasn't suspected, and then the person at the computer would be able to flesh it out. Maybe he would collect the medal.

[...]

MAXIM: Actually, Kolmogorov thought that mathematics would be extinct in 100 years -- he had an estimate. He calculated the number of neurons and connections; he made the head something like one cubic meter. So yes, maybe a crazy estimate, but he was also thinking about natural boundaries.

Comments:

This was interesting, but the panel dates back to 2015 -- before GPT, before AlphaGo, before AlphaZero, and so on. I doubt they hold the same positions now!

This is why we really need some follow-ups to the old surveys: they're all from before the most striking developments, and approaching a decade out of date (assuming people formed their opinions in the 2-3 years before the survey, and aren't even further out of date than that). Metaculus is great and all, but not exactly a random sampling of ML/DL/AI communities.

Yes, I thus found it especially striking that Kontsevich already thought HLAI was possible soon, apparently from pure reasoning alone. I also wouldn't be too surprised if many of them held the same or similar positions now, given how resistant many are to updating from existing progress to the possibility of further progress, although one might hope that elite mathematicians would be more rational. Incidentally, something a lot like Tao's "brute force search fleshed out by human mathematicians" has happened in the last year.

Nobody will hold a position such as "I'm no expert, but isn't the way the computer played chess not really very intelligent? It's a huge combinatorial check." -- well, I hope so at least!

You might be surprised. (Not to say they're totally wrong about current systems working differently from human brains, but they do show a definite lack of imagination about what further progress might bring...)