First take in my LessWrong shortform. I think I will mostly be sharing takes about AI that are too long for Twitter.
I think Google DeepMind getting gold on the IMO 2025 is not surprising and shouldn't be much of an update, because the problems were unusually easy this year and it is plausible that last year's AlphaProof + AlphaGeometry system would have gotten gold this year. On the other hand, I am pretty surprised by regular reasoning LLMs getting gold in the way OpenAI described that they did, and I am pretty puzzled as to how they set this up. It is somewhat plausible that the model is only marginally better than o3 but scales better with test-time compute and was run at very large scale: Gemini 2.5 Pro got 31% in one try, and this OpenAI experimental model may have tried hundreds of times to get 81%. Plausible! But more likely it is quite a bit better than what current models can manage.
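For intuition on the best-of-k arithmetic, here is a toy calculation (my own simplification: it treats the contest score as a per-attempt success probability and assumes attempts are independent):

```python
import math

p1 = 0.31      # Gemini 2.5 Pro's reported single-attempt score
target = 0.81  # the experimental model's reported score

# Under independence and *perfect* selection of the best attempt,
# pass@k = 1 - (1 - p1)**k, so the k needed to reach the target is:
k = math.log(1 - target) / math.log(1 - p1)
print(f"{k:.1f} tries")  # ~4.5

# The catch: IMO proofs can't be auto-verified, so "perfect selection"
# is unrealistic. With a noisy reranker, each extra sample helps far
# less, which is why hundreds of attempts could be needed in practice.
```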
'Tis I. I didn't intend to create bad incentives; the stakes on that market are imo pretty tiny. But I N/A'd it anyway, since I don't want anyone suspecting that it affected the final outcome.
I posted our submission in your Twitter DMs and as a standalone post on LW the other day, but I thought it wise to send it here as well: https://alignmentsearch.up.railway.app/
Mirroring other comments, we plan to get in contact with the team behind Stampy and possibly integrate some of the functionality of our project into their conversational agent.
There already exist a bunch of projects that do something similar. The technique is known as Retrieval-Augmented Generation, as described in this paper from May 2020. Tools like LangChain and OpenAI tutorials have made it quick to build similar projects, and the underlying tech (cheap OpenAI embeddings, splitting the dataset into ~200-token chunks, and ChatGPT) has existed and been used together for many months. A few projects I've seen that do something akin to what we do include HippocraticAI, Trevor Hubbard, and ChatLangChain. This could and will be applied more widely, e.g. people adding Q&A abilities to their library's documentation, to blogs, etc., but a key limitation is that, since it relies on LLMs, it is pricier, slower, and less reliable at inference time, absent tricks that attempt to work around these limitations.
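As a concrete illustration of the recipe, here is a minimal sketch under my own assumptions (this is not the actual alignmentsearch or Stampy code; the model names and placeholder corpus are illustrative):

```python
# Minimal retrieval-augmented generation loop: embed chunks once,
# retrieve the most similar chunks per question, stuff them into the prompt.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts with a cheap embedding model."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

# 1. Offline: split the corpus into ~200-token chunks and embed them once.
chunks = ["<chunk 1 of the dataset>", "<chunk 2>", "<chunk 3>"]
chunk_vecs = embed(chunks)

def answer(question: str, k: int = 2) -> str:
    # 2. Embed the question and retrieve the k most similar chunks
    #    (dot product = cosine similarity, since ada embeddings are unit-norm).
    q_vec = embed([question])[0]
    top = np.argsort(chunk_vecs @ q_vec)[-k:]
    context = "\n\n".join(chunks[i] for i in top)
    # 3. Ask the chat model, with the retrieved chunks as context.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```

The whole trick is step 3: the LLM never sees the full dataset, only the retrieved chunks, which is what keeps the approach cheap enough to run over a large corpus.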
How does the math work out if you consider that o3 was created around 8-9 months before it was released, while this model was finished in the last month or two? That would be nearly a year's difference between the two models' creation dates, and should be modelled as such in Bayesian's adjusted METR doubling-time extrapolation.
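To spell out the arithmetic (a toy calculation with my own illustrative dates, not METR's actual data):

```python
# Date models by *creation* rather than *release* and see how the gap changes.
# All numbers below are illustrative assumptions.
o3_release_to_gold_months = 3.5   # o3 released ~Apr 2025; gold result ~Jul 2025
o3_creation_lead = 8.5            # o3 finished ~8-9 months before its release
new_model_creation_lead = 1.5     # new model finished in the last month or two

creation_gap = (o3_release_to_gold_months
                + o3_creation_lead
                - new_model_creation_lead)
print(creation_gap)  # ~10.5 months between creations, i.e. nearly a year

# A doubling-time fit scales with the time axis: if the same capability jump
# took ~10.5 months instead of a naive release-dated ~3.5, the implied
# doubling time is ~3x longer.
print(creation_gap / o3_release_to_gold_months)  # ~3.0
```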