Per its LinkedIn it's a tiny 2-10 member lab. Their only previous contribution was Zochi, a model for generating experiments and papers, one of which was seemingly accepted into ACL 2025. But there's barely any transparency on what their model actually is, even in their technical report.
I personally see red flags with Intology too, the main one being that such performance from a tiny lab is hard to believe. On RE-Bench they compare against Sonnet 4.5, which per its model card has the best performance so far, so their claim of superhuman results seems strange. Then there's the fact that there's no paper since these are early results, that the results are all self-reported with minimal verification (a single Tsinghua student checked the kernels), and that we have no technical details on the system itself or even what the underlying model is.
Another smaller lab with seemingly big contributions I can think of would be Sakana AI, but even they have far more employees and many more contributions, plus actual detailed papers for their models. And even they had an issue at one point where their CUDA Engineer system reported a 100x CUDA speedup that turned out to be cheating. Here Intology claims to get 20x-100x speedups like candy.
Thanks for the link, will add it to the post. I originally included just the arXiv pdf viewer link for it; not sure why it disappeared.
Don't have the link, but it seems DeepMind researchers on X have tacitly confirmed they had already reached gold. What we don't know is whether it was done with a general LLM, as OpenAI did, or with a narrower system.
Do you have specific predictions/intuitions regarding the feasibility of what you describe and how strong the feedback loop could be?
Your post being about technical AI R&D automation capabilities immediately made me curious about the timelines, since those are what I'm somewhat worried about.
Also, would Sakana AI's recent work on adaptive text-to-LoRA systems count towards what you're describing?
Thank you for the quick reply.
That paper is contradicted by this new NVIDIA paper that shows the opposite using a 1.5B distill of DeepSeek R1. I don't have much technical knowledge, so a deep dive by someone more knowledgeable would be appreciated, especially in comparison to the Tsinghua paper.
Heads up: I am not an AI researcher or even an academic, just someone who keeps up with AI
But I do have quick thoughts as well:
Kernel optimization (which they claim is what produced the 1% decrease in training time) is something we know AI models are great at (see RE-Bench and the multiple arXiv papers on the matter, including from DeepSeek).
It seems to me like AlphaEvolve is more-or-less an improvement over previous models that also claimed to make novel algorithmic and mathematical discoveries (FunSearch, AlphaTensor) notably by using better base Gemini models and a better agentic framework. We also know that AI models already contribute to the improvement of AI hardware. What AlphaEvolve seems to do is to unify all of that into a superhuman model for those multiple uses. In the accompanying podcast they give us some further information:
So yeah, I can definitely see a path for large gains in the future, though for now those are still on similar timetables per their own admission. They expect further improvements when base models improve, and are hoping that future versions of AlphaEvolve can in turn shorten model training time, speed up the hardware pipeline, and improve models in other ways. And on your point about novel discoveries, previous Alpha models seemed already able to do the same categories of research back in 2023, on mathematics and algorithmic optimization. We need more knowledgeable people to weigh in, especially to compare with previous models of the same classification.
This is also a very small thing to keep in mind, but GDM doesn't often share the actual results of its models' work as usable/replicable papers, which has caused experts to cast doubt on its results in the past. It's hard to verify their claims, since they'll be keeping the details close to their chests.
Thanks for the clarification.
Side question, but you had recently moved your AGI median from 2027 to 2028 after updating on Grok 3 and GPT-4.5. Has this changed, especially with Gemini 2.5 and o3/o4-mini + these new METR datapoints?
Google DeepMind's recent FunSearch system seems pretty important, and I'd really appreciate people with domain knowledge dissecting this:
Large Language Models (LLMs) have demonstrated tremendous capabilities in solving complex tasks, from quantitative reasoning to understanding natural language. However, LLMs sometimes suffer from confabulations (or hallucinations) which can result in them making plausible but incorrect statements (Bang et al., 2023; Borji, 2023). This hinders the use of current large models in scientific discovery. Here we introduce FunSearch (short for searching in the function space), an evolutionary procedure based on pairing a pre-trained LLM with a systematic evaluator. We demonstrate the effectiveness of this approach to surpass the best known results in important problems, pushing the boundary of existing LLM-based approaches (Lehman et al., 2022). Applying FunSearch to a central problem in extremal combinatorics — the cap set problem — we discover new constructions of large cap sets going beyond the best known ones, both in finite dimensional and asymptotic cases. This represents the first discoveries made for established open problems using LLMs. We showcase the generality of FunSearch by applying it to an algorithmic problem, online bin packing, finding new heuristics that improve upon widely used baselines. In contrast to most computer search approaches, FunSearch searches for programs that describe how to solve a problem, rather than what the solution is. Beyond being an effective and scalable strategy, discovered programs tend to be more interpretable than raw solutions, enabling feedback loops between domain experts and FunSearch, and the deployment of such programs in real-world applications.
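For intuition, the core loop the abstract describes (a pre-trained LLM proposes program variants, a systematic evaluator scores them, and good candidates survive) can be sketched in toy form. Everything below is a hypothetical stand-in: a random numeric mutation plays the role of the LLM, and a simple scoring function plays the role of FunSearch's task evaluator (which in the paper was e.g. cap-set size or bin-packing performance).

```python
import random

random.seed(0)  # deterministic toy run

# Stand-in for the LLM: perturbs a candidate "program" (here just a list of
# coefficients) instead of generating real code, as FunSearch's LLM does.
def propose_variant(program):
    child = program[:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-1.0, 1.0)
    return child

# Stand-in for the systematic evaluator: higher is better. Here the "task"
# is just matching hidden target coefficients.
TARGET = [3.0, -1.0, 2.0]
def evaluate(program):
    return -sum((a - b) ** 2 for a, b in zip(program, TARGET))

def funsearch_loop(iterations=2000, pool_size=8):
    # Keep a small population of programs (FunSearch maintains a programs database).
    pool = [[0.0, 0.0, 0.0] for _ in range(pool_size)]
    for _ in range(iterations):
        parent = max(random.sample(pool, 2), key=evaluate)  # sample a strong parent
        child = propose_variant(parent)                     # "LLM" proposes a variant
        worst = min(range(pool_size), key=lambda i: evaluate(pool[i]))
        if evaluate(child) > evaluate(pool[worst]):         # evaluator filters
            pool[worst] = child
    return max(pool, key=evaluate)

best = funsearch_loop()
```

The key design point the abstract emphasizes survives even in this toy: the search is over *programs* (how to solve the problem), with the evaluator providing the grounding that prevents confabulated-but-wrong outputs from persisting.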
It's happened before; see Reflexion (I hope I'm remembering the name right) hyping up their supposed real-time learner model, only for it to be a lie. Tons of papers overpromise without facing lasting consequences. But yeah, I also don't know why Intology would be lying. Still, the fact that there's no paper, that their deployment plans are waitlist-based and super vague (and that no one ever talks about Zochi despite their beta program being old by this point) means we likely won't ever know. They say they plan on sharing Locus' discoveries "in the coming months", but until they actually do, there's no way to verify beyond checking their kernel samples on GitHub.
For now I'm heavily, heavily skeptical. Agentic scaffolds don't usually magically 10x frontier models' performance, and we know the absolute best current models are still far from human performance on RE-Bench (per their model cards, which also use proper scaffolding for the benchmark).