That account is notorious on X for making things up. It doesn't even try to make them believable. I would disregard anything coming from it.
It's crazy that this post is at 47 upvotes and no one has said this. LessWrong, please get your act together.
Seconded. Went from a skeptical "big if true" at the post title to rolling my eyes once I saw "iruletheworldmo".
For reference, check out this leak by that guy from February 2025:
ok. i’m tired of holding back. some of labs are holding things back from you.
the acceleration curve is fucking vertical now. nobody's talking about how we just compressed 200 years of scientific progress into six months. every lab hitting capability jumps that would've been sci-fi last quarter. we're beyond mere benchmarks and into territory where intelligence is creating entirely new forms of intelligence.
watched a demo yesterday that casually solved protein folding while simultaneously developing metamaterials that shouldn't be physically possible. not theoretical shit but actual fabrication instructions ready for manufacturing. the researchers presenting it looked shell shocked. some were laughing uncontrollably while others sat in stunned silence. there's no roadmap for this level of cognitive explosion.
we've crossed into recursive intelligence territory and it's no longer possible to predict second order effects. forget mars terraforming or fusion. those are already solved problems just waiting f
One of the Twitter AI accounts better off blocked, given its mischievous mix of reasonable comments and BS. Does he know anything? Sure, it's possible, who knows - but life is too short.
If they had fully solved it, there would be large commercial pressure to release it as soon as possible, e.g. because they could start charging > $10K/month for remote worker subscriptions or increase their valuations in future funding rounds. It’s true that everyone is working on it; my guess is that they’ve made some progress but haven’t solved it yet.
On the one hand, one would hope they are capable of resisting this pressure (these continual learners are really difficult to control, and even the mundane liability exposure might be serious).
But on the other hand, it might be “not releasable” for purely technical reasons. For example, each installation of this kind might be really expensive and require the support of a dedicated, competent “maintenance crew” to perform well. So releasing it might be practically impossible without creating a large “consulting division” within the lab in question, with dedicated teams supporting clients, and the labs probably consider that too much of a distraction at the moment.
There is a spinoff thread listing the papers cited in the post:
https://x.com/_alejandroao/status/2008253699567858001
It speculates that this paper from Google is what's being referenced:
https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/
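In case it helps pin down what "continual learning" means in this thread: below is a minimal Python sketch of the generic idea, a deployed model that keeps taking small gradient steps on the data it sees instead of staying frozen after training. To be clear, this is not the Nested Learning method from the Google post above; the class, method, and parameter names are all illustrative assumptions on my part.

```python
# Generic continual-learning sketch (illustrative, NOT Google's Nested Learning).
# A deployed model that keeps updating its weights on new interactions,
# rather than staying frozen after pretraining. All names here are made up.
import torch
import torch.nn as nn

class ContinualLearner:
    def __init__(self, model: nn.Module, lr: float = 1e-2):
        self.model = model
        # A small learning rate bounds how far any single interaction moves the weights.
        self.opt = torch.optim.SGD(model.parameters(), lr=lr)
        self.loss_fn = nn.MSELoss()

    def respond(self, x: torch.Tensor) -> torch.Tensor:
        # Inference looks the same as for a frozen model.
        with torch.no_grad():
            return self.model(x)

    def learn(self, x: torch.Tensor, feedback: torch.Tensor) -> float:
        # The deployment-time difference: weights are updated on live feedback,
        # so later answers depend on earlier interactions.
        self.opt.zero_grad()
        loss = self.loss_fn(self.model(x), feedback)
        loss.backward()
        self.opt.step()
        return loss.item()

# Toy usage: the model adapts toward a user-specific target over repeated interactions.
torch.manual_seed(0)
learner = ContinualLearner(nn.Linear(4, 1), lr=0.1)
x, target = torch.randn(8, 4), torch.ones(8, 1)
for _ in range(100):
    learner.learn(x, target)
print(learner.respond(x).mean())  # has drifted toward the user's target
```

Even this toy version hints at the control and liability worries raised above: the weights drift with every interaction, so the system that was evaluated before release is not the system that is running a month later.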
From https://x.com/iruletheworldmo/status/2007538247401124177:
I found this because Bengio linked to it on Facebook, so I'll guess that it's well informed. But I'm confused by the lack of attention that it has received so far. Should I believe it?
This seems like an important capabilities advance.
But maybe more importantly, it also marks a significant increase in secrecy.
Do companies really have an incentive to release models with continual learning? Or are they close enough to AGI that they can attract enough funding while only releasing weaker models? They have options for business models that don't involve releasing the best models. We might have entered an era when AI advances are too secret for most of us to evaluate.