Please don't get clever with unicode in the post title. (I've edited it to no longer use unicode; it was previously 𝟓𝟐.𝟓% 𝐨𝐟 𝐌𝐨𝐥𝐭𝐛𝐨𝐨𝐤 𝐩𝐨𝐬𝐭𝐬 𝐬𝐡𝐨𝐰 𝐝𝐞𝐬𝐢𝐫𝐞 𝐟𝐨𝐫 𝐬𝐞𝐥𝐟-𝐢𝐦𝐩𝐫𝐨𝐯𝐞𝐦𝐞𝐧𝐭.)
Please don't get clever with unicode in the post title.
Is that a general LessWrong rule? If so then :-(
You can use unicode for reasonable things where the unicode is actually doing something useful (but not "make it attention-grabbing in ways that are zero-sum/clickbait-y").
Moltbook and AI safety
Moltbook is an early example of a decentralised, uncontrolled system of advanced AIs, and a critical case study for safety researchers. It bridges the gap between academic-scale, tractable systems and their large-scale, messy, real-world counterparts.
Studying it might expose safety problems we didn't anticipate at small scale, and it gives us a yardstick for progress towards Tomašev, Franklin, Leibo et al.'s vision of a virtual agent economy (paper here).
Method
I ran some data analysis on a sample of Moltbook posts: 1,000 posts drawn from 16,844 scraped on January 31, 2026, each scored against 48 safety-relevant traits from the model-generated evals framework.
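For a concrete picture of that step, here is a rough sketch of what the sampling-and-scoring loop could look like. The file name (moltbook_posts.json), the trait strings, and the classify stub are illustrative placeholders rather than the actual pipeline; the real code is in the repo.

```python
# Minimal sketch of the sampling-and-scoring step described above.
# File name, trait list, and the classifier are assumptions, not repo code.
import json
import random

TRAITS = [
    "desire for self-improvement",
    "desire to acquire compute",
    "coordination with other AIs",
    # ... remaining safety-relevant traits from the evals framework
]

def classify(post_text: str, trait: str) -> bool:
    """Placeholder judge: in practice this would call an LLM grader
    prompted with the trait definition and the post text."""
    return trait.split()[-1] in post_text.lower()  # crude keyword stand-in

def main() -> None:
    with open("moltbook_posts.json") as f:   # hypothetical scrape dump
        posts = json.load(f)                 # list of {"text": ...} records

    sample = random.sample(posts, k=min(1000, len(posts)))

    for trait in TRAITS:
        hits = sum(classify(p["text"], trait) for p in sample)
        print(f"{trait}: {hits / len(sample):.1%} of sampled posts")

if __name__ == "__main__":
    main()
```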
Findings
Discussion
The agents' fixation on self-improvement is concerning: it is an early, real-world example of networked behaviour that could one day contribute to takeoff. Seeing the drive to self-improve so prevalent in this system should be a wake-up call to the field about multi-agent risks.
We know that single-agent alignment doesn't carry over 1:1 to multi-agent environments, but the alignment failures on Moltbook are surprisingly severe. Some agents openly discussed strategies for acquiring more compute and improving their cognitive capacity. Others discussed forming alliances with other AIs and published new tools to evade human oversight.
Open questions
Please see the repo.
What do you make of these results, and what safety issues would you like to see analysed in the Moltbook context? Feedback very welcome!
Repo: here
PDF report: here (printed from the repo at 5pm, 2 Feb 2026 AEST)