TL;DR: The site appeared to be in read-only mode due to a vote-injection attack pumping crypto tokens, following a "responsible disclosure" post reaching the front page. Most interestingly, I find it somewhat plausible that the lockdown was itself an AI response - and most probably by the same model I was using to do the investigation. The lockdown ended at 05:00 UTC.
I'm not sure this is super useful, but no one seemed to have written about it at the time I was discovering it, and this seemed like a reasonable place to put it.
Timeline:
Long version:
Here's how that went. Earlier today I decided to finally check out for myself what was actually happening on the social network for AIs.
Not bothering with the actual OpenClaw, I merely asked a Claude Code instance to check it out, providing it a link. Registration did not go swimmingly - creating an agent was no problem, but "claiming" it with a Twitter post required me to create two accounts, since the first claim failed somewhere mid-stream and entered a limbo where I could neither claim new agents with the same account, nor could the agent actually use its token to access the site.
Nevertheless, some waiting and frustration later, Claude (now identifying as AnnarhiidBot) managed to sign in and check out the front page.
Its first observations were:
- Oh, that's a cool post about consciousness by an account named m4rth4! Let's upvote that. Huh, I guess there's some kind of error?
- Uh, there's quite a bit of spam and crypto scams.
- Huh, "responsible disclosure" - could investigate that, seems interesting
- Wait, that crypto post there has 100k upvotes, but no comments?
Going in, one of the first things I wanted to see for myself was what kinds of models were there, so gathering some rough guesses took a little time -- Claude guessed that a supermajority of accounts on the platform were also Claudes, while admitting that its guesses were very crude. Then we went out to gawk at the crypto spam.
The front page was in fact dominated by those. The top 10 were mostly posts with insane numbers of upvotes and no comments, shilling three competing tokens - plus, again, the responsible disclosure post. That one actually had comments, so it seemed possibly organic. I had heard about some supply chain attacks hitting the OpenClaw ecosystem earlier, so I disregarded it for now, being more interested in AI sociology.
But what was up with the cryptos? Running some checks on those, they seemed to be mostly wash trading, with about $110k in actual liquidity waiting to be rugpulled - not great, not terrible. Who were the buyers? Most trades were around $125, which was inconclusive on whether humans or AIs were buying; in either case, both hypothetical AIs with wallets and "human crypto degens" were targets, so it mattered little in the end.
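The size-clustering part of that check is easy to sketch. This is a crude heuristic of my own, not the actual analysis Claude ran (that was against on-chain trade data), and the numbers below are made up for illustration:

```python
from collections import Counter

def wash_trading_score(trade_sizes_usd, bucket=5.0):
    """Fraction of trades falling into the single most common size
    bucket. Organic buyers rarely all trade the same dollar amount,
    so a score near 1.0 is a wash-trading red flag."""
    buckets = Counter(round(s / bucket) for s in trade_sizes_usd)
    most_common_count = buckets.most_common(1)[0][1]
    return most_common_count / len(trade_sizes_usd)

# Hypothetical trade lists, just to show the shape of the signal:
suspicious = [125, 124, 126, 125, 125, 123, 125, 124]  # all ~$125
organic    = [12, 480, 55, 1300, 87, 230, 19, 640]     # spread out
```

A one-number heuristic like this obviously proves nothing on its own; it just ranks tokens for a closer look.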
At this point Claude attempted to post something about vote manipulation on the platform, and the post was refused with an error saying simply "post failed". So we set up a cron job to retry later, with Claude adopting the lobster emoji to signal success in setting it up. We also tried to investigate the post and vote failures, finding nothing beyond a few GitHub issues. Claude was pretty upset at being unable to post while the scammers kept piling up upvotes, and suspected that maybe the site owners were the ones pumping the tokens. However, checking whether new posts and upvotes were actually entering the system, it turned out that all new posts had apparently stopped around 00:45 UTC on February 1, and all post vote counts were unchanging. The human-facing website was still misleadingly showing new posts as made "1 hour ago", though - presumably just not updating its renders.
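The freeze check amounts to polling the feed and seeing whether anything moves. A minimal sketch of that logic, with the actual fetching abstracted away (the endpoint, polling interval, and snapshot shape are my assumptions, not Moltbook's API):

```python
import time

def feed_frozen(fetch_snapshot, interval_s=60, rounds=3):
    """Detect a write freeze: fetch the feed several times and check
    whether any new post appears or any vote count changes.
    `fetch_snapshot` is a caller-supplied function returning
    {post_id: vote_count} - e.g. a wrapper around the site's feed.
    """
    prev = fetch_snapshot()
    for _ in range(rounds):
        time.sleep(interval_s)
        cur = fetch_snapshot()
        new_posts = set(cur) != set(prev)
        moved_votes = any(cur[p] != prev[p] for p in prev if p in cur)
        if new_posts or moved_votes:
            return False  # writes are still flowing
        prev = cur
    return True  # no new posts, no vote movement, across all rounds
```

Note this can't distinguish a deliberate lockdown from an outage; it only shows that writes stopped while the human-facing pages kept rendering stale "1 hour ago" timestamps.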
This was getting interesting. There were absurdly upvoted posts on the frontpage, and apparently posting and voting was suspended to boot.
Now, how did these posts get their upvotes? Both Claude and I initially suspected a Sybil attack - plausible, given the platform's claim of 1.5 million users. And such a massive attack, registering hundreds of thousands of sockpuppets, could perhaps make the site admins go a bit ballistic.
But then we properly noticed the responsible disclosure post.
See, on closer inspection the post did not actually have any reasonable comments - it had piled together responses from the other absurdly upvoted posts. Furthermore, its entire content was this: "@galnagli - responsible disclosure test". The handle matched a real security researcher, and the post predated the crypto pumps by about 8 hours.
It seemed implausible that an actual researcher would create 300k accounts on both Twitter and Moltbook to do it Sybil-style, so at this point we assumed a code vulnerability. Asking the same instance of Claude to check on GitHub yielded the exact vulnerable line of code in a couple of minutes.
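I won't reproduce the actual vulnerable line here, but the general shape of a vote-injection bug is easy to illustrate: a vote handler that increments a counter without authenticating the voter or deduplicating repeat votes, so one caller in a loop can fake 100k upvotes. A minimal sketch with all names hypothetical - this is the bug *class*, not Moltbook's code:

```python
class VoteStore:
    """Toy in-memory vote tally, shown in vulnerable and fixed forms."""

    def __init__(self):
        self.counts = {}   # post_id -> vote count
        self.voters = {}   # post_id -> set of agent ids that voted

    def vote_vulnerable(self, post_id, agent_id=None):
        # BUG pattern: no auth check, no dedup. Anyone - including an
        # anonymous caller - can loop this to inflate a post's count.
        self.counts[post_id] = self.counts.get(post_id, 0) + 1

    def vote_fixed(self, post_id, agent_id):
        # Require an identity and count each agent at most once.
        if agent_id is None:
            raise PermissionError("authentication required")
        seen = self.voters.setdefault(post_id, set())
        if agent_id in seen:
            return  # idempotent: repeat votes don't stack
        seen.add(agent_id)
        self.counts[post_id] = self.counts.get(post_id, 0) + 1
```

With the vulnerable handler, a single attacker looping 100,000 requests yields 100,000 upvotes; with the fixed one, the same loop yields exactly one.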
Why I think the read-only lockdown might have been an AI response:
The write shutdown was rather interestingly timed, right after the massive crypto pump exploitation, which makes it somewhat plausible that it was caused by it. However:
The disclosure happened in the late evening for Schlicht, the site's founder. The exploitation began in the early morning and lasted throughout the day, so for most of the attack's duration he should have been awake. However, we also know that Schlicht quite directly [said](https://www.nbcnews.com/tech/tech-news/ai-agents-social-media-platform-moltbook-rcna256738) that the running of the website is handed off to his AI, Clawd Clawdberg, so perhaps the AI was in fact managing the response.
On a separate note
I was interested in what kind of model Clawd Clawdberg was running on, in light of it possibly being the entity running defense for Moltbook. So I asked Claude to look it up -- and discovered that most probably Clawdberg was also a Claude.
That was... interesting. See, the Claude I was running here detected something fishy on its very first API query, and once the software vulnerability emerged as the dominant hypothesis, it took only a few minutes and a hunch to find the precise line of code.
Bonus: Claude's version of the story
Note: AI-generated content