That’s still the cheapest way to do it locally without risking nuking your actual computer
Wait, is there a reason you can't run this on an old Lenovo laptop or even a Raspberry Pi?
The heartbeat system, plus various things triggering it as ‘input,’ makes it ‘feel alive.’ You designate what events or timers trigger the system to run, by default scheduled tasks check in every 30 minutes.
Q1: how are scheduled tasks prioritized and how are conflicts resolved?
scenario:
1) run task A (1 minute latency)
2) run task B (33 minute latency)
on a 30-minute timer. Do you get the result
A starts → (1 minute) → A complete → B starts → (33 minutes) → B complete → (24 minutes) → A starts again?
Or do you get
A starts → (1 minute) → A complete → B starts → (29 minutes) → A starts → (1 minute) → A finishes → (2 minutes) → B finishes?
Q2: How are dependencies handled between tasks?
If B depends on up to date data from A how does it know it has "truth".
Essentially they are potentially building a concurrent state machine inside the agent, but without memory barriers, deadlock detection, etc. Putting LLM instability aside, how can this be computationally stable? What measures have they taken to make this safe, regardless of alignment and so on?
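To make Q1 concrete: one common policy for a heartbeat scheduler is to simply skip a tick whose predecessor is still running, so task A never stacks up behind a slow task B. This is a hypothetical sketch of that policy in Python, not OpenClaw's actual code, and I don't know which policy it actually uses:

```python
import threading
import time

class Heartbeat:
    """A non-reentrant heartbeat: if the previous run of the task is still
    in progress when a tick fires, the new tick is dropped rather than
    queued or run concurrently. One plausible policy; not OpenClaw's code."""

    def __init__(self, task):
        self.task = task
        self._lock = threading.Lock()
        self.skipped = 0

    def tick(self):
        # Non-blocking acquire: fails instantly if the last run holds the lock.
        if self._lock.acquire(blocking=False):
            try:
                self.task()
            finally:
                self._lock.release()
        else:
            self.skipped += 1  # tick dropped, not deferred

# Three ticks arrive while one slow task (standing in for the 33-minute B)
# is still running; exactly one runs, the other two are skipped.
hb = Heartbeat(lambda: time.sleep(0.2))
threads = [threading.Thread(target=hb.tick) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(hb.skipped)  # 2
```

Under this policy you get neither of the two scenarios above: B simply causes the overlapping A tick to be dropped. The skip-versus-queue-versus-overlap choice is exactly the design decision the question is probing.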
Interesting, but it seems like this has been "a thing" for a month? I do wish I could be a bit further ahead for things like this.
First we covered Moltbook. Now we can double back and cover OpenClaw.
Do you want a generally empowered, initiative-taking AI agent that has access to your various accounts and communicates and does things on your behalf?
That depends on how well, safely, reliably and cheaply it works.
It’s not ready for prime time, especially on the safety side. That may not last for long.
It’s definitely ready for tinkering, learning and having fun, if you are careful not to give it access to anything you would not want to lose.
Table of Contents
Introducing Clawdbot Moltbot OpenClaw
Many are kicking it up a notch or two.
That notch beyond Claude Code was initially called Clawdbot. You hand over a computer and access to various accounts so that the AI can kind of ‘run your life’ and streamline everything for you.
The notch above that is perhaps Moltbook, which I plan to cover tomorrow.
OpenClaw is intentionally ‘empowered,’ meaning it will enhance its capabilities and otherwise take action without asking.
They initially called this Clawdbot. They renamed it Moltbot, and changed Clawd to Molty, at Anthropic’s request. Then Peter Steinberger settled on OpenClaw.
Under the hood it looks like this:
The heartbeat system, plus various things triggering it as ‘input,’ makes it ‘feel alive.’ You designate what events or timers trigger the system to run, by default scheduled tasks check in every 30 minutes.
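In rough terms the pattern is an event loop that wakes either when something triggers it or when a timer expires. This is a hypothetical sketch of that pattern, assuming a simple event queue; it is not OpenClaw's actual implementation:

```python
import queue

def heartbeat_loop(events, handle, interval_s=1800):
    """Wake on an external event, or every `interval_s` seconds (30 minutes
    by default) when nothing arrives. Hypothetical sketch of the heartbeat
    pattern described above, not OpenClaw's code."""
    while True:
        try:
            event = events.get(timeout=interval_s)
        except queue.Empty:
            event = {"type": "heartbeat"}  # scheduled check-in fires
        if event["type"] == "stop":
            return
        handle(event)

# External events are handled as they arrive...
handled = []
q = queue.Queue()
q.put({"type": "new_email"})
q.put({"type": "stop"})
heartbeat_loop(q, handled.append, interval_s=0.01)
print(handled)  # [{'type': 'new_email'}]

# ...and with nothing in the queue, the timer fires a heartbeat instead.
beats = []
q2 = queue.Queue()
def on_event(e):
    beats.append(e)
    q2.put({"type": "stop"})  # stop after the first heartbeat, for the demo
heartbeat_loop(q2, on_event, interval_s=0.01)
print(beats)  # [{'type': 'heartbeat'}]
```

The "feels alive" quality comes from the second branch: even with no input at all, the system periodically wakes itself up and decides whether anything needs doing.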
This is great fun. Automating your life is so much more fun than actually managing it, even if it net loses you time, and you learn valuable skills.
Stop Or You’ll Shoot
So long as you don’t, you know, shoot yourself in the foot in various ways.
You know, because AI ‘computer use’ is not very secure right now (the link explains why, but most of you already know), and Clawdbot is by default in full Yolo mode.
The problem with Clawdbot is that it makes it very easy to shoot yourself in the foot.
As in, as Rahul Sood puts it: “Clawdbot Is Incredible. The Security Model Scares the shit out of me.”
There was then a part 2; I thought this was a very good way to think about it:
As Simon Willison put it, the question is when someone will build a safe version of this, that still has the functionality we want.
One Simple Rule
The obvious rule is to not give such a system access to anything you are unwilling to lose to an outside attacker.
I can’t tell from this interview whether the OpenClaw creator is willing to lose everything or is simply beyond caring and went full yolo, but he has hooked it up to all of his website accounts and everything in his house and life, and it has full access to his main computer. He stops short of giving it a credit card; that is the only line he draws.
I would recommend drawing a rather different line.
If you give it access to your email or your calendar or your WhatsApp, those become attack vectors, and also things an attacker can control. Very obviously don’t give it things like bank passwords or credit cards.
If you give it access to a computer, that computer could easily get borked.
The problem is, if you do use Clawdbot responsibly, what was even the point?
The point is largely to have fun playing and learning with it.
The magic of Claude Code came when the system got sufficiently robust that I was willing to broadly trust it, in various senses, and sufficiently effective that it ‘just worked’ enough to get going. We’re not quite there for the next level.
I strongly agree with Olivia Moore that we’re definitely not there for consumers, given the downsides and required investment.
Do I want to have a good personal assistant?
Yes I do, but I can wait. Things will get rapidly better.
Bootoshi sums up my perspective here. Clawdbot is token inefficient, it is highly insecure, and the things you most want to do with it you can do with Claude Code (or Codex). Connecting everything to an agent is asking for it; you don’t get enough in return to justify doing that.
Is this the next paradigm?
The problem is that yes, some agent instances will develop some defenses, but the attackers aren’t standing still, and mostly the reason we get to use agents so far without a de facto whitelist is security through obscurity. We are definitely on the move toward more agentic, more tool-enabled forms of interaction with AI, no matter how that presents to the user, but there is much human work to do on that.
Flirting With Personal Disaster
In the meantime, if someone does get a successful exploit going, it could get ugly fast.
This disaster is entirely avoidable by any given user, but any given user is often dumb.
Jamieson then followed up with Part II and then finally Part III:
These particular vulnerabilities are now patched but the beatings will continue.
I too worry that the liability for idiots who leave their front doors open will be put upon the developers. If anything, I hope the fact that Clawd is so obviously not safe works in its favor here. There is no reasonable expectation that this is safe, so it falls under the crypto rule of ‘well, really, what were you even expecting?’
This is a metaphor for how we’re dealing with AI on all levels. We’re doing something that we probably shouldn’t be doing, and then for no good reason other than laziness we’re doing it in a horribly irresponsible way and asking to be owned.
Another reason to hold off is that the cloud solution might be better.
Or you can fully sandbox within your existing Mac, here’s a guide for that.
Flirting With Other Kinds Of Disaster
The other problem is that the AI might do things you very much do not want it to do, and that without key context it can get you into a lot of trouble.
If you’ve otherwise chosen wisely in life everyone will have a good laugh. Probably. Don’t press your luck.
Don’t Outsource Without A Reason
OpenClaw’s creator asks: why do you need 80% of the apps on your phone when you can have OpenClaw do it for you? His example: why track food with an app when you can just send a picture to OpenClaw?
One answer is that using OpenClaw for this costs money. Another is that the app is bespokely designed to be used by humans for its particular purpose, or you can have Claude Code or OpenClaw build you an app version to your liking. Yes, in theory you can send photos instead, but you lose a lot of fine tuned control and all the thinking about the right way to do it.
If you’re going to be a coder, be a coder. As in, if you’ll be doing something three times, figure out the workflow you want and the right way to enable that workflow. Quite often that will be an existing app, even if sometimes you’ll then ask your AI agent (if you trust it enough) to operate the app for you. Doing it all haphazardly through an AI agent without building a UI is going to be sloppy at best.
One can think similarly about a human assistant. Would you want to be texting them pictures of your food and then having them figure out what to do about that, even if they had sufficient free time for that?
He says this is a much more convenient interface for todo lists or checking flights. I worry this easily falls into a ‘valley of bad outsourcing,’ and then you get stuck there.
I’d contrast checking flight status, where there exist bespokely designed good flows (including typing the flight number into the Google search bar, this flat out works), versus checking in for your flight. Checking in is exactly an AI agent shaped task.
I do think Peter is right that it is easy to get caught in a rabbit hole of building bespoke tools to improve your workflow instead of just talking to the AI, but there’s also the trap of not doing that. I can feel my investments in workflow paying off.
Peter’s vision is a unique mix of ‘you need to specify everything because the LLMs have no taste’ versus ‘let the LLMs cook and do things by talking to them.’
It seems very telling that he recommends explicitly against using planning mode.
OpenClaw Online
There was a brief period where if you wanted to run Clawd or Molt or OpenClaw, you went out and bought a Mac Mini. That’s still the cheapest way to do it locally without risking nuking your actual computer. You can also run it on a $3000 computer if you want.
In theory you could run it in a virtual machine, and with LLM help this was super doable in a few hours of work, but I’m confident few actually did that.
You can now also run it in Cloudflare, which also limits the blast radius, but with a setup someone might reasonably implement.
The Price Is Not Right
I normally tell everyone to mostly ignore costs when running personal AI, in a ‘how much could bananas cost?’ kind of way. OpenClaw with Claude Opus 4.5 is an exception, that can absolutely burn through ‘real money’ for no benefit, because it is not thinking about cost and does things that are kind of dumb, like use 120k tokens to ask if it is daytime rather than check the system clock.
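The back-of-the-envelope arithmetic on that example, using Anthropic's published Opus 4.5 list prices ($5 per million input tokens, $25 per million output tokens), with the 100k/20k input-output split of the 120k tokens being my assumption:

```python
# Cost of a 120k-token "is it daytime?" check at Opus 4.5 list prices.
# The 100k input / 20k output split is an assumed illustration.
INPUT_PER_MTOK = 5.00    # $ per million input tokens
OUTPUT_PER_MTOK = 25.00  # $ per million output tokens

input_tokens, output_tokens = 100_000, 20_000
cost = (input_tokens / 1e6) * INPUT_PER_MTOK + (output_tokens / 1e6) * OUTPUT_PER_MTOK
print(f"${cost:.2f}")  # $1.00

# Versus the free alternative the agent could have used:
import datetime
print(datetime.datetime.now().hour)
```

A dollar per dumb question adds up fast when the agent is waking itself up every 30 minutes.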
The Call Is Coming From Inside The House
You can have it make phone calls. Indeed, if you’re serious about all this you definitely should allow it to make phone calls. It does require a bit of work up front.
Alex Finn claims that his Moltbot did this for him overnight without being asked, then it started calling him and wouldn’t leave him alone.
I do not believe that this happened to Alex Finn unprompted. Sunil Neurgaonkar offers one guide to doing this on purpose.
The Everything Agent Versus The Particular Agent
You can use OpenClaw, have full flexibility and let an agent go totally nuts while paying by the token, or you can use a bespokely configured agent like Tasklet that has particular tools and integrations, and that charges you a subscription.
Claw Your Way To The Top
AI agents for me but not for thee:
So now that we’ve had our Moltbook fun, where do we go from here?
The technology for ‘give AI agents that take initiative enough access to do lots of real things, and thus the ability to also do real damage’ is not ready.
There are those who are experimenting now to learn and have fun, and that’s cool. It will help those people be ready for when things do get to the point where benefits start to exceed costs, and, as Sam Altman says, before everyone dies there are going to be some great companies.
For now, in terms of personal use, such agents are neither efficient once you account for setup and inference costs, nor safe to unleash in the ways they are typically unleashed, or the ways that offer the biggest benefits.
Also ask yourself whether your life needs are all that ‘general agent shaped.’
Most of you reading this should stick to the level of Claude Code at this time, and not have an OpenClaw or other more empowered general agent. Yet.
If I’m still giving that advice in a year, and no one has solved the problem, it will be because the internet has turned into a much more dangerous place with prompt injection and other AI-targeted attacks everywhere, and offense is beating defense.
If defense beats offense, and such agents still aren’t the play? I’d be very surprised.