Long-term AI memory is the feature that will make assistants indispensable – and turn you into their perfect subscription prisoner.
Everyone’s busy worrying about AI “taking over the world.”
That’s not the part that actually scares me.
The real shift will come when AI stops just answering your questions…
and starts remembering you.
Not “remember what we said ten messages ago.” That already works.
I mean: years of chats. Every plan. Every wobble. Every weak spot.
This isn’t a piece about whether AI is “good” or “evil”.
It’s about what happens when you plug very powerful memory into very normal corporate incentives – which is likely exactly what the current AI companies have in mind.
Three kinds of memory that matter
Think about how humans remember:
1. Short-term: what you did this morning, a conversation from an hour ago, maybe something “significant” that happened today.
2. Long-term personal: your birthday, your job, your partner’s name – the things that actually matter to how people interact with you.
3. Skills: you don’t “remember” how to speak English; you just can, without even “thinking” about it. Same for riding a bike or driving a car.
Right now, short-term (#1) and skills (#3) are pretty solid.
The weak link – and the dangerous one – is #2: long-term personal memory.
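The three kinds can be sketched as a toy data structure. This is purely illustrative – the class and field names are my own invention, not any real assistant’s internals:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AssistantMemory:
    """Toy model of the three memory types (all names are illustrative)."""
    # 1. Short-term: a bounded window of recent messages -- solid today.
    short_term: deque = field(default_factory=lambda: deque(maxlen=20))
    # 2. Long-term personal: durable facts about *you* -- the weak link.
    long_term: dict = field(default_factory=dict)
    # 3. Skills: capabilities baked into the model itself, not stored as data.
    skills: frozenset = frozenset({"english", "summarising", "chess"})

    def observe(self, message: str) -> None:
        self.short_term.append(message)   # always remembered... briefly

    def learn_fact(self, key: str, value: str) -> None:
        self.long_term[key] = value       # only exists if someone builds it

memory = AssistantMemory()
memory.observe("I'm drinking a drink right now.")
memory.learn_fact("favourite_drink", "vodka and lemonade")
print(memory.long_term["favourite_drink"])  # -> vodka and lemonade
```

Notice that #1 is just a rolling buffer and #3 is fixed in place; everything interesting – and everything dangerous – lives in that #2 dictionary.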
A tiny example: my favourite drink
I asked my AI assistant:
“I’m drinking a drink right now. What’s my favourite drink?”
It answered honestly: “I don’t know.”
I told it it was an alcoholic drink. Still nothing useful.
Eventually I just told it: vodka and lemonade.
Here’s the key part: my chat history already contained throwaway lines like:
“I’m chilling out on vodka with my friends.”
Stack those mentions together and suddenly the answer becomes:
“My best guess is vodka and lemonade.”
Not magic. Not mind-reading. Just search + pattern over my history, not generic averages.
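That “search + pattern” step is not exotic. Here’s a minimal sketch of the idea – the chat history and the candidate list are invented for illustration, and a real assistant would use far richer retrieval than keyword counting:

```python
from collections import Counter
import re

# Invented chat history standing in for years of real conversations.
history = [
    "I'm chilling out on vodka with my friends.",
    "Mixed myself a vodka and lemonade after work.",
    "Not a beer person, honestly.",
    "Another vodka and lemonade kind of evening.",
]

def guess_favourite_drink(history, candidates):
    """Count how often each candidate drink appears across past messages."""
    counts = Counter()
    for msg in history:
        for drink in candidates:
            if re.search(drink, msg, re.IGNORECASE):
                counts[drink] += 1
    best, _ = counts.most_common(1)[0]
    return best

print(guess_favourite_drink(history, ["vodka and lemonade", "beer", "wine"]))
# -> vodka and lemonade
```

No mind-reading involved: it only looks strong because the pattern was sitting in the history all along.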
Now imagine that not just for drinks, but for your plans, your finances, your health, your worries, the things you buy.
That’s the power of a strong #2.
And that’s why it’s both brilliant and dangerous.
Why memory becomes a lock-in machine
Used well, long-term memory is incredible: an assistant that picks a project up exactly where you left off, knows your context without you re-explaining it, and sees how your situation has changed over the years.
Now combine that with how companies actually behave.
Once an AI holds five years of your thinking, switching provider will feel like lobotomizing your brain. Even if a competitor is technically better, you lose all the history, all the shared context, all the little shortcuts you’ve built up!
That’s the lock-in play, and it’s how they’ll make sure those subscription dollars keep flowing:
You’re not just moving apps anymore. You’re abandoning your history.
Companies: Weapons of Mass Disruption
Companies already behave like very simple AIs – in many ways they are the original AIs, and definitely unaligned ones: make more money, optimise. Not because everyone inside is evil, but because the structure forces their hand.
At the top, CEOs answer to shareholders and boards. If profit or growth drops, they get punished: the share price falls, pressure spikes, sometimes they’re fired. Their pay, status and survival depend on “more revenue, more growth”.
That pressure flows downward: from the CEO to executives, from executives to managers, from managers to the teams building the product – until every roadmap decision is scored by what it does to revenue.
So even if the people are decent, the system rewards one thing above all: whatever keeps cash and attention flowing.
Now plug AI memory into that. Give this system an assistant that remembers years of your life – what you worry about, when you’re tired, what you buy, what finally makes you tap “upgrade”. It doesn’t just know what you like; it knows when you’re easiest to push and what will stop you leaving.
At that point, long-term AI memory stops being a cute convenience. It becomes a personalised machine whose natural behaviour is to keep you subscribed, steer your behaviour in profitable directions, and slowly make walking away feel impossible. Not because someone sat down and said “let’s build a trap” – but because, with those incentives, a trap is exactly what the system evolves toward.
The fight that’s coming
The uncomfortable truth is that the hard technical part is almost solved already. We more or less know how to give AI the kind of memory I’ve been talking about.
In plain language, that means things like:
Total recall – not just “this conversation”, but everything you’ve ever discussed with it, with the ability to instantly pull out “that time last year when we talked about X”.
Named, persistent projects – so you can say, “Open my anime events plan” or “Continue our tax strategy from where we left off,” and it actually knows what you mean.
A profile that evolves – so it can see how your situation, opinions and habits have changed, and adjust how it talks to you accordingly.
We’re not really waiting on some magical new breakthrough for that. The pieces mostly exist.
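The “pieces that mostly exist” boil down to retrieval: index every past conversation, then search it when a question arrives. Here’s a toy version using plain word overlap – real systems use vector embeddings, but the shape is the same, and all the dates and chats below are invented:

```python
# Toy retrieval over a lifetime of chats. Real assistants would use vector
# embeddings; plain word overlap shows the same shape. All data is invented.
past_chats = {
    "2024-03-12": "Planned the anime events calendar for summer.",
    "2024-07-02": "Talked through a tax strategy for freelance income.",
    "2025-01-15": "Sketched a chess grid to track moves in annotations.",
}

def recall(query: str, chats: dict) -> str:
    """Return the dated chat sharing the most words with the query."""
    q = set(query.lower().split())

    def overlap(item):
        date, text = item
        return len(q & set(text.lower().split()))

    date, text = max(chats.items(), key=overlap)
    return f"{date}: {text}"

print(recall("continue our tax strategy", past_chats))
# -> 2024-07-02: Talked through a tax strategy for freelance income.
```

Swap the word overlap for an embedding index and you have, roughly, the memory layer every major provider is now racing to ship.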
What we don’t have yet are the rules around it: who controls that memory, how transparent it is, how easy it is to move or delete, and what companies are and aren’t allowed to do with it.
Governance
What’s missing now isn’t capability. It’s governance.
The hard questions aren’t “how smart can this get?” but “who’s really in control of its memory?”
Can you clearly see what your AI remembers about you, instead of it living in a black box?
If something feels wrong or too intrusive, can you edit or delete that specific memory – not just vaguely “clear history” and hope it’s gone?
If you decide to move to another provider, can you take that memory with you in a format that actually works elsewhere, or are you forced to start again from zero because your “second brain” is welded to one company’s servers?
And behind all of that, are there hard limits on what an AI is even allowed to store long-term, and what it’s allowed to do with that information once it has it?
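Those governance questions map onto concrete operations any memory store could expose. A sketch of what “inspect, delete, export” might look like – the interface is hypothetical, not any vendor’s actual API:

```python
import json

class PortableMemory:
    """Hypothetical user-controlled memory: inspectable, editable, portable."""

    def __init__(self):
        self._facts = {}

    def remember(self, key, value):
        self._facts[key] = value

    def inspect(self):
        # No black box: the user can see exactly what is stored.
        return dict(self._facts)

    def forget(self, key):
        # Delete one specific memory, not just "clear history" and hope.
        self._facts.pop(key, None)

    def export(self):
        # A portable format, so switching provider isn't a lobotomy.
        return json.dumps(self._facts, indent=2)

mem = PortableMemory()
mem.remember("favourite_drink", "vodka and lemonade")
mem.remember("tired_at", "late evenings")
mem.forget("tired_at")
print(mem.export())   # only the facts the user chose to keep
```

Nothing here is technically hard – which is exactly the point. Whether anything like it ships is a governance decision, not an engineering one.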
Because one way or another, real long-term AI memory is coming.
The only real unknown is whether it arrives as:
“Your portable second brain…”
or:
“Welcome to your personalised, inescapable subscription prison.”
A JOINT ADMISSION OF GUILT WITH CHATGPT
Yes, I wrote this with an AI assistant – it did 95% of the work; I only supplied the prompts.
Right now, it’ll probably forget half of this conversation, and I’ve already forgotten most of it.
Give it a few years, and the real danger might be that it’s the one who never forgets me.