I have been thinking a lot about the prediction error involved in saving links for later.
When I save an article, I am essentially making a bet that my future self will have more time, energy, or cognitive surplus than my present self. For years, my data has shown this bet to be consistently wrong. My "read later" queue became a write-only memory buffer.
The problem is that we treat these queues as storage, when we should probably treat them as triage. A backlog that grows linearly creates a cognitive load that seems to compound. The sheer volume of unread items devalues the individual signal of any single article.
I decided to run an experiment to fix this by inverting the logic of Spaced Repetition. Usually, we use algorithms like SM-2 to ensure we remember facts. I wanted to use a similar decay function to determine what I should forget.
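The inversion can be sketched with a simple decay model. This is a hypothetical illustration, not the tool's actual formula: instead of scheduling reviews to keep recall above a threshold, we estimate how the probability of ever reading an item falls off the longer it sits untouched.

```python
from math import exp, log

def read_probability(days_saved: float, half_life: float = 14.0) -> float:
    """Estimated probability that a saved link will ever be read,
    decaying exponentially with time spent unread.
    The 14-day half-life is an illustrative assumption."""
    return exp(-days_saved * log(2) / half_life)

# An item untouched for one half-life retains roughly half its initial odds;
# once the estimate drops near zero, the item is a candidate for deletion.
```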
I built a simple open source tool to test this. The logic is straightforward. A saved link is presented to me at increasing intervals. If I engage with it, great. If I ignore it or skip it three times, the system assumes the probability of me ever reading it has dropped to near zero. It then automatically archives or deletes the item.
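The core loop described above can be sketched as a small state machine. The interval schedule and the three-skip threshold follow the description here; the field names and the specific intervals are my own illustrative choices, not the tool's actual implementation.

```python
from dataclasses import dataclass

MAX_SKIPS = 3                   # after three ignored resurfacings, archive
INTERVALS_DAYS = [1, 3, 7]      # resurface schedule, growing with each skip

@dataclass
class SavedLink:
    url: str
    skips: int = 0
    archived: bool = False

    def next_interval(self) -> int:
        """Days until this link is shown again."""
        return INTERVALS_DAYS[min(self.skips, len(INTERVALS_DAYS) - 1)]

    def skip(self) -> None:
        """Record one ignored resurfacing; archive once the limit is hit."""
        self.skips += 1
        if self.skips >= MAX_SKIPS:
            self.archived = True

    def engage(self) -> None:
        """Reading the link resets the decay clock."""
        self.skips = 0
```

Engaging with a link resets its skip count, so only items that are repeatedly ignored ever cross the archive threshold.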
This acts as a commitment device. It forces a micro-decision every time a link resurfaces: is this actually signal, or was it just noise that triggered a dopamine hit when I found it?
The counter-argument is obvious. What if the system deletes something important? My working hypothesis is that in an information-rich environment, the opportunity cost of maintaining a noisy backlog is significantly higher than the risk of losing a single data point. If the information is truly vital, it tends to resurface through other channels eventually.
I wrote the implementation for my own use, but the tool is available for free if anyone wants to give it a try.
www.sigilla.net