That'd be an interesting structure for sure. Some kind of spaced repetition, presenting you with two (or three? or more?) prior thoughts to read simultaneously, to reinforce not just the information, but the relationship between the different ideas; not just in isolation, but reinforcing the network itself... maybe with some kind of highlight markup to indicate where the parallel is strongest.
With regard to the Zettelkasten containing too many cards to keep up with, I think card agglomeration above a certain threshold might be useful. Rather than hyperlinking between 500 cards with one sentence each, it may be preferable to link to particular sentences or sections within 50 different cards. The hyper-deconstruction of the original index card system was, I think, more a limitation of index card size, and of how you might best utilise a physical system. Hypertext and NLP could identify links, and keeping a human in the loop ensures they're the kind of parallels you actually want to be forming... to some degree. Being presented with particular links might still have some subtle undesirable biases. But the prospect of "combine these cards?" might make things more manageable.
I think a manual consolidation step would at least prompt for better structure for future consumption. If you know the order you'll always want to view the notes in, turn them into one note with headings. That could keep the knowledge bank a bit more manageable and navigable. I don't think the current graph view in systems like Roam or Obsidian is enough - you need a text preview of those notes. Maybe, for each new note, check that the graph structure remains planar, avoiding messy crossovers when it's visualised. Or maybe that's a terrible idea, I have no idea! There's lots of low-hanging fruit in the space, and whoever makes the biggest fruit basket's going to win here.
This just seems like a Wittgensteinian Language Game crossed with the Symbol Grounding Problem. It's not so much that "lying can't exist" as "it is impossible to distinguish intentional deception from using different symbols". A person can confidently and truthfully state "two plus two equals duck" - all we need is for "duck" to be their symbol for "four". They're not "lying", or even "wrong" or "incoherent", their symbols are just different. Those symbols are incompatible with our own, but we don't "really" disagree. A different person could, alternatively, say "two plus two equals duck" and be intentionally deceiving - but there's nothing that can be observed about the situation to prove it, just by looking at a transcription of the text. There's also no way, exclusively through textual conversation, to "prove" that another person is using their symbols in the same way as you! Even kiki-bouba effects aren't universal - symbols can be arbitrary, once pulled out of their original context. If everyone's playing their own Language Game, shared maps are illusory - How Can Maps Be Real If Our Words Aren't Real?
P. rey consistently and unambiguously uses the symbol to mean "mating time". P. redator consistently and unambiguously uses the symbol to mean "I would like to eat you, please". Neither, in this language game, is lying to the other, or violating their own norms - but the same behaviour as above happens. Lying is just dependent on reference frame; a hole in one map doesn't open a hole in another. In any given example of "deception", we can (however artificially) construct a language game where everyone acted honestly. Lying isn't a part of what we can check on the maps here - it's an element of territory, in so far as we could only tell if someone was "really lying" if we could make direct neurological observations. Maybe not even then, if that understanding's some privileged qualia. The only time you can observe a lie with certainty is if you're the liar, as the only beliefs you can directly observe with confidence are your own.
The territory only contains signals, consequences, and benefits. Lying postulates an intention, which is unverifiable from the outside. That doesn't make "lying" meaningless, though - we can absolutely lie, and be certain that we lied - so it has meaning, but it's dependent on reference frame. When two observers in different reference frames measure a relativistic object's speed, they can both make truthful yet contradictory claims. When each claims the other "lied", in so far as they have their own evidence and certainty, it's a consequence of reference frame. Lying is centrifugal force, signals are centripetal. Both can be treated as real when useful for analysis. Hooray for compatibilism!
Electricity use isn't the only ongoing factor, though: consider that freezers are somewhat bulky appliances - you can imagine, e.g. in an environment where rent is high, there's an additional ongoing cost of physically having a refrigerator taking up floorspace. If your refrigerator has a floor footprint of about a square metre, the cost can be $60 or more a month just to have it in your space - an order of magnitude more than the electricity cost. So there's a much larger ongoing cost that will dominate that effect.
"Yeah, so my dumb argumentive comment is, prediction does not equal compression. Sequential prediction equals compression. But non-sequential prediction is also important and does not equal compression... And by non-sequential prediction, I mean you have a sequence of bits in the information theory model, but if instead you have this broad array of things that you could think about, and you're not sure when any one of them will become observed, [then] you want all of your beliefs to be good, but you don't have any next bit... You can't express the goodness just in terms of this sequential goodness."
I think this argument is conflating "data which is temporally linear" with "algorithm which is applied sequentially". "Prediction" isn't a matter of past data vs. future data - it's a matter of information the algorithm has seen so far vs. information the algorithm hasn't yet processed.
After spending hours at your computer console struggling with the symbol grounding problem, you realise the piece of paper had icicles on it and that the codec contained information more important than the signal.