Thanks for the explanation/links!
I think I understand (broadly!) the concept of surprisal - but here I'm talking about compression of text in a system where storage costs are based on surprisal. If the reference LLM would always generate the same text for the same prompt, couldn't we call this hypothetical text a reference text? Then, if I can generate the reference LLM's outputs myself, for many text strings where what I want to store is close to what the reference LLM would generate, I probably only want to store the prompt plus the difference between the r...
(I missed the surprisal specification! I mostly intended to reply to your top-level comment so I really ought to have picked it up - sorry!)
So, Terrarium's compression ("Terrarial compression"?) would look more like a diff vs. a reference LLM's output for the same prompt? The closer you are to reference, the smaller your diffs and thus the lower your storage costs? Doesn't this penalise unconventional thinking? Or are storage costs so low that the benefits of unconventional thinking can usually outweigh the additional cost?
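To make the "prompt plus diff vs. the reference" idea concrete, here's a minimal sketch. Everything in it is hypothetical: `reference_llm` is a stand-in for a frozen, deterministic reference model, and the op format is just one way to store only the literal text that differs (equal runs become ranges into the regenerable reference, so conventional text costs almost nothing and unconventional text pays per character):

```python
import difflib

def reference_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real system would run a frozen,
    # deterministic reference model on the prompt.
    return f"In response to {prompt!r}, the reference model emits this fixed text. " * 4

def compress(prompt: str, actual: str):
    """Store the prompt plus only the pieces of `actual` that differ
    from the reference output (which both sides can regenerate)."""
    ref = reference_llm(prompt)
    sm = difflib.SequenceMatcher(a=ref, b=actual, autojunk=False)
    ops = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == 'equal':
            ops.append(('copy', i1, i2))          # range into the reference
        else:
            ops.append(('data', actual[j1:j2]))   # literal text we must pay to store
    return prompt, ops

def decompress(prompt: str, ops) -> str:
    ref = reference_llm(prompt)
    parts = []
    for op in ops:
        if op[0] == 'copy':
            parts.append(ref[op[1]:op[2]])
        else:
            parts.append(op[1])
    return ''.join(parts)
```

On this sketch, storage cost is roughly the prompt plus the total length of the `'data'` ops - which is exactly the "closer to reference, smaller diffs, lower storage costs" dynamic, and exactly where the penalty on unconventional text shows up.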
For encryption, I'm not sure huma...
I'm curious about whether agents could improve their checkpoint volume:cost ratio (in a way that persists across epochs) using compression, and whether they could defeat human auditing of checkpoints using encryption.
Compression: Instead of storing complex checkpoints in natural language, could they store checkpoints like "decompress the following using /usr/bin/unzip, in base-95: l7dwsrFq^nwSc[@`\LBF%J/p,Z^J_]Aa...."? (I'm thinking base-95 'cos there are 95 printable characters in ASCII..)
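A rough sketch of what that would look like (assumptions: `zlib` standing in for whatever the agent can invoke, and a hand-rolled base-95 codec over the 95 printable ASCII characters, since that's not a standard encoding):

```python
import zlib

# The 95 printable ASCII characters (0x20-0x7E), matching the "base-95" idea.
ALPHABET = ''.join(chr(c) for c in range(0x20, 0x7F))
BASE = len(ALPHABET)  # 95

def encode_base95(data: bytes) -> str:
    # Treat the bytes as one big integer and repeatedly divide by 95.
    # The leading 0x01 sentinel preserves any leading zero bytes.
    n = int.from_bytes(b'\x01' + data, 'big')
    out = []
    while n:
        n, r = divmod(n, BASE)
        out.append(ALPHABET[r])
    return ''.join(reversed(out))

def decode_base95(text: str) -> bytes:
    n = 0
    for ch in text:
        n = n * BASE + ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, 'big')
    return raw[1:]  # strip the 0x01 sentinel

checkpoint = b"natural-language checkpoint text, repetitive and compressible... " * 10
stored = encode_base95(zlib.compress(checkpoint))
```

The stored string is pure printable ASCII, so it would pass through any natural-language checkpoint channel - which is the auditing worry in a nutshell.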
Or, would the humans want to set the ratio of compute costs to storage costs ...
Personally I'm somewhat sceptical of AI-doom - but even I must admit, both Ball's "steps that require capital" and his "interfacing with hard-to-predict complex systems" seem like very odd things to propose as insurmountable barriers to AI-doom: one of the things we're using AI for right now is to help us interface with and make predictions about complex systems, and if they weren't capable of generating revenue we probably wouldn't have built them in the first place.
Companies want to consume everything, including people's lives, in order to make themselves richer and bigger. People are "resources" to a company.
Lawnmowers just want to cut your grass; the only resource they ask for is petrol, and (crucially) they don't want to consume it exponentially to make themselves bigger and cut exponentially more grass.
If Lawnmowers were people, they'd be those weird obsessive monomaniacal types who're generally harmless but a bit difficult to talk to. Lots of them would be on LessWrong.
You could form a bond with a lawnmower ...
When I was about 18, my then-girlfriend's mother, an obstetrician, had a talk (in the same sense that the Conférence de la paix de Paris was "a talk"..) with my girlfriend and me about the Marquette method, and indeed cycle-based family-planning in general. That family were atheists, but I came from a Catholic family and had gone to a fairly hardcore Jesuit school (by 18 I didn't consider myself a Catholic... but the Catholics still very much did...)
She told me, unequivocally, that she sees a shockingly high number of pregnant women who'd been faithfully (n... (read more)