This is really cool! It seems to have flown under the radar when first posted, so I hope this comment brings some attention to it.
For most sealed predictions it's not really a problem that it isn't end-to-end encrypted, but finding a way to make it so without having to retrieve a private key from the user would be really cool.
Thank you Yoav, I have not received much feedback yet and I really appreciate the kind words. Please do let me know if you have any advice on how I could surface this more prominently to communities who might benefit from it.
Hi - I'm George, and I've been a long-time silent follower of LessWrong and other reasoning/rationality forums. I work in AI; for personal reasons I'd like to remain pseudonymous for the time being. This is my first post on LessWrong. It is the result of observing a problem that affects the prediction community, and trying to address it.
A Sealed Prediction, Quirinus_Quirrell, lesswrong.com, 28th Jan 2011.
The Problem
Being able to say 'I predicted this' confers some capital (money, kudos) on the predictor. However, it typically also means publishing your prediction, which carries a degree of informational hazard: publishing leaks information, which could inform or alter the outcome being predicted.
This problem is raised regularly by this community, and a solution has been suggested: publish a hash of your prediction, then reveal the plaintext on an 'unsealing' date. I don't think this is sufficient from a practical or security standpoint - it is not scalable, structured, or secure. I thought I could address this problem, and have attempted to do so.
The Solution
I have built a web app. As this is my first post here, I am (slightly) mortified that even stating this will come across as too self-promotional. However I think this is a useful enough thing to exist in the world, and believe that the LessWrong community could benefit from it. So, to properly align incentives here, I want to be clear up front - the app is free to use, and has no ads or trackers.
The premise is simple - a user should be able to make a claim, create an immutable hash of the claim text, seal it, and set a date for unsealing. Once sealed, a user should not be able to edit or delete the claim. The claim should unseal itself on the date given, and on unsealing some check should be performed to ensure that the claim has maintained its integrity.
Cryptographic Implementation
I have implemented this as follows.
A user authors a claim. There are three fields - Premise, Claim, and Reveal Date. Premise is a publicly visible teaser about the claim. Claim is the claim text itself. Reveal Date is the date on which the claim will automatically be revealed.
When the user clicks submit, the claim is first sent to a content moderation service (OAI's Omni Moderation). This felt important to implement, to prevent the sorts of claims which could be harmful. If offensive content is flagged, the claim is rejected at this point, and the user is told why.
If it passes moderation, the claim is encrypted with an AES-256-GCM data encryption key (DEK), created via the Amazon Web Services Key Management Service (AWS KMS). The DEK is then encrypted ('wrapped') by a master key which also sits within AWS KMS. Both the encrypted DEK and the encrypted claim are stored in a Postgres database. The encryption uses a random initialisation vector (IV) so that identical plaintexts produce different ciphertexts. An authentication tag is generated as a byproduct of the encryption, which acts as a checksum to detect tampering and guarantee integrity.
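To make the envelope pattern concrete, here is a minimal, stdlib-only sketch. It is *not* the app's actual implementation: Python's standard library has no AES, so an HMAC-SHA-256 keystream and tag stand in for AES-256-GCM, and a locally generated `master_key` stands in for the key held inside AWS KMS. The claim text is invented for illustration.

```python
import hashlib
import hmac
import secrets

def hmac_keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """Derive a keystream by chaining HMAC-SHA-256 over a counter (stand-in for AES)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, iv + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes):
    iv = secrets.token_bytes(12)  # random IV: identical plaintexts -> different ciphertexts
    stream = hmac_keystream(key, iv, len(plaintext))
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, stream))
    tag = hmac.new(key, iv + ciphertext, hashlib.sha256).digest()  # auth tag over IV + ciphertext
    return iv, ciphertext, tag

def decrypt(key: bytes, iv: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    expected = hmac.new(key, iv + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("auth tag mismatch: ciphertext was tampered with")
    stream = hmac_keystream(key, iv, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

# Envelope pattern: a per-claim DEK encrypts the claim; a master key wraps the DEK.
master_key = secrets.token_bytes(32)  # in the real app this never leaves AWS KMS
dek = secrets.token_bytes(32)         # per-claim data encryption key

claim = b"I predict X will happen by 2030."    # hypothetical claim text
claim_iv, claim_ct, claim_tag = encrypt(dek, claim)
dek_iv, dek_ct, dek_tag = encrypt(master_key, dek)  # wrapped DEK, stored with the ciphertext
```

The database then only needs to hold the wrapped DEK and the claim ciphertext (plus IVs and tags); decrypting the claim requires first unwrapping the DEK with the master key.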
For each claim, the following is stored:
Separate from the encryption, a hash and a nonce[1] are also generated. The nonce acts as a salt, preventing rainbow table attacks[2] - without it, anyone could hash common prediction plaintexts in an attempt to match them against the published hash. The hash is formed as:
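The exact construction isn't spelled out above, so the following is only one plausible form (hashing the nonce concatenated with the claim text, using SHA-256); the claim text is hypothetical:

```python
import hashlib
import secrets

claim_text = "I predict X will happen by 2030."  # hypothetical claim
nonce = secrets.token_hex(16)                    # random salt-like value, stored with the claim

# Assumed construction: SHA-256 over nonce + claim text.
commitment = hashlib.sha256((nonce + claim_text).encode("utf-8")).hexdigest()

# Without knowing the nonce, a precomputed hash of the bare claim text won't match:
assert commitment != hashlib.sha256(claim_text.encode("utf-8")).hexdigest()
```

Because the nonce is random per claim, a precomputed (rainbow) table of common prediction texts is useless against the published hash.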
After sealing, anybody can see the hash and the premise, but the claim text is locked away until the reveal date and time.
When the reveal date and time arrives, the following happens:
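Whatever the full reveal pipeline looks like, one step it must include is the integrity check mentioned earlier: recompute the hash from the revealed text and stored nonce, and compare it with the sealed hash. A minimal sketch, assuming the hash is SHA-256 over nonce + claim text (the function names are hypothetical):

```python
import hashlib
import secrets

def seal(claim_text: str):
    """Sealing side: generate a nonce and the commitment hash."""
    nonce = secrets.token_hex(16)
    commitment = hashlib.sha256((nonce + claim_text).encode("utf-8")).hexdigest()
    return nonce, commitment

def verify_integrity(revealed_text: str, stored_nonce: str, stored_hash: str) -> bool:
    """Reveal side: recompute the commitment and compare with the sealed hash."""
    recomputed = hashlib.sha256((stored_nonce + revealed_text).encode("utf-8")).hexdigest()
    return recomputed == stored_hash

nonce, commitment = seal("I predict X will happen by 2030.")  # hypothetical claim
```

If the decrypted text doesn't reproduce the sealed hash, the claim was altered somewhere between sealing and reveal.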
Design Choices
Some decisions made in the design have notable consequences.
What next?
This app is designed to serve the communities that would find it most useful. To help with this, I am looking for beta testers. If you are interested in sealing some claims, checking for bugs, interacting with other claims, and generally supporting the project, please reach out and I can share invite codes.
The project is not entirely free to run. For transparency, I do have a link on the website to 'BuyMeACoffee' for donations. These will be used solely for keeping the servers running.
If this is popular, I will continue running it[6]. If it doesn't catch on, I plan to lock it down to new users, whilst honouring claims which have already been sealed until their reveal dates.
The website is here: https://sealed-app.com
A nonce ('number used once') is a random value that is used just once in a cryptographic algorithm.
A rainbow table is a precomputed table of hash outputs, used to reverse a cryptographic hash function by lookup.
This process would also leave traces in the deployment, database, and KMS logs. I am considering how I could make each DEK single-use to mitigate this - it wouldn't prevent early decryption by a rogue admin, but it would make it apparent.
It would be slower, would risk the project being perceived as a 'crypto app', and would increase computational overhead and cost.
Currently from OAI, Anthropic, and Google
Which will mean upgrading from the free tiers for numerous service providers