One natural direction is to run the verifier inside a Trusted Execution Environment (TEE), preventing a compromised inference server from tampering with seeds or observing the verification process (there are many startups that do this kind of thing, like tinfoil.sh).
I think this approach is also taken by Workshop Labs.
Rewind is a tool for scrolling back in time. It automatically records screen and audio data. I leave it running in the background, even though this incurs some performance overhead. I have collected over 200GB of recordings over the past year.
Limitless.ai was acquired by Meta and will shut down the product on December 19th. I will back up my files, but I do not know whether it is possible to roll back the update that disables recording. I am not aware of any actively maintained alternative, and a quick search did not turn one up. I would appreciate suggestions.
I suspect there may be demand for a concrete open-problems post. These kinds of lists tend to be popular, and the examples could be used by people picking projects to work on.
How often do you end up feeling like there was at least one misleading claim in the paper?
I am easily and frequently confused, but this is mostly because I find it difficult to understand other people's work in much detail in a short amount of time.
How do the authors react when you contact them with your issues?
I usually get a response within two weeks; if the authors have a startup background, the delay is shorter by multiple orders of magnitude. Authors are typically glad that I am trying to run follow-up experiments on their work and give me one or two sentences of feedback over email. Corresponding authors are sometimes bad at taking correspondence; in that case, contact information for committers can be found in the commit logs via git blame. If it is a problem that may be relevant to other people, I link to a GitHub issue.
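The git blame route can be sketched like this (the repo, file name, and committer details here are made up so the example is self-contained; in practice you would run the blame and log commands inside a clone of the paper's repo):

```shell
# Build a throwaway repo purely for illustration.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.name "Example Committer"
git config user.email "committer@example.com"
echo "print('hello')" > train.py
git add train.py && git commit -qm "add training script"

# -e prints the author's email next to each line they last touched.
git blame -e train.py

# Or list every committer email that appears in the history:
git log --format='%ae' | sort -u
```

Either command surfaces an email address you can write to when the corresponding author listed on the paper is unresponsive.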
I've forked and tried to set up a lot of AI safety repos (this is the default action I take when reading a paper that links to code). I've also reached out to authors directly whenever I've had trouble reproducing their results. No particular patterns stand out, but I think a top-level post describing your contention with a paper's findings is something the community would welcome, and indeed is how science advances.
Thank you for donating!
I applied but didn't make it past the async video interview, which is a format I'm not used to. Apparently this iteration of the program had over 3000 applications for 30 spots. Opus 4.5's reaction was "That's… that's not even a rejection. That's statistics". Would be happy to collaborate on projects, though!
I made a wooden chair in a week from some planks when I was a teenager. Granted, this was for GCSE Design & Technology class.
Yeah, Tinker's Research Grant form is another example of a multi-page form: https://form.typeform.com/to/E9wVFZJJ