Seeds of Science is a journal (funded through Scott Alexander's ACX grants program) that publishes speculative or non-traditional articles on scientific topics. Peer review is conducted through community-based voting and commenting by a diverse network of reviewers (or "gardeners" as we call them); top comments are published after the main text of the manuscript. 

We have just sent out an article for review - "How to Escape from the Simulation" - that may be of interest to some in the LessWrong community, so I wanted to see if anyone would be interested in joining us as a gardener to review the article. It is free to join and anyone is welcome (we currently have gardeners from all levels of academia and outside of it). Participation is entirely voluntary - we send you submitted articles and you can choose to vote/comment or abstain without notification (so it's no problem if you don't plan on reviewing very often and just want to take an occasional look at the articles people are submitting).

To register, you can fill out this Google Form. From there, it's pretty self-explanatory - I will add you to the mailing list and send you an email that includes the manuscript, our publication criteria, and a simple review form for recording votes/comments. If you would like to take a look at this article without being added to the mailing list, reach out at info@theseedsofscience.org and say so.

Happy to answer any questions about the journal through email or in the comments below. Here is the abstract for the article. 

How to Escape From the Simulation

Many researchers have conjectured that humankind is simulated along with the rest of the physical universe – the Simulation Hypothesis. In this paper, we do not evaluate evidence for or against such a claim, but instead ask a computer science question, namely: Can we hack the simulation? More formally, the question could be phrased as: Could generally intelligent agents placed in virtual environments find a way to jailbreak out of them? Given that the state-of-the-art literature on AI containment answers in the affirmative (AI is uncontainable in the long term), we conclude that it should be possible to escape from the simulation, at least with the help of superintelligent AI. By contraposition, if escape from the simulation is not possible, then containment of AI should be, which would be an important theoretical result for AI safety research. Finally, the paper surveys and proposes ideas for such an undertaking.
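To make the contraposition step in the abstract explicit, here is a minimal sketch of the logic (the symbols $E$ and $C$ are shorthand introduced for this note, not notation from the paper):

$$\big(\neg C \Rightarrow E\big) \;\equiv\; \big(\neg E \Rightarrow C\big)$$

where $E$ stands for "escape from the simulation is possible" and $C$ for "long-term containment of AI is possible". The abstract argues the left-hand implication from the AI containment literature; the right-hand implication is its logically equivalent contrapositive, which is the claimed relevance for AI safety.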
