tl;dr We’re incubating an academic journal for AI alignment: rapid peer review of foundational alignment research that the current publication ecosystem underserves. Key bets: paid attributed review, reviewer-written synthesis abstracts, and targeted automation. Contact us if you’re interested in participating as an author, reviewer, or editor, or if you know someone who might be.

Experimental Infrastructure for Foundational Alignment Research

This is the first in a series of “build-in-the-open” updates regarding the incubation of a new peer-reviewed journal dedicated to AI alignment. Later updates will contain much more detail, but we want to put this out soon to draw community participation early. Fill out this form to express your interest in participating as an author, reviewer, editor, developer, manager, or board member, or to recommend someone who might be interested.

The Core Bet

Peer review is a crucial public good: it applies scarce researcher time to sort new ideas for focused attention from the community, but it is undersupplied because individual reviewers are poorly incentivized.

Peer review in alignment research is particularly fragmented. While some parts of the alignment research community are served by existing venues, such as journals and ML conferences, there are significant gaps. These gaps arise from a combination of factors, including the lack of appropriate reviewer pools for some kinds of work. Moreover, none of these institutions move as fast as we think they could in this era, mainly because of inertia. Various preprint servers and online forums avoid these problems, but generally at the expense of quality certification and institutional legitimacy. Furthermore, their review coverage can suffer when attention is misallocated due to trends and hype.

Our bet is that we can create a venue that provides institutional leverage (coordination, compensation) and legibility (citations, archival records, stable indexing) without the institutional