I wish you the best of skill standing up to the incentives which have enshittified the academic publishing ecosystem.
I'm curious about the timeline. E.g., when do you expect to open the first call for papers, when do you expect the first issue to be published, etc?
It depends on a few factors, but April at the earliest for initial submissions. Publication will almost certainly be on a rolling basis (no discrete issues). Our ambitious goal is to drive the submission-to-publication time down to something like a month, but this will require combining several new tricks, so it won't be that fast at the beginning.
Sounds promising! Curious about whether you have plans to accept papers based on experimental setup instead of results (to reduce publication bias) and if you'll consider a "press abstract" designed to help journalists disseminate information to the broader public?
This post seems written as if it's "addressed to" the lesswrong community, rather than the broader community of researchers who might want to publish in such a journal. Was this intentional?
We are trying to do both, in that we are attempting to be a bridge between LW and wider scientific communities. Where do you feel our tone might be excluding domain scientists?
tl;dr We’re incubating an academic journal for AI alignment: rapid peer-review of foundational Alignment research that the current publication ecosystem underserves. Key bets: paid attributed review, reviewer-written synthesis abstracts, and targeted automation. Contact us if you’re interested in participating as an author, reviewer, or editor, or if you know someone who might be.
Experimental Infrastructure for Foundational Alignment Research
This is the first in a series of “build-in-the-open” updates regarding the incubation of a new peer-reviewed journal dedicated to AI alignment. Later updates will contain much more detail, but we want to put this out soon to draw community participation early. Fill out this form to express your interest in participating as an author, reviewer, editor, developer, manager, or board member, or to recommend someone who might be interested.
The Core Bet
Peer review is a crucial public good: it applies scarce researcher time to sort new ideas for focused attention from the community, but it is undersupplied because individual reviewers are poorly incentivized. Peer review in alignment research is particularly fragmented. While some parts of the alignment research community are served by existing venues, such as journals and ML conferences, there are significant gaps. These gaps arise from a combination of factors, including the lack of appropriate reviewer pools for some kinds of work. Moreover, none of these institutions move as fast as we think they could in this era, mainly because of inertia. Various preprint servers and online forums avoid these problems, but generally at the expense of quality certification and institutional legitimacy. Furthermore, their review coverage can suffer when attention is misallocated due to trends and hype.
Our bet is that we can create a venue that provides institutional leverage (coordination, compensation) and legibility (citations, archival records, stable indexing) without the institutional friction that kills speed. Instead, we can operate at a small, agile scale that permits dedicated tooling and rapid experimentation.
Operational Design
We are designing the journal around a few specific, high-leverage hypotheses:
Our forthcoming formal description of the journal will have much more detail. Contact us to help shape it.
Scope
“AI Alignment” is a broad and often contested label. To provide a high-signal environment from day one, we are making a deliberate choice regarding our starting point:
This is just a starting point. The current team is not the final arbiter of what constitutes “alignment” for all time. While we are setting the initial direction to get the engine running, the long-term responsibility for expanding, narrowing, or shifting the scope will belong to the editorial board. Our job right now is to build a vessel sturdy enough to support those debates.
Governance
This project is in its incubation phase. As the “plumbing” of the journal grows, editorial and strategic authority will be taken up by an editorial board of respected researchers from the alignment community. The journal will be philanthropically funded, so our funders will naturally influence how the journal develops, but we are committed to building a self-sustaining, public-good institution that belongs to the field.
Advisory board
We are grateful for the advice and support from the initial members of our advisory board:
Institutional stewardship
This project could fail. Poor execution could create a status-chasing bottleneck, further pollute the signal-to-noise ratio in alignment research, or just waste researchers' time. Poor coordination with other initiatives could hinder rather than help the field.
To reduce this risk, we will engage as a good citizen with the alignment research community. We will track and publish our own performance metrics (turnaround times, reviewer load, and author satisfaction) and solicit the wider community's assessment of whether we are participating cooperatively and productively in the publication ecosystem. Continuing the journal will be contingent upon positive community feedback and the editorial board's ongoing reassessment of counterfactually positive impact. Accepted papers will remain online, regardless of the ultimate fate of the project.
Next steps
Join the founding team
A journal is only as good as its community, and you could be part of it. We want participation in the Alignment Journal—as an editor, author, or reviewer—to be credibly status-accruing. This should be a justifiable use of time toward your career goals.
If you believe this infrastructure is a missing piece of the safety ecosystem, we want your help.
We’ll soon share an initial description of our design and plans for the journal with much more detail, so reach out now if you’d like to shape it.
Support us online
We welcome you to follow us on all the usual platforms as @AlignmentJrnl. Above all, our content will be hosted at our main site alignmentjournal.org.
Contributors to this document
We are grateful to Geoffrey Irving, Victoria Krakovna, and David Duvenaud for their support and feedback on this post. The authors do not commit, in perpetuity, to every detail of the journal strategy outlined here. This is the first stage in an ongoing consultation, and we expect to adjust our positions in the face of new evidence about best strategies. All responsibility for mistakes in content or execution resides with the current managing editors, Dan MacKinlay and Jess Riedel.
We intend to experiment with a variety of possible ratings, certifications and other quality signals. This is our starting proposal, as it is one we have some experience with.
The practical implications of the emphasis on achieving state-of-the-art results on benchmarks in machine learning research are complicated and contentious, and, we argue, not yet well understood even inside the field. For an opinionated introduction, see Moritz Hardt’s book, The Emerging Science of Machine Learning Benchmarks.