We previously announced a forthcoming research journal for AI alignment. This cross-post from our blog describes our tentative plans for the features and policies of the journal, including experiments like reviewer compensation and reviewer abstracts. It is the first in a series of posts that will go on to discuss our theory of change, comparison to related projects, possible partnerships and extensions, scope, personnel, and organizational structure.
The journal is being built to serve the alignment research community. This post’s purpose is to solicit feedback and encourage you to contact us here if you want to participate, especially if you are interested in becoming a founding editor or part-time operations lead. The current plans are merely a starting point for the founding editorial team, so we encourage you to suggest changes and brainstorm the ideal journal.
Like plex said, getting GPT or similar to simulate current top researchers, so that you could use it as a research assistant, would be hugely beneficial given how talent-constrained we are. Getting more direct data on the actual process of coming up with AI alignment ideas seems robustly good, and I'm currently working on this.
What's the exact deadline for submissions?
Does that mean that utilitarianism is incompatible with Many Worlds? If everything that is possible for you to do is something that you actually do, then utility across the whole multiverse would be constant, regardless of any notion of free will.
Can you expand on which readings you think are dumb and wrong?