Recently, I gave some reasons for SIAI to begin publishing in mainstream journals, and outlined how it could be done.

I've recently been made aware of some pretty good reasons for SIAI to not publish in mainstream journals, so here they are:

  1. Articles published to websites (e.g. Yudkowsky's work, Bostrom's pre-prints) seem to have gotten more attention, and had more positive impact, than their in-journal counterparts.
  2. Articles in mainstream journals take a relatively large amount of time, money, and expertise to produce.
  3. Articles in mainstream journals must jump through lots of hoops - journals' aversion to novelty, reviewer bias, etc.
  4. It is easier to simply collaborate with (and greatly influence) established mainstream academics who have already jumped through mainstream academia's many hoops (as Carl Shulman has been doing, for example).
I still think there are strong reasons to publish articles in standard academic form (for readability purposes), but I've recently updated hugely toward SIAI not publishing in mainstream journals.

Comments

Funny story. Just today I read the dissertation of a Rutgers philosophy PhD student (Rutgers is the #2 philosophy department worldwide according to the standard ranking) on strategies for action in the face of normative uncertainty. It explores at great length the idea of instrumentally valuable strategies that boost our ability to attain a wide range of goals, and in particular the goals that we would seek under various idealizations.

Its introduction cites three inspirations leading to the work: John Stuart Mill's "On Liberty," John Rawls' "Theory of Justice," and Eliezer Yudkowsky's "Creating Friendly AI" (2001), discussed at greater length than the others.

EDIT: Because of the OP, I should note that I do favor SIAI publishing in mainstream venues.

Also, these recent papers refer to or quote Eliezer's Bayes tutorial several times, as do many others.

That's quite interesting. It should be noted, though, that (according to Google Scholar, at least) both of his chapters in Bostrom's volume have eclipsed the Introduction to Bayes, even though they are much, much more recent. I expect the effect to be compounded with time.

I agree with previous comments about publishing in journals being an important status issue, but I think there is other value as well which is being ignored. For all of its annoyances and flaws, one good thing about peer review is that it really makes your paper better. When you submit a pretty good paper to a journal and get back the "revise and resubmit" along with the detailed list of criticisms and suggestions, then by the time the paper actually makes it into the journal, chances are that it will have become a really good paper.

But to return to the issue of papers being taken more seriously when published in a journal, I think that this view is actually quite justified. For researchers who are not already very knowledgeable in the precise area that is the topic for a given paper, whether or not the paper has withstood peer review is a very useful heuristic cue toward how much weight you should place on it. Basically, peer review keeps the author honest. An author posting a paper on his website can say pretty much whatever he wants. One of the purposes of peer reviewers is to make sure that the author isn't making unreasonable claims, mischaracterizing theoretical positions, "reviewing" the relevant previous literature in a grossly selective way, etc. Like I said, if someone is already very familiar with the area, then they can evaluate these aspects of the paper for themselves. But if you'd like to communicate your position to a wider academic audience, peer review can help carry your paper a longer way.

"If you haven't passed peer review, it's almost certainly because you can't rather than because you have better things to do. If it's not published in a peer-reviewed journal, there's no reason to treat it any differently than the ramblings of the Time Cube guy."

-- Paraphrase of a speaker at the Northeast Conference on Science and Skepticism

If it's not published, it might be correct, but it's not science.

Perelman's proof of the Poincare conjecture was never published in an academic journal, but was merely posted on arXiv. If that's not science, then being correct is more important than being "scientific".

[-][anonymous]

Perelman's proof has been published: e.g., this one by the AMS, which has a rigorous refereeing process for books, and this one in the Asian Journal of Math, with a more controversial refereeing process.

Though Perelman's preprints appeared in 2002 and 2003, the Clay prize (which Perelman turned down) was not offered to him until last year, because the rules stipulate that the solutions to the prize problems have to stand unchallenged in published, peer-reviewed form for a certain number of years.

I'm not really familiar with the subject matter here, but I want to note that Michael Nielsen contradicts what you said (though Nielsen isn't exactly an unbiased source here, as an Open Science advocate):

Perelman's breakthrough solving the Poincare conjecture ONLY appeared at the arXiv

The important point is that Perelman doesn't appear to have produced the paper for publication in a journal; he wrote it and left it on the arXiv, and it was only later (you claim) published in journals. That's quite a different picture from "if it's not published, it's not science."

Indeed. However, you've raised a single remarkable exception as if one example were all that is needed to thoroughly refute a general heuristic, and of course that's not the case.

The overwhelming majority of papers put on arXiv and nowhere else are:

  • [ ] comparable to Perelman's proof of the Poincare conjecture
  • [ ] not comparable to Perelman's proof of the Poincare conjecture?

It's not clear to me what the disagreement is here. Which heuristic are you defending again?

If it's not published, it's not science

Response: Can we skip the pointless categorizations and evaluate whether material is valid or useful on a case-by-case basis? Clearly there is some material that has not been published that is useful (see: this website).

If it's not published in a peer-reviewed journal, there's no reason to treat it any differently than the ramblings of the Time Cube guy.

Response: Ahh yes, anything not peer-reviewed clearly contains Time Cube-levels of crazy.

Or none of the above? I'm not sure we actually disagree on anything here.

[-][anonymous]

The problem of publication bias is another reason to be wary of the publication heuristic recommended a few comments above. If you follow that heuristic rigorously, you will necessarily expose yourself to the systematic distortions arising from publication bias.

This is not to say that you should therefore believe the first unpublished paper you come across. It's only to point out that the publication heuristic has certain problems; while it should not be ignored, it should be supplemented. You ignore unpublished research at your peril. In an ideal world, peer review filters the good from the bad and nothing else. We do not live in an ideal world, so caveat lector.

The process of journal publication is also extremely slow, so a refusal to read unpublished research threatens to retard your progress. This link gives time-to-publication figures for several journals; the average appears to be well over a year, approaching two years. What's two years in Internet Time? Pretty long.

[-][anonymous]

The overwhelming majority of papers put on arXiv and nowhere else are:

  • [ ] comparable to Perelman's proof of the Poincare conjecture
  • [ ] not comparable to Perelman's proof of the Poincare conjecture?

Let's reword that: the overwhelming majority of papers published in peer-reviewed journals are comparable/not comparable to Perelman's proof?

Probable answer: not remotely comparable. In fact, a lot of them are just plain wrong.

Most math papers are not comparable to Perelman's proof in importance (that should be obvious), but most of them are mathematically correct and interesting to mathematicians. People will often see something on the arXiv and look at it. On the other hand, as I mentioned, people are also more likely to look at a preprint if they know it has actually been accepted by a reputable journal.

Indeed, math isn't science. ;)

I wonder: if Perelman had been just "some guy" with no reputation as a mathematician, would anyone have noticed when he uploaded his proof?

If you want to play that game, then it's not clear to me that the SIAI is doing "science" either, given that the focus is on existential risk due to AI (more like "philosophy" than "science") and formal friendliness (math).

I think a better interpretation of your quote is to replace the word "science" with "disseminated scholarly communication."

I think a better interpretation of your quote is to replace the word "science" with "disseminated scholarly communication."

Good point.

[-][anonymous]

Essentially yes, though he might have had to individually contact a few mathematicians to make his existence known. Consider the example of Ramanujan. Wikipedia:

In 1912–1913, he sent samples of his theorems to three academics at the University of Cambridge. Only Hardy recognized the brilliance of his work, subsequently inviting Ramanujan to visit and work with him at Cambridge.

[-]FAWS

Do we know whether there are many Ramanujans who gave up before getting through to someone? One way to tell might be to look at such people who had already given up and were then discovered through a coincidence.

[-][anonymous]

It would be nice to have additional data. However, I think we can mine the case of Ramanujan for clues about difficulty of entry. What I find striking is that he only contacted three mathematicians. Had he contacted, say, twenty before being noticed, that would have suggested a higher barrier to entry. But it was apparently just three. My own experience is that the great scientific minds are very approachable, aside from a tiny handful of scientist celebrities who understandably have to learn to be less approachable.

[-]Larks

Division of Labour: FHI does journals, SIAI doesn't?

I think you're underestimating the status arguments in favour of publishing in journals. Status games are really really really influential in our world. We ignore them at our peril.

It goes both ways, though. Publishing in a journal means inheriting some of its prestige, but also means giving it some of your prestige. Do we want journals that are currently bad to be able to claim to have published papers by high-status rationalists, if those papers are going to be major outliers in quality?

Do we want journals that are currently bad to be able to claim to have published papers by high-status rationalists, if those papers are going to be major outliers in quality?

I think we would first need to be sure we had advanced to the stage of being publishable enough to have such problems.

I disagree, for the reason Gerard explains, and because SIAI can be selective about which journals it publishes in.

But I am not sure why your comment was downvoted. It seems fair to ask the question.

[-]gwern

Articles in mainstream journals take a relatively large amount of time, money, and expertise to produce.

So does producing those articles in the first place. What is the incremental cost of making them fit for mainstream journals?

It is easier to simply collaborate with (and greatly influence) established mainstream academics who have already jumped through mainstream academia's many hoops (as Carl Shulman has been doing, for example).

Shulman is credited as co-author, IIRC. So wouldn't this still be SIAI publishing in mainstream journals?

So far, Shulman has usually been listed in papers' acknowledgements section. But if he's credited as co-author, then yes, that's publishing in mainstream journals in a sense.

I don't see why 1. is a reason not to publish. Presumably Bostrom's pre-prints went on to be published.

Because:

  • Many of Eliezer's papers weren't published in mainstream academic venues, and still had a large impact.
  • It's harder to jump through mainstream academia's publishing hoops than to publish directly, and Bostrom's impact from posting preprints directly seems to have been larger than the impact the journal versions had on the people who happen to subscribe to those particular journals.

and Bostrom's impact from posting preprints directly seems to have been larger than the impact the journal versions had on the people who happen to subscribe to those particular journals.

I'm not an expert on philosophy, but at least in my own field (math), people are more likely to look at a preprint that has been accepted by a major journal than at one that hasn't. This isn't a strong effect, but it is definitely present.

Agreed.

A data point on why we should publish: The Singularity as Religion mentions:

The singularity is based on science rather than superstition. My reply is that much of what I read about the singularity goes so far beyond any scientific data or present technical achievement that it looks very unscientific. Perhaps someday there will be an artificial general intelligence that far outperforms humans. But plenty of scientists and engineers seem highly skeptical about the grandiose claims of singularitarians. Are any of the claims of the singularitarians empirically testable? Verifiable or falsifiable? Only in some indefinite future. This is what John Hick called eschatological verification. But that’s not science at all. An interesting point here is that many singularitarians don’t seem to be interested in scientific research – such as writing papers for peer-reviewed journals. There is no such thing as the singularitarian research program in any standard academic or commercial sense. It looks like what Feynman called “cargo cult science”. And singularity activists have their own version of Pascal’s Wager. The singularity is so overwhelmingly transformative that even if it has a teeny-tiny chance of happening, the reward or punishment for us will be extremely great. It’s so easy to see! You just have to write out an expected utility equation.

Emphasis mine.

As far as Feynman's "cargo cult science" goes: the rat psychologists whom Feynman criticised as doing cargo cult science did follow the part of scientific procedure that involves publishing papers.

What JoshuaZ said: having point X cited from a published paper makes it easier for academics to take it seriously, regardless of where they come across X originally.

[-][anonymous]

Is this a place-holder for an anti-publishing post that will be as well-argued as your pro-publishing post? You may have updated hugely, but you haven't given us the tools to do so with your four-point summary.

No, I probably won't be taking the time to elaborate.

[-][anonymous]

Are you really claiming that you were only recently made aware of these four points, or just that you find them to be more persuasive now than you did before? It matters for my own update-multiplier, I think.

A combination of both.
