Over the past few months, the Singularity Institute has published many papers on topics related to Friendly AI. It's wonderful that these ideas are getting written up, and it's virtually always better to do something suboptimal than to do nothing. However, I will make the case below that academic papers are a terrible way to discuss Friendly AI, and other ideas in that region of thought space. We need something better.
I won't try to argue that papers aren't worth publishing. There are many reasons to publish papers - prestige in certain communities and promises to grant agencies, for instance - and I haven't looked at them all in detail. However, I think there is a conclusive case that as a discussion forum - a way for ideas to be read by other people, evaluated, spread, criticized, and built on - academic papers fail. Why?
1. The time lag is huge; it's measured in months, or even years.
Ideas structured like the Less Wrong Sequences, with large inferential distances between beginning and ending, have huge webs of interdependencies: to read A you have to read B, which means you need to read C, which requires D and E, and on and on and on. Ideas build on each other. Einstein built on Maxwell, who built on Faraday, who built on Newton, who built on Kepler, who built on Galileo and Copernicus.
For this to happen, ideas need to get out there - whether orally or in writing - so others can build on them. The publication cycle for ideas is like the release cycle for software. It determines how quickly you can get feedback, fix mistakes, and then use whatever you've already built to help make the next thing. Most academic papers take months to write up, and then once written up, take more months to publish. Compare that to Less Wrong articles or blog posts, where you can write an essay, get comments within a few hours, and then write up a reply or follow-up the next day.
Of course, some of that extra time lag exists because discussion sometimes requires big formal documents, and big formal documents take a while to produce. But academic papers aren't just limited by writing and reviewing time - they still fundamentally operate on the schedule of the seventeenth-century Transactions of the Royal Society. When Holden published his critique of the Singularity Institute on Less Wrong, a big formal document, Eliezer could reply with another big formal document in about three weeks.
2. Most academic publications are inaccessible outside universities.
This problem is familiar to anyone who's done research outside a university: the ubiquitous journal paywall. People complain about how the New York Times and Wall Street Journal have paywalls, but at least you can pay for them if you really want to. It isn't practical for almost anyone doing research to pay out-of-pocket for the articles they need, since journals commonly charge $30 or more per article, and any serious research project involves dozens or even hundreds of articles. Sure, there are ways to get around the system, and you can try to publish (and get everyone else in your field to publish) in open-access journals, but why introduce a trivial inconvenience?
3. Virtually no one reads most academic publications.
This obviously goes together with point #2, but even within universities, it's rare for papers, dissertations or even books to be read outside a very narrow community. Most people don't regularly read journals outside their field, let alone outside their department. It's hard to get statistics on academic papers, but as one data point: I was a math major in undergrad, and I can't even understand the titles of most new math papers. More broadly, the print run of most academic books is very small, only a few hundred or so. The average Less Wrong post gets more views than that.
4. It's very unusual to make successful philosophical arguments in paper form.
When doing research for Personalized Medicine, I often read papers to discover the results of some experiment. Someone gave drug X to people with disease Y. What were the results? How many were cured? How many had side effects? What were the costs and benefits? All useful information.
However, most recent Singularity Institute papers are neither empirical ("we did experiment X, these are the results") nor mathematical ("if you assume A, B, and C, then D and E follow"). Rather, they are philosophical, like Paul Graham's essays. I honestly can't think of a single instance where I was convinced of an informal, philosophical argument through an academic paper. Books, magazines, blog posts - sure, but papers just don't seem to be a thing.
5. Papers don't have prestige outside a narrow subset of society.
Several other arguments here - the time lag, for instance - also apply to books. However, society in general recognizes that writing a book is a noteworthy achievement, especially if it sells well. A successful author, even if not compensated well, is treated a little like a celebrity: media interviews, fan clubs, crazy people writing him letters in green ink, etc. (This is probably related to them not being paid well: in the labor market, payment in social status probably substitutes to a high degree for payment in money, as we see with actors and musicians.)
There's nothing comparable for academic papers. No one ever writes a really successful paper, and then goes on The Daily Show, or gets written up in the New York Times, or gets harassed by crowds of screaming fangirls. (There are a few exceptions, like medicine, but philosophy and computer science are not among them.) Eg., a lot of people are familiar with Ioannidis's paper, Why most published research findings are false. However, he also wrote another paper, a few years earlier, titled Replication validity of genetic association studies. This paper actually has more citations - over 1,300 at last count. But not only have we not heard of it, no one else outside the field has either. (Try Googling it, and you'll see what I mean.)
6. Getting people to read papers is difficult.
Most intellectual people regularly read books, blogs, newspapers, magazines, and other common forms of memetic transmission. However, it's much less common for people to read papers, which reduces the affordances people have for doing so when someone asks "hey, this is a crazy idea, why should I believe it?". Papers are, intentionally, written for an audience of specialists rather than a general-interest audience, which reduces both the tendency and the ability of non-specialists to read them when asked (and also violates the "Explainers shoot high - aim low" rule).
7. Academia selects for conformity.
The whole point of tenure is to avoid selecting for conformity - if you have tenure, the theory goes, you can work on whatever you want, without fear of being fired or otherwise punished. However, only a small (and shrinking) number of academics have tenure. In order to make sure fools didn't get tenure, it turns out academia resorted to lots and lots of negative selection. The famous letter by chemistry professor Erick Carreira illustrates some of what the selection pressure is like, similar to medicine or investment banking: there's a single, narrow "track", and people who deviate at any point are pruned. Lee Smolin has written about this phenomenon in string theory, in his famous book The Trouble with Physics.
Things may change in the future, but as it stands now, many ideas like the Singularity are non-conformist, well outside the mainstream. They aren't likely to go very far in an environment where deviations from the norm are seen negatively.
8. The current community isn't academic in origin.
This isn't an airtight argument, because it's heuristic - "things which worked well before will probably work again". However, heuristic arguments still have a lot of validity. One of the key purposes of a discussion forum, like Less Wrong or the SL4 list that was, is to get new people with bright ideas interested in the topics under discussion. Academia's track record of getting new people interested isn't that great - of the current Singularity Institute directors and staff, only one (Anna Salamon) has an academic background, and she dropped out of her PhD program to work for SIAI. What has been successful, so far, at bringing new people into our community? I haven't analyzed it in depth, but whatever the answer is, the priors are that it will work well again.
9. Our ideas aren't academic in origin.
Similarly to #8, this is a "heuristic argument" rather than an airtight proof. But I still think it's important to note that our current ideas about Friendly AI - any given AI will probably destroy the world, mathematical proof is needed to prevent that, human value is complicated and hard to get right, and so on - were not developed through papers, but through in-person and mailing list discussions (primarily). I'm also not aware of any ideas which came into our community through papers. Even science fiction has a better track record - eg. some of our key concepts originated in Vinge's True Names and Other Dangers. What formats have previously worked well for discussing ideas?
10. Papers have a tradition of violating the bottom line rule.
In a classic paper, one starts with the conclusion in the abstract, and then builds up an argument for it in the paper itself. Paul Graham has a fascinating essay on this form of writing, and how it came to be - it ultimately derives from the legal tradition, where one takes a position (guilty or innocent), and then defends it. However, this style of writing violates the bottom line rule. Once something is written on the paper, it is already either right or wrong, no matter what clever arguments you come up with in support of it. This doesn't make it wrong, of course, but it does tend to create a fitness environment where truth isn't selected for, just as Alabama creates a fitness environment where startups aren't selected for.
11. Academic moderation is both very strict and badly run.
All forums need some sort of moderation to avoid degenerating. However, academic moderation is very strict by normal standards - in a lot of journals, only a small fraction of submissions get approved. In addition, academic moderation has a large random element, and is just not very good overall; many quality papers get rejected, and many obvious errors slip through.
As if that wasn't enough, most journals are single-blind rather than double-blind. You don't know who the moderators are, but they know who you are, raising the potential for all kinds of obvious unfairness. The most common kind of bias is one that hurts us unusually badly: people from prestigious universities are given a huge leg up, compared to people outside the system.
(This article has been cross-posted to my blog, The Rationalist Conspiracy.)
EDIT #1: As Lukeprog notes in the comments, academic papers are not our main discussion forum for FAI ideas. In practice, the main forum is still in-person conversations. However, in-person conversations have critical limitations too, albeit more obvious ones. Some crucial limits are the small number of people who can participate at any one time; the lack of any external record that can be looked up later; the lack of any way to "broadcast" key findings to a larger audience (you can shout, but that's not terribly effective); and the lack of lots of time to think, since each participant in the conversation can't really wait three hours before replying.
EDIT #2: To give a specific example of an alternative forum for FAI discussion, I think the proposal for an AI Risk wiki would solve most of the problems listed here.
My reply, in the context of Singularity Institute research:
Almost all FAI discussion happens outside of papers. It happens on mailing lists, forums like Less Wrong, email threads, personal conversations, etc. Yesterday I had a three hour discussion about FAI with Eliezer, Paul Christiano, and Anna Salamon where we covered more ground than we possibly could in a 20-page paper because there's so much background material that we all agree on but hasn't been written up. Nobody is waiting around for papers to come out to advance FAI theory; that's not what papers are for.
Most SI papers borrow heavily from material that originated from mailing list discussions or LW posts, and most peer-reviewed SI publications are posted in preprint version when they are written instead of months later when they are published by the academic publisher.
All SI publications are published on our website, which is open to everyone. Same goes for all of Nick Bostrom's papers.
Not via the journals and academic books themselves, no. That's why SI and FHI publish their papers to our own websites, where they are read by far more people than read them in the journals themselves.
Don't generalize from one example. I'm slowly surveying a good chunk of the "player characters" in the x-risk reduction space, and a good chunk of them were hugely influenced by Eliezer's two GCR chapters or by Bostrom's Astronomical Waste.
But we care unusually much about that narrow subset of society. Also, I don't write papers so much for prestige as for the fact that it forces me to write in a way that is unusually clear, well-referenced (so that people can check what other people are saying about each individual element), well-structured, careful, and so on. In contrast, people read the Hanson-Yudkowsky debate and there are 5 different ways to interpret every other paragraph and no references by which to check anything and they have no idea what to think.
Not as hard as getting them to read The Sequences. Also, many of the people we care about (e.g. me) find it easier to read papers than to read a few blog posts, because papers tend to be more clearly written and point the reader to related sources.
No problem; there are plenty of journals that are likely to publish the kinds of papers SI publishes, and some already have.
As said previously, most FAI discussion still happens outside of papers, but in fact it turns out that several important people did come through Eliezer's and Bostrom's papers.
Same goes for all new areas of research. They're developed in person and on mailing lists long before they end up in journal articles.
This is sometimes a problem, sometimes not. Communications of the ACM might reject the paper Nick Bostrom and I wrote for it because it's too philosophical and we don't have the space to respond to all common objections. So we may end up publishing it somewhere else. But with my two TSH chapters, all that happened was that I got a bunch of feedback, some of it useful and some of it not, so I incorporated the useful feedback and ignored the useless feedback and published significantly improved papers as a result. Other people I've spoken to about this have reported a similar spread of experiences.
Also see two of my previous posts on the topic, neither of which I agree with anymore: How SIAI could publish in mainstream cognitive science journals and Reasons for SIAI to not publish in mainstream journals.
Hi Luke! Thanks for replying. Quick counterpoints:
Probably most importantly, what do you view as the purpose of SIAI's publishing papers? Or, if there are multiple purposes, which do you see as the most important?
If in-person conversations (despite all their limitations) are still the much preferred way to discuss things, instead of papers, that's evidence in favor of papers being bad. (It's also evidence of SIAI being effective, which is great, but that isn't the point under discussion.) If papers were a good discussion forum, there'd be fewer conversations and more papers.
If, as you say, the main audience for papers written by SIAI is through SIAI's website and not through the journals themselves, why spend the time and expense and hassle to write them up in journal form? Why not just publish them directly on the site, in (probably) a much more readable format?
The problem with conformity in academia isn't that it's impossible to find someplace to publish. You can always find somewhere, given enough effort. The problem is that a) it restricts the sorts of things you can say, b) restricts you, in many cases, to an awkward way of wording things (which I believe you've written about at http://lesswrong.com/lw/4r1/how_siai_could_publish_in_mainstream_cognitive/), and c) it makes academia a less fertile ground for recruiting people. Those are probably in addition to other problems.
I agree that we care more about prestige within academia than we do about prestige in almost all similarly sized groups. However, it seems fairly clear that we aren't going to have that much prestige in academia anyway, given that the main prestige mechanism is elite university affiliations, and most of us don't have those.
Which people have come through Eliezer and Bostrom's papers? (That isn't a rhetorical question; given how large our community is compared to Dunbar's number, it's likely there is someone and it's also likely I've missed them, and they might be really cool people to know.)
Using my own personal experiences is generalizing from a single dataset, and that's indeed biased in some ways. However, it's very far from generalizing from a single example; it's generalizing from the many thousands of arguments that I've read and accepted at some point in the past. It's still obviously better to use multiple datasets, if you can get them.... but in this case they're difficult to get, because it's hard to know where your friends got all their beliefs.
Sure, it's easier to get people to read a single paper than all of the Sequences. But that's a totally unfair comparison: the Sequences are much, much longer, and it's always easier to read something shorter than something longer. How hard would it be to get someone to read a paper, vs. a single Sequence post of equal length, or a bunch of Sequence posts that sum to an equal length?
If all new areas of research are developed through in-person conversations and mailing lists, that doesn't imply that papers are a good way to do FAI research; it implies that papers are a bad way to do all those other kinds of research. If what you say is true, then my argument equally well applies to those fields too.
Of course, there are some instances of academic moderation being net good rather than net bad. However, to quote one of your earlier arguments, "don't generalize from one example". I'm sure that there are some well-moderated journals, just as I'm sure there are Mafia bosses who are really nice, helpful guys. However, that doesn't imply that hanging out with Mafia bosses is a good idea.
Grab the interest of smart people who won't be grabbed by cheaper methods. This has worked before. Also: Many smart and productive people are extremely busy, and they use "Did they bother to pass peer review?" as a filter for what they choose to read. In addition, many smart people prefer to read papers over blog posts because papers are generally better organized, are more clearly written, helpfully cite related work, etc.
Reduce communication overhead. We don't have time to have a personal conversation with every interested smart person, and blog posts are often too disorganized and ambiguous to help. Though for this, a scholarly AI risk wiki would probably be even better. Luckily, as I say in that post, there isn't much additional cost involved in turning parts of papers into wiki articles, or combining wiki articles into papers.
Grab some prestige and credibility, because this matters to lots of the people we care about.
Show that we're capable of doing serious research. "Eliezer did some work with Marcello that we can never tell you about" and "We wrote some blog posts this month" don't quite show to most people that we can do research.
Be kinda-forced into writing more clearly, and in a way that is more thoroughly connected to the relevant empirical literatures, than we might otherwise be tempted to write.
As I said before, many people find papers more readable than ambiguous blog posts barely connected to the relevant literatures. Eliezer's papers aren't written in a different style than his blog posts, anyway. Also, peer review often improves the final product.
Agree with (a) and somewhat with (b), but we're only writing certain things in paper form. Like I said, the vast majority of FAI work and discussion happens outside papers. I don't know what you mean by (c).
I don't care about something like "average prestige in academia." What I care about is some particular people thinking we have enough credibility to bother reading and engaging with. Many of the people I care about won't bother to check whether the author of an article has elite university affiliation, but will care if we bothered to write up our ideas clearly and with references to related work. The Singularity and Machine Ethics looks much less crankish than Creating Friendly AI, even though none of the authors have elite university affiliation.
Still gathering data, and I haven't gathered permission to share it. I think two people who wouldn't mind you knowing they came to x-risk through "Astronomical Waste" are Nick Beckstead and Jason Gaverick Matheny.
My intended point was that sometimes a paper has summed up the main points from something that Eliezer took 30 blog posts to write when he wrote The Sequences. But obviously you don't have to write a paper to do this, so I drop the point.
Remember: almost all FAI research is not done via papers. In my above list of reasons why SI publishes papers, I didn't even think to mention "to produce original research" (and I won't go back and add it now), though that sometimes happens.
If one journal is poorly moderated, then you jump to another one. Unlike Mafia bosses, a "problem" with journal moderators means "I wasted a few hours communicating with them and making revisions," not "They decided to cut off my thumbs."
This comment and your others in this thread have greatly improved my confidence in SI.
For people who "are extremely busy, and they use 'Did they bother to pass peer review?' as a filter for what they choose to read", which specific examples are you thinking of, and how many of them became nontrivial members of our community, or helped us out in nontrivial ways?
I'm sure there are people who a) are very smart, b) look impressive on paper, who c) we've contacted about FAI research, and d) have said "I'm not going to pay attention, since this isn't peer reviewed" (or some equivalent). However, I think that for most of those people, that isn't their true rejection (http://lesswrong.com/lw/wj/is_that_your_true_rejection/), and they aren't going to take us seriously anyway. But I could be wrong - what evidence do you have in mind?
A lot of your points are criticisms of blog posts, like "a lot of them don't have citations", or "a lot of them are poorly organized". These are true in many cases. However, if SIAI is considering whether to publish some given idea in paper or blog post form, they could simply spend the (fairly small) effort to write a blog post which was well organized and had citations, thereby making these problems moot.
Journal editors obviously aren't perfectly analogous to mob bosses. However, I've heard many stories from academics of authors spending huge amounts of time and effort trying to get stuff published. In the most recent case, which I discussed with a grad student just a few hours ago, it took hundreds of hours, over a full year. If it's usually easy to get around that sort of thing, by just publishing in a different journal, why don't more academics do so?
Your first two questions ask about evidence that I already said I'm not in a position to share yet. I know that's unsatisfying, but... are your priors on my claims being true really very low? Famous scientists, especially, are barraged with a few purported unifications of quantum theory and relativity every month, and "Did they bother to pass peer review?" is a pretty useful heuristic for them. When you visualize a busy academic receiving CFAI from one person, and The Singularity and Machine Ethics from somebody else, which one do you think they're more likely to read and take seriously, and why? (Feel free to take this as a rhetorical question.)
The effort required may be much larger than you think. Eliezer finds it very difficult to do that kind of work, for example. (Which is why his papers still read like long blog posts, and include very few citations. CEV even contains zero citations, despite re-treading ground that has been discussed by philosophers for centuries, as "The Singularity and Machine Ethics" shows.)
And if you've done all that work, then why not also tweak it for use in a scholarly AI risk wiki, and then combine it with a couple other wiki articles into a paper?
Because their career depends on satisfying their advisors, or on getting published in particular journals. SI researchers' careers don't depend on investing hundreds of hours making revisions. If publishing in a certain journal is going to require 30 hours of revisions that don't actually improve the paper in our eyes, then we aren't going to bother publishing in that journal.
Both links go to the same place.
If this is the case, then a significant benefit to Eliezer of trying to get papers published would be that it would be excellent discipline for Eliezer, and would make him an even better scholar.
A benefit that would follow on is that it would establish by example that nobody is above showing their work, acknowledging their debts and being current on the relevant literature. Conceivably Eliezer is such a talented guy that it is of no benefit to him to do these things, but if everyone who thought they were that talented were excused from showing their work and keeping current then progress would slow significantly.
It also avoids reinventing the wheel. No matter how smart Eliezer is, it's always conceivable that someone else thought of something first and expressed it in rigorous detail with proper citations. A proper literature review avoids this waste of valuable research time.
Luke (and his remote research assistants) have this angle covered.
... and if you're going through the effort of writing a blog post that's journal-quality anyway, you might as well go ahead and publish it as a full paper while you're at it.
Clearly the grad student (or more likely, their advisor) thought that getting published in journal X was worth enough status to spend over a year working on it. Fake utility functions and all that. Not even academics are perfectly rational.
I imagine that some academics would be more willing to read a paper from the website if it is also published in a journal, especially if they are writing their own journal articles in which they would prefer to cite a journal article to a paper published on a website.
This. Once our ideas enter the academic memespace, they may continue to live independently there. The difficult part is crossing the boundary, but it only has to be done once.
While it would be incorrect to say that I originally came to these issues only via Bostrom's papers, they certainly made me a lot more interested in the field. Partly because of the prestige of being actually peer-reviewed, but mostly (I think - it was a long time ago) because they were clear, rigorous and self-contained to an extent that few other materials were.
In order to think of some things I do that only have one important purpose, it was necessary to perform the ritual of closing my eyes and thinking about nothing else for a few minutes by the clock.
I plan on assuming things have multiple important purposes and asking for several, e.g. "what do you view as the purposes of X."
There was nothing wrong with what you said, but it is strange how easily the (my?) mind stops questioning after coming up with just one purpose for something someone is doing. In contrast, when justifying one's own behavior, it is easy to think of multiple justifications.
It makes some sense in a story about motivated cognition and tribal arguments. It might be that to criticize, we look mostly for something someone does that has no justification, and invest less in attacking someone along a road that has some defenses. A person being criticized invests in defending against those attacks they know are coming, and does not try and think of all possible weaknesses in their position. There is some advantage in being genuinely blind to one's weaknesses so one can, without lying, be confident in one's own position.
Maybe it is ultimately unimportant to ask what the "purposes" of someone's doing something are, since they will be motivated to justify themselves as much as possible. In this case, asking what the "purpose" is would force them to concentrate on their most persuasive and potentially best argument, even if it will rarely actually be the case that one purpose is a large supermajority of their motivation.
In PDF form. As far as trivial inconveniences go, the jump from html to pdf is nearly as debilitating as a paywall.
If you could publish in web-served HTML as well, that would be super cool. Much more pleasant to read (accessibility, font, reflow), much easier to link, fewer hoops and less bandwidth. If your papers do have HTML versions, they aren't easy to find.
It is incredibly embarrassing to admit being regularly defeated by something being .pdf instead of .html.
It's not "nearly as debilitating as a paywall" for most people. And many people prefer pdf to html, including jsteinhardt and myself.
I had my LaTeX team look into what it would cost to generate well-formatted HTML versions of our papers, and it doesn't seem worth it on the present margin. But at a larger funding level it clearly would be.
I find that surprising. Is this based on a study or something?
If I click a link and it leads to a pdf my reaction is usually something like "Ack Abort ABORT!!!" and then I won't download the pdf unless it's something that I really need - the sort of thing I would also likely pay for. Even then I'd in most cases prefer to see it in any other format.
When I click on a PDF link, my web browser opens the PDF in another tab. It's a quick and easy way to view PDFs without downloading them onto my hard drive. If you have an aversion to downloading PDFs but would still like to read them, then you may want to enable that feature in your web browser.
For some reason the PDF reader plug-in in my browser doesn't work as well as my stand-alone PDF reader (though they're both from Adobe), so I still prefer to download them.
Just FYI, when you click a link and view content, that content has been downloaded onto your hard drive, even if you only see it in a browser window.
The browser PDF readers are even worse than standalone Adobe Acrobat (especially in Chrome, which is my primary web browser).
I'd rather just not support the use of such a broken file format.
My opinion is the opposite, to the point I've set Chrome as my default program for PDFs, even those on my local hard drive. IIRC, it's based on the open source Foxit reader.
Chrome's PDF reader is missing a lot of features. Notably, no page numbers / jump to page.
Really? I thought viewing it in my browser is more akin to streaming a video. But I could easily be wrong about that.
Ah, okay. I use Firefox with an Adobe Acrobat plugin. Not familiar at all with Chrome and other PDF readers.
The difference is minor. FWIW, a better analogy might be downloading a video file to your browser's temp directory and then opening it in VLC to watch while it's still downloading.
Gotcha. Thank you for the correction.
For the record, I much prefer PDF.
Edit: and it shouldn't be inconvenient; at least on my computer, PDFs automatically download and open when you visit a link to a PDF file.
Luke's response excellently explains why many of these remarks don't make sense in the context of SI, but I would like to add that many of these remarks are misrepresentations of the academic process.
On (published) papers as discussion objects: this is why we have preprint servers and discussion lists. Common practice is to discuss draft papers, and to distribute them to interested parties. Publishing is a canonicalization process. It is really the minimum that needs to happen so that discussion does not get mired in draft invalidation problems.
Is this even a surprise? The vast majority of math research doesn't happen at anything resembling the undergrad level. Most undergraduate maths is at least a century old.
Everyone outside a narrow subset of society lacks the ability to give meaningful input on the content of most papers.
As far as I can tell, this was never true. Preventing financial interests from interfering in academic hiring processes is closer to the mark, but that's different from avoiding conformity. It's safe to say that most arguments on academic freedom are little more than political signalling.
I understand it's in vogue to criticize academia (usually under the guise of Traditional Rationality), but the system, as it is, actually exists and actually produces results. It currently has a virtual monopoly on the production of highly educated researchers -- the alternative, becoming an epic-level autodidact, is fraught with failure modes and opportunity costs. Neglecting academia merely because it has its own inconveniences and inefficiencies would be a large mistake.
Great post - I definitely agree with some of your points. I'm very new to LW and haven't even written an introductory post yet, but I'm very impressed with what I've seen overall. I am even flying out to San Francisco tomorrow to discuss joining the newly-renamed CFAR. My background is entirely academic, as I have a PhD in experimental psychology and I'm interested in formalising some of the rationality measures CFAR is looking at. I even had a brief email exchange with Anna Salamon about the usefulness and validity of academic publications.
Here's my take on your points.
Yes, the time lag is huge. It's worth noting that in academia papers aren't used as discussion forums to share ideas rapidly - you read them to see what other labs are doing, and discuss them in journal clubs where graduate students and postdocs can learn about the field and get ideas for their own work. Conference talks and poster presentations are the usual format for discussions between academics from different labs. The academic community as a whole has been very slow to harness the power of the web for the formal discussion of research, although much does take place on science blogs, and now some of the journals are very belatedly starting to host their own discussion forums.
Yes. This is hugely irritating, most academics hate it, and it's the result of a publishing model that's massively outdated. Many governments are now taking steps to ensure that publicly-funded research is accessible to all, though we are by no means there yet.
Yes, papers aren't widely read outside the field they're published in. But they're actually not usually supposed to be - the work and the jargon in most fields is so specialised that just being able to read it and understand what people are talking about takes years of training. At that level you need a specialised vocabulary just to be able to ask the questions. Compare it to the jargon used here at LW - it took me a while to get into it, and most of it isn't completely inaccessible, but it's still tough for a first-timer to break into this community if they don't know much about rationalism.
Not sure about this. Philosophers make philosophical arguments in paper form all the time, and they do it primarily to convince other philosophers. If they want to convince people without the formal training, that's when they'll usually write a book. (As an experimentalist myself, interested primarily in testing rationality measures, this point doesn't really apply to me.)
Yes and no to this one. It's true that writing a paper doesn't get you mobbed by hordes of screaming fangirls, as I know to my detriment! But again it's about who you're trying to convince. You need publications to be taken seriously by other academics, if that's what you want to do. And if you want to be taken seriously by wider society, there are certainly other ways to do that than by becoming an academic. But as I mentioned before, there's a vibrant community on science blogs and people interested in science who consider a paper to be the gold standard of scholarship.
Yes, actually you're supporting the point here I made in my reply to 3. The two tie together quite well in fact.
This is definitely a problem with academia, although I'd consider it more a feature than a bug. Basically the current system is optimised to stop crazy people coming in and messing things up. That is, if you have an idea that's novel but mainstream you'll attract funding; if you have an idea that's way out there you often won't, people will think you're a crackpot, etc. etc. Given the existence of actual crackpots, this is a necessary defence mechanism, although perhaps some knowledge of the base rate of crackpots would be helpful so that academia could make decently Bayesian decisions...
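For what it's worth, the Bayesian calculation gestured at here is easy to sketch. The numbers below are made up purely for illustration (they are my assumptions, not measured base rates); the point is only that making the base rate explicit turns the "crackpot filter" into an ordinary application of Bayes' rule:

```python
# Illustrative numbers only (assumed, not measured): Bayes' rule applied to
# the 'crackpot filter', with the base rate made explicit.
base_rate = 0.9              # assumed prior fraction of submitters who are crackpots
p_wild_given_crackpot = 0.95 # crackpots almost always sound way out there
p_wild_given_sound = 0.30    # sound researchers occasionally do too

# P(idea sounds way out there), by the law of total probability
p_wild = base_rate * p_wild_given_crackpot + (1 - base_rate) * p_wild_given_sound

# P(crackpot | idea sounds way out there), by Bayes' rule
posterior = base_rate * p_wild_given_crackpot / p_wild
print(round(posterior, 3))   # → 0.966
```

With these (invented) numbers, a way-out-there idea is still about 97% likely to come from a crackpot, which is roughly the intuition behind the filter; lower the assumed base rate and the posterior drops accordingly.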
This is true for the moment but as I've said I'm interested in joining CFAR (probably not SIAI though, it just doesn't interest me as much as rationality training) and I'm probably not the only academic who's at least a bit interested. The more research you can pump out the more either organisation will be seen as a viable place for academics to go and work. I'm sure we can bring things to the table that most non-academics cannot, and can learn things ourselves as well.
Disagree strongly. I suspect you're right about FAI, a topic I'm not very familiar with, but other topics - cognitive biases and heuristics, Bayesian decision theory and so on - have been mainstream in academia for some time now. In fact part of my PhD was based on the premise that the brain treats incoming information in a Bayesian way (see also Tom Griffiths' work at Berkeley: http://cocosci.berkeley.edu/tom/). So I don't think it's correct to say that all of the ideas in this community aren't academic in origin; it might be better to say that some of them aren't.
Yes and no. Of course in the abstract you have a conclusion - because the abstract is a tiny mini-paper that you can read to find the answer without having to plough through the whole thing. If you want to check the work, you go and do just that. Papers should be stories - they should flow well and communicate the idea they're trying to sell. But that idea isn't written down before the research is done, otherwise there'd be no point in doing the research. It might be written down in the paper before you reach the evidence that supports it, but that's for narrative reasons. I should note that many papers do start off with a question: "is it A or B? let's find out!" rather than stating outright whether A or B has won the day. But in reality one of them has already, because the universe is as it is, and the experiments you do to test it only reflect that.
Again this has its good and bad points. Only a fraction of submissions getting approval goes back to the 'crackpot filter' I talked about in my response to 7, which also explains why universities are privileged over other organisations. You don't want journals filled with any old rubbish, though some journals are obviously better than others (and in your field it very quickly becomes apparent which ones those are). So how do you keep quality up without filtering? You're right that good papers get rejected and errors slip through - but generally that stuff is caught pretty fast and corrected, and eventually a whole field changes when the errors are brought to its attention. A good example is neuroimaging using fMRI: a decade ago you could publish almost anything really easily, but now your stats have to be pretty much watertight. And you're right that it's unfair that reviewers get to know who you are while, for the most part, you don't get to know who they are - it means they can block your grants or your papers if they don't like your ideas or think you're getting too close to their turf. We are, after all, hierarchical social apes, even those of us who try to be rational.
To summarise, then: I generally agree, but there are nuances beyond what you've stated in your arguments. Papers aren't great for discussion among the community at the time they're published, but the advantage is that anyone can go back, find them and related papers, and build a narrative from them - much more easily (at the moment) than can be done with a blog. The signal-to-noise ratio is generally higher, although a blog like this, with good moderation and smart, curious people, ameliorates many of those problems.
While that is how papers are structured, it is in my experience not how they are written. The abstract is written last, or at least after the results are known, and summarizes the rest of the paper.
The real problem is the difficulty in getting negative results published, which pushes authors to make things appear better than they really are or to hunt for positive aspects.
Yeah, that struck me as a "WTF?" moment. I mean, it may certainly be the case that authors decide their bottom line before coming up with the evidence and arguments for it, but you can't infer that from the fact that the abstract comes first and gives its conclusion -- papers aren't (or at least shouldn't be) written in the order you read them.
I would much prefer that an abstract give the paper's conclusion! I've seen too many abstracts that either leave it off, or leave out the key insight driving that conclusion, forcing me to dig through the paper, and generally defeating the purpose of it.
As someone who has written a few papers myself I fully agree. Just because the conclusion is in the abstract, doesn't mean that one starts out with a conclusion in mind. The whole idea of the scientific method, hypothesis testing, etc. is to look at the data as objectively as possible. Of course, not all scientists follow the best practice, but this is what peer review is for (a word completely missing in the article).
Actually, it would be nice to have an abstract for the post "The Bottom Line" that tells you what the bottom line rule is without having to read the whole thing. It's basically the confirmation bias, right?
Still, the post makes a lot of good points about single-blind review, paywalls, etc. These are real problems that have to be solved to make science more accessible.
I'd take issue with #10. Just because the abstract is read first doesn't mean it's written first, much less written before the investigation that the paper documents.
Discussion of this article on Hacker News
Two small points in favour of publishing:
1) When people do a lit review (especially in such a small field), they have to find your paper and at least pretend to have read it.
2) When you've published enough of these papers, journals will ask you to be reviewers of other papers in the field.
Also, I got into the whole AI issue partially via Nick's writings...
Hopefully there will be some mathematical results coming out at some point; that's the only way to make any real progress toward the stated objective of provably Friendly AI.
Well, at some point somebody's going to have to figure out (formally, mathematically) whatever it is that is meant by "naturalistic" AIXI.
First, "Well, at some point somebody's going to have to figure out" how to partition the issue into manageable chunks. Like, what would be some of the very first steps?
Invent a model of a sequential learning algorithm that has access to its own source code and can rewrite it in some way -- in short, the model should consider itself part of "its world," in contrast to the way AIXI isn't. Giving it an explicit reward channel is probably a bad idea.
Prove that the algorithm learns sequences in some optimal or nearly optimal sense.
Develop approximate algorithms that are actually computable if the resulting algorithm fails to be.
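The first of these steps can be caricatured in a few lines of Python. This is a toy sketch of my own, not anything from the thread, and nothing like AIXI or a real learner; all names are invented. It only shows, concretely, what it means for an agent's policy to be part of its world as editable source code:

```python
# Toy sketch (invented for illustration): an "agent" whose policy exists as
# source text that the agent can read and rewrite, then recompile and run.
POLICY_SOURCE = "def policy(observation):\n    return observation * GAIN\n"

def make_policy(source, gain):
    """Compile a policy function from its own source code."""
    namespace = {"GAIN": gain}
    exec(source, namespace)  # define policy() inside the namespace
    return namespace["policy"]

def rewrite(source, old, new):
    """The agent 'rewrites itself' by editing its policy's source text."""
    return source.replace(old, new)

p1 = make_policy(POLICY_SOURCE, gain=2)
print(p1(3))  # → 6

# Self-modification step: change the policy's functional form, then recompile.
NEW_SOURCE = rewrite(POLICY_SOURCE, "observation * GAIN", "observation + GAIN")
p2 = make_policy(NEW_SOURCE, gain=2)
print(p2(3))  # → 5
```

The hard part the commenter points at is of course absent here: a real model would have to reason about the consequences of such rewrites before making them, which is where the mathematics comes in.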
This is so far outside my domain of expertise that I hesitate to open up any of these black boxes any further.
What would be the first step in doing that? Alternatively, is it really the first step?
You have the essential training in math research. This makes you at least as qualified as probably anyone on the SI staff.
You flatter me. My training is in medical imaging and inverse problems, not logic and machine learning. I've probably spent a total of eight hours thinking about sequence learning algorithms in my life.
In my experience, the process of research does not correspond to what you're describing. First, you find something interesting which could be made into a paper. The first thing you do at that point is speak about it to other researchers, more or less formally (face-to-face discussion, email, formal presentation). Only after getting some feedback do you begin to write a draft. Then you publish this draft on your personal website at the same time you send it to a journal for review, and you get feedback both from the official reviewers and from readers of your site.
So it addresses point 10: the bottom line is presented first in the paper, but it was not written first (hopefully, at least). It is presented first to readers, but you can always choose to skip it. I mainly use abstracts to understand what a paper is about, because titles are not always clear.
It also addresses point 1: sure, the time lag for publishing a paper is huge, but people have access to it very early, and as Lukeprog said, it shows that you are at least willing to submit your ideas to review.
And finally, for point 2, more and more people publish a version of their papers on their websites, at least before publication. More generally, journals have come under heavy attacks, and the current model of expensive journals may not last long.
That said, I also think that there are problems with the way papers are published (points 4, 5 and 11 seem most problematic).
I have been convinced of the invalidity of other arguments by academic papers.
I have also been significantly persuaded by the failure of academic papers to make their case. That is, seeing that a poor argument is held in wide regard is evidence that the advocates of that position have no better arguments.
I too do not remember being convinced of many things by formal academic papers, just a very few things.
My impression was that the SIAI was publishing academic papers primarily for credibility purposes rather than informative purposes, in response to criticisms about not being credible due to not doing things like publishing academic papers.
And here is the upside. Other than the confounding point, studies in academic papers in settled fields tend to answer 'yes' to the following criteria (taken from the Centre for Evidence-Based something-or-other):
Has the evidence been evaluated in any way? If so, how and by whom?
How up-to-date is the evidence?
Second, you could look at the study itself and ask the following general appraisal questions:
Did the study address a clearly focused issue?
Is the study design appropriate to the stated aims?
Are the measurements likely to be valid and reliable?
Are the statistical methods described?
How large was the effect size?
How precise was the estimate of the effect? (Look for the confidence intervals!)
Could there be confounding?
What implications does the study have for your practice? Is it relevant?
Can the results be applied to your organization?
Is the intervention feasible in your organization?
Don't equate print run with readership. I don't know any individuals with academic books, but my university's library is full of them - if a book gets bought by a few hundred university libraries, it's potentially going to be read by far more than a few hundred people.
The “open-access journals” link doesn't go where it should (probably for lack of an http:// in the source code).
Agreed and upvoted, but:
Are any of the points airtight proofs?
Not in the mathematical sense, but it's a difference of degree.
This is actually a really good point that I hadn't considered much. This is strongly in favor of your conclusion (Academic Papers Are A Terrible Discussion Forum). It is perhaps a less strong argument against the usefulness of papers in general. They serve several roles other than discussion, perhaps the best being consolidation.