This deserves to be on the front page. A handful of people at Less Wrong have been saying this for a year or more, but none of them have offered a step-by-step plan as wonderfully detailed as this one. SIAI should indeed publish their ideas in more mainstream journals--as you point out, how else are we really going to put these ideas to the test?
Remember your strengths and weaknesses
More particularly, remember your comparative advantage, which means you get your postgrad students (or a non-university equivalent) to do most of the research work even though you could do it brilliantly yourself.
Exactly. Yudkowsky probably should not be doing a literature slog on machine ethics, especially since 30% of the papers are about how deontological ethics will save the day. :)
This is a great use for a volunteer network. Those with university access can get on a university PC with access to JSTOR and the like, download every paper that matches certain key terms, and email the PDFs to whoever is working on the project. You still need someone who knows the fields and which keywords to look up, and someone to read all the papers, understand them, synthesize them, and judge which papers are the most important, but many of these little steps can be done by anybody who knows how to use a search engine.
Part of the research for my Friendly AI paper was simply to download every paper that exists on about 10 different keywords, skim-read them all, make a list of all the ones that mattered, and then read those in more detail.
It's not that researchers just "know" all this stuff already when they write a paper. It's systematic hard work.
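For anyone who wants to mechanize the first pass of that slog, here is a minimal sketch of the "download everything matching these terms" step. It assumes arXiv's public API purely as a stand-in source (JSTOR and most philosophy databases need institutional access and offer no comparable public endpoint), and the keyword list is hypothetical; reading, synthesizing, and judging the results still takes a person who knows the field.

```python
# A minimal sketch of the keyword sweep described above, using arXiv's
# public API as a stand-in source. The keyword list is hypothetical.
import pathlib
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
KEYWORDS = ["machine ethics", "artificial morality", "friendly AI"]  # hypothetical

def search(keyword, max_results=50):
    """Yield (title, pdf_url) pairs for papers matching the keyword."""
    query = urllib.parse.urlencode({
        "search_query": f'all:"{keyword}"',
        "start": 0,
        "max_results": max_results,
    })
    with urllib.request.urlopen(f"http://export.arxiv.org/api/query?{query}") as resp:
        feed = ET.fromstring(resp.read())
    for entry in feed.findall(f"{ATOM}entry"):
        title = " ".join(entry.findtext(f"{ATOM}title", "").split())
        pdf_url = next((link.get("href") for link in entry.findall(f"{ATOM}link")
                        if link.get("title") == "pdf"), None)
        if pdf_url:
            yield title, pdf_url

def download(pdf_url, dest_dir="papers"):
    """Save one PDF locally; emailing it to collaborators is left to taste."""
    pathlib.Path(dest_dir).mkdir(exist_ok=True)
    filename = pdf_url.rstrip("/").split("/")[-1] + ".pdf"
    urllib.request.urlretrieve(pdf_url, f"{dest_dir}/{filename}")

if __name__ == "__main__":
    for keyword in KEYWORDS:
        for title, pdf_url in search(keyword):
            print(f"[{keyword}] {title}")
            download(pdf_url)
```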
On that part, I'm skeptical. Being able to think like a mainstream philosopher about concepts and write (and plan the writing) in that style is a very particular skill that almost nobody is trained in, just like almost nobody is trained in deontic logic or Presidential speechwriting.
I used to think the singularity stuff was kinda silly and too speculative, but that's because I'd only read little bits of Kurzweil. Then I encountered the Good / Yudkowsky / Chalmers arguments and was persuaded. My interests in philosophy of religion have almost completely disappeared, and working on something other than Friendly AI seems kinda pointless to me now.
So I already have the motivation to do the things I've described above. The problem is that it takes a lot of time, and I'm not independently wealthy. I'm only able to make serious progress on my Friendly AI paper and book because I recently quit my IT job and I get to work on them 8 hours per day. But I can't coast for long. Lots of other people, I suspect, also have this problem of not being independently wealthy. :)
I'm not so sure about the Research Fellow thing, but I was accepted into the SIAI Visiting Fellows program a while back, and that's why I quit my job.
Target Journals
Where machine ethics (a.k.a. Friendly AI) work is usually published:
IEEE Intelligent Systems
Minds and Machines
Ethics and Information Technology
AI & Society
Leading researchers: Michael and Susan Anderson, Colin Allen, Wendell Wallach, Bruce McLaren, James Moor, Eliezer Yudkowsky, Blay Whitby, Steve Torrance, John Sullins, J. Storrs Hall, Thomas Powers
Where AGI stuff is usually published:
Journal of Artificial General Intelligence
International Journal of Machine Consciousness
Artificial Intelligence
Cognitive Systems Research
Topics in Cognitive Science
AI Magazine
IEEE Intelligent Systems
IEEE Transactions on Pattern Analysis and Machine Intelligence
Autonomous Agents and Multi-Agent Systems
...and, well, pretty much any AI journal
Leading researchers: Douglas Lenat, Jurgen Schmidhuber, Stan Franklin, Ben Goertzel, Marcus Hutter, Peter Voss, Pei Wang, Cassio Pennachin, Aaron Sloman, Ron Chrisley, Eliezer Yudkowsky
Where papers about the implications of superintelligence for moral theory could be published:
Ethics
Utilitas
Philosophy and Public Affairs
...and, well, any general-subject philosophy journal: Mind, Nous, etc.
If SIAI were to use timeless decision theory to develop a...
Agree or disagree with the following statement?
"After publishing the paper in a philosophy journal so that academics would be allowed to talk about it without losing face, you would have to write a separate essay to explain the ideas to anyone who actually wanted to know them, including those philosophers."
Disagree. Pointlessly difficult and jargon-laden writing is not an inevitable feature of academic philosophical writing, just a common one. The best philosophical writing is as technical as it needs to be but is also clear, vivid, and even fun; surely that is the standard any SIAI-sponsored effort to produce papers for the academic philosophy community should aspire to.
Do you think Nick Bostrom's journal-published work on very similar subjects needs to be rewritten in different language to be understood? I don't, anyway. I personally find the style of mainstream philosophy and science much easier to understand than, say, your CEV paper. But that might be because mainstream philosophy and science is what I spend most of my time reading.
Frankly, I think your arguments can be made more clear and persuasive to a greater number of intelligent people if phrased in the common language.
Just because most philosophy is bad doesn't mean that when you write mainstream philosophy, you have to write badly.
lukeprog:
I personally find the style of mainstream philosophy and science much easier to understand than, say, your CEV paper. But that might be because mainstream philosophy and science is what I spend most of my time reading.
Seconded. I haven't read that many academic philosophy papers, but what I have seen has almost always been remarkably clear and understandable. I'm baffled that Eliezer would make such an extreme statement and actually mean it seriously (and get upvoted for it?!), considering how often he's cited academic philosophers such as Chalmers, Bostrom, Dennett, or Parfit.
(Here of course I have in mind the Anglospheric analytic philosophy; continental philosophy is a horrible mess in comparison.)
It technically is redundant, though, because it has the form (A=>~B)&(B=>~A), while A=>~B and B=>~A are equivalent to each other. It doesn't need to be symmetrized because the statement was symmetric in the first place, even if it wasn't stated in an obviously symmetric form such as ~(A&B). (Going to have to say I like the redundant version for emphasis, though.)
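For completeness, the equivalence in question is just contraposition; a short derivation in standard propositional logic (nothing assumed beyond the quoted form):

```latex
\begin{align*}
(A \Rightarrow \lnot B)
  &\equiv \lnot A \lor \lnot B  && \text{material implication} \\
  &\equiv \lnot (A \land B)     && \text{De Morgan} \\
  &\equiv \lnot B \lor \lnot A  && \text{De Morgan, commutativity} \\
  &\equiv (B \Rightarrow \lnot A) && \text{material implication}
\end{align*}
```

So the conjunction (A=>~B)&(B=>~A) collapses to ~(A&B): each conjunct already carries the whole content, which is exactly the redundancy being pointed out.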
Yes, the Sobel paper is definitely not an example of how I would write a philosophy paper, and your not reading it was a wise choice. Unfortunately, it is one of the major pieces in the literature on the subject of informed preference. But have you really never read any journal-published philosophy you thought was clear, such that you think one cannot write clearly if writing philosophy for journals? That would be shocking if true.
You will not stop people from garnering prestige by writing papers that are hard to read. You will also not stop people from writing hard-to-read papers on Friendly AI. That subject is already becoming a major field, whether you call it "machine ethics" or "machine morality" or "artificial morality" or "computational ethics" or "friendly AI." (As it turns out, "machine ethics" looks like it will win.)
But one can write clear and easy-to-read papers on Friendly AI. Who knows? Maybe it will even make your work stand out among all the other people writing on the subject, for example those proposing Kantian machines. (The horror!)
The AGI researchers you're talking about are the people who read IEEE Intelligent Systems and Minds and Machines. That's where this kind of work is being published, except for that tiny portion of stuff produced by SIAI and by Ben Goertzel, who publishes in his own online "journal", Dynamical Psychology.
So if you want to communicate with AGI researchers and others working on Friendly AI, then you should write in the language of IEEE Intelligent Systems and Minds and Machines, which is the language I described above.
The papers in Journal of Artificial General Intelligence follow the recommendations given above, too - though as a brand new online journal with little current prestige, it's far less picky about those things than more established journals.
Moreover, if you want to communicate with others about new developments in deontic logic or decision theory for use in FAI, then those audiences are all over the philosophical terrain, in mainstream philosophy journals not focused on AI. (Deontic logic and decision theory discussions are particularly prevalent in journals focused on formal philosophy.)
Also, it's not just a matter of rewriting things in philosophical jargon for ...
I just realized that maybe I'm confusing things by talking about philosophy journals, when really I mean to include cognitive science journals in general.
But what I said in my original post applies to cognitive science journals as well. It's just that when you're talking about philosophy (e.g. idealized preference theories of value), you place what you're saying in the context of the relevant philosophy; when you're talking about neuroscience (e.g. the complexity of human values), you place it in the context of the relevant neuroscience; and when you're talking about AI (e.g. approaches to AGI), you place it in the context of the relevant AI research. You can do all three in the same paper.
The kind of philosophy I spend most of my time reading these days is just like that, actually. Epistemology and the Psychology of Human Judgment spends just as much time discussing work done by psychologists like Dawes and Kahneman as it does discussing epistemologists like Goldman and Stich. Philosophy and Neuroscience: A Ruthlessly Reductive Account spends much more time discussing neuroscience than it does philosophy. Three Faces of Desire is split about...
How much academic philosophy have you personally read?
I've read a fair amount, and I don't find it particularly abstruse. This includes not only quasi-popular books by the highest-status practitioners like Dennett and Chalmers but also ordinary journal papers by professors at my undergraduate institution.
It might be worth taking a look at Chalmers' philosophy of mind anthology if you haven't already.
I'll just use this page as a storage bay for discussion of how to get published in mainstream academic journals. So, here's an unordered list of advice.
Hi, this looks like a very good idea to me. People use a whole set of standards to judge how serious an argument is, and this is a biggy.
I'm interested in your four reasons, which I would summarise as: 1) Donors think you're more credible - get more money; 2) Generally people think you're more credible - more support and perhaps more confidence from those currently interested; 3) Provides good references to answer basic questions - not sure what the deep benefit is here, apart from the desire to stop people being wrong on the internet clashing with having a da...
This is right on target.
It is not that hard to get published academically. Just read some journals and see the dreck mixed with the jewels: you will gain some self-confidence. And much of that dreck is not because of excess academic jargon; on the contrary, much of the lower-quality stuff suffers precisely from a lack of the specific kind of polish required by academic style.
There is a very specific writing style which one must follow unfailingly. (This does not mean one must be unclear.)
At worst, one can publish in lower-prestige journals, though of...
(2) the mainstream community has not caught up with SIAI's advances because SIAI has not shared them with anyone - at least not in their language, in their journals, to their expectations of clarity and style.
Are you sure that's the problem? The fact that SIAI wants to build Friendly AI has been mentioned in Time magazine and such places. Surely if mainstream machine ethicists read those stories, they would be as curious as anyone else what SIAI means by "Friendly AI", which ought to start a chain of events eventually leading to them learning ...
I'm not so sure.
Academics always have far more to read than they have time to read. Only reading stuff that has passed peer review is a useful filter. They might be curious enough to begin reading, say, the CEV article from 2004, but after going just a short distance and encountering the kinds of terminology issues I described above, they might not keep reading.
I'm imagining that they, for example, realize that Eliezer is proposing what is called an 'idealized preference' theory of value, but does not cite or respond to any of the many objections that have been raised against such theories, and so they doubt that reading further will be enlightening. They're wrong - though it would be nice to hear if Eliezer has a solution to the standard objections to idealized preference theories - but I sympathize with academics who need a strong filter in order to survive, even if it means they'll miss out on a few great things.
I applaud the attention to detail. Really keeps things in perspective.
Who knows, one of the articles produced here might be featured in the New York Times. Maybe even the same time some controversy or other begins. I look forward to it! :)
Or maybe the mainstream philosophy journals (isn't that the pre-internet term for a blog?) should get online and start using a state-of-the-art interactive discussion system.
The quick upvoting suggests people are interested in this. If people have more questions about how to write a publishable philosophy paper, I'm happy to take them.
The advice looks great (and I say that as an academic in a field whose professional structure is not that different from philosophy).
Frankly, I think people are upvoting it so much not only because it's a very good post, but because they really wish that SIAI would take your advice and do all this stuff.
Simply put, we don't have anyone who can do this except Carl Shulman and myself. I'm busy writing a book. I think Carl actually is doing papers, but I'm not sure this is his highest-priority subject.
It might be that hiring a real professor to supervise would enable us to get this sort of work out of a postdoc going through the Visiting Fellows program, but that itself is not easy.
Excellent post, but I'm surprised, browsing through the comments, that nobody has mentioned what seems to me like the obvious trade-off: cash $$$$.
Either you will have to pay money to publish your article (if the journal is open access, and your article is accepted), or you'll have to refrain from publishing the article elsewhere (i.e., making it available to the public). Otherwise, how would the journals make any money? But due diligence... these are the journals you mentioned, with their policies regarding open access and author fees:
IEEE Intelligent System...
Newest edit: I just realized that by "philosophy journals" in the original post I really meant "cognitive science" journals. (I made the mistake because for me, philosophy basically just is cognitive science.) So please read the below in terms of cognitive science journals, not just philosophy journals.
First edit: Some people apparently read this as an "ultimatum" for SIAI, which was not the intent at all. It's merely an argument for why I think SIAI could benefit from publishing in mainstream journals, and then some advice on how to do it. I'm making recommendations, not "demands" - how silly would that be? Also, it's not like I'm saying SIAI should do a bunch of stuff, and I'm then walking away. For example, I'm actively writing a journal-grade paper on Friendly AI, putting it in the context of existing literature on the subject. And I'd love to do more.
Also, I suspect that many at SIAI already want to be publishing in mainstream philosophy journals. The problem is that it requires a fair amount of resources and know-how to do so (as the below post shows), and that takes time. It doesn't appear that SIAI has anyone whose primary training is in philosophy, because they've (wisely) invested their resources in, you know, math geniuses and people who can bring in funds and so on. Anyhoo...
After reading about 80% of the literature in the field of machine ethics, I've realized that the field hasn't quite caught up to where Yudkowsky's thinking was (on the most important issues) circa 2001.*
One cause of this may be the fact that unlike almost every other 10-year research institute or university research program on the planet, SIAI has no publications in established peer-reviewed journals. This fact has at least two effects: (1) SIAI's researchers are able to work more quickly on these problems when they are not spending their time reading hundreds of mostly useless papers from the mainstream literature, and then composing arduously crafted papers that conform to the style and expectations of the mainstream community, citing all the right literature. And: (2) the mainstream community has not caught up with SIAI's advances because SIAI has not shared them with anyone - at least not in their language, in their journals, to their expectations of clarity and style.
However, I suspect that SIAI may now want to devote some resources to doing what must be done to get published in mainstream journals, because (1) many donors do know the difference between conference papers and papers published in mainstream journals, and will see SIAI as more valuable and credible if they are publishing in mainstream journals, (2) SIAI's views will look less cult-like and more academically credible in general if they publish in mainstream journals, and (3) SIAI and LW people will need to spend less time answering dumb questions like "Why not just program the AI to maximize human happiness?" if SIAI publishes short, well-cited, well-argued responses to such questions in the language that everybody else knows how to understand, rather than responding to those questions in a way that requires someone to read dozens of blog posts and articles with a complex web of dependencies and an unfamiliar writing/citation style and vocabulary. Also: (4) Talking in everyone else's language and in their journals will probably help some really smart people make genuine progress on the Friendly AI problem! Gert-Jan Lokhorst is a really smart guy interested in these issues, but it's not clear that he has read Yudkowsky. Perhaps he's never heard of Yudkowsky, or if he has, he doesn't have time to risk on something that hasn't even bothered to pass a journal's peer review process. Finally, bringing the arguments to the world in the common language and journals will (5) invite criticism, some of which will be valid and helpful in reformulating SIAI's views and giving us all a better chance of surviving the singularity.
Thus, I share some advice on how to get published in philosophy journals. Much of SIAI's work is technically part of the domain of 'philosophy', even when it looks like math or computer science. Just don't think of Kant or Plato when you think of 'philosophy.' Much of SIAI's work is more appropriate for math and computer science journals, but I'm not familiar with how to get published in those fields, though I suspect the strategy is much the same.
Who am I to share advice? I've never published in a philosophy journal. But a large cause of that is simply that I haven't tried. (Though I'm beginning early drafts of some journal-bound papers now.) Besides, what I share with you below just repeats what published authors have told me in person and online, so you're getting their advice, not particularly mine.
Okay, how to get published in philosophy journals...
The easiest way to get published is to be a respected academic with a long publication history, working at a major university. Barring that, find a co-author or two who fit that description.
Still, that won't be enough, and sometimes the other conditions below will be sufficient if you don't match that profile. After all, people do manage to build up a long publication history starting with a publication history of 0. Here's how they do it:
1. Write in the proper style. Anglophone philosophy has, over the years, developed a particular style marked by clarity and other norms. These norms have been expressed in writing guides for undergraduate philosophy students here, here, and elsewhere. However, such guides are insufficient. Really, the only way to learn the style of Anglophone philosophy is to read hundreds and hundreds of journal articles. You will then have an intuitive sense of what sounds right or wrong, and which structures are right or wrong, and your writing will be much easier because you won't need to look it up in a style guide every time. As an example, Yudkowsky's TDT paper is much closer to the standard style than his CEV paper, but it's still not quite there yet.
2. Use the right vocabulary and categories. Of course, you might write a paper aiming to recommend a new term or new categories, but even then you need to place your arguments in the context of the existing terms and categories first. As an example, consider Eliezer's Coherent Extrapolated Volition paper from 2004. The paper was not written for journals, so I'm not criticizing the paper. I'm explaining how it would need to be written differently if it was intended for journal publication. Let's pretend it is now 2004, and I am co-writing the Coherent Extrapolated Volition paper with Eliezer, and we want to publish it in a mainstream journal.
First, what is Eliezer's topic? It is the topic of how to design the goal system of an AI so that it behaves ethically, or in ways that we want. For a journal paper, our first goal would be to place the project of our paper in the context of the existing literature on that subject. Now, in 2004, it wasn't clear that this field would come to be called by the term "machine ethics" rather than by other terms that were floating around at the time like "artificial morality" (Danielson, 1992) or "computational ethics" (Allen et al., 2000) or "friendly AI" (Yudkowsky, 2001). So, we would probably cite the existing literature on this issue of how to design the goal system of an AI so that it behaves ethically (only about two dozen works existed in 2004) and pick the terms that worked best for our purpose, after making clear what we meant by them.
Next, we would undertake the same considerations for the other concepts we use. For example, Eliezer introduces the term volition:
But here, it's unnecessary to invent a new term, because philosophers talk a lot about this concept, and they already have a well-developed vocabulary for talking about it. Eliezer is making use of the distinction between "means" and "ends," and he's talking about "informed desires" or "informed wants" or "what an agent would want if fully informed." There is a massive and precise literature on this concept, and mainstream journals would expect us to pick one variant of this vocabulary for use in our paper and cite the people who use it, rather than just introducing a brand new term for no good reason.
Next, when Eliezer writes about "extrapolating" human volition, he actually blends two concepts that philosophers keep distinct for good reasons. He blends the concept of distinguishing means from ends with the notion of ends that change in response to the environment or inner processes. To describe the boxes example above, a mainstream philosopher would say that you desired to choose box A as a means, but you desired the diamond in box B as an end. (You were simply mistaken about which box contained the diamond.) Eliezer calls this a type of "extrapolation," but he also refers to something else as "extrapolation":
This is a very different thing to the mainstream philosopher. This is an actual changing of (or extrapolating of) what one desires as an end, perhaps through a process by which reward signals reinforce certain neural pathways, thus in certain circumstances transforming a desire-as-means into a desire-as-end (Schroeder, 2004). Or, in Yudkowsky's sense, it's an "extrapolation" of what we would desire as an end if our desires-as-ends were transformed through a process that involved not just more information but also changes to our neural structure due to environment (such as growing up together).
This kind of unjustified blending and mixing of concepts - without first putting your work in the context of the current language and then justifying your use of a brand new language - is definitely something that would keep our paper out of mainstream journals. In this case, I think the mainstream language is just fine, so I would simply adopt it, briefly cite some of the people who explain and defend that language, and move forward.
There are other examples, right from the start. Eliezer talks about the "spread" of "extrapolated volition" where a mainstream philosopher would talk about its uncertainty. He talks about "muddle" where a mainstream philosopher would call it inconsistency or incoherence. And so on. If we were writing the CEV paper in 2004 with the intention of publishing in a mainstream journal, we would simply adopt the mainstream language if we found it adequate, or we would first explain ourselves in terms of the mainstream language and then argue in favor of using a different language, before giving other arguments in that brand new language.
Same goes for every other subject. If you're writing on the complexity of wishes, you should probably be citing from, say, OUP's recent edited volume on the very latest affective neuroscience of pleasure and desire, and you should probably know that what you're talking about is called "affective neuroscience," and you should probably know that one of the leading researchers in that field is Kent Berridge, and that he recently co-edited a volume for OUP on exactly the subject you are talking about. (Hint: neuroscience overwhelmingly confirms Eliezer's claims about the complexity of wishes, but the standards of mainstream philosophy expect you to cite actual science on the topic, not just appeal to your readers' intuitions. Or at least, good mainstream philosophy requires you to cite actual science.)
I should also mention there's a huge literature on this "fully informed" business, too. One of the major papers is from David Sobel.
3. Put your paper in context and cite the right literature. Place your work in the context of things already written on the subject. Start with a brief overview of the field or sub-field, citing a few key works. Distinguish a few of the relevant questions from each other, and explain exactly which questions you'll be tackling in this paper, and which ones you will not. Explain how other people have answered those questions, and explain why your paper is needed. Then, go on to give your arguments, along the way explaining why you think your position on the question, or your arguments, are superior to the others that have been given, or valuable in some other way. Cite the literature all along the way.
4. Get feedback from mainstream philosophers. After you've written a pretty good third draft, send it to the philosophers whose work you interact with most thoroughly. If the draft is well-written according to the above rules, they will probably read it. Philosophers get way less attention than scientists, and are usually interested to read anything that engages their work directly. They will probably send you a few comments within a month or two, and may name a few other papers you may want to read. Revise.
5. Submit to the right journals. If you have no mainstream academic publishing history, you may want to start conservatively and submit to some established but less-prestigious journals first. As your mainstream academic publishing record grows, you can feel more confident in submitting to major journals in your field - in the case of CEV, this would be journals like Minds and Machines and IEEE Intelligent Systems and International Journal of Applied Philosophy. After a couple successes there, you might be able to publish in a major general-subject philosophy journal like Journal of Philosophy or Nous. But don't get your hopes up.
Note that journals vary widely in what percentage of submissions they accept, how good the feedback is, and so on. For that, you'll want to keep track of what the community is saying about various journals. This kind of thing is often reported on blogs like Leiter Reports.
6. Remember your strengths and weaknesses. If this process sounds like a lot of work - poring through hundreds of journal articles and books to figure out what the existing language is for each concept you want to employ, thinking about whether you want to adopt that language or argue for a new one, figuring out which journals to submit to, and so on - you're right! Writing for mainstream journals is a lot of work. It's made much easier these days by online search engines and digital copies of articles, but it's still work, and you have to know how to look for it. You have to know the names of related terms that might bring you to the right articles. You have to know which journals and publishers and people are the "big names." You have to know what the fields and sub-fields of philosophy (and any relevant sciences) are, and how they interact. This is one advantage that someone who is familiar with philosophy has over someone who is not - it may not be that the former is any smarter or more creative than the latter; it's just that the former knows what to look for, and probably already knows what language to use for a great many subjects, so he doesn't have to look it up. Also, if you're going to do this, it is critical that you have some mastery over procrastination.
Poring through the literature, along with other steps in the process of writing a mainstream philosophy paper, is often a godawful slog. And of course it helps if you quite simply enjoy research. That's probably the most important quality you can have. If you're not great with procrastination and you don't enjoy research but you have brilliant and important ideas to publish, team up with somebody who does enjoy research and has a handle on procrastination as your writing partner. You can do the brilliant insight stuff, the other person can do the literature slog and using-the-right-terms-and-categories part.
There is tons more I could say about the subject, but that's at least a start. I hope it's valuable to some people, especially if you think you might want to publish something on a really important subject like existential risks and Friendly AI. Good luck!
* This is not to say the field of machine ethics is without valuable contributions. Far from it!