How SIAI could publish in mainstream cognitive science journals

Newest edit: I just realized that by "philosophy journals" in the original post I really meant "cognitive science" journals. (I made the mistake because for me, philosophy basically just is cognitive science.) So please read the below in terms of cognitive science journals, not just philosophy journals.

First edit: Some people apparently read this as an "ultimatum" for SIAI, which was not the intent at all. It's merely an argument for why I think SIAI could benefit from publishing in mainstream journals, and then some advice on how to do it. I'm making recommendations, not "demands" - how silly would that be? Also, it's not like I'm saying SIAI should do a bunch of stuff, and I'm then walking away. For example, I'm actively writing a journal-grade paper on Friendly AI, putting it in the context of existing literature on the subject. And I'd love to do more.

Also, I suspect that many at SIAI already want to be publishing in mainstream philosophy journals. The problem is that it requires a fair amount of resources and know-how to do so (as the below post shows), and that takes time. It doesn't appear that SIAI has anyone whose primary training is in philosophy, because they've (wisely) invested their resources in, you know, math geniuses and people who can bring in funds and so on. Anyhoo...

 

After reading about 80% of the literature in the field of machine ethics, I've realized that the field hasn't quite caught up to where Yudkowsky's thinking was (on the most important issues) circa 2001.*

One cause of this may be the fact that unlike almost every other 10-year research institute or university research program on the planet, SIAI has no publications in established peer-reviewed journals. This fact has at least two effects: (1) SIAI's researchers are able to work more quickly on these problems when they are not spending their time reading hundreds of mostly useless papers from the mainstream literature, and then composing arduously crafted papers that conform to the style and expectations of the mainstream community, citing all the right literature. And: (2) the mainstream community has not caught up with SIAI's advances because SIAI has not shared them with anyone - at least not in their language, in their journals, to their expectations of clarity and style.

However, I suspect that SIAI may now want to devote some resources to doing what must be done to get published in mainstream journals, because (1) many donors do know the difference between conference papers and papers published in mainstream journals, and will see SIAI as more valuable and credible if it is publishing in mainstream journals, (2) SIAI's views will look less cult-like and more academically credible in general if they appear in mainstream journals, and (3) SIAI and LW people will need to spend less time answering dumb questions like "Why not just program the AI to maximize human happiness?" if SIAI publishes short, well-cited, well-argued responses to such questions in the language that everybody else knows how to understand, rather than responding in a way that requires someone to read dozens of blog posts and articles with a complex web of dependencies and an unfamiliar writing/citation style and vocabulary. Also: (4) talking in everyone else's language and in their journals will probably help some really smart people make genuine progress on the Friendly AI problem! Gert-Jan Lokhorst is a really smart guy interested in these issues, but it's not clear that he has read Yudkowsky. Perhaps he's never heard of Yudkowsky, or if he has, he doesn't have time to risk it on something that hasn't even bothered to pass a journal's peer review process. Finally, (5) bringing the arguments to the world in the common language and journals will invite criticism, some of which will be valid and helpful in reformulating SIAI's views and giving us all a better chance of surviving the singularity.

Thus, I share some advice on how to get published in philosophy journals. Much of SIAI's work is technically part of the domain of 'philosophy', even when it looks like math or computer science. Just don't think of Kant or Plato when you think of 'philosophy.' Much of SIAI's work is more appropriate for math and computer science journals, but I'm not familiar with how to get published in those fields, though I suspect the strategy is much the same.

Who am I to share advice? I've never published in a philosophy journal. But that's largely because I haven't tried. (Though I'm beginning on early drafts of some journal-bound papers now.) Besides, what I share below just repeats what published authors have said to me and written online, so you're getting their advice, not particularly mine.

Okay, how to get published in philosophy journals...

The easiest way to get published is to be a respected academic with a long publication history, working at a major university. Barring that, find a co-author or two who fit that description.

Still, that alone won't be enough, and sometimes the other conditions below will be sufficient even if you don't match that profile. After all, people do manage to build up a long publication history starting from a publication history of 0. Here's how they do it:

1. Write in the proper style. Anglophone philosophy has, over the years, developed a particular style marked by clarity and other norms. These norms have been expressed in writing guides for undergraduate philosophy students here, here, and elsewhere. However, such guides are insufficient. Really, the only way to learn the style of Anglophone philosophy is to read hundreds and hundreds of journal articles. You will then have an intuitive sense of what sounds right or wrong, and which structures are right or wrong, and your writing will be much easier because you won't need to look it up in a style guide every time. As an example, Yudkowsky's TDT paper is much closer to the standard style than his CEV paper, but it's still not quite there yet.

2. Use the right vocabulary and categories. Of course, you might write a paper aiming to recommend a new term or new categories, but even then you need to place your arguments in the context of the existing terms and categories first. As an example, consider Eliezer's Coherent Extrapolated Volition paper from 2004. The paper was not written for journals, so I'm not criticizing the paper. I'm explaining how it would need to be written differently if it was intended for journal publication. Let's pretend it is now 2004, and I am co-writing the Coherent Extrapolated Volition paper with Eliezer, and we want to publish it in a mainstream journal.

First, what is Eliezer's topic? It is the topic of how to design the goal system of an AI so that it behaves ethically, or in ways that we want. For a journal paper, our first goal would be to place the project of our paper in the context of the existing literature on that subject. Now, in 2004, it wasn't clear that this field would come to be called by the term "machine ethics" rather than by other terms that were floating around at the time like "artificial morality" (Danielson, 1992) or "computational ethics" (Allen et al., 2000) or "friendly AI" (Yudkowsky, 2001). So, we would probably cite the existing literature on this issue of how to design the goal system of an AI so that it behaves ethically (only about two dozen works existed in 2004) and pick the terms that worked best for our purpose, after making clear what we meant by them.

Next, we would undertake the same considerations for the other concepts we use. For example, Eliezer introduces the term volition:

Suppose you're faced with a choice between two boxes, A and B. One and only one of the boxes contains a diamond. You guess that the box which contains the diamond is box A. It turns out that the diamond is in box B. Your decision will be to take box A. I now apply the term volition to describe the sense in which you may be said to want box B, even though your guess leads you to pick box A.

But here, it's unnecessary to invent a new term, because philosophers talk a lot about this concept, and they already have a well-developed vocabulary for talking about it. Eliezer is making use of the distinction between "means" and "ends," and he's talking about "informed desires" or "informed wants" or "what an agent would want if fully informed." There is a massive and precise literature on this concept, and mainstream journals would expect us to pick one variant of this vocabulary for use in our paper and cite the people who use it, rather than just introducing a brand new term for no good reason.

Next, when Eliezer writes about "extrapolating" human volition, he actually blends two concepts that philosophers keep distinct for good reasons. He blends the concept of distinguishing means from ends with the notion of ends that change in response to the environment or inner processes. To describe the boxes example above, a mainstream philosopher would say that you desired to choose box A as a means, but you desired the diamond in box B as an end. (You were simply mistaken about which box contained the diamond.) Eliezer calls this a type of "extrapolation," but he also refers to something else as "extrapolation":

In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.

This is a very different thing to the mainstream philosopher. This is an actual changing of (or extrapolating of) what one desires as an end, perhaps through a process by which reward signals reinforce certain neural pathways, thus in certain circumstances transforming a desire-as-means into a desire-as-end (Schroeder, 2004). Or, in Yudkowsky's sense, it's an "extrapolation" of what we would desire as an end if our desires-as-ends were transformed through a process that involved not just more information but also changes to our neural structure due to environment (such as growing up together).

This kind of unjustified blending and mixing of concepts - without first putting your work in the context of the current language and then justifying your use of a brand new language - is definitely something that would keep our paper out of mainstream journals. In this case, I think the mainstream language is just fine, so I would simply adopt it, briefly cite some of the people who explain and defend that language, and move forward.

There are other examples, right from the start. Eliezer talks about the "spread" of "extrapolated volition" where a mainstream philosopher would talk about its uncertainty. He talks about "muddle" where a mainstream philosopher would call it inconsistency or incoherence. And so on. If we were writing the CEV paper in 2004 with the intention of publishing in a mainstream journal, we would simply adopt the mainstream language if we found it adequate, or we would first explain ourselves in terms of the mainstream language and then argue in favor of using a different language, before giving other arguments in that brand new language.

Same goes for every other subject. If you're writing on the complexity of wishes, you should probably be citing from, say, OUP's recent edited volume on the very latest affective neuroscience of pleasure and desire, and you should probably know that what you're talking about is called "affective neuroscience," and you should probably know that one of the leading researchers in that field is Kent Berridge, and that he recently co-edited a volume for OUP on exactly the subject you are talking about. (Hint: neuroscience overwhelmingly confirms Eliezer's claims about the complexity of wishes, but the standards of mainstream philosophy expect you to cite actual science on the topic, not just appeal to your readers' intuitions. Or at least, good mainstream philosophy requires you to cite actual science.)

I should also mention there's a huge literature on this "fully informed" business, too. One of the major papers is from David Sobel.

3. Put your paper in context and cite the right literature. Place your work in the context of things already written on the subject. Start with a brief overview of the field or sub-field, citing a few key works. Distinguish a few of the relevant questions from each other, and explain exactly which questions you'll be tackling in this paper, and which ones you will not. Explain how other people have answered those questions, and explain why your paper is needed. Then, go on to give your arguments, along the way explaining why you think your position on the question, or your arguments, are superior to the others that have been given, or valuable in some other way. Cite the literature all along the way.

4. Get feedback from mainstream philosophers. After you've written a pretty good third draft, send it to the philosophers whose work you interact with most thoroughly. If the draft is well-written according to the above rules, they will probably read it. Philosophers get way less attention than scientists, and are usually interested to read anything that engages their work directly. They will probably send you a few comments within a month or two, and may name a few other papers you may want to read. Revise.

5. Submit to the right journals. If you have no mainstream academic publishing history, you may want to start conservatively and submit to some established but less-prestigious journals first. As your mainstream academic publishing record grows, you can feel more confident in submitting to major journals in your field - in the case of CEV, this would be journals like Minds and Machines and IEEE Intelligent Systems and International Journal of Applied Philosophy. After a couple successes there, you might be able to publish in a major general-subject philosophy journal like Journal of Philosophy or Nous. But don't get your hopes up.

Note that journals vary widely in what percentage of submissions they accept, how good the feedback is, and so on. For that, you'll want to keep track of what the community is saying about various journals. This kind of thing is often reported on blogs like Leiter Reports.

6. Remember your strengths and weaknesses. If this process sounds like a lot of work - poring through hundreds of journal articles and books to figure out what the existing language is for each concept you want to employ, and thinking about whether you want to adopt that language or argue for a new one, figuring out which journals to submit to, and so on - you're right! Writing for mainstream journals is a lot of work. It's made much easier these days with online search engines and digital copies of articles, but it's still work, and you have to know how to look for it. You have to know the names of related terms that might bring you to the right articles. You have to know which journals and publishers and people are the "big names." You have to know what the fields and sub-fields of philosophy (and any relevant sciences) are, and how they interact. This is one advantage that someone who is familiar with philosophy has over someone who is not - it may not be that the former is any smarter or more creative than the latter, it's just that the former knows what to look for, and probably already knows what language to use for a great many subjects, so he doesn't have to look it up. Also, if you're going to do this, it is critical that you have some mastery over procrastination.

Poring through the literature, along with other steps in the process of writing a mainstream philosophy paper, is often a godawful slog. And of course it helps if you quite simply enjoy research. That's probably the most important quality you can have. If you're not great with procrastination and you don't enjoy research but you have brilliant and important ideas to publish, team up with somebody who does enjoy research and has a handle on procrastination as your writing partner. You can do the brilliant insight stuff, the other person can do the literature slog and using-the-right-terms-and-categories part.

There is tons more I could say about the subject, but that's at least a start. I hope it's valuable to some people, especially if you think you might want to publish something on a really important subject like existential risks and Friendly AI. Good luck!


* This is not to say the field of machine ethics is without valuable contributions. Far from it!


This deserves to be on the front page. A handful of people at Less Wrong have been saying this for a year or more, but none of them have offered a step-by-step plan as wonderfully detailed as this one. SIAI should indeed publish their ideas in more mainstream journals--as you point out, how else are we really going to put these ideas to the test?

I think this is too meta and SIAI-specific and poorly (quickly) written to be front-page material, but thanks for your support.

nod. The large 'conversational' sections are more 'discussion' directed. The more concise and direct bullet point sections would fit the front page.

Remember your strengths and weaknesses

More particularly, remember your comparative advantage. Which means you get your postgrad students (or a non-university equivalent) to do most of the research work even though you could do it brilliantly yourself.

Exactly. Yudkowsky probably should not be doing a literature slog on machine ethics, especially since 30% of the papers are about how deontological ethics will save the day. :)

So, who is willing to collaborate with SIAI research fellows and do literature slogs for them?

This is a great use for a volunteer network. Those with university access can get on a university PC and have access to JSTOR and so on and just download every single paper that matches certain key terms and email the PDFs to whoever is working on the project. You still need someone who knows the fields, and which keywords to look up, and you need someone to read all the papers and understand them and synthesize them and judge which papers are the most important, but many of these little steps can be done by anybody who knows how to use a search engine.

Part of the research for my Friendly AI paper was to simply download every single paper that exists on about 10 different keywords, skim-read them all, make a list of all the ones that mattered, and then read those ones in more detail.

It's not that researchers just "know" all this stuff already when they write a paper. It's systematic hard work.

Could this work be more distributed? Could different people read the papers related to different concepts and contribute to the presentation of that concept within a paper?

On that part, I'm skeptical. Being able to think like a mainstream philosopher about concepts and write (and plan the writing) in that style is a very particular skill that almost nobody is trained in, just like almost nobody is trained in deontic logic or Presidential speechwriting.

Yeah, that word felt wrong as I typed it, but I couldn't recall the right one. Fixed.

Though I was hoping for a different sort of response from you. As was Louie.

I used to think the singularity stuff was kinda silly and too speculative, but that's because I'd only read little bits of Kurzweil. Then I encountered the Good / Yudkowsky / Chalmers arguments and was persuaded. My interests in philosophy of religion have almost completely disappeared, and working on something other than Friendly AI seems kinda pointless to me now.

So I already have the motivation to do the things I've described above. The problem is that it takes a lot of time, and I'm not independently wealthy. I'm only able to make serious progress on my Friendly AI paper and book because I recently quit my IT job and I get to work on them 8 hours per day. But I can't coast for long. Lots of other people, I suspect, also have this problem of not being independently wealthy. :)

I used to think the singularity stuff was kinda silly and too speculative, but that's because I'd only read little bits of Kurzweil. Then I encountered the Good / Yudkowsky / Chalmers arguments and was persuaded.

This sounds familiar to me, somehow.

So I already have the motivation to do the things I've described above. The problem is that it takes a lot of time, and I'm not independently wealthy. I'm only able to make serious progress on my Friendly AI paper and book because I recently quit my IT job and I get to work on them 8 hours per day. But I can't coast for long. Lots of other people, I suspect, also have this problem of not being independently wealthy. :)

I know I've suggested this before, but the SIAI Visiting Fellows program will take care of living expenses short term, and in your case would likely lead to a long term position as a Research Fellow. It seems that you and they have a lot to offer each other.

I'm not so sure about the Research Fellow thing, but I was accepted into the SIAI Visiting Fellows program a while back, and that's why I quit my job.

My interests in philosophy of religion have almost completely disappeared, and working on something other than Friendly AI seems kinda pointless to me now.

Abrahamic religion seems to be a rather useless time sink to me. Rescuing people from it seems to be worth something - but it usually seems like a lot of effort for few results. A gutter outreach program is messy work as well.

commonsenseatheism.com is starting to seem like a misnomer. Your blog is now mostly about intelligent machines. Time for a new domain?

Target Journals

Where machine ethics aka Friendly AI stuff is usually published:

IEEE Intelligent Systems
Minds and Machines
Ethics and Information Technology
AI & Society

Leading researchers: Michael and Susan Anderson, Colin Allen, Wendell Wallach, Bruce McLaren, James Moor, Eliezer Yudkowsky, Blay Whitby, Steve Torrance, John Sullins, J. Storrs Hall, Thomas Powers

Where AGI stuff is usually published:

Journal of Artificial General Intelligence
International Journal of Machine Consciousness
Artificial Intelligence
Cognitive Systems Research
Topics in Cognitive Science
AI Magazine
IEEE Intelligent Systems
IEEE Transactions on Pattern Analysis and Machine Intelligence
Autonomous Agents and Multi-Agent Systems
...and, well, pretty much any AI journal

Leading researchers: Douglas Lenat, Jurgen Schmidhuber, Stan Franklin, Ben Goertzel, Marcus Hutter, Peter Voss, Pei Wang, Cassio Pennachin, Aaron Sloman, Ron Chrisley, Eliezer Yudkowsky

Where papers about the implications of superintelligence for moral theory could be published:

Ethics
Utilitas
Philosophy and Public Affairs
...and, well, any general-subject philosophy journal: Mind, Nous, etc.

If SIAI were to use timeless decision theory to develop a new variant of decision network, this could be published in an AI journal or in:

Management Science
Decision Analysis

How many of these are easily accessible for those not currently in an academic environment? I ask because it would be helpful if a broader community could read both the SIAI stuff AND the contextual other articles, and because I've suffered slight withdrawal symptoms ever since leaving uni and losing my access to academic journals.

Just as with Eliezer's papers that have been published in academic volumes that almost nobody buys, SIAI could publish the PDFs of the papers on their website for everyone to read, assuming permission from the journal.

Agree or disagree with the following statement?

"After publishing the paper in a philosophy journal so that academics would be allowed to talk about it without losing face, you would have to write a separate essay to explain the ideas to anyone who actually wanted to know them, including those philosophers."

Disagree. Pointlessly difficult and jargon-laden writing is not an inevitable feature of academic philosophical writing, just a common one. The best philosophical writing is as technical as it needs to be but also is clear, vivid, and even fun, and surely this should be the standard to aspire to for any SIAI-sponsored effort to produce papers pitched at the academic philosophy community.

Do you think Nick Bostrom's journal-published work on very similar subjects needs to be rewritten in different language to be understood? I don't, anyway. I personally find the style of mainstream philosophy and science much easier to understand than, say, your CEV paper. But that might be because mainstream philosophy and science is what I spend most of my time reading.

Frankly, I think your arguments can be made more clear and persuasive to a greater number of intelligent people if phrased in the common language.

Just because most philosophy is bad doesn't mean that when you write mainstream philosophy, you have to write badly.

lukeprog:

I personally find the style of mainstream philosophy and science much easier to understand than, say, your CEV paper. But that might be because mainstream philosophy and science is what I spend most of my time reading.

Seconded. I haven't read that many academic philosophy papers, but what I have seen has almost always been remarkably clear and understandable. I'm baffled that Eliezer would make such an extreme statement and actually mean it seriously (and get upvoted for it?!), considering how often he's cited academic philosophers like e.g. Chalmers, Bostrom, Dennett, or Parfit.

(Here of course I have in mind the Anglospheric analytic philosophy; continental philosophy is a horrible mess in comparison.)

Yeah, don't get me started on continental philosophy.

BTW, one of my favorite takedowns of postmodernism is this one.

lukeprog:

BTW, one of my favorite takedowns of postmodernism is this one.

Thanks for the link. I skimmed the article and it seems well written and quite informative; I'll read it in full later.

In my opinion, there are some good insights in postmodernism, but as someone (Eysenck?) said about Freud, what's true in it isn't new, and what's new isn't true. In a sense, postmodernism itself provides perhaps the most fruitful target for a postmodernist analysis (of sorts). What these people say is of little real interest when taken at face value, but some fascinating insight can be obtained by analyzing the social role of them and their intellectual output, their interactions and conflicts with other sorts of intellectuals, and the implicit (conscious or not) meanings of their claims.

what's true in it isn't new, and what's new isn't true

The logical redundancy in this phrase has long bothered me.

If I remember correctly, you're Russian? Those Slavic double negatives must be giving you constant distress, if you're so bothered by (seeming) deficiencies of logic in natural language.

It's not redundant; it's a more witty and elegant way of saying that there are some new things, some true things, but none that are both.

It technically is redundant, though, because it has the form (A=>~B)&(B=>~A), while A=>~B and B=>~A are equivalent to each other. It doesn't need to be symmetrized because the statement was symmetric in the first place, even if it wasn't stated in an obviously symmetric form such as ~(A&B). (Going to have to say I like the redundant version for emphasis, though.)
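For what it's worth, the symmetry being discussed here is easy to check mechanically. Here is a quick sketch in Lean (my own formalization, not anything from the thread) showing that the two halves of the quip are contrapositives of each other, and that either half alone already says "nothing is both new and true":

```lean
-- "What's true isn't new" (A → ¬B) and "what's new isn't true" (B → ¬A)
-- are contrapositives, hence logically equivalent:
example (A B : Prop) : (A → ¬B) ↔ (B → ¬A) :=
  ⟨fun h b a => h a b, fun h a b => h b a⟩

-- And either half is just a curried form of ¬(A ∧ B), "not both":
example (A B : Prop) : (A → ¬B) ↔ ¬(A ∧ B) :=
  ⟨fun h ⟨a, b⟩ => h a b, fun h a b => h ⟨a, b⟩⟩
```

So the statement is technically redundant, as the parent comment says, though the symmetric phrasing arguably earns its keep rhetorically.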

If we're talking about CEV, I agree. It needs rewriting. So does the Intuitive Explanation of Bayesian Reasoning, or any number of other documents produced by earlier Eliezers.

It was the linked Sobel paper which called forth that particular comment by me, if you're wondering. I looked at it in hopes of finding useful details about how to construe an extrapolated volition, and got a couple of pages in before I decided that I wasn't willing to read this paper unless someone had produced a human-readable version of it. (Scanning the rest did not change my mind.)

I'm not sure I want to import FAI ethics into philosophical academia as a field where people can garner prestige by writing papers that are hard to read. Maybe it makes sense to put up a fence around it and declare that if you can't write plain-English papers you shouldn't be writing about FAI.

Yes, the Sobel paper is definitely not an example of how I would write a philosophy paper, and your not reading it was a wise choice. Unfortunately, it is one of the major pieces in the literature on the subject of informed preference. But have you really never read any journal-published philosophy you thought was clear, such that you think one cannot write clearly if writing philosophy for journals? That would be shocking if true.

You will not stop people from garnering prestige by writing papers that are hard to read. You will also not stop people from writing hard-to-read papers on Friendly AI. That subject is already becoming a major field, whether you call it "machine ethics" or "machine morality" or "artificial morality" or "computational ethics" or "friendly AI." (As it turns out, "machine ethics" looks like it will win.)

But one can write clear and easy-to-read papers on Friendly AI. Who knows? Maybe it will even make your work stand out among all the other people writing on the subject, for example those proposing Kantian machines. (The horror!)

Bostrom writes clearly.

But I will suggest for the record that we can probably get away with just ignoring anything that was written for other philosophers rather than for the general public or competent AGI researchers, since those are the only two constituencies we care about. If anyone in philosophy has something to contribute to the real FAI discussion, let them rewrite it in English. I should also note that anything which does not assume analytic naturalism as a matter of course is going to be rejected out of hand because it cannot be conjugate to a computer program composed of ones and zeroes.

Philosophers are not the actual audience. The general public and competent AGI researchers are the actual audience. Now there's some case to be made for trying to communicate with the real audience via a complicated indirect method that involves rewriting things in philosophical jargon to get published in philosophy journals, but we shouldn't overlook that this is not, in fact, the end goal.

Relevance. It's what's for dinner.

The AGI researchers you're talking about are the people who read IEEE Intelligent Systems and Minds and Machines. That's where this kind of work is being published, except for that tiny portion of stuff produced by SIAI and by Ben Goertzel, who publishes in his own online "journal", Dynamical Psychology.

So if you want to communicate with AGI researchers and others working on Friendly AI, then you should write in the language of IEEE Intelligent Systems and Minds and Machines, which is the language I described above.

The papers in Journal of Artificial General Intelligence follow the recommendations given above, too - though as a brand new online journal with little current prestige, it's far less picky about those things than more established journals.

Moreover, if you want to communicate with others about new developments in deontic logic or decision theory for use in FAI, then those audiences are all over the philosophical terrain, in mainstream philosophy journals not focused on AI. (Deontic logic and decision theory discussions are particularly prevalent in journals focused on formal philosophy.)

Also, it's not just a matter of rewriting things in philosophical jargon for the sake of talking to others. Often, the philosophical jargon has settled on a certain vocabulary because it has certain advantages.

Above, I gave the example of making a distinction between "extrapolating" from means to ends, and "extrapolating" current ends to new ends given a process of reflective equilibrium and other mental changes. That's a useful distinction that philosophers make because there are many properties of the first thing not shared by the second, and vice versa. Conflating the two doesn't carve reality at its joints terribly well.

And of course I agree that anything not assuming reductionism must be dismissed.

But then, it seems you are interested in publishing for mainstream academia anyway, right? I know SIAI is pushing pretty hard on that Singularity Hypothesis volume from Springer, for example. And of course publishing in mainstream academia will bring in funds and credibility and so on, as I stated. It's just that, as you said, you don't have many people who can do that kind of thing, and those people are tied up with other things. Yes?

I just realized that maybe I'm confusing things by talking about philosophy journals, when really I mean to include cognitive science journals in general.

But what I said in my original post applies to cognitive science journals as well, it's just that when you're talking about philosophy (e.g. idealized preference theories of value), you place what you're saying in the context of the relevant philosophy, and when you're talking about neuroscience (e.g. the complexity of human values) then you place what you're saying in the context of the relevant neuroscience, and when you're talking about AI (e.g. approaches to AGI) then you place what you're saying in the context of relevant AI research. You can do all three in the same paper.

The kind of philosophy I spend most of my time reading these days is just like that, actually. Epistemology and the Psychology of Human Judgment spends just as much time discussing work done by psychologists like Dawes and Kahneman as it does discussing epistemologists like Goldman and Stich. Philosophy and Neuroscience: A Ruthlessly Reductive Account spends much more time discussing neuroscience than it does philosophy. Three Faces of Desire is split about 60/40 between philosophy and neuroscience. Many of the papers on machine ethics aka Friendly AI are split about 50/50 between philosophy and AI programming. Cognitive science is like this, after all.

In fact, I've been going through the Pennachin & Goertzel volume, reading it as a philosophy of mind book when most people, I guess, are probably considering it a computer science book. Whatever. Cognitive science is probably what I should have said. This is all cognitive science, whether it's slightly more heavy on philosophy or computer science or neuroscience or experimental psychology or whatever. The problem is that philosophy almost just is cognitive science, to me. Cognitive science + logics/maths.

Anyway, sorry if the 'philosophy' word caused any confusion.

You probably should have just titled it "How SIAI could publish in mainstream academic journals".

Maybe. But while I'm pretty familiar with philosophy journals and cognitive science journals, I'm not familiar with some other types of journals, and so I'm not sure whether my advice applies to, for example, math journals.

I'm not sure whether my advice applies to, for example, math journals.

It definitely does.

Above, I gave the example of making a distinction between "extrapolating" from means to ends, and "extrapolating" current ends to new ends given a process of reflective equilibrium and other mental changes. That's a useful distinction that philosophers make because there are many properties of the first thing not shared by the second, and vice versa. Conflating the two doesn't carve reality at its joints terribly well.

Could you write up the relevant distinction, as applied to CEV, perhaps as a discussion post? I don't know the terminology, but expect that given the CEV ambition to get a long way towards the normative stuff, the distinction becomes far less relevant than when you discuss human decision-making.

(Prompted by the reference you made in this comment.)

Did you read the original discussion post to which the linked comment is attached? I go into more detail there.

Yes, I read it, and it's still not clear. Recent discussion made a connection with terminal/instrumental values, but it's not clear in what context they play a role.

I expect I could research this discussion in more detail and figure out what you meant, but that could be avoided and open the issue to a bigger audience if you make, say, a two-paragraph self-contained summary. I wouldn't mention this issue if you didn't attach some significance to it by giving it as an example in a recent comment.

I'm not sure what to say beyond what I said in the post. Which part is unclear?

In any case, it's kind of a moot point, because Eliezer said that it is a useful distinction to make; he just chose not to include it in his CEV paper because that paper doesn't go deep enough into the detailed problems of implementing CEV, where the distinction I made becomes particularly useful.

How much academic philosophy have you personally read?

I've read a fair amount, and I don't find it particularly abstruse. This includes not only quasi-popular books by the highest-status practitioners like Dennett and Chalmers but also ordinary journal papers by professors at my undergraduate institution.

It might be worth taking a look at Chalmers' philosophy of mind anthology if you haven't already.

Agree with your point, though I wouldn't say the extremely diverse set of essays in Chalmers' compilation are a shining example of philosophical clarity. I would recommend something like Epistemology and the Psychology of Human Judgment or Philosophy and Neuroscience: A Ruthlessly Reductive Account.

I wouldn't say the extremely diverse set of essays in Chalmers' compilation are a shining example of philosophical clarity

Oh, certainly not -- it's a sampler, and all levels of clarity and confusion present in the field are represented. I cited it to show the typical writing style of papers in philosophy (over the years, since as I recall it starts with Descartes!).

Disagree. The area of philosophy I'm most familiar with (phil sci) is generally very easy to understand. I'm also not even sure this is a substantial objection. Many areas of learning have specialized vocabularies that take a lot of effort to understand; that's due to the depth of those areas. Math is one example. I actually have more trouble reading papers in math, which is my own field, than I often do in biology (although this may be connected to the fact that I don't read highly technical papers in bio). So even if your claim were true, it isn't at all clear to me why it would be relevant.

As a simple status issue, if you can get the philosophers to take you seriously, it will cause the people who aren't philosophers but who respect philosophy to take you more seriously.

I don't know. But even if you are correct, writing papers for the sole purpose of getting SIAI to be taken seriously, rather than actually explaining anything, sounds like a good idea anyway (assuming that it would work and that actually explaining things would not, that is).

We were discussing future fame and press coverage at the London meetup on Sunday (because a Fast Company journalist was present, no less - and participating in discussion very productively, I might add). I noted from Wikipedia's experience that the tech press are best treated with gunfire - do not talk to them under any circumstances. (There are individuals who are worth talking to, but they're very rare.) In retrospect, Wikipedia should really have gone headlong for the academic-interest press then the mainstream, bypassing the tech press entirely, from the beginning. An important place to apply the rule "taking someone seriously just because they pay you attention may not be a good idea."

What you do is philosophical engineering. Hit the philosophy journals and the tech press may find something more interesting to troll about.

(This ties into this thread.)

I'll just use this page as a storage bay for discussion of how to get published in mainstream academic journals. So, here's an unordered list of advice.

  • No misspellings, no grammatical errors, avoid unnecessary commas, avoid double negatives (duh).
  • Avoid sexist language.
  • Read your paper out loud several times; this will alert you to parts that sound clumsy.
  • Walk the fine line between stating the obvious and failing to explain yourself. For example: Don't write "Rene Descartes, a French philosopher..." but also don't assume that your audience knows what Casati & Varzi's approach to mereology is without explaining it briefly.
  • Do not use big words unnecessarily. Write as simply and clearly as possible. Of course, certain big words exist so that you can avoid writing long phrases again and again.
  • Avoid rhetorical questions.
  • Don't forget your quantifiers! Instead of "Philosophers have long held that..." make sure to write "Many philosophers have long held that..." or "Most philosophers have long held that..."
  • Show late drafts to lots of people; a new set of eyes can see what you cannot.
  • If possible, you may want to publish in a science journal rather than a philosophy journal. Here's why.
  • If you have no academic publications yet, a good way to get started is by writing a book review. Make sure you contact the journal's editor in advance and ask if they'd be interested in a review of the book in question.
  • Journal reviewers are not usually paid. Do not torture them with underdeveloped work.
  • Pay attention to the journal's self description, and read several of their past published works, to get a feel for the type of work they like to accept.
  • Note the difference between a 'substantial article' (>3000 words, makes a new contribution), 'discussion piece' (<3000 words, makes a few brief comments or criticisms of somebody else's work published in that journal), 'critical notice' (>3000 words, usually a book review with substantial new material, usually solicited by the journal). Certain journals publish only one or two of these types of submission.
  • For initial submission, you usually don't need to style the paper for that journal specifically. Just pick a standard format, double-spaced, use a standard serif font, etc. If the paper is accepted, then follow to a T whichever style guide is appropriate for the journal to which you are submitting. This information is usually listed on the journal's website ("submission details", "information for contributors").
  • Most journals will only consider papers that have not been submitted elsewhere. Don't waste the reviewers' and editors' time.
  • Look up journal turnaround times and so on; this will help you decide which journal to submit to. Try here.
  • Read Rowena Murray's Writing for Academic Journals.

If you do say "Many philosophers have long held that...", the natural response from your readers will be: which philosophers? And since when? It would be immensely better to at least include the originator of the principle in the statement.

A journal that receives a large number of papers will probably reject incoming papers based on formatting. Many computer science journals do this.

(2) the mainstream community has not caught up with SIAI's advances because SIAI has not shared them with anyone - at least not in their language, in their journals, to their expectations of clarity and style.

Are you sure that's the problem? The fact that SIAI wants to build Friendly AI has been mentioned in Time magazine and such places. Surely if mainstream machine ethicists read those stories, they would be as curious as anyone else what SIAI means by "Friendly AI", which ought to start a chain of events eventually leading to them learning about SIAI's ideas.

I mean, if you were a mainstream machine ethicist, and you read that or a similar article, wouldn't you be curious enough to not let language/journals/etc. stop you from finding out what this "institute" is talking about?

I'm not so sure.

Academics always have far more to read than they have time to read. Only reading stuff that has passed peer review is a useful filter. They might be curious enough to begin reading, say, the CEV article from 2004, but after going just a short distance and encountering the kinds of terminology issues I described above, they might not keep reading.

I'm imagining that they, for example, realize that Eliezer is proposing what is called an 'idealized preference' theory of value, but does not cite or respond to any of the many objections that have been raised against such theories, and so they doubt that reading further will be enlightening. They're wrong - though it would be nice to hear if Eliezer has a solution to the standard objections to idealized preference theories - but I sympathize with academics who need a strong filter in order to survive, even if it means they'll miss out on a few great things.

They're wrong - though it would be nice to hear if Eliezer has a solution to the standard objections to idealized preference theories

Interesting... Having done a quick search on those keywords, it seems that some of my own objections to Eliezer's theory simply mirror standard objections to idealized preference. But you say "they're wrong" to not pay more attention to Eliezer -- what do you think is Eliezer's advance over the existing idealized preference theories?

Sorry, I just meant that Eliezer's CEV work has lots of value in general, not that it solves outstanding issues in idealized preference theory. Indeed, Eliezer's idealized preference theory is more ambitious than any other idealized preference theory I've ever seen, and probably more problematic because of it. (But, it might be the only thing that will actually make the future not totally suck.)

Anyway, I don't know whether Eliezer's CEV has overcome the standard problems with idealized preference theories. I was one of those people who tried to read the CEV paper a few times and got so confused (by things like the conflation I talked about above) that I didn't keep at it until I fully understood - but at least I get the basic plan being proposed. Frankly, I'd love to work with Eliezer to write a new update to CEV and write it in the mainstream style and publish it in an AI journal - that way I will fully understand it, and so will others.

Hi, this looks like a very good idea to me. People use a whole set of standards to judge how serious an argument is, and this is a biggy.

I'm interested in your four reasons, which I would summarise as:

1) Donors think you're more credible - get more money
2) Generally people think you're more credible - more support and perhaps more confidence from those currently interested
3) Provides good references to answer basic questions - not sure what the deep benefit is here, apart from the desire to stop people being wrong on the internet clashing with having a day job
4) Gives good researchers the tools to collaborate - get more people on the same problem, perhaps from a different angle

I think they're all valid, but they don't make explicit one of the most important benefits, and what really should be the main purpose of any publication:

5) Invite criticism - by talking the right language to the right people, you will spark responses and counter-arguments. At least some of these might raise serious concerns or problems that need to be addressed as soon as possible and might even change the direction of thinking on a few key issues

This is right on target.

It is not that hard to get published academically. Just read some journals and see the dreck mixed with the jewels: you will gain some self-confidence. And much of that dreck is not because of excess academic jargon; on the contrary, much of the lower-quality stuff suffers precisely from a lack of the specific kind of polish required by academic style.

There is a very specific writing style which one must follow unfailingly. (This does not mean one must be unclear.)

At worst, one can publish in lower-prestige journals, though of course one should shoot for the top to start with, and we wouldn't want to publish any low-quality articles.

It is also easy to get good ideas. Just borrow existing ideas from the blogosphere (giving citations, of course) and write them up for the academic audience!

Most journals have blind review, so you don't have to worry about affiliation.

If you are still worried about that, you can partner with academics, which has the additional advantage of bringing someone into the FAI field. You get the benefit of the partner's academic skills. (Ideally, you should tilt the style, content, and target journal towards the partner's field.) The academic also gets a publication for his career, although it can be of lesser value if it is not in his field.

I have to disagree with Eliezer. There is value to bringing in some very smart new researchers, but they need this sort of validation. And many excellent academic articles are easy to read; though on the other hand some assume a large and specific body of specialized background knowledge -- I only wish the FAI field could get to that point!

As to the journal: There are a number of fields where one could get a foot in the door, such as philosophy, cognitive science, decision theory, machine ethics, AI, etc. You just have to sculpt your proposal carefully to get it accepted.

The formalized academic discourse has its problems, but that's where a lot of the smart people are, so let's see if we can get them aboard.

Or maybe the mainstream philosophy journals (isn't that the pre-internet term for a blog?) should get on line, and start using a state-of-the-art interactive discussion system.

It's pretty common these days for people to publish early drafts of their papers on their blog for commentary, and then they submit them to the journals. The philosophy world is highly interactive now, by way of the web.

The quick upvoting suggests people are interested in this. If people have more questions about how to write a publishable philosophy paper, I'm happy to take them.

The advice looks great (and I say that as an academic in a field whose professional structure is not that different from philosophy).

Frankly, I think people are upvoting it so much not only because it's a very good post, but because they really wish that SIAI would take your advice and do all this stuff.

Yes. Considering how hard SIAI is trying to get taken seriously by the mainstream academic community, you'd think that they would already be doing this.

Simply put, we don't have anyone who can except Carl Shulman and myself. I'm busy writing a book. I think Carl actually is doing papers but I'm not sure this is his highest-priority subject.

It might be that hiring a real professor to supervise would enable us to get this sort of work out of a postdoc going through the Visiting Fellows program, but that itself is not easy.

I applaud the attention to detail. Really keeps things in perspective.

Who knows, one of the articles produced here might be featured in the New York Times. Maybe even the same time some controversy or other begins. I look forward to it! :)

The New York Times wants a different style of writing. But it helps if you've done the peer-reviewed journal style first. :)

Excellent post, but I'm surprised, browsing through the comments, that nobody has mentioned what seems to me like the obvious trade-off: cash $$$$.

Either you will have to pay money to publish your article (if the journal is open access, and your article is accepted), or you'll have to refrain from publishing the article elsewhere (i.e., making it available to the public). Otherwise, how would the journals make any money? But due diligence... these are the journals you mentioned, with their qualities w/r/t open access or author fees:

IEEE Intelligent Systems -- not open access, http://www.computer.org/portal/web/csdl/abs/mags/ex/2011/01/mex201101toc.htm

Minds and Machines -- not open access, http://www.springerlink.com/content/p16821r7663k/

Ethics and Information Technology -- not open access, http://www.springerlink.com/content/j77j724u4784/

AI & Society -- not open access, http://www.springerlink.com/content/h732536p1k16/

Journal of Artificial General Intelligence -- open access and seemingly no author fees! may be the way to go... http://journal.agi-network.org/

International Journal of Machine Consciousness -- not open access, http://www.worldscinet.com/ijmc/

Artificial Intelligence -- not open access... do not be fooled by the "open access options available", you will have to pay, http://www.elsevier.com/wps/find/journaldescription.cws_home/505601/description#description

Cognitive Systems Research -- not open access, http://www.elsevier.com/wps/find/journaldescription.cws_home/620288/description#description

Topics in Cognitive Science -- looks like an odd journal, "there will be no such thing as an unsolicited topiCS paper. If you have an idea for a topic then submit a proposal for a topic via the topiCS Editorial Manager website. ... to soliciting papers that we know fit the charter of the journal from researchers who we know have something to say. "

AI Magazine -- not open access, http://www.aaai.org/ojs/index.php/aimagazine/issue/view/192/showToc

IEEE Transactions on Pattern Analysis and Machine Intelligence -- not open access, http://www.computer.org/portal/web/csdl/abs/trans/tp/2011/04/ttp201104toc.htm

Autonomous Agents and Multi-Agent Systems -- not open access, http://www.springer.com/computer/ai/journal/10458

Ethics -- not open access, http://www.jstor.org/action/showPublication?journalCode=ethics

Utilitas -- seems to be open access! http://journals.cambridge.org/action/displayIssue?jid=UTI&tab=currentissue

Philosophy and Public Affairs -- not open access, http://onlinelibrary.wiley.com/journal/10.1111/%28ISSN%291088-4963

By the way, the easiest way to tell if a journal is open access is to try to read some of the recent articles.

I'd suggest writing the papers up using standard terminology as lukeprog suggested--I agree with his assessment of the lay-lucidity of Eliezer's CEV and TDT papers, although his other writing is clearly really good. And then I would also submit to http://philpapers.org/. And then maybe submit to one of the few open access journals above or elsewhere.

Maybe I'm biased though, because it just makes me sad to think that at an institution where people don't even care about tenure, people would still be worrying about where to submit papers. Sometimes I think that every time somebody strategizes over which journal submission will lead to the most prestige, somewhere, somehow, a kitten dies. That was never supposed to be the point.

Maybe I'm biased though, because it just makes me sad to think that at an institution where people don't even care about tenure, people would still be worrying about where to submit papers. Sometimes I think that every time somebody strategizes over which journal submission will lead to the most prestige, somewhere, somehow, a kitten dies. That was never supposed to be the point.

If they didn't care about prestige they wouldn't be publishing in a journal at all. Finding the most prestigious is just the natural extension.

To me, the submission fees are trivial if you've already decided to devote months of your time writing and researching a paper or two.

I would reserve the front page for things directionally related to rationality rather than strategically advisable paths for the SIAI to take.