NeurIPS (formerly NIPS) is a top conference in machine learning and computational neuroscience. The recently published call for papers for NeurIPS 2020 includes the following (which did not appear in previous years):

In order to provide a balanced perspective, authors are required to include a statement of the potential broader impact of their work, including its ethical aspects and future societal consequences. Authors should take care to discuss both positive and negative outcomes.

[...]

Evaluation criteria: [...] Submissions will also be considered on ethical grounds. Regardless of scientific quality or contribution, a submission may be rejected for ethical considerations, including methods, applications, or data that create or reinforce unfair bias or that have a primary purpose of harm or injury.

5 comments:

Looks promising to me. Technological development isn't by default good.

That said, I agree with the other commenters that this could fail in various ways. For one thing, if a policy like this is introduced without guidance on how to analyze societal implications, different authors will interpret the requirement in wildly different ways. ML researchers don't by default have the training to analyze societal consequences. (Well, who does? We should develop better tools here.)

Agreed. I think of this as sending a signal that at least a limited concern for safety is important. I'm sure we'll see a bunch of papers with impact sections that aren't great, but over time the requirement stands some chance of normalizing concern for the safety and ethics of ML work in the field, such that safety work becomes more widely accepted as valuable. So even without much guidance or strong evaluative criteria, this seems like a small win to me: at worst, some papers just gain extra fluff sections their authors wrote to pretend to care about safety rather than ignoring it completely.

Goodhart's Law poised to strike.

Just like with renaming NIPS to NeurIPS, this is wokeness gone wild.

So is this a good thing or a bad thing? Is wokeness a stepping stone towards some future enlightened morality, one that will help bring AI/ML along to that destination (in which case perhaps we should excuse its current excesses), or will it ultimately collapse while doing a lot of damage in the meantime (like communism)?