I saw this thread complaining about the state of peer review in machine learning. Has anyone thought about trying to design a better peer review system, then creating a new ML conference around it and also adding in a safety emphasis?

rohinmshah (6mo): Yes (though just the peer review, not the safety emphasis). I can send you my thoughts about it if you'd like; email me at <my LW username> at gmail. I thought about the differential development point and came away thinking it would be net positive, and convinced a few other people as well, even if it's just modifying peer review without having safety researchers run the conference.
John_Maxwell (6mo): Cool! I guess another way of thinking about this is not a safety emphasis so much as a forecasting emphasis. Reminds me of our previous discussion here [https://www.lesswrong.com/posts/rBkZvbGDQZhEymReM/forum-participation-as-a-research-strategy#Rhj8mK2F8C4pgxdkT]. If someone could invent new scientific institutions that reward accurate forecasts about scientific progress, that could be really helpful for knowing how AI will progress and for building consensus regarding which approaches are safe or unsafe.

+1, that's basically the story I have in mind. I think of it as less about forecasting and more about understanding deep learning and how it works, but I think it serves basically the same purpose: it's helpful for knowing how AI will progress and for building consensus about what's safe or unsafe.

AI Alignment Open Thread August 2019

by habryka · 1 min read · 4th Aug 2019 · 96 comments

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is an experiment in having an Open Thread dedicated to AI Alignment discussion, hopefully enabling researchers and upcoming researchers to ask small questions they are confused about, share very-early-stage ideas, and have lower-key discussions.