One idea is to try to differentiate the NYC 'product' from the Berkeley 'product'. For example, the advantage of Vancouver over the Bay Area is that you can live in Vancouver if you're Canadian. The kernel project attempted to differentiate itself through e.g. a manifesto. In the ...
I think I see how X-and-only-X is a problem if we are using a classifier to furnish a 0/1 reward. However, it seems like less of a problem if we're using a regression model to furnish a floating-point reward that attempts to describe _all_ of our values (not just our values as they pertain to the co...
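To make the contrast concrete, here is a hypothetical toy sketch (all names and weights are illustrative, not from the comment): a 0/1 classifier reward only checks whether a single property X holds, while a floating-point regression reward scores an output against several valued properties at once, so side effects beyond X can still lower the score.

```python
# Hypothetical toy contrast between a 0/1 classifier reward and a
# floating-point regression reward over multiple valued properties.

def classifier_reward(features):
    # 0/1 reward: passes if and only if the single property X holds
    return 1.0 if features["does_x"] else 0.0

def regression_reward(features, weights):
    # floating-point reward: a weighted sum over *all* valued properties,
    # so deficits on properties other than X still reduce the score
    return sum(weights[k] * features[k] for k in weights)

features = {"does_x": 1.0, "honest": 1.0, "low_side_effects": 0.0}
weights = {"does_x": 0.5, "honest": 0.3, "low_side_effects": 0.2}

print(classifier_reward(features))            # 1.0 — blind to the side effects
print(regression_reward(features, weights))   # 0.8 — penalized for them
```

The point of the sketch is only that a scalar reward can register value dimensions a binary X-detector never sees; it says nothing about whether such a reward model can be learned reliably.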
Oh, that makes sense.
Thanks for the links! (That goes for Wei and Paul too.)
> a group of researchers are beginning to think that in a broader sense "adversarial vulnerability" and "amount of test set error" are inextricably linked in a deep and foundational way - that they may not even be two separate problems.
I'd e...
Well, is there anything that can be done to stop the x-risk? If there is, maybe tell the people who are best positioned to stop it. Re: the AGI thing, is it a scheme that could plausibly be made friendly? If yes, maybe tell people who are working on friendliness/work on making it friendly yourself. ...
> deep learning is not unusually susceptible to adversarial examples
FWIW, this claim doesn't match my intuition, and googling around, I wasn't able to quickly find any papers or blog posts supporting it. [This](http://karpathy.github.io/2015/03/30/breaking-convnets/) 2015 blog post discusses how d...
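For readers unfamiliar with the mechanics being debated, here is a minimal FGSM-style sketch on a toy logistic-regression model (the model, weights, and epsilon are all hypothetical, chosen for illustration; real adversarial-example work targets deep networks): the input is perturbed in the direction of the sign of the loss gradient, which flips the prediction with a small change to the input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fixed toy "classifier": logistic regression on 2-D inputs
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    # For logistic loss, d(loss)/dx = (p - y) * w; perturb by its sign
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.2])
y = 1.0                      # true label
x_adv = fgsm(x, y, eps=0.5)

print(predict(x), predict(x_adv))   # flips from ~0.87 to ~0.35
```

Note that nothing here is specific to deep learning: the linear model is just as attackable, which is part of why some argue adversarial vulnerability is a general property of high-dimensional classifiers rather than a quirk of deep nets.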
Well, I haven't seen even a blog post's worth of effort put into doing something like what I suggested. So an extreme level of pessimism doesn't seem especially well-justified to me. It seems relatively common for a task to be hard in one framework while being easy in another.
Standard CFAR advice:...
Well, [here](https://www.datasciencecentral.com/profiles/blogs/what-comes-after-deep-learning) is a list of paradigms that might overtake deep learning. This list could probably be expanded, e.g. by researching various attempts to integrate deep learning with Bayesian reasoning, create more interpre...
I like the idea of optimizing for career growth & AI safety separately. However, I'm not sure the difference between "capabilities research" and "safety research" is as clear-cut as Critch makes it sound.
Consider the problem of making ML more data-efficient. Superficially, this is "capabilities re...
X-risks are one cause area where "raising awareness" is probably a bad idea.