Extensive comments at https://www.reddit.com/r/MachineLearning/comments/8di9nk/d_ai_researchers_are_making_more_than_1_million/ https://news.ycombinator.com/item?id=16880276 https://www.reddit.com/r/reinforcementlearning/comments/8di9yt/ai_researchers_are_making_more_than_1_million/
Their 2016 Form 990 just surfaced: http://www.guidestar.org/FinDocuments/2016/810/861/2016-810861541-0eb61629-9.pdf It reports $13m income in 2016 and, at year's end, assets of ~$2.6m.
It was partly to point out that you can get self-modification hazards with a substantially less complex setup than your proposal, given a little hand-engineering of the agents; since none of the AI safety gridworld problems could be said to be rigorously solved, there's no need for more realistic s…
Have you seen "AI Safety Gridworlds", Leike et al 2017?
Doesn't the speed prior diverge quite rapidly from the universal prior? There are many short programs of length _n_ which take a long time to compute their final result - up to BB(_n_) timesteps, specifically...
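To make the divergence concrete, here is one common way the two priors are written (a sketch following Solomonoff's universal prior and Schmidhuber's speed prior; exact normalizations vary across presentations, and the time penalty below is an approximation, not the precise phase-based definition):

```latex
% Universal (Solomonoff) prior: programs weighted by length only
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% Speed prior (approximate form): additionally penalize compute time t(p),
% so a length-n program that runs for ~BB(n) steps contributes
% ~2^{-n}/\mathrm{BB}(n) rather than 2^{-n}
S(x) \;\approx\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} \cdot \frac{1}{t(p)}
```

Since BB(_n_) grows faster than any computable function, the weight such a program loses under _S_ relative to _M_ is uncomputably large, which is the rapid divergence in question.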
Speaking of Billboard: ["What Makes Popular Culture Popular? Product Features and Optimal Differentiation in Music"](https://www.dropbox.com/s/wtbbyodpyzau3o4/2017-askin.pdf?dl=0) Askin & Mauskapf 2017:
> In this article, we propose a new explanation for why certain cultural products outperform the…
No idea; some Google Scholar checks turn up nothing.
David Abel has provided a fairly nice summary set of notes: https://cs.brown.edu/people/dabel/blog/posts/misc/aaai_2018.pdf