I've recently been struggling to translate my various AI safety ideas (low impact, truth for AI, Oracles, counterfactuals for value learning, etc.) into formalised versions that can be presented to the machine learning/computer science world in terms they can understand and critique.

What would be useful for me is a collaborator who knows the machine learning world (and has preferably presented papers at conferences) with whom I could co-write papers. They don't need to know much of anything about AI safety - explaining the concepts to people unfamiliar with them is going to be part of the challenge.

The results of this collaboration should be papers like Safely Interruptible Agents, written with Laurent Orseau of DeepMind, and Interactive Inverse Reinforcement Learning, written with Jan Leike of the FHI/DeepMind.

It would be especially useful if the collaborators were located physically close to Oxford (UK).

If you are, or know of, a potential candidate, let me know in the comments.

Cheers!

Hi Stuart. I am not an ML person, and I am not close to Oxford, but I am interested in this type of work (in particular, I went through the FDT paper just two days ago with someone). I do sometimes write papers for ML conferences.

Hi Stuart, I am about to complete a PhD in Machine Learning and would be interested in collaborations like these, but only from October onwards.

I have written and presented papers at Machine Learning conferences, and am quite interested in contributing to concrete AI safety research. My work so far has been on issues in supervised ranking tasks, but I have read a fair bit on reinforcement learning.

I am not close to Oxford. I am currently in Austin, TX, and will be in the Bay Area from October onwards.

Ok! Sending you my email in a PM. Would you mind contacting me in October, if that's ok and you're still interested?

Cheers!

Consider for a moment that this DL thing may soon be obsolete. It is great, the best so far, but still.

The first problem I have with it is the enormous data set needed for training.

The second problem is the inherent uninterpretability of what those weights mean.

So, perhaps something better may be just around the corner.

I might be able to collaborate. I have a master's in computer science and did a thesis on neural networks and object recognition. After that I spent some time at a startup as a data scientist, doing mostly natural-language-related machine learning work, and then got a job as a research scientist at a larger company doing similar applied research.

I also have two published conference papers under my belt, though admittedly they were in pretty obscure conferences.

As a plus, I've also read most of the Sequences, am familiar with the Less Wrong culture, and have spent a fair bit of time thinking about the Friendly/Unfriendly AI problem. I even came up with an attempt at a thought experiment to convince an AI to be friendly.

Alas, I am based near Toronto, Ontario, Canada, so distance might be an issue.