Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment?
This has been discussed several times in the past; see:

* "Have You Tried Hiring People?", a LW post
  * It talks about this ACX comment thread
* Greg Coulbourn's "Mega-money for mega-smart people to solve AGI Alignment"
  * Google Docs document
  * LW comment about the document
* Short discussion on MIRI's Facebook page
* Thread on the EA forum
* Comments arguing for Terence Tao specifically
  * Comment 1
  * Comment 2, with replies worth reading

But I'm not aware of anyone who has actually even tried to do something like this. Of special interest is this comment by Eliezer about Tao:

> We'd absolutely pay him if he showed up and said he wanted to work on the problem. Every time I've asked about trying anything like this, all the advisors claim that you cannot pay people at the Terry Tao level to work on problems that don't interest them. We have already extensively verified that it doesn't particularly work for eg university professors.

So if anyone has contacted him or people like him (rather than regular college professors), I'd like to know how that went. Otherwise, especially for people who aren't merely pessimistic but measure success probability in log-odds, sending that email is a low-cost action that we should definitely try.

So you (whoever is reading this) have until June 23rd to convince me that I shouldn't send this to his @math.ucla.edu address:

Edit: I've been informed that someone with much better chances of success will be trying to contact him soon, so the priority now is to convince Demis Hassabis (see further below) and to find other similarly talented people.

Title: Have you considered working on AI alignment?

Body:

> It is not once but twice that I have heard leaders of AI research orgs say they want you to work on AI alignment. Demis Hassabis said on a podcast that when we near AGI (i.e. when we no longer have time) he would want to assemble a team with you on it but, I quote, "I didn't quite tell him the full plan of
To the extent that maternal instincts are some actual small, concrete set of things, you are probably making two somewhat opposite mistakes here: imagining something that doesn't truly run on maternal instinct, and assuming that mothers actually care about their babies (for a certain definition of "care").
You say that mothers aren't actually "endlessly selfless, forever attuned to every cry, governed by an unshakable instinct to nurture", that there are "identities beyond 'mum' to be kept alive", and that there are nights when that instinct disappears. But that's because you feel exhaustion, and also care about things other than your children. We don't need to create things like that. If "maternal instincts" are or...