I created a post for the Future Fund AI World Prize (AGI xRisk Competition) only days after seeing it mentioned on Twitter, but I don't really follow people on the alignment side of the aisle, so of course that post fell flat. When I finally decided to make the formal post here, according to the challenge rules, it was yet another knock against me to realize I may have already poked the bear while participating in the MIRI Visible Thoughts Project. That project is the reason I already had an account on LessWrong, but my post covering the experience called its reasoning into question. Short version: the project collected data explicitly formatted to cover for current training-model failures rather than being geared toward future architectures. The long version is here: Slaying the ML Dragon

While I am only slightly critical of in-group versus open debate, in terms of being able to accept "outside ideas," recent events have highlighted how that dynamic plays out within the AI community. Industry insiders are talking down to or sniping at one another. The actual commentary, and I would argue the actual dialogue on ideas, happens on the fringe, below the attention of those who have already established themselves (through action or pedigree). The idea of training a new generation of researchers on safety and ethics is all for naught when those attempting to engage are derided or chastised.

*It is not without irony that the Alignment Forum's Welcome page states that "participation in the Forum is limited to deeply established researchers," and later notes that only about 100 people can post there!

If "urgency" is based on the assumption that AGI could arrive within a decade, then any effort directed at "the entry point" is already moot. The snowball has already begun its descent down the hillside, as one might say.

While I mentioned this in my original post, I must emphasize the importance of keeping your head up and eyes open. Understand what the real milestones are by understanding the real challenges involved. Then, instead of sitting back and waiting for bold announcements from tech giants or presentations at large conferences, look for the people "on the fringe" who are at least saying the right things.
