Very recently, I was made aware of a quite remarkable addition to the world of hobbyist AI: Silviu-Marian Udrescu and Max Tegmark's new AI, 'AI-Feynman'. For those already familiar with the software Eureqa, and its potential to help humanity as a kind of AI scientist, it is a program that forms quantitative expressions for observed data. You feed it observations, and it outputs mathematical formulas for the quantity you are attempting to model. Eureqa isn't free, but if you have the means I strongly suggest examining it more closely in your own time regardless; it's sort of that amazing. Richard Carrier has a separate article, primarily on AI safety, that gives Eureqa more attention and detail than I do here: https://www.richardcarrier.info/archives/3195
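To make that "observations in, formula out" idea concrete, here is a minimal, self-invented sketch of symbolic regression. This has nothing to do with Eureqa's or AI-Feynman's actual algorithms, which search vastly larger expression spaces with far cleverer methods; it just brute-forces a tiny hand-picked hypothesis space of formulas and keeps whichever best fits the data.

```python
import math
import random

# Toy illustration of symbolic regression: search a space of candidate
# formulas and keep the one that best reproduces the observed data.

def candidate_formulas():
    # A tiny, hand-picked hypothesis space. Real systems search a far
    # larger space of expression trees.
    return [
        ("x + y", lambda x, y: x + y),
        ("x - y", lambda x, y: x - y),
        ("x * y", lambda x, y: x * y),
        ("x / y", lambda x, y: x / y),
        ("x**2 + y", lambda x, y: x ** 2 + y),
        ("sqrt(x*y)", lambda x, y: math.sqrt(abs(x * y))),
    ]

def fit(observations):
    """Return the candidate formula with the lowest mean squared error."""
    best_name, best_err = None, float("inf")
    for name, f in candidate_formulas():
        err = sum((f(x, y) - z) ** 2 for x, y, z in observations) / len(observations)
        if err < best_err:
            best_name, best_err = name, err
    return best_name, best_err

# "Observations" secretly generated by z = x * y, plus a little noise.
random.seed(0)
data = [(x, y, x * y + random.gauss(0, 1e-3))
        for x in [1.0, 2.0, 3.0] for y in [1.5, 2.5, 4.0]]

name, err = fit(data)
print(name)  # the search recovers "x * y"
```

The output is a human-readable formula rather than an opaque set of weights, which is the whole appeal: you can check it, interpret it, and learn from it.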
When I saw this, I had largely assumed that the machine learning community would embrace this kind of AI. Why wouldn't we want an AI that gave us the form of an answer, instead of a black box that simply did it all for us through opaque function approximation (neural networks being the primary culprit here)? I always regarded the two scenarios as, essentially, the difference between someone giving you the equation you needed on a piece of math homework and leaving you to figure out why it was correct, thereby building your own understanding, versus someone doing the entire homework for you wholesale and handing it in as well, so you couldn't even see why what was done was correct. As it turned out, however, most of the serious professional machine learning community has done work much closer to what I perceive as the latter.
Deep learning is simply all the rage, and with good reason, because it can do a LOT. We on LessWrong and at MIRI (and indeed many others outside of these communities) are of course aware of the problem with this, and understand it within a more generalised but also more incorporative framework of existential risk analysis. So when I actually read Silviu-Marian Udrescu and Max Tegmark's paper (included here: https://arxiv.org/abs/1905.11481 ), to say it was a breath of fresh air is simply an understatement. AI-Feynman is very much the same kind of program as Eureqa, in that it is an AI scientist that produces quantitative formulas for its observations. It has, though, thus far proved significantly more effective than Eureqa in certain domains, and what's more, it's the right kind of black box: it's the super-genius in your class who will give you a leg up on your work, but will leave you to actually make sure you now know what you're talking about.
This program, as a piece of workable, hands-on, hobbyist code to be casually implemented, is about as easy to use as the simplest Keras program (what's more, as Tegmark puts it, it's free! Check out this nice article for more on hands-on use: https://towardsdatascience.com/ai-feynman-2-0-learning-regression-equations-from-data-3232151bd929 ), though of course time will tell in the end. My point in this post was to bring more attention to the significant work being done in this area, in the hope that the more people who learn of it, the more people we might convince to put techniques inspired by deep learning towards programs like Eureqa and AI-Feynman, which are more transparent in what they show us about the real world.