Very recently, I was made aware of a quite remarkable addition to the world of hobbyist AI in the form of Silviu-Marian Udrescu and Max Tegmark's new AI, 'AI-Feynman'. For those already familiar with the software Eureqa and its potential to help humanity as a kind of AI-scientist: it is software that forms quantitative expressions for observed quantities of data. You feed it observations, and it outputs mathematical formulas for the quantity it is attempting to model. Eureqa isn't free, but if you have the means I strongly suggest examining it more closely in your own time regardless; it is sort of that amazing. Richard Carrier has a separate article on AI safety that gives Eureqa more attention and detail than I do here: https://www.richardcarrier.info/archives/3195
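To give a feel for what this data-in, formula-out workflow looks like in practice, here is a minimal sketch using gplearn, a free Python library that does Eureqa-style symbolic regression (gplearn is my stand-in for illustration, not something affiliated with Eureqa): we hide a known formula in noisy data and ask the regressor to rediscover it.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# Generate noisy "observations" of a hidden law: y = x0^2 - x1
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 2 - X[:, 1] + 0.01 * rng.normal(size=200)

# Evolve candidate formulas built from +, -, * until one fits the data
est = SymbolicRegressor(population_size=2000, generations=20,
                        function_set=('add', 'sub', 'mul'),
                        random_state=0)
est.fit(X, y)
print(est._program)  # e.g. sub(mul(X0, X0), X1) -- the recovered formula
```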


Now when I saw this, I had largely assumed that the machine learning community would embrace this kind of AI. Why wouldn't we want an AI that gave us the form of an answer, instead of a black box that simply did it all for us in its opaque function approximations (neural networks being the primary culprit here)? I always regarded the two scenarios as the difference, essentially, between someone giving you the equation you needed on a piece of math homework and leaving you to figure out why it was correct and build your own understanding, versus someone doing the entire homework for you wholesale and handing it in as well, so you couldn't even see why what was done was correct. As it turned out, however, most of the serious professional machine learning community has done work on things much closer to what I perceive as the latter half of that distinction.


Deep learning is simply the rage, and with good reason, because it can do a LOT. We on LessWrong and at MIRI (and indeed many others outside these communities) are of course aware of the problem with this (and understand that problem in a more generalised and more encompassing framework of existential risk analysis), so when I actually read Silviu-Marian Udrescu and Max Tegmark's paper (included here: https://arxiv.org/abs/1905.11481 ), to say it was a breath of fresh air is simply an understatement. AI-Feynman is very much the same kind of program as Eureqa, in that it is an AI-scientist that produces quantitative formulas for its observations. It has, though, thus far proved significantly more effective than Eureqa in certain domains, and what's more, it's the right kind of black box: the super genius in your class who will give you a leg up on your work, but will leave you to make sure you actually know what you're talking about.
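To give a flavour of why it works so well: the paper fits a neural network to the data, then uses that fit to test for simplifying properties, such as the output depending only on the difference of two inputs, so the search can recurse on smaller problems. Here is a toy sketch of that one test, with a simple closed-form function standing in for the fitted network:

```python
import numpy as np

# Toy stand-in for the neural network AI-Feynman fits to the data first.
# The "unknown" law here secretly depends only on the difference x1 - x2.
def fitted_nn(x1, x2):
    return np.sin(x1 - x2) + 0.5 * (x1 - x2) ** 2

rng = np.random.default_rng(0)
x1 = rng.uniform(-2, 2, size=1000)
x2 = rng.uniform(-2, 2, size=1000)
shift = 0.3  # translate both inputs by the same amount

# If f(x1 + a, x2 + a) ~ f(x1, x2), then f depends only on x1 - x2,
# and the formula search can continue on a one-variable problem.
residual = np.max(np.abs(fitted_nn(x1 + shift, x2 + shift) - fitted_nn(x1, x2)))
print(f"translational symmetry residual: {residual:.2e}")  # ~0 -> symmetry found
```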

This program, as a piece of workable, hands-on, hobbyist coding to be casually implemented, is about as easy to use as the simplest Keras program (what's more, as Tegmark puts it, it's free! Check out this nice article for more on hands-on use: https://towardsdatascience.com/ai-feynman-2-0-learning-regression-equations-from-data-3232151bd929 ), though of course time will tell in the end; a rough usage sketch follows at the end of this post. My point in this post is to bring more attention to the significant work being done in this area, in the hope that the more people who learn of it, the more people we might convince to put techniques inspired by deep learning towards programs like Eureqa and AI-Feynman that are more transparent about what they show in the real world.
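For the curious, here is roughly what hands-on use looks like with the open-source aifeynman package, going by its README and the article linked above; exact function names and arguments may differ between versions:

```python
import aifeynman

# Download the bundled example datasets (plain text tables of observations)
aifeynman.get_demos("example_data")

# Arguments: data directory, data file, brute-force time budget in seconds,
# operator set file, max polynomial-fit degree, NN training epochs
aifeynman.run_aifeynman("./example_data/", "example1.txt", 60,
                        "14ops.txt", polyfit_deg=3, NN_epochs=500)
```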

1 comment:

I'm not actually sure how this kind of algorithm could be used to produce safe AI. It is fairly useful in domains that can be described by a simple formula, but physicists have already understood most of the important domains with simple equations and plentiful data. I don't think that even a magic box that could predict formulae from data perfectly, say by brute force, would be that much use. (Assuming no reverse engineering to get general compute.)

I don't see the techniques generalizing to AGI; the understandability of the result comes from the equations being simple, and most real-world phenomena don't have simple, practically calculable equations. (Yes, quantum field theory is fairly simple, and there might be an even simpler and more general theory. But it is of no use for calculating the stock market.) The techniques they use seem not to be terribly deep principles, just the spotting of a bunch of simple special cases they can write custom algorithms for. I can see this being useful for figuring out some obscure physics equation, but I would be moderately surprised if this kind of program benefited physics more than the real Feynman did.