With the collapse of the FTX Future Fund, I'm guessing this prize/contest is no longer funded.

Humans make errors from transposing or misreading numbers, placing the decimal in the wrong location, etc. A machine mind has a "perfect cache" to hold the numbers, concepts, and steps involved. Math is just the simplest example of this ability. Such machine minds will be able to hold every state and federal law in mind, and could re-draft legislation that is "clean" while porting case law from the old legal references to the new.

*For an example of current tech getting simple math wrong: https://twitter.com/KordingLab/status/1588625510804119553?s=20&t=lIJcvTaFTLK8ZlgEfT-NfA
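As a toy illustration of the "perfect cache" point (my own sketch, not from any of the linked material): ordinary floating-point arithmetic silently misplaces digits, while exact rational arithmetic keeps every digit intact.

```python
from fractions import Fraction

# Binary floating point cannot represent 0.1 or 0.2 exactly, so the
# sum drifts away from the exact decimal answer.
float_sum = 0.1 + 0.2
print(float_sum == 0.3)  # False: float_sum is 0.30000000000000004

# Exact rational arithmetic holds every value precisely, so nothing
# is lost to rounding or misplaced decimals.
exact_sum = Fraction(1, 10) + Fraction(2, 10)
print(exact_sum == Fraction(3, 10))  # True: exactly 3/10
```

The point is only that exactness is a representation choice, not a hardware limit; a system that tokenizes numbers as text has neither guarantee.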

*You can't align something that doesn't really work (its not really working is the current danger). A better question is: can you take a working AI and brainwash it? The unhackable (machine) mind? A good system in the wrong hands, and all that.

**Again, free yourself from the monolithic view of current architectures. 

I don't see how mechanistic and modular are different, and it's only clear (on reading your last point) that your analogy might be flipped. If a car breaks down, you need to troubleshoot it, which means checking that different parts are working correctly. Interpretability (which I think fails as a term because it's very specific to the math of xNNs, not to high-level behaviors) happens at that level. Specific to the argument, though, these are not two different concepts. Going back to the car analogy: a spark plug and a fuel injector are roughly the same size, and we can assume they represent an equivalent number of neurons to function. ML fails by being focused on per-neuron activity; AGI, by contrast, will be based on the larger structures. We would also ask what is wrong, in terms of elephant psychology, with trying to troubleshoot an elephant's mind (how something "lights up" in a region of the brain on a scan is only a hint at the mechanisms we need to focus on; maybe the real problem is that another area didn't light up, etc.).

"Solving" AGI is about knowing enough about the modules of the brain, and then building from scratch. ML research is about sub-module activity, while safety work seems to sit above module activity. None of these are compatible with each other.

First, we can all ignore LeCun because, despite his born-again claims, he wants to solve all the DL problems he has been pushing (and winning awards for) with more DL (absolutely not neuro-symbolic, despite his borrowing of related terms).

Second, I made the case that amazing progress in ML is only good for more ML, and that AGI will come from a different direction. Identifying that direction seems to have been the point of this contest, but the distribution of posts indicates a strong confirmation bias toward more, faster, sooner, danger!

Third, I think most people understand your position, but you have to understand the audience. Even if there is no AGI by 2029, on a long enough time scale we don't just reach AGI; a likely outcome is that intelligent machines exist for tens of thousands of years longer than the human race (and they will be the ones to make first contact with another race of intelligent machines from halfway across the galaxy; and yes, it's interesting to imagine future AIs contemplating the Fermi Paradox long after we have died off).

I see advances in ML over the next few years as no different from advances (over the next few years) in any other technology, versus the hard leap into something right out of science fiction. There is a gap, and a very large one at that. What I have posted for this "prize" (and personally, as a regular course of action in calling out the ability gap) is about looking for milestones in the development of that sci-fi stuff, while giving less weight to flashy demos that don't reflect core methods (only incremental advancement of existing methods).

*Under the current groupthink, risk from ML is going to arrive faster than can be planned for, while AGI risk sneaks up on you because you were looking in the wrong direction. At best, mitigation policies for AGI risk will target ML methods, and won't even apply to AGI fundamentals.

True enough that an AGI won't have the same "emotional loop" as humans, and that could be grounds for risk of some kind. It's not clear that such "feelings" at that level are actually needed, and no one seems concerned about losing such an ability through mind uploading (so perhaps it's just a bias against machines?).

Also true that current levels of compute are enough for an AGI, and you at least hint at a change in architecture. 

However, for the rest of the post, your descriptions are strictly talking about machine learning. It's my continued contention that we don't reach AGI under current paradigms, making such arguments about AGI risk moot.  

Computer vision is just scanning for high probability matches between an area of the image and a set of tokenized segments that have an assigned label. No conceptual understanding of objects or actions in an image. No internal representation, and no expectations for what should "be there" a moment later. And no form of attention to drive focus (area of interest). 
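The matching-without-understanding point above can be caricatured in a few lines (a deliberately toy sketch of my own; the feature vectors and labels are made up, and no real CV system is this simple): a patch gets whatever label belongs to its nearest stored template, with no concept of the object behind the label.

```python
import numpy as np

# Toy "recognition": hypothetical 4-dimensional feature vectors
# standing in for tokenized image segments with assigned labels.
templates = {
    "cat": np.array([0.9, 0.1, 0.8, 0.2]),
    "dog": np.array([0.2, 0.8, 0.1, 0.9]),
}

def label_patch(patch: np.ndarray) -> str:
    # Return the label of the closest template (smallest Euclidean
    # distance). This is a high-probability match and nothing more:
    # no internal representation, no expectation of what comes next.
    return min(templates, key=lambda k: np.linalg.norm(patch - templates[k]))

patch = np.array([0.85, 0.15, 0.75, 0.25])
print(label_patch(patch))  # "cat"
```

The sketch has exactly the limits the paragraph lists: it cannot predict what should "be there" a moment later, and nothing directs its attention to one region over another.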

Canned performances and human control just off camera give the false impression of animal behaviors in what we see today, but there has been little progress since the mid-1980s in behavior-driven research. *Learning to play a video game with only 20 hours of real-time play would be a better measure than trying to understand (and match) animal minds (though good research in the direction of human-level AI will absolutely include that).

Is this the boiled-down version of your argument? You clearly state that GOFAI failed and that classic symbolic AI won't help. You also seem to be in favor of DL methods but, confusingly, say they don't go far enough (and then refer to others who have said as much). But I don't understand the leap to AGI, where you only seem to say we can't get there, without really connecting the two ideas. Is neurosymbolic AI not enough?

I think you're saying DL captures how the brain works without being a model of cognitive functions, but you don't expand on the link between human cognition and AGI. And while I've thrown some shade on the group dynamics here (moderators can surface the posts they like while letting others sink, aka discovery issues), you have to understand that this 'AGI is near' mentality is based on accelerating progress in ML/DL (and thus the safety issues are based on the many problems with ML/DL systems). At best, we can shift attention onto more realistic milestones (though they will call that goalpost moving). If your conclusion is that this AGI stuff is unlikely to ever pan out, then yes, all of your arguments will be rejected.
