All of this AI stuff is a misguided sideshow.

The disconnect between what Machine Learning actually represents and the desired or “hyped” abilities is very real. The flashy, headline-grabbing results of the past decade are certainly a sideshow, but what they hide is not how far these systems are from actual cognition (and especially from sentience); what they obscure is the simple nature of what the xNN models represent. They are a vast probability table with a search function. The impressive outputs are built from the impressive inputs that went into the training process. There is no active mind in the middle, and no form of cognition taking place at the time of training.
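
To make the “probability table with a search function” framing concrete, here is a toy sketch: a bigram lookup table with a sampling step. This is only an illustration of the framing in the paragraph above, not a claim about how transformer-based xNNs are actually implemented, and the tiny corpus is made up.

```python
# Toy illustration of "a vast probability table with a search function".
# NOT how real xNN models work internally; it only makes the framing concrete.
from collections import defaultdict, Counter
import random

def build_table(tokens):
    """The 'probability table': counts of which token follows which."""
    table = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev][nxt] += 1
    return table

def generate(table, start, length=8):
    """The 'search function': look up the last token and sample a successor."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the rug".split()
table = build_table(corpus)
print(generate(table, "the"))
# Whatever comes out was already present, statistically, in what went in.
```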

We should be even more focused on AI.

When focus is fixed on the latest arXiv paper, or this week’s SOTA, the larger picture has already been lost. Rather than trail behind the circus animals, effort should be made to stop and orient oneself. Where is AI today? Where was it 60 years ago? What direction should it be going? Is the circus, the sideshow, really where all time and effort should be invested? What else is out there? If AI research is a journey from Los Angeles to New York, ML is Las Vegas. Sentient-animal research might be Albuquerque or Denver. Early childhood development might be Chicago.

Different aspects of the problem. 

My original post attempted to address this very point. Current efforts to predict arrival times or plot a development path based on compute scaling laws are a game of Russian Roulette. You only get it right after finding the magic bullet, and by then, it’s too late. 
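
For a rough sense of why I call this Russian Roulette, here is a minimal sketch of scaling-law extrapolation using entirely made-up numbers (the compute/loss values and the 1.8 “floor” are hypothetical, chosen only for illustration). Two fits that describe the observed range equally well can imply very different futures once extrapolated.

```python
# Sketch: two scaling-law fits that match the same (synthetic) data,
# yet diverge when extrapolated to larger compute budgets.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # training FLOPs (made up)
loss    = np.array([3.2, 2.8, 2.5, 2.25, 2.05])     # eval loss (made up)

# Fit 1: pure power law, loss = a * C^b
b1, a1 = np.polyfit(np.log(compute), np.log(loss), 1)
power_law = lambda C: np.exp(a1) * C ** b1

# Fit 2: power law with an irreducible floor, loss = floor + a * C^b
floor = 1.8                                          # assumed, not measured
b2, a2 = np.polyfit(np.log(compute), np.log(loss - floor), 1)
floored = lambda C: floor + np.exp(a2) * C ** b2

for C in (1e24, 1e27):
    print(f"C={C:.0e}  pure power law: {power_law(C):.2f}  with floor: {floored(C):.2f}")
# Both fits describe the observed points; only the extrapolations disagree.
```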

Take a leap of faith and assume that advances in Machine Learning are specific to ML alone. Looking for the right kind of markers of progress toward science-fiction levels of AI, the kind that are not just incrementally better than this year’s examples, requires understanding AI itself. Not what is current. Not what came before. An AI that understood basic math would make zero errors on a math test. That current systems score some percentage below that tells us they don’t REALLY understand.
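
To make the zero-error criterion concrete, here is a small sketch (plain Python, with a made-up test generator): a system that actually follows the rules of arithmetic scores exactly 100% on any number of generated problems, by construction. A statistical text generator can be scored the same way, but nothing in its construction guarantees it reaches 100%.

```python
# Sketch of the zero-error criterion: score a solver on randomly generated
# arithmetic problems. A rule-following solver is correct by construction;
# a statistical one merely tends to be correct.
import random
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_problem(rng):
    a, b = rng.randint(0, 999), rng.randint(0, 999)
    op = rng.choice(list(OPS))
    return f"{a} {op} {b}", OPS[op](a, b)

def score(answer_fn, n=10_000, seed=0):
    rng = random.Random(seed)
    correct = sum(answer_fn(q) == truth
                  for q, truth in (make_problem(rng) for _ in range(n)))
    return correct / n

def rule_following_solver(question):
    a, op, b = question.split()
    return OPS[op](int(a), int(b))

print(score(rule_following_solver))  # 1.0 -- zero errors, by construction
```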

It's beyond the scope of EA efforts, but I need to add that what current systems try to mimic are examples of high-level thinking. Actual research doesn’t start with language translation or solving mathematical conjectures. As with the human mind, a great deal of effort goes into building the structure that supports abstract logic, and solving this “critical mass” problem is likely to leave a very tiny footprint.

Scrutiny is mutiny.

While the premise is to have the community’s assumptions “exposed to external scrutiny,” there is a strong correlation between “popular” posts and those that support existing assumptions. I don’t think that kind of selection bias is going to improve anything.

Do you really want your mind changed? If the AI challenge is solved in a manner you didn’t plan for, then tens of millions spent on ‘wrong method’ alignment will have been wasted. It seems like you can literally afford to keep your options open.

5 comments

“…plot a development path based on compute scaling laws are a game of Russian Roulette. You only get it right after finding the magic bullet, and by then, it’s too late.”


“Scrutiny is mutiny.”

Quick note as a mod for the site: I feel this post's ratio of substance to witty/snarky metaphor and wordplay isn't high enough. I downvoted it, and as part of an effort to nudge the site more towards great content, I'm applying a 1 post/comment per day rate limit to your account (in light of this post and other downvoted posts).

We're a little more trigger-ready with Future Fund Worldview Prize entries because the quality seems lower than average for LW. And I don't think that's just because we're resistant to contrary opinions.

“An AI that understood basic math would make zero errors on a math test.”

Do humans who understand basic math make zero errors on math tests? I don't think that's the case. Part of human intelligence involves making all sorts of random errors.

If you think this is a major current problem, how certain are you that a scaled-up Gato won't be able to do all math at the level of a high school student?

Human errors come from transposing or misreading numbers, placing the decimal in the wrong location, etc. The machine mind has a "perfect cache" in which to hold the numbers, concepts, and steps involved. Math is just the simple example of that ability. Such machine minds will be able to hold every state and federal law in mind at once, and could re-draft legislation that is "clean" while porting case law from the old legal references to the new.

* For an example of current tech getting simple math wrong: https://twitter.com/KordingLab/status/1588625510804119553?s=20&t=lIJcvTaFTLK8ZlgEfT-NfA

It's part of human intelligence to make errors. Making errors is a sign of human-like intelligence.

You could imagine an AGI that doesn't make any mistakes, but the presence of errors is no argument against it achieving human-like performance. 

It's interesting that you completely ignored the question about what you believe will be the likely capabilities of near-future technology like Gato.

* You can't align something that doesn't really work (and its not really working is the current danger). A better question: can you take a working AI and brainwash it? The unhackable (machine) mind? A good system in the wrong hands, and all that.

** Again, free yourself from the monolithic view of current architectures.