nlholdem

That's a very well-argued point. I have precisely the opposite intuition, of course, but I can't deny the strength of your argument. I tend to be less interested in tasks that are well-bounded than in those that are open-ended and uncertain. I agree that much of what we call intelligent might be much simpler. But then I think common-sense reasoning is much harder. I think maybe I'll try to draw up my own list of tasks for AGI :)
I think you need to be sceptical about what kind of reasoning these systems are actually doing. My contention is that they are all shallow. A system that is trained on near-infinite training sets can look indistinguishable from one that can do deep reasoning, but is in fact just pattern-matching. Or might be. This paper is very pertinent I think:
https://arxiv.org/abs/2205.11502
Short summary: train a deep network on examples from a logical reasoning task, obtain near-perfect validation error, but find it hasn't learnt the task at all! It has learned arbitrary statistical properties of the dataset, completely unrelated to the task. Which is what deep learning does by default. That isn't going to go away...
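The failure mode I'm describing can be shown in miniature. This is just an illustrative sketch, not the paper's actual setup: a toy XOR "reasoning" task where the training and validation data share a dataset artifact (the second input never varies), so a model that latches onto the statistical shortcut looks perfect on held-out validation data while having learned nothing about the task. All names here (`make_skewed`, `make_full`, etc.) are hypothetical.

```python
import random

random.seed(0)

# Toy "reasoning" task: label = XOR(a, b).
# Dataset artifact: in the training/validation distribution, b is always 0,
# so the spurious rule "label = a" fits the data perfectly.
def make_skewed(n):
    return [((a, 0), a ^ 0) for a in (random.randint(0, 1) for _ in range(n))]

def make_full(n):
    """Data that actually exercises the task: both inputs vary."""
    out = []
    for _ in range(n):
        a, b = random.randint(0, 1), random.randint(0, 1)
        out.append(((a, b), a ^ b))
    return out

# A "model" that learned the shortcut rather than XOR: predict label = a.
def shortcut(x):
    return x[0]

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

val = make_skewed(1000)   # held-out data from the same skewed distribution
test = make_full(1000)    # de-biased data

print(accuracy(shortcut, val))   # 1.0 -- looks like it solved the task
print(accuracy(shortcut, test))  # ~0.5 -- chance level; it learned nothing about XOR
```

The point of the sketch: validation accuracy from the same distribution as training can't distinguish deep reasoning from shallow pattern-matching on a statistical artifact.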
I agree it's an attempt to poke Elon, although I suspect he knew that he'd never take the bet. Also agree that anything involving real-world robotics in unknown environments is massively more difficult. Having said that, the criteria from Effective Altruism here:
for any human who can do any job, there is a computer program (not necessarily the same one every time) that can do the same job for $25/hr or less
do say 'any job', and we often seem to forget how many jobs require insane levels of dexterity and dealing with the unknown. We could think about the difficulty of building a robot plasterer or car mechanic for example, and see...
Quite possibly. I just meant: you can't conclude from the bet that AGI is even more imminent.
Genuinely, I would love to hear people's thoughts on Marcus's 5 conditions, and hear their reasoning. For me, the condition of a robot cook that can work in pretty much anyone's kitchen is a severe test, and a long way from current capabilities.
"If your only goal is “curing cancer”, and you lack humans’ instinct for the thousands of other important considerations, a relatively easy solution might be to hack into a nuclear base, launch all of its missiles, and kill everyone in the world."
One problem I have with these scenarios is that they always rely on some lethal means for the AGI to actually kill people. And those lethal means are also available to humans, of course. If it's possible for an AGI to 'simply' hack into a nuclear base and launch all its missiles, it's possible for a human to do the same - possibly using AI to assist themselves. I would wager...
The reason he offered that bet was because Elon Musk had predicted that we'd likely have AGI by 2029, so you're drawing the wrong conclusion from that. Other people joined in with Marcus to push the wager up to $500k, but Musk didn't take the bet of course, so you might infer something from that!
The bet itself is quite insightful, and I would be very interested to hear your thoughts on its 5 conditions:
https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
In fact, anyone thinking that AGI is imminent would do well to read it - it focusses the mind on specific capabilities and how you might build them, which I think is more useful than thinking in vague terms...
Some humans care about humans. Some humans are mass murdering sociopaths. It only requires a small number of said sociopaths to get their hands on weapons of mass destruction to cause disaster. A cursory reading of history (and current events!) confirms this.