All of Sen's Comments + Replies

Sen (17d)

A question for all: If you are wrong, and in 4/13/40 years most of this fails to come true, will you blame it on your own models being wrong, or shift the goalposts towards the success of the AI safety movement / government crackdowns on AI development? If the latter, how will you be able to prove that AGI definitely would have come had the government and industry not slowed down development?

To add more substance to this comment: I felt Ege came out looking the most salient here. In general, making predictions about the future should be backed by heavy un... (read more)

Vladimir_Nesov (14d)
When a model gives particular ways of updating on future evidence, its current predictions being wrong doesn't by itself make the model wrong. Models learn; the way they learn is already part of them. An updating model is itself wrong when other available models are better in some harder-to-pin-down sense, not just at being right about particular predictions. When future evidence isn't in scope of a model, that invalidates the model. But not all models are like that with respect to relevant future evidence, even when such evidence dramatically changes their predictions in retrospect.

Thank you for raising this explicitly. I think probably lots of people's timelines are based partially on vibes-to-do-with-what-positions-sound-humble/cautious, and this isn't totally unreasonable, so it deserves serious explicit consideration.

I think it'll be pretty obvious whether my models were wrong or whether the government cracked down. E.g. how much compute is spent on the largest training run in 2030? If it's still on the same OOM as it is today, then it must have been a government crackdown. If instead it's several OOMs more, and moreover the train... (read more)

porby (8mo)
Yup. For what it's worth, I haven't noticeably updated my timelines since the post; GPT-4 and whatnot are pretty much what I expected, and I'd be pretty surprised if GPT-5 eats us. (Edit: my P(doom by date) has actually gone down a bit for technical reasons as I've continued research! I'll probably add a more complete update to this post as a comment when I get around to editing it for the OpenPhil version of the contest.)
Sen (8mo)

If your goal is to get to your house, there is only one thing that will satisfy the goal: being at your house. There is a limited set of optimal solutions that will get you there. If your goal is to move as far away from your house as possible, there are infinite ways to satisfy the goal and many more solutions at your disposal.

Natural selection is a "move away" strategy: it only seeks to avoid death, not to move towards anything in particular, which makes the class of problems it can solve much more open-ended. Gradient descent is a "move towards" strategy... (read more)

Gradient descent by default would just like do, not quite the same thing, it's going to do a weirder thing, because natural selection has a much narrower information bottleneck. In one sense, you could say that natural selection was at an advantage, because it finds simpler solutions.

This is silly because it's actually the exact opposite. Gradient descent is incredibly narrow. Natural selection is the polar opposite of that kind of optimisation: an organism or even computer can come up with a complex solution to any and every problem given enough time to e... (read more)
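To make the "move towards" versus "move away" framing concrete, here is a minimal toy sketch in Python. It is purely illustrative and not from the original post or thread: a gradient descent loop whose only satisfying end state is a single target value, next to a mutation-and-survival search that merely avoids a danger zone and can end up in many different places. The names (`TARGET`, `avoid_death`) and the 1-D setup are hypothetical.

```python
import random

# Toy illustration of the two search styles discussed above.

TARGET = 3.0  # "your house": the single state that satisfies the goal


def loss(x):
    """Squared distance to the target; gradient descent minimises this."""
    return (x - TARGET) ** 2


def gradient_descent(x=0.0, lr=0.1, steps=100):
    """'Move towards' optimiser: every step follows the gradient, so the
    process can only end up near TARGET."""
    for _ in range(steps):
        grad = 2 * (x - TARGET)
        x -= lr * grad
    return x


def avoid_death(x=0.0, steps=100, danger_radius=1.0):
    """'Move away' search: a random mutation survives if it lands outside
    the danger zone around 0, or at least gets no closer to it than before.
    Many different end states satisfy this goal."""
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)
        if abs(candidate) > danger_radius or abs(candidate) >= abs(x):
            x = candidate
    return x


if __name__ == "__main__":
    print("gradient descent ends near the single target:", gradient_descent())
    print("avoidance search ends somewhere away from danger:", avoid_death())
```

On repeated runs, the first function always lands near 3.0, while the second lands somewhere different almost every time; that is the sense in which a "move away" goal admits many more solutions than a "move towards" goal, which is the point being disputed in this subthread.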

Noosphere89 (8mo)
Can you show how gradient descent solves a much narrower class of problems compared to natural selection?
Noosphere89 (9mo)
There are differences, but the major differences are usually quantitative, not binary. The major differences are compute, energy, algorithms (sometimes), and currently memorylessness (though PaLM-E might be changing this). Can an AI recognize emotions right now? IDK, I haven't heard of any results on it. Can it learn to recognize emotions to X% accuracy? I'd say yes, but how useful that ability is depends highly on how accurate it can be.
Cleo Nardo (9mo)
Yeah, and the way that you recognise dogs is different from the way that cats recognise dogs. Doesn't seem to matter much. Two processes don't need to be exactly identical to do the same thing. My calculator adds numbers, and I add numbers. Yet my calculator isn't the same as my brain. Huh? What notion of complexity do you mean? People are quite happy to accept that computers can perform tasks with high k-complexity or t-complexity. It is mostly "sacred" things (in the Hansonian sense) that people are unwilling to accept. Nowhere in this article do I address AI sentience.

If AI behaves identically to me but our internals are different, does that mean I can learn everything about myself from studying it? If so, the input->output pipeline is the only thing that matters, and we can disregard internal mechanisms. Black boxes are all you need to learn everything about the universe, and observing how the output changes for every input is enough to replicate the functions and behaviours of any object in the world. Does this sound correct? If not, then clearly it is important to point out that the algorithm is doing Y and not X.

Cleo Nardo (9mo)
There's no sense in which my computer is doing matrix multiplication but isn't recognising dogs. At the level of internal mechanism, the computer is doing neither; it's just varying transistor voltages. If you admit a computer can be multiplying matrices, or sorting integers, or scheduling events, etc., then you've already appealed to the X-Y Criterion.
Gerald Monroe (9mo)
Efficiency/lifespan/memory window size.

AIs that are superhuman at just about any task we can (or simply bother to) define a benchmark for

This is just a false claim. Seriously, where is the evidence for this? We have AIs that are superhuman at any task we can define a benchmark for? That's not even true in the digital world, let alone in the world of mechatronic AIs. Once again I will be saving this post and coming back to it in 5 years to point out that we are not all dead. This is getting ridiculous at this point.

Portia (9mo)
I agree; this mostly displays a limited conception of what constitutes challenging tasks, based on a computer-science mindset about minds. Their motor control still sucks. Their art still sucks. They are still unable to do science; they fail to distinguish accurate extrapolations from data from plausible hallucinations. Their theory of mind is still outdone by human 9-year-olds; tricking ChatGPT is literally like tricking a child in that regard. That doesn't mean AI is stupid; it is fucking marvellous and impressive. But we have not taught it to universally do the tasks that humans do, and with some, we are not even sure how to.
Sen (1y)

If the author believes what they've written, then they clearly think it would be more dangerous to ignore this than to be wrong about it, so I can't really argue that they shouldn't be person number 1. It's a comfortable moral position to force yourself into, though: "If I'm wrong, at least we avoided total annihilation, so in a way I still feel good about myself."

I see this particular kind of prediction as a form of ethical posturing, and can't in good conscience let people make such predictions without some kind of accountability. People have been paid ... (read more)

porby (1y)
Hmm. Apparently you meant something a little more extreme than I first thought. It kind of sounds like you think the content of my post is hazardous.

Not sure what you mean by ethical posturing here. It's generally useful for people to put their reasoning and thoughts out in public so that other people can take from the reasoning what they find valuable, and making a bunch of predictions ahead of time makes the reasoning testable. For example, I'd really, really like it if a bunch of people who think long timelines are more likely wrote up detailed descriptions of their models and made lots of predictions. Who knows, they might know things I don't, and I might change my mind! I'd like to!

I, um, haven't. Maybe the FTX Future Fund will decide to throw money at me later if they think the information was worth it to them, but that's their decision to make.

If I am to owe a debt to Society if I am wrong, will Society pay me if I am right? Have I established a bet with Society? No. I just spent some time writing up why I changed my mind. Going through the effort to provide testable reasoning is a service. That's what FTX would be giving me money for, if they give me any money at all.

You may make the valid argument that I should consider possible downstream uses of the information I post - which I do! Not providing the information also has consequences. I weighed them to the best of my ability, but I just don't see much predictable harm from providing testable reasoning to an audience of people who understand reasoning under uncertainty. (Incidentally, I don't plan to go on cable news to be a talking head about ~impending doom~.)

I'm perfectly fine with taking a reputational hit for being wrong about something I should have known, or paying up in a bet when I lose. I worry what you're proposing here is something closer to "stop talking about things in public because they might be wrong and being wrong might have costs." That line of reasoning, taken to the limit, y

I have saved this post on the Internet Archive[1].

If in 5-15 years the prediction does not come true, I would like it to be saved as evidence of one of the many serious claims that world-ending AI will be with us on very short timelines. I think the author has given more than enough detail on what they mean by AGI and what it might look like, so it should be obvious whether or not the prediction comes true. In other words, no rationalising past this or taking it back. If this is what the author truly believes, t... (read more)

moridinamael (8mo)
So do we call it in favor of porby, or wait a bit longer for the ambiguity over whether we've truly crossed the AGI threshold to resolve?

There are three kinds of people. Those who in the past made predictions which turned out to be false, those who didn't make predictions, and those who in the past made predictions which turned out to be true. Obviously the third kind is the best & should be trusted the most. But what about the first and second kinds?

I get the impression from your comment that you think the second kind is better than the first kind; that the first kind should be avoided and the second kind taken seriously (provided they are making plausible arguments etc.) If so, I disa... (read more)

May the forces of the cosmos intervene to make me look silly.