Comments

Sen · 5mo

A question for all: If you are wrong and in 4/13/40 years most of this fails to come true, will you blame it on your own models being wrong, or shift the goalposts and attribute it to the success of the AI safety movement / government crackdowns on AI development? If the latter, how will you be able to prove that AGI definitely would have come had the government and industry not slowed down development?

To add more substance to this comment: I felt Ege came across as the most reasonable here. In general, predictions about the future should be made with heavy uncertainty. He didn't even disagree very strongly with most of the central premises of the other participants; he just placed his estimates much more humbly and cautiously. He also brought up the mundanity of progress and boring engineering problems, which I see as the main bottleneck in the way of a singularity. I wouldn't be surprised if the singularity turns out to be a physically impossible phenomenon because of hard limits in parallelisation of compute or queueing theory or supply chains or materials processing or something.

Sen · 1y

If your goal is to get to your house, there is only one thing that will satisfy the goal: being at your house. There is a limited set of optimal solutions that will get you there. If your goal is to move as far away from your house as possible, there are infinite ways to satisfy the goal and many more solutions at your disposal.

Natural selection is a "move away" strategy: it only seeks to avoid death, not to move towards anything in particular, which makes the class of problems it can solve much more open-ended. Gradient descent is a "move towards" strategy: if a solution would help it reach the goal but doesn't lie in the direction of descent, it mostly won't reach it without help or modification. This is why the ML industry uses evolutionary algorithms for global optimisation problems that GD cannot solve; the random-search, brute-force nature of evolution is inherently more versatile, and getting stuck in local optima is a well-known limitation of GD.
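A minimal sketch of the kind of contrast being described (my own illustration, not part of the comment; the test function, starting point, and hyperparameters are arbitrary choices): plain gradient descent on a 1-D Rastrigin function settles into the nearest local minimum, while a crude (1+1) evolutionary strategy, which is just random mutation plus selection, can hop between basins and usually ends up much closer to the global minimum.

```python
# Illustrative sketch: gradient descent vs. a (1+1) evolutionary strategy on a
# multimodal 1-D function. Not from the original comment; all choices here
# (function, start point, learning rate, mutation scale) are assumptions.
import math
import random


def f(x):
    """1-D Rastrigin: global minimum f(0) = 0, local minima near every integer."""
    return x * x - 10 * math.cos(2 * math.pi * x) + 10


def grad_f(x):
    return 2 * x + 20 * math.pi * math.sin(2 * math.pi * x)


def gradient_descent(x0, lr=1e-3, steps=5000):
    # Follows the local gradient only, so it converges to the nearest basin.
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x


def one_plus_one_es(x0, sigma=1.0, steps=5000, seed=0):
    # Random Gaussian mutation, keep the candidate only if it is no worse.
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        candidate = x + rng.gauss(0, sigma)
        if f(candidate) <= f(x):
            x = candidate
    return x


if __name__ == "__main__":
    x_gd = gradient_descent(3.0)
    x_es = one_plus_one_es(3.0)
    print(f"gradient descent: x = {x_gd:.3f}, f(x) = {f(x_gd):.3f}")  # stuck near x ~ 3
    print(f"(1+1) evolution:  x = {x_es:.3f}, f(x) = {f(x_es):.3f}")  # typically much closer to x = 0
```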
 

Sen · 1y

"Gradient descent by default would just like do, not quite the same thing, it's going to do a weirder thing, because natural selection has a much narrower information bottleneck. In one sense, you could say that natural selection was at an advantage, because it finds simpler solutions."

This is silly because it's actually the exact opposite. Gradient descent is incredibly narrow. Natural selection is the polar opposite of that kind of optimisation: an organism (or even a computer) can come up with a complex solution to just about any problem, given enough time to evolve. Evolution fundamentally overcomes global optimisation problems that gradient descent cannot overcome without serious modifications, and possibly not even with them. It is the 'alkahest' of ML, even if it is slow and not as popular.
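To make the point concrete with a deliberately extreme case (again my own illustrative sketch, not part of the comment; the target size and step count are arbitrary): a purely discrete objective such as matching a target bitstring has no gradient at all, so gradient descent does not even apply, while a simple mutate-and-select loop solves it easily.

```python
# Illustrative sketch: mutation plus selection on a discrete, gradient-free
# objective (match a target bitstring). All parameters here are assumptions.
import random


def mismatches(candidate, target):
    """Objective: number of positions where candidate differs from target."""
    return sum(c != t for c, t in zip(candidate, target))


def evolve_bitstring(target, steps=2000, seed=0):
    rng = random.Random(seed)
    n = len(target)
    current = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        candidate = current[:]
        candidate[rng.randrange(n)] ^= 1  # flip one random bit
        if mismatches(candidate, target) <= mismatches(current, target):
            current = candidate  # keep the mutation only if it is no worse
    return current


if __name__ == "__main__":
    target_rng = random.Random(1)
    target = [target_rng.randint(0, 1) for _ in range(64)]
    result = evolve_bitstring(target)
    print("remaining mismatches:", mismatches(result, target))  # usually 0
```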

Sen · 1y

If an AI behaves identically to me but our internals are different, does that mean I can learn everything about myself from studying it? If so, the input->output pipeline is the only thing that matters, and we can disregard internal mechanisms. Black boxes are all you need to learn everything about the universe, and observing how the output changes for every input is enough to replicate the functions and behaviours of any object in the world. Does this sound correct? If not, then clearly it is important to point out that the algorithm is doing Y and not X.

Sen · 1y

"AIs that are superhuman at just about any task we can (or simply bother to) define a benchmark, for"

This is just a false claim. Seriously, where is the evidence for this? We have AIs that are superhuman at any task we can define a benchmark for? That's not even true in the digital world, let alone in the world of mechatronic AIs. Once again I will be saving this post and coming back to it in 5 years to point out that we are not all dead. This is getting ridiculous at this point.

Sen · 2y

If the author believes what they've written, then they clearly think it would be more dangerous to ignore this than to be wrong about it, so I can't really argue that they shouldn't be person number 1. It's a comfortable moral position to force yourself into, though: "If I'm wrong, at least we avoided total annihilation, so in a way I still feel good about myself."

I see this particular kind of prediction as a form of ethical posturing, and I can't in good conscience let people make such predictions without some kind of accountability. People have been paid millions to work on predictions similar to these. If they are wrong, they should be held accountable in proportion to whatever cost they have incurred on society, big or small, financial or behavioural.

If they are wrong, I don't want anyone brushing these predictions off as silly mistakes or simple errors in models, or rationalising them away: "That's not actually what they meant by AGI", or "It was better to be wrong than to say nothing, please keep taking me seriously". Sometimes mistakes are made because of huge fundamental errors in understanding across the entire subject, and we do need a record of that for reasons more important than fun and games. So definitely be the first kind of person, but, you know, people are watching is all.
