
A world simulator of some sort is probably going to be an important component of any AGI, at the very least for planning; Yann LeCun has talked about this a lot. There's also this work where they show a VAE-type model can be configured to run internal simulations of the environment it was trained on.
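To make that concrete, here is a toy sketch of what "running internal simulations" means mechanically (my own illustration, not the architecture from the work linked above, and all dimensions are made up): encode an observation into a latent, then roll the latent forward with a learned dynamics model without ever calling the real environment.

```python
# Toy sketch of a learned world model (hypothetical sizes, not the cited architecture):
# encode an observation into a latent, then iterate a learned dynamics model on latents only.
import torch
import torch.nn as nn

OBS_DIM, LATENT_DIM, ACTION_DIM = 64, 16, 4  # placeholder dimensions

class Encoder(nn.Module):
    """Maps an observation to a latent code (VAE-style, minus the sampling for brevity)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
    def forward(self, obs):
        return self.net(obs)

class Dynamics(nn.Module):
    """Predicts the next latent state from the current latent and an action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + ACTION_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, LATENT_DIM))
    def forward(self, z, action):
        return self.net(torch.cat([z, action], dim=-1))

def imagine_rollout(encoder, dynamics, obs, actions):
    """Run an 'internal simulation': start from one real observation, then stay in latent space."""
    z = encoder(obs)
    trajectory = [z]
    for a in actions:
        z = dynamics(z, a)
        trajectory.append(z)
    return trajectory

if __name__ == "__main__":
    enc, dyn = Encoder(), Dynamics()
    obs = torch.randn(1, OBS_DIM)                      # stand-in for a real observation
    actions = [torch.randn(1, ACTION_DIM) for _ in range(5)]
    traj = imagine_rollout(enc, dyn, obs, actions)
    print(len(traj), traj[-1].shape)                   # 6 latent states, no environment calls
```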

In brief, a few issues I see here:

  • You haven't actually provided any evidence that GPT does simulation, other than "Just saying “this AI is a simulator” naturalizes many of the counterintuitive properties of GPT which don’t usually become apparent to people until they’ve had a lot of hands-on experience with generating text." Which counterintuitive properties, exactly? Examples I've seen suggest GPT-3 is not simulating the environment being described in the text. I've seen a lot of impressive examples too, but I find it hard to draw conclusions about how the model works just by reading lots and lots of outputs. I wonder what experiments could be done to test your idea that it's running a simulation.
  • Even for processes that are very simple to simulate, such as addition or symbol substitution, GPT in my view has trouble learning them, even though it does eventually grok them. For something like multiplication, its accuracy depends on how often the operands appear in the training data (https://arxiv.org/abs/2202.07206), which is a bit telling, I think (see the sketch after this list).
  • Simulating the laws of physics is really hard... trust me on this (I did a Ph.D. in molecular dynamics simulation). If it's doing any simulation at all, it has to be high-level heuristic-type stuff. If it's really good, it might be capable of simulating basic geometric constraints (although IIRC GPT is actually quite bad at spatial reasoning). Even humans are bad at simulating physics accurately: researchers have found that most people do poorly on tests of basic physics-based reasoning, like basic kinematics (will this ball curve left, curve right, or go straight, etc.). I imagine gradient descent is much more likely to settle on shortcut rules and heuristics than to implement a complex simulation.
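On the multiplication point above, here is the rough shape of the experiment I have in mind (everything below is a hypothetical placeholder I made up, not a real API and not the methodology of the linked paper): bucket operand pairs by how often they appear in a corpus sample and compare exact-match accuracy across buckets.

```python
# Sketch of a frequency-vs-accuracy probe (all names are placeholders, not a real API).
import random
import re
from collections import Counter

def query_model(prompt: str) -> str:
    """Placeholder for an actual model call; returns a fake answer here."""
    return str(random.randint(0, 10000))

def operand_frequencies(corpus_sample: str) -> Counter:
    """Count how often each number literal shows up in a sample of the training corpus."""
    return Counter(re.findall(r"\b\d+\b", corpus_sample))

def accuracy_by_frequency(pairs, freqs, threshold=100):
    """Split operand pairs into 'frequent' and 'rare' buckets and compare exact-match accuracy."""
    buckets = {"frequent": [], "rare": []}
    for a, b in pairs:
        answer = query_model(f"Q: What is {a} times {b}?\nA:")
        correct = answer.strip() == str(a * b)
        bucket = "frequent" if min(freqs[str(a)], freqs[str(b)]) >= threshold else "rare"
        buckets[bucket].append(correct)
    return {k: (sum(v) / len(v) if v else None) for k, v in buckets.items()}

if __name__ == "__main__":
    freqs = operand_frequencies("dummy corpus sample with numbers like 24 24 24 7 7 97")
    pairs = [(random.randint(2, 99), random.randint(2, 99)) for _ in range(20)]
    print(accuracy_by_frequency(pairs, freqs, threshold=2))
```

If the "simulation" story were right, accuracy shouldn't depend much on which bucket a pair falls into; the paper above suggests it does.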

Piperine (black pepper extract) can help make quercetin more bioavailable. The two are co-administered in many studies on the neuroprotective effects of quercetin: https://scholar.google.com/scholar?hl=en&as_sdt=0,22&q=piperine+quercetin

I find slower take-off scenarios more plausible. I like the general thrust of Christiano's "What failure looks like". I wonder if anyone has written up a more narrative / concrete account of that sort of scenario.

The thing you are trying to study ("returns on cognitive reinvestment") is probably one of the hardest things in the world to understand scientifically. It requires understanding both the capabilities of specific self-modifying agents and the complexity of the world. It also depends on which problem you focus on: the shape of the curve may be very different for chess versus something like curing disease. Why? Because chess can be simulated on a computer, so throwing more compute at it yields some returns. Human biology can't be simulated in a computer; we need actual people in labs doing complicated experiments just to understand one tiny piece of it, so giving any particular agent more compute / cognitive power isn't necessarily going to speed things along. You also need a way of manipulating things in labs (either humans or robots doing lots of experiments). Maybe in the future an AI could read massive numbers of scientific papers and synthesize them into new insights, but precisely what sort of "cognitive engine" is required to do that is also very controversial (could GPT-N do it?).

Are you familiar with the debate about Bloom et al and whether ideas are getting harder to find? (https://guzey.com/economics/bloom/ , https://www.cold-takes.com/why-it-matters-if-ideas-get-harder-to-find/). That's relevant to predicting take-off.

The other post I always point people to is this one by Chollet.

I don't necessarily agree with it but I found it stimulating and helpful for understanding some of the complexities here.

So basically, this is a really complex thing; throwing some definitions and math at it isn't going to be very useful, I'm sorry to say. Throwing math and definitions at stuff is easy. Modeling data by fitting functions is easy. Neither is very useful for actually predicting in novel situations (i.e. extrapolation / generalization), which is what we need to predict AI take-off dynamics. Actually understanding things mechanistically and coming up with explanatory theories that can withstand criticism and repeated experimental tests is very hard. That's why people typically break hard questions/problems down into easier sub-questions/problems.

How familiar are you with Chollet's paper "On the Measure of Intelligence"? He disagrees a bit with the idea of "AGI" but if you operationalize it as "skill acquisition efficiency at the level of a human" then he has a test called ARC which purports to measure when AI has achieved human-like generality.

This seems to be a good direction, in my opinion. There is an ARC challenge on Kaggle, and so far AI is far below the human level. On the other hand, "being good at a lot of different things", i.e. task performance across one or many tasks, is obviously also very important to understand, and Chollet's definition is independent of that.
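For anyone who hasn't looked at ARC: if I remember the public dataset correctly, each task is a JSON file with "train" and "test" lists of input/output grids (lists of lists of small integers), and scoring is exact match on the predicted grid. Here's a minimal evaluation sketch; the solver and file path are placeholders, not anything from the actual challenge code.

```python
# Minimal sketch of scoring a candidate ARC solver (placeholder solver and path).
import json

def solve(train_pairs, test_input):
    """Placeholder 'solver': a real one would infer the transformation from the train pairs."""
    return test_input  # identity guess, almost always wrong

def score_task(path: str) -> float:
    with open(path) as f:
        task = json.load(f)
    correct = 0
    for pair in task["test"]:
        prediction = solve(task["train"], pair["input"])
        correct += int(prediction == pair["output"])  # exact match on the whole grid
    return correct / len(task["test"])

# Example usage (hypothetical path to a task file):
# print(score_task("ARC/data/training/some_task.json"))
```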

Thanks, it's been fixed!!

Interesting, thanks. 10x reduction in cost every 4 years is roughly twice what I would have expected. But it sounds quite plausible especially considering AI accelerators and ASICs.

Thanks for sharing! That's a pretty sophisticated modeling function but it makes sense. I personally think Moore's law (the FLOPS/$ version) will continue, but I know there's a lot of skepticism about that.

Could you make another graph like Fig 4 but showing projected cost, using Moore's law to estimate cost? The cost is going to be a lot, right?
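Something like this back-of-the-envelope projection is what I have in mind (all numbers below are made-up placeholders, not estimates from your post): fix the training compute and let FLOPS-per-dollar improve 10x every 4 years.

```python
# Back-of-the-envelope cost projection under a 10x-per-4-years price-performance trend.
# All constants are placeholder values for illustration only.
def projected_cost(train_flops, flops_per_dollar_today, years_out, tenx_period=4.0):
    """Dollar cost of a fixed training run after `years_out` years."""
    improvement = 10 ** (years_out / tenx_period)
    return train_flops / (flops_per_dollar_today * improvement)

if __name__ == "__main__":
    TRAIN_FLOPS = 1e25            # placeholder compute requirement
    FLOPS_PER_DOLLAR = 1e17       # placeholder current price-performance
    for years in (0, 4, 8, 12):
        print(years, f"${projected_cost(TRAIN_FLOPS, FLOPS_PER_DOLLAR, years):,.0f}")
```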

Networks with loops are much harder to train; that was one of the motivations for moving from RNNs to transformers. But yeah, sure, I agree. My objection is more that posts like this are so high-level that I have trouble following the argument, if that makes sense. The argument seems roughly plausible, but not making contact with any real object-level stuff makes it a lot weaker, at least to me. It also seems to rely on "emergence of self-awareness / discovery of malevolence/deception during SGD" being likely, which is unjustified in my view. I'm not saying the argument is wrong, more that I personally don't find it very convincing.
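To spell out the "loops are harder to train" point, here is a toy contrast (arbitrary sizes, standard PyTorch, nothing specific to the post): the recurrent version has to be unrolled step by step, so gradients flow through the whole sequential chain, while a transformer-style attention layer processes all positions in one parallel call.

```python
# Toy contrast between a recurrent loop and parallel attention (arbitrary sizes).
import torch
import torch.nn as nn

T, D = 32, 64                         # sequence length, feature size
x = torch.randn(1, T, D)

# Recurrent: a Python-level loop over time steps; step t depends on step t-1.
cell = nn.RNNCell(D, D)
h = torch.zeros(1, D)
for t in range(T):
    h = cell(x[:, t, :], h)           # backprop must traverse all T steps in order

# Transformer-style: one parallel attention call over the whole sequence.
attn = nn.MultiheadAttention(embed_dim=D, num_heads=4, batch_first=True)
out, _ = attn(x, x, x)                # no sequential dependency between positions

print(h.shape, out.shape)
```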

Have GPT-3 / large transformers actually led to anything with economic value? Not from what I can tell, although anecdotal reports on Twitter are that many SWEs are finding GitHub Copilot extremely useful (it's still in private beta, though). I think transformers are going to start providing actual value soon, but the fact that they haven't so far, despite almost two years of breathless hype, is interesting to contemplate. I've learned to ignore hype, demos, cool cherry-picked sample outputs, and benchmark chasing, and to look instead at what is being deployed "in the real world" and bringing value to people. So many systems that looked amazing in academic papers have flopped when deployed, even from top firms: for instance Microsoft's Tay and Google Health's system for detecting diabetic retinopathy. Another example is Google's Duplex. And for how long have we heard about burger-flipping robots taking people's jobs?

There are reasons to be skeptical about a scaled-up GPT leading to AGI. I touched on some of those points here. There's also an argument that hardware costs are going to balloon so quickly as to make the entire project economically unfeasible, but I'm pretty skeptical about that.

I'm more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon.

Long story short, is existentially dangerous AI imminent? Not as far as we can see, given what we know right now (we can't see that far into the future, since it depends on discoveries and scientific knowledge we don't have yet). Could that change quickly at any time? Yes. There is Knightian uncertainty here, I think (to use a concept that LessWrongers generally hate lol).
