Dave Orr

Google AI PM; Foundation board member

Partly this will be because current ML systems are in fact not analogous to future AGI in some ways: probably if you tell the AGI that A is B, it will also know that B is A.

One oddity of LLMs is that we don't have a good way to tell the model that A is B in a way that it can remember. Prompts aren't persistent, and as this paper shows, fine-tuning doesn't do a good job of getting a fact into the model without a bunch of paraphrasing. Pretraining presumably works the same way.

This is weird! And I think helps make sense of some of the problems we see with current language models.
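
The workaround this suggests looks something like the sketch below: never fine-tune on a fact in just one phrasing or one direction. Everything here (the fact, the templates, the data format) is made up purely for illustration.

```python
# Hypothetical sketch: build a fine-tuning set that states a fact in
# several paraphrases *and* in both directions, since a single phrasing
# of "A is B" tends not to generalize to "B is A".

FACT = ("Tom Cruise's mother", "Mary Lee Pfeiffer")  # (A, B)

FORWARD_TEMPLATES = [
    "{a} is {b}.",
    "If you ask who {a} is, the answer is {b}.",
    "{b} is the person known as {a}.",
]

REVERSE_TEMPLATES = [
    "{b} is {a}.",
    "Who is {b}? {b} is {a}.",
]

def make_examples(a: str, b: str) -> list[dict]:
    """Emit one training example per template, covering both directions."""
    rows = [{"text": t.format(a=a, b=b)} for t in FORWARD_TEMPLATES]
    rows += [{"text": t.format(a=a, b=b)} for t in REVERSE_TEMPLATES]
    return rows

for row in make_examples(*FACT):
    print(row)
```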

45->55% is a 22% relative gain, while 90->100% is only an 11% gain. 

On the other hand, 45->55% is a reduction in error by 18%, while 90->100% is a 100% reduction in errors.

Which framing is best depends on the use case. Preferring one naively over the other is definitely an error. :)
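
To make the arithmetic behind both framings concrete (a quick sketch; accuracy as a fraction, error = 1 - accuracy):

```python
def framings(before: float, after: float) -> tuple[float, float]:
    """Return (relative accuracy gain, relative error reduction), as %."""
    gain = (after - before) / before * 100
    err_reduction = ((1 - before) - (1 - after)) / (1 - before) * 100
    return gain, err_reduction

for before, after in [(0.45, 0.55), (0.90, 1.00)]:
    g, e = framings(before, after)
    print(f"{before:.0%} -> {after:.0%}: "
          f"+{g:.0f}% relative gain, -{e:.0f}% errors")
# 45% -> 55%: +22% relative gain, -18% errors
# 90% -> 100%: +11% relative gain, -100% errors
```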

I think the argument against LeCun is simple: while it may be true that AIs won't necessarily have a dominance instinct the way people do, they could try to dominate for other reasons, namely that dominance is an instrumental goal toward whatever their objectives are. That is in fact a significant risk, and it can't be discounted by pointing out that AIs may lack a natural instinct toward dominance.

I just think that to an economist, models and survey results are different things, and he's not asking for the latter.

I think Tyler is thinking more of an economic-type model that looks at the incentives of various actors and uses those to understand what might go wrong and why. I predict he would look at this model and say that "misaligned AI can cause catastrophes" is the hand-wavy bit he would like to see an actual model of.

I'm not an economist (is IANAE a known initialism yet?), but such a model would probably include actors like the AI labs, the AIs, and potentially regulators or hackers/thieves; try to understand and model their incentives and behaviors; and see what comes out of that. It's less about subjective probabilities from experts and more about trying to understand the forces acting on the players and how they respond to them.
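
To illustrate the flavor, here's a deliberately toy version of one piece of such a model: a lab choosing how much to spend on safety. Every actor, curve, and number below is invented for illustration; a real model would need all of them defended.

```python
# Toy incentive sketch (all numbers invented): a lab picks a safety spend
# level in [0, 1]; more safety lowers both catastrophe risk and expected
# regulatory fines, but forgoes revenue.

LAB_REVENUE = 100.0          # value of shipping, in arbitrary units
CATASTROPHE_COST = 10_000.0  # cost to the lab if things go badly
FINE = 50.0                  # regulator fine for under-investing

def p_catastrophe(safety: float) -> float:
    """Made-up risk curve: risk falls as safety spend rises."""
    return 0.02 / (1.0 + 10.0 * safety)

def p_fine(safety: float) -> float:
    """Made-up enforcement curve: less safety, more likely to be fined."""
    return max(0.0, 0.5 - safety)

def lab_payoff(safety: float) -> float:
    """Expected payoff to the lab at a given safety spend."""
    return (LAB_REVENUE * (1.0 - safety)   # safety spend forgoes revenue
            - p_catastrophe(safety) * CATASTROPHE_COST
            - p_fine(safety) * FINE)

best = max((s / 100 for s in range(101)), key=lab_payoff)
print(f"lab's privately optimal safety spend: {best:.2f}")
```

The interesting questions come from comparative statics: how does the lab's privately optimal choice move as enforcement, competition, or catastrophe costs change, and where does it diverge from the socially optimal choice?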

So... when can we get the optimal guide, if this isn't it? :)

In general, to solve an NP-complete problem like 3-SAT, you have to spend compute, storage, or both.

Suppose you solve one 3-SAT problem. If you don't write down the solution and the steps along the way, then you have no way to reuse that work on the next problem. But if you do store the results of the intermediate steps, then you have to pay for storage that is itself polynomial in the size of each instance.

In practice you can often do much better than that, because the problems you're solving may share data or structure that leads to shortcuts, but in the general case you have to pay the cost every time you need to solve an NP-complete problem.
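
A brute-force sketch makes the tradeoff visible: each fresh instance costs exponential compute, while the certificate you could store afterward is only linear in the number of variables.

```python
# Minimal brute-force 3-SAT sketch. A formula is a list of clauses; each
# clause is three literals, where +i / -i mean variable i true / negated.
from itertools import product

def solve_3sat(clauses: list[tuple[int, int, int]], n_vars: int):
    """Try all 2^n assignments: exponential compute, O(n) storage."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits  # the certificate itself is only O(n) to store
    return None

# (x1 or x2 or not x3) and (not x1 or x2 or x3)
print(solve_3sat([(1, 2, -3), (-1, 2, 3)], n_vars=3))
```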

If one person estimates the odds at a billion to one, and the other at even, you should clearly bet the middle. You can easily construct bets that offer each of them a very good deal by their lights and guarantee you a win. This won't maximize your EV but seems pretty great if you agree with Nick.
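
Here's one made-up construction (stakes arbitrary; X is whatever proposition they disagree about):

```python
# Sketch of "betting the middle" with invented stakes. A thinks P(X) is
# about 1e-9; B thinks P(X) = 0.5. Take opposite sides with each, at odds
# both consider generous, and you win no matter what happens.

P_A, P_B = 1e-9, 0.5

# Bet with A: you receive $1000 if X happens, pay A $1 if it doesn't.
# Bet with B: you receive $10 if X doesn't happen, pay B $20 if it does.
def your_payoff(x_happens: bool) -> float:
    return (1000 - 20) if x_happens else (-1 + 10)

print("you, if X:", your_payoff(True))       # +980
print("you, if not X:", your_payoff(False))  # +9

# Each side likes their bet by their own lights:
ev_a = (1 - P_A) * 1 - P_A * 1000   # A's subjective EV ~ +$1
ev_b = P_B * 20 - (1 - P_B) * 10    # B's subjective EV = +$5
print(f"A's EV: {ev_a:.6f}, B's EV: {ev_b:.2f}")
```

You give up expected value relative to just taking B's bet at maximum size, but you lock in a profit either way.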

Anthropic reportedly got a $4B valuation on negligible revenue. Cohere is reportedly asking for a $6B valuation on maybe a few $M in revenue.

AI startups are getting pretty absurd valuations based on I'm not sure what, but I don't think it's ARR.

I'm not sure multiples of revenue are meaningful right now. Nobody is investing in OAI because of its current business. And once you notice that many companies raise money with no revenue at all, there are tons of investments at effectively infinite multiples.
