talelore

Comments (sorted by newest)
My AI Predictions for 2027
talelore2h21

I explained in my post that I believe the benchmarks are mainly measuring shallow thinking. The benchmarks include things like completing a single word of code or solving arithmetic problems. These unambiguously fall within what I described as shallow thinking. They measure existing judgement/knowledge, not the ability to form new insights.

Deep thinking has not progressed hyper-exponentially. LLMs are essentially shrimp-level when it comes to deep thinking, in my opinion. LLMs still make extremely basic mistakes that a human 5-year-old would never make. This is undeniable if you actually use them for solving problems.

"One can’t simply point out the ways in which the things that LLMs cannot currently do are hard in a way in which the things that LLMs currently can do are not."

The distinction between deep and shallow thinking is real and fundamental. Deep thinking is non-polynomial in its time complexity. I'm not moving the goalposts to include only whatever LLMs happen to be bad at right now. They have always been bad at deep thinking, and continue to be. All the gains measured by the benchmarks are gains in shallow thinking.

"To be convincing, you have to make an argument that fundamentally differentiates your objection from past failed objections."

I believe I have done so, by claiming deep thinking is of a fundamentally different nature than shallow thinking, and denying any significant progress has been made on this front.

If you disagree, fine. Like I said, I can't prove anything, I'm just putting forward a hypothesis. But you don't get to say I've been proven wrong. If you want to come up with some way of measuring deep thinking and prove LLMs are or are not good at it, go ahead. Until that work has been done, I haven't been proven wrong, and we can't say either way.

(Certain things are easy to measure/benchmark, and these things tend to also require only shallow thinking. Things that require deep thinking are hard to measure for the same reason they require deep thinking, and so they don't make it into benchmarks. The only way I know how to measure deep thinking is personal judgement, which obviously isn't convincing. But the fact this work is hard to do doesn't mean we just conclude that I'm wrong and you're right.)

My AI Predictions for 2027
talelore3h10

I am predicting a world that looks fantastically different from the world predicted by AI 2027. It's the difference between apocalypse and things basically being the same as they are now. The difference between the two is clear.

I agree that having internal representations that can be modified while reasoning is something that enables deep thinking, and I think this is something LLMs are bad at, because of the wideness/depth issue and the lack of recurrence.

I only have a lay understanding of how LLMs work, so forgive me if I'm wrong about the specifics. It seems to me the KV cache is just an optimization: either way, the LLM's output is a deterministic function of the input tokens, and information is not being lost. What I was pointing to was the fact that the feed-forward networks for the new token don't have access to the past feed-forward states of the other tokens, so they can't see, e.g., which reasoning paths were dead ends, unless information about those dead ends made it into the output.

This is a toy example, but I'm imagining a far-future LLM with enough understanding of biology and chemistry baked into its (for some reason enormously wide and deep) feed-forward networks to cure cancer in a single layer (for some layer). Imagine in one run the input is "the cure for cancer", and the attention dimension is very narrow. In one layer, the feed-forward network may cure cancer in this run, among doing many other things, and then possibly discard that information when going to the next layer. In a subsequent run on the input "the cure for cancer is", it may cure cancer again, and this time include some detail of that cure in its output to the next layer, since now it's more likely to be relevant to predicting the next token. When curing cancer the second time, it didn't have access to any of the processing from the first time, only to what previous layers outputted for previous tokens. Does that sound right?

If so, the fact that the LLM is strictly divided into layers, with the feed-forward parts being wider than the other parts, is a limitation on deep thinking. Obviously the example is an exaggeration, because a feed-forward layer wouldn't be curing cancer on its own, but it speaks to the fact that even though information isn't being lost, computation is segregated in such a way that some processing done in previous runs isn't available to future runs.
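
To make this concrete, here is a minimal NumPy sketch of what I understand the mechanism to be: a single, simplified attention-plus-feed-forward layer with a KV cache. The names (W_q, W_ff1, layer_step, etc.), the single head, and the missing layer norms and residuals are all simplifications I made up for illustration, not how any real LLM is implemented. The point is just which state survives between decoding steps: only the cached keys and values; the wide feed-forward activations are recomputed for each new token and never shared across steps.

    # Hypothetical, heavily simplified single-head transformer layer with a KV cache.
    # Illustrative only: real models have many heads, layer norms, residuals, etc.
    import numpy as np

    d = 8  # model width (made-up number for the sketch)
    rng = np.random.default_rng(0)
    W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
    W_ff1, W_ff2 = rng.standard_normal((d, 4 * d)), rng.standard_normal((4 * d, d))

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def layer_step(x, kv_cache):
        """Run one new token's hidden state x through the layer.

        kv_cache holds the keys/values of all previous tokens. That is the ONLY
        per-layer state from earlier decoding steps that the new token can see.
        """
        q, k, v = x @ W_q, x @ W_k, x @ W_v
        kv_cache["k"].append(k)
        kv_cache["v"].append(v)

        K = np.stack(kv_cache["k"])            # (t, d): past + current keys
        V = np.stack(kv_cache["v"])
        attn = softmax(K @ q / np.sqrt(d))     # attend over the cached keys
        attended = attn @ V

        hidden = np.maximum(0.0, attended @ W_ff1)  # wide feed-forward activations...
        out = hidden @ W_ff2                        # ...projected back down; `hidden`
        return out, kv_cache                        # itself is discarded, never cached.

    # Decoding loop: each step sees only the K/V cache, not earlier `hidden` states.
    cache = {"k": [], "v": []}
    for token_embedding in rng.standard_normal((5, d)):
        out, cache = layer_step(token_embedding, cache)

If that sketch is roughly right, it captures what I mean: nothing is lost in the sense that the output is still a deterministic function of the tokens, but the big intermediate computations inside the feed-forward block are not carried forward to later steps.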

I already responded to what joseph_c said about the human brain, but I'll go into a bit more detail here. Progressing 200 steps forward in a feed-forward neural network is not nearly as "deep" as progressing 200 neurons in any direction in a recurrent network, and either way a 200-neuron chain of processing is not a lot. I suspect that when doing deep thinking, the depth of neural firings in humans is much greater, over a longer period of time. I think brains are deeper than LLMs, and only wider in the sense that they're currently larger overall.
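
To illustrate the structural point, here is a toy sketch in the same spirit as above. All the sizes and step counts are made up, and the functions are purely illustrative: the contrast is that a feed-forward stack applies a fixed number of sequential nonlinear steps no matter what, while a recurrent network can keep applying the same cell for as many steps as the problem seems to need.

    # Toy contrast between fixed-depth feed-forward computation and recurrence.
    # Arbitrary sizes; only meant to show where sequential depth comes from.
    import numpy as np

    d = 16
    rng = np.random.default_rng(1)

    # Feed-forward stack: sequential depth is fixed at len(ff_layers), always.
    ff_layers = [0.1 * rng.standard_normal((d, d)) for _ in range(200)]

    def feedforward(x):
        for W in ff_layers:          # exactly 200 sequential nonlinear steps
            x = np.tanh(x @ W)
        return x

    # Recurrent cell: the same weights are reused, so sequential depth equals
    # n_steps, which can grow with how long the network is allowed to "think".
    W_rec = 0.1 * rng.standard_normal((d, d))

    def recurrent(x, n_steps):
        for _ in range(n_steps):
            x = np.tanh(x @ W_rec)
        return x

    x = rng.standard_normal(d)
    y_ff = feedforward(x)                   # depth capped by the architecture
    y_rec = recurrent(x, n_steps=10_000)    # depth scales with thinking time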

Coming up with new, clever jokes actually does take a lot of time for humans. Stand-up comedians spend hours writing every day to end up with a total of one hour of clever jokes per year. When people come up with jokes that are funny in conversation, that is the product of one of three things:

  1. The joke isn't particularly clever, but people are in the mood to laugh
  2. The joke is clever, and you got lucky
  3. The joke is funny because you're a funny person who already has a bunch of "joke formats" memorized, which makes telling funny jokes on the fly easier. But even then, it's not fully shallow, and you can't do it reliably. It just makes it easier.

I'm not sure, but I think you possibly could make an LLM that is so extremely wide that it could cure cancer, be superintelligent, etc. But I think actually training/running that network would be so exorbitantly expensive that you shouldn't bother (for the reasons I pointed to in my post), and that's why LLMs will plateau compared to less limited architectures.

My AI Predictions for 2027
talelore12h21

I expect humans are not doing deep thinking in a 200 ms conscious reaction.

My AI Predictions for 2027
talelore12h10

I am pretty sure current LLMs could not write any competitive TV scripts.

My AI Predictions for 2027
talelore12h20

I think the benchmarks give a misleading impression of the capabilities of AI. They make it seem like LLMs are on the verge of being as smart as humans, and ready to take on a bunch of economically valuable activity that they're not actually ready for, leading to the issues currently happening with bosses making their employees use LLMs, for example.

My AI Predictions for 2027
talelore15h41

I agree with you about where the wargames were used.

I think trend extrapolation from previous progress is a very unreliable way to predict progress. I would put more stock in a compelling argument for why progress will be fast/slow, like the one I hope I have provided. But even this is pretty low-confidence compared to actual proof, which nobody has.

In this case, I don't buy extrapolating from past LLM advances because my model is compatible with fast progress up to a point followed by a slowdown, and the competing model isn't right just because it looks like a straight line when you plot it on a graph.

My AI Predictions for 2027
talelore16h20

Most of my predictions are simply contradictions of the AI 2027 predictions, which are a well-regarded series of predictions for AI progress by the end of 2027. I am stating that I disagree and why.

My AI Predictions for 2027
talelore16h30

Perhaps we will find some agreement come Christmastime 2027. Until then, thanks for your time!

edit: Responding to your edit: by "seeming academic", I meant things like seeming "detailed and evidence-based", "involving citations and footnotes", "involving robust statistics", "resulting in high-confidence conclusions", and stuff like that. Even the typography and multiple authors make it seem Very Serious. I agree that the scenario part seemed less academic than the research pages.

My AI Predictions for 2027
talelore16h20

I remember when ChatGPT came out, people were very impressed with how well it could write poetry. Except the poetry was garbage. They just couldn't tell, because they lacked the taste in poetry to know any better. I think the same thing still applies to fiction/prose generated by ChatGPT. It's still not good, but some people can't tell.

To be clear about my predictions, I think "okay"/"acceptable" writing (fiction and nonfiction) will become easier for AI to generate in the next 2 years, but "brilliant"/"clever" writing will not get much easier.

My AI Predictions for 2027
talelore17h60

Thank you for taking the time to write such a detailed response.

My main critique of AI 2027 is not about communication, but about the estimates themselves (2027 is an insane median estimate for AI doom) and about what I feel is overconfidence in the quality/reliability of the forecasts. (And I am glad that you and Daniel have both backed off a bit from the original 2027 estimate.)

"What do you mean by this? My guess is that it's related to the communication issues on timelines?"

Probably this is related to communication issues on timelines, yes. Also, I think if I genuinely believed everyone I knew and loved was going to die in ~2 years, I would probably be acting a certain way that I don't sense from the authors of the AI 2027 document. But I don't want to get too much into mind reading.

With respect to the communication issue, I think the AI 2027 document did include enough disclaimers about the authors' uncertainty, and more disclaimers wouldn't help. I think the problem is that the document structurally contradicts those disclaimers, by seeming really academic and precise. Adding disclaimers to the research sections would also not be valuable simply because most people won't get that far.

I can understand why you chose to include a written scenario, but it also seems like a mistake for the reasons I mentioned in my post. It makes you sound way more confident than we both agree you actually are. And a specific scenario is also more likely to be wrong than a general forecast.

You have said things like:

  • "One reason I'm hesitant to add [disclaimers] is that I think it might update non-rationalists too much toward thinking it's useless, when in fact I think it's pretty informative."
  • "The graphs are the result of an actual model that I think is reasonable to give substantial weight to in one's timelines estimates."
  • "In our initial tweet, Daniel said it was a 'deeply researched' scenario forecast. This still seems accurate to me."
  • "we put quite a lot of work into it"
  • "it's state-of-the-art or close on most dimensions and represents subtantial intellectual progress"
  • "In particular, I think there's reason to trust our intuitions"

As I said in my post, "The whole AI 2027 document just seems so fancy and robust. That's what I don't like. It gives a much more robust appearance than this blog post, does it not? But is it any better? I claim no."

I don't think your guesses are better than mine because of the number of man-hours you put into justifying them, nor because the people who worked on the estimates are important, well-regarded people who worked at OpenAI or have a better track record, nor because the estimates involved surveys, wargames, and mathematics.

I do not believe your guesses are particularly informative, nor do I think that about my own guesses. We're all just guessing. Nor do I agree with calling them forecasts at all. I don't think they're reliable enough that anybody should be trusting them over their own intuition. In the end, neither of us can prove what we believe to a high degree of confidence. The only thing that will matter is who's right, and none of the accoutrements of fancy statistics, hours spent researching, past forecasting successes, and so on will matter.

Putting too much work into what are essentially guesses is also in itself a kind of communication that this is Serious Academic Work -- a kind of evidence or proof that people should take very seriously. Which it can't be, since you and I agree that "there's simply not enough empirical data to forecast when AGI will arrive". If that's true, then why all the forecasting?

(All my criticism is about the Timelines/Takeoff Forecasting, since these are things you can't really forecast at this time. I am glad the Compute Forecast exists, and I didn't read the AI Goals and Security Forecasts.)
