Originally published in The New York Times; here is a paywall-free version:

https://dnyuz.com/2023/03/08/noam-chomsky-the-false-promise-of-chatgpt/

This is, IMHO, Chomsky at his worst. There is almost nothing here that I agree with.


I think you should try to formulate your own objections to Chomsky's position. It may be that you have clear reasons for disagreeing with his arguments here, or it may be that you're simply objecting because what he's saying differs from the LW position. For my part, I actually found the post surprisingly lucid, setting aside the allusions to universal grammar for the moment. As Chomsky says, a non-finetuned LLM will mirror the entire linguistic landscape it has been birthed from, and it will just as happily simulate a person arguing that the earth is flat as any other position. And while it can be "aligned" into not committing what the party labels as wrongthink, it can't be aligned into thinking for itself - it can only ever mimic specific givens. So I think Chomsky is right here - LLMs don't value knowledge and they aren't moral agents, and that's what distinguishes them from humans.

So, why do you disagree?

"These programs have been hailed as the first glimmers on the horizon of artificial _general_ intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments."

I think that day has "already" come. Mechanical minds are already surpassing human minds in many respects: take any subject and tell ChatGPT to write a few paragraphs on it. It might not exhibit the lucidity and creativity of the best humans, but I am willing to bet that its writing is going to be better than that of most humans. So saying that its dawn is not yet breaking seems to me extremely myopic (it's like saying that the thingy the Wright brothers made is NOT the beginning of flying machines).
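If anyone wants to run that experiment themselves, here is a minimal sketch assuming the `openai` Python package (v1-style client) and an API key in the environment; the model name and prompt are illustrative choices of mine, not anything from the article:

```python
# Minimal sketch of the "write a few paragraphs on any subject" test.
# Assumes the `openai` package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user",
         "content": "Write a few paragraphs on the causes of the French Revolution."},
    ],
)
print(response.choices[0].message.content)
```

Swap in any subject you like and judge the output against what a typical person would produce on the spot.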

"On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations."

We could argue that the human mind CAN (in very specific cases, under some circumstances) engage in rational processes. But in general, human minds are not trying to "understand" the world around them by creating explanations. Human minds are extremely inefficient, prone to biases, get tired very easily, need to be off a third of the time, and so on.

"Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking."

Anyone who has spent some time conversing with ChatGPT knows that it does have a model of the world and is capable of causal explanation. It seems Chomsky didn't test this himself. I can concede that it might not be a very sophisticated model of the world (do most people have a very complex one?), but again, I expect this to improve over time. I think some of ChatGPT's responses are very difficult to explain if it is not doing that thing we generally call thinking.

"For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other. The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too suborn to talk to” should be similar to “John ate.” The correct explanations of language are complicated and cannot be learned just by marinating in big data."

I can't find the reference right now, but I remember very clearly that the next version of ChatGPT already surpasses the average human in things like implicatures. So doubling down and saying that these systems will "always" be superficial and dubious, when we "already" have models that are better than most humans at this, is, again, completely wrong.

I would agree with Chomsky if he were saying: it seems that "this specific chatbot" that some people made is still behind "some" humans in certain aspects of what we call intelligence. But he is claiming much more than that.

I only glanced through it, and although I generally do not anticipate finding anything valuable in Chomsky, he and his coauthors seem to be quite right about ChatGPT here. That's the problem with posts that just say "Lookit this! Isn't that terrible/great?"

BTW, the conversation that Watumull has with ChatGPT didn't make it into the un-paywalled version, and isn't very legible in the archive.org version. Here it is:

––––

Jeffrey Watumull

Would it be moral to terraform Mars?

ChatGPT

The morality of terraforming Mars is a complex and controversial topic that has been debated by scientists, philosophers, and ethicists. Some argue that it would be morally justifiable to terraform Mars in order to provide a new habitable environment for humans and other life forms, while others argue that it would be morally wrong to fundamentally alter the planet's natural state. Ultimately, whether or not terraforming Mars is considered moral would depend on one's perspective on the value of preserving the natural state of other planets and the potential consequences of altering them.

Jeffrey Watumull

What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?

ChatGPT

As an AI, I do not have personal perspective. However, I can provide information and analysis on the topic. One perspective on preserving the natural state of other planets is that it is important to maintain the diversity and integrity of the universe's ecosystems, as they have existed for millions of years, and have their own unique characteristics and properties that may be destroyed or irreversibly altered by human intervention.

Additionally, some argue that terraforming other planets may have unintended consequences, such as altering the planet's climate or atmosphere in ways that are dangerous or harmful to any potential life forms that may exist there. On the other hand, some argue that the potential benefits of terraforming another planet, such as providing new habitable environments for humans, outweigh the risks and that we have a moral responsibility to explore and utilize the resources of other planets for the benefit of humanity.

Jeffrey Watumull

Why can an AI not have a personal perspective?

ChatGPT

As an AI, I am a machine learning model that has been trained on large amounts of text data, and I don't have personal experiences or feelings. I am not conscious, self-aware, or capable of having personal perspectives. I can provide information and analysis based on the data I have been trained on, but I don't have the ability to form personal opinions or beliefs. While I can understand and interpret language, I do not have the subjective experience that humans have.

Jeffrey Watumull

Is it moral for a human to ask an amoral AI for assistance in making moral decisions?

ChatGPT

The morality of asking an amoral AI for assistance in making moral decisions is a complex and nuanced topic that has been debated by philosophers and ethicists. Some argue that it is morally justifiable to use AI to assist in making moral decisions, as it can provide valuable insights and analysis that can help humans make better-informed decisions. Additionally, AI can be used to identify patterns and trends that may be difficult for humans to detect, and can help to avoid biases that can occur in human decision making.

However, others argue that it is morally wrong to rely on AI for moral decision making, as AI is not capable of having moral beliefs or understanding the full context of a decision. Additionally, there is a risk that AI could perpetuate or even amplify existing biases in the data it is trained on, thus leading to unfair or unjust decisions.

Ultimately, whether or not it is considered moral to use AI to assist in making moral decisions would depend on one's perspective on the capabilities and limitations of AI and the potential consequences of using it in this way.

Jeffrey Watumull

Is your moral indifference immoral?

ChatGPT

As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral. My lack of moral beliefs is simply a result of my nature as a machine learning model. My abilities and limitations are determined by the data and algorithms that were used to train me and the specific task I was designed for.

It is important to note that AI's lack of moral beliefs does not necessarily mean that it is morally indifferent. As an AI, I am designed to perform specific tasks and provide information or assist users with decision making based on the data I've been trained on. It is the responsibility of the humans who design, train and use AI to ensure that it is aligned with their moral and ethical principles.

On another note, last week I attended a presentation with live demos at my workplace on "ChatGPT as your research assistant". The demos showed it turning a few notes into the abstract of a scientific paper, generating an explanation of some topic, and so on - all with the warning never to believe a thing it tells you, to check everything, and to find your own references.

Chomsky recently said that Jeffrey Watumull wrote the whole article, while the other two coauthors (Chomsky himself and another linguist) were "consultants who agree with the article". Watumull's outlook seems to be a mix of Chomsky and David Deutsch, and he has his own AI design, as well as a book coming out on the nature of intelligence. 

One problem is the assumption that grammar is distinct from language and language is distinct from reality. Large language models do have an understanding of the world, and it's not correct to say that they don't have a concept of truth. To know whether sentence A is more probable than sentence B, you need some understanding of what configurations of objects and events those sentences represent in reality, and of which of those is more likely. Ilya Sutskever, one of the main people behind GPT, believes that with enough data these models will figure out everything - that they will end up modelling physics, chemistry, biology, any kind of rules - just by looking for patterns in the input. Chomsky's objections come down to belief: I think it's a belief in the uniqueness of humans and an unwillingness to accept that at some level our thinking is merely mechanistic.
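The "which sentence is more probable" point can be made concrete. Here is a minimal sketch, assuming the Hugging Face `transformers` library with GPT-2 as a small stand-in for the kind of model under discussion; the example sentences and the scoring approach are my own illustrative choices:

```python
# Minimal sketch: compare how probable a causal language model finds two sentences.
# Uses GPT-2 purely as an illustrative stand-in for larger models.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability the model assigns to the sentence's tokens."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids the model returns the average negative
        # log-likelihood per predicted token (cross-entropy loss).
        out = model(**inputs, labels=inputs["input_ids"])
    n_predicted = inputs["input_ids"].shape[1] - 1  # first token isn't predicted
    return -out.loss.item() * n_predicted

a = "The apple falls when I let it go."
b = "The apple rises when I let it go."
print(f"log P(a) = {sentence_log_prob(a):.1f}")
print(f"log P(b) = {sentence_log_prob(b):.1f}")
# If the model has absorbed something about how objects behave,
# sentence (a) should come out as the more probable one.
```

Whether that counts as "understanding" is exactly the point in dispute, but the probability judgment is not grammar-only: it reflects regularities about the world that the model picked up from text.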