[epistemic status: that's just my opinion, man. I have highly suggestive evidence, not deductive proof, for a belief I sincerely hold]

"If you see fraud and do not say fraud, you are a fraud." --- Nasim Taleb

I was talking with a colleague the other day about an AI organization that claims:

  1. AGI is probably coming in the next 20 years.
  2. Many of the reasons we have for believing this are secret.
  3. They're secret because if we told people about those reasons, they'd learn things that would let them make an AGI even sooner than they would otherwise.

His response was (paraphrasing): "Wow, that's a really good lie! A lie that can't be disproven."

I found this response refreshing, because he immediately jumped to the most likely conclusion.

Near predictions generate more funding

Generally, entrepreneurs who are optimistic about their project get more funding than ones who aren't. AI is no exception. For a recent example, see the Human Brain Project. The founder, Henry Markram, predicted in 2009 that the project would succeed in simulating a human brain by 2019 (see his TED talk, at 14:22), yet the project was already widely considered a failure by 2013.

The Human Brain Project received 1.3 billion euros of funding from the EU.

It's not hard to see why. To justify receiving large amounts of money, a project's leader must claim that the project is actually worth that much, and an AI project is worth more if it is, in fact, possible to develop AI soon. So there is economic pressure towards inflating estimates of the chance that AI will be developed soon.

Fear of an AI gap

The missile gap was a lie promoted by the US Air Force to justify building more nukes: it falsely claimed that the Soviet Union had more missiles than the US.

Similarly, there is historical precedent for an "AI gap" lie being used to justify more AI development. Fifth Generation Computer Systems was an ambitious 1982 project by the Japanese government (funded at $400 million as of 1992, or $730 million in 2019 dollars) to create artificial intelligence through massively parallel logic programming.

The project is widely considered to have failed. From a 1992 New York Times article:

A bold 10-year effort by Japan to seize the lead in computer technology is fizzling to a close, having failed to meet many of its ambitious goals or to produce technology that Japan's computer industry wanted.

...

That attitude is a sharp contrast to the project's inception, when it spread fear in the United States that the Japanese were going to leapfrog the American computer industry. In response, a group of American companies formed the Microelectronics and Computer Technology Corporation, a consortium in Austin, Tex., to cooperate on research. And the Defense Department, in part to meet the Japanese challenge, began a huge long-term program to develop intelligent systems, including tanks that could navigate on their own.

...

The Fifth Generation effort did not yield the breakthroughs to make machines truly intelligent, something that probably could never have realistically been expected anyway. Yet the project did succeed in developing prototype computers that can perform some reasoning functions at high speeds, in part by employing up to 1,000 processors in parallel. The project also developed basic software to control and program such computers. Experts here said that some of these achievements were technically impressive.

...

In his opening speech at the conference here, Kazuhiro Fuchi, the director of the Fifth Generation project, made an impassioned defense of his program.

"Ten years ago we faced criticism of being too reckless," in setting too many ambitious goals, he said, adding, "Now we see criticism from inside and outside the country because we have failed to achieve such grand goals."

Outsiders, he said, initially exaggerated the aims of the project, with the result that the program now seems to have fallen short of its goals.

Some American computer scientists say privately that some of their colleagues did perhaps overstate the scope and threat of the Fifth Generation project. Why? In order to coax more support from the United States Government for computer science research.

(emphasis mine)

This bears similarity to some conversations on AI risk I've been party to in the past few years. The fear is that Others (DeepMind, China, whoever) will develop AGI soon, so We have to develop AGI first in order to make sure it's safe, because Others won't make sure it's safe and We will. Also, We have to discuss AGI strategy in private (and avoid public discussion), so Others don't get the wrong ideas. (Generally, these claims have little empirical/rational backing to them; they're based on scary stories, not historically validated threat models)

The claim that others will develop weapons and kill us with them by default implies a moral claim to resources, and a moral claim to be justified in making weapons in response. Such claims, if exaggerated, justify claiming more resources and making more weapons. And they weaken a community's actual ability to track and respond to real threats (as in The Boy Who Cried Wolf).

How does the AI field treat its critics?

Hubert Dreyfus, probably the most famous historical AI critic, published "Alchemy and Artificial Intelligence" in 1965, which argued that the techniques popular at the time were insufficient for AGI. Subsequently, he was shunned by other AI researchers:

The paper "caused an uproar", according to Pamela McCorduck. The AI community's response was derisive and personal. Seymour Papert dismissed one third of the paper as "gossip" and claimed that every quotation was deliberately taken out of context. Herbert A. Simon accused Dreyfus of playing "politics" so that he could attach the prestigious RAND name to his ideas. Simon said, "what I resent about this was the RAND name attached to that garbage."

Dreyfus, who taught at MIT, remembers that his colleagues working in AI "dared not be seen having lunch with me." Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he recalls "I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being."

This makes sense as anti-whistleblower activity: ostracizing, discrediting, or punishing people who expose the conspiracy to the public. Does this still happen in the AI field today?

Gary Marcus is a more recent AI researcher and critic. In 2012, he wrote:

Deep learning is important work, with immediate practical applications.

...

Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like "sibling" or "identical to." They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems ... use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.

In 2018, he tweeted an article in which Yoshua Bengio (a deep learning pioneer) seemed to agree with these previous opinions. This tweet received a number of mostly-critical replies. Here's one, by AI professor Zachary Lipton:

There's a couple problems with this whole line of attack. 1) Saying it louder ≠ saying it first. You can't claim credit for differentiating between reasoning and pattern recognition. 2) Saying X doesn't solve Y is pretty easy. But where are your concrete solutions for Y?

The first criticism is essentially a claim that everybody knows that deep learning can't do reasoning. But, this is essentially admitting that Marcus is correct, while still criticizing him for saying it [ED NOTE: the phrasing of this sentence is off (Lipton publicly agrees with Marcus on this point), and there is more context, see Lipton's reply].

The second is a claim that Marcus shouldn't criticize if he doesn't have a solution in hand. This policy deterministically results in the short AI timelines narrative being maintained: to criticize the current narrative, you must present your own solution, which constitutes another narrative for why AI might come soon.

Deep learning pioneer Yann LeCun's response is similar:

Yoshua (and I, and others) have been saying this for a long time.
The difference with you is that we are actually trying to do something about it, not criticize people who don't.

Again, the criticism is not that Marcus is wrong in saying deep learning can't do certain forms of reasoning, the criticism is that he isn't presenting an alternative solution. (Of course, the claim could be correct even if Marcus doesn't have an alternative!)

Apparently, it's considered bad practice in AI to criticize a proposal for making AGI without presenting an alternative solution. Clearly, such a policy causes large distortions!

Here's another response, by Steven Hansen (a research scientist at DeepMind):

Ideally, you'd be saying this through NeurIPS submissions rather than New Yorker articles. A lot of the push-back you're getting right now is due to the perception that you haven't been using the appropriate channels to influence the field.

That is: to criticize the field, you should go through the field, not through the press. This is standard guild behavior. In the words of Adam Smith: "People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices."

(Also see Marcus's medium article on the Twitter thread, and on the limitations of deep learning)

[ED NOTE: I'm not saying these critics on Twitter are publicly promoting short AI timelines narratives (in fact, some are promoting the opposite), I'm saying that the norms by which they criticize Marcus result in short AI timelines narratives being maintained.]

Why model sociopolitical dynamics?

This post has focused on sociopolitical phenomena involved in the short AI timelines phenomenon. For this, I anticipate criticism along the lines of "why not just model the technical arguments, rather than the credibility of the people involved?" To which I pre-emptively reply:

  • No one can model the technical arguments in isolation. Basic facts, such as the accuracy of technical papers on AI, or the filtering processes determining what you read and what you don't, depend on sociopolitical phenomena. This is far more true for people who don't themselves have AI expertise.
  • "When AGI will be developed" isn't just a technical question. It depends on what people actually choose to do (and what groups of people actually succeed in accomplishing), not just what can be done in theory. And so basic questions like "how good is the epistemology of the AI field about AI timelines?" matter directly.
  • The sociopolitical phenomena are actively making technical discussion harder. I've had a well-reputed person in the AI risk space discourage me from writing publicly about the technical arguments, on the basis that getting people to think through them might accelerate AI timelines (yes, really).

Which is not to say that modeling such technical arguments is not important for forecasting AGI. I certainly could have written a post evaluating such arguments, and I decided to write this post instead, in part because I don't have much to say on this issue that Gary Marcus hasn't already said. (Of course, I'd have written a substantially different post, or none at all, if I believed the technical arguments that AGI is likely to come soon had merit to them)

What I'm not saying

I'm not saying:

  1. That deep learning isn't a major AI advance.
  2. That deep learning won't substantially change the world in the next 20 years (through narrow AI).
  3. That I'm certain that AGI isn't coming in the next 20 years.
  4. That AGI isn't existentially important on long timescales.
  5. That it isn't possible that some AI researchers have asymmetric information indicating that AGI is coming in the next 20 years. (Unlikely, but possible)
  6. That people who have technical expertise shouldn't be evaluating technical arguments on their merits.
  7. That most of what's going on is people consciously lying. (Rather, covert deception hidden from conscious attention (e.g. motivated reasoning) is pervasive; see The Elephant in the Brain)
  8. That many people aren't sincerely confused on the issue.

I'm saying that there are systematic sociopolitical phenomena that cause distortions in AI estimates, especially towards shorter timelines. I'm saying that people are being duped into believing a lie. And at the point where 73% of tech executives say they believe AGI will be developed in the next 10 years, it's a major one.

This has happened before. And, in all likelihood, this will happen again.

Comments (105, some truncated)

1. For reasons discussed on comments to previous posts here, I'm wary of using words like "lie" or "scam" to mean "honest reporting of unconsciously biased reasoning". If I criticized this post by calling you a liar trying to scam us, and then backed down to "I'm sure you believe this, but you probably have some bias, just like all of us", I expect you would be offended. But I feel like you're making this equivocation throughout this post.

2. I agree business is probably overly optimistic about timelines, for about the reasons you mention. But reversed stupidity is not intelligence. Most of the people I know pushing short timelines work in nonprofits, and many of the people you're criticizing in this post are AI professors. Unless you got your timelines from industry, which I don't think many people here did, them being stupid isn't especially relevant to whether we should believe the argument in general. I could find you some field (like religion) where people are biased to believe AI will never happen, but unless we took them seriously before this, the fact that they're wrong doesn't change anything.

3.... (read more)

Re: 2: nonprofits and academics have even more incentives than business to claim that a new technology is extremely dangerous. Think tanks and universities are in the knowledge business; they are more valuable when people seek their advice. "This new thing has great opportunities and great risks; you need guidance to navigate and govern it" is a great advertisement for universities and think tanks. Which doesn't mean AI, narrow or strong, doesn't actually have great opportunities and risks! But nonprofits and academics aren't immune from the incentives to exaggerate.

Re: 4: I have a different perspective. The loonies who go to the press with "did you know psychiatric drugs have SIDE EFFECTS?!" are not really a threat to public information to the extent that they are telling the truth. They are a threat to the perceived legitimacy of psychiatrists. This has downsides (some people who could benefit from psychiatric treatment will fear it too much) but fundamentally the loonies are right that a psychiatrist is just a dude who went to school for a long time, not a holy man. To the extent that there is truth in psychiatry, it can withstand the public... (read more)

  1. I don't actually know the extent to which Bernie Madoff actually was conscious that he was lying to people. What I do know is that he ran a pyramid scheme. The dynamics happen regardless of how conscious they are. (In fact, they often work through keeping things unconscious)

  2. I'm not making the argument "business is stupid about AI timelines therefore the opposite is right".

  3. Yes, this is a reason to expect distortion in favor of mainstream opinions (including medium-long timelines). It can be modeled along with the other distortions.

  4. Regardless of whether Gary Marcus is "bad" (what would that even mean?), the concrete criticisms aren't ones that imply AI timelines are short, deep learning can get to AGI, etc. They're ones that sometimes imply the opposite, and anyway, ones that systematically distort narratives towards short timelines (as I spelled out). If it's already widely known that deep learning can't do reasoning, then... isn't that reason not to expect short AI timelines, and to expect that many of the non-experts who think so (including tech execs and rationalists) have been duped?

If you think I did the modeling wrong, and have concrete criticisms (such as

... (read more)

On 3, I notice this part of your post jumps out to me:

Of course, I'd have written a substantially different post, or none at all, if I believed the technical arguments that AGI is likely to come soon had merit to them

One possibility behind the "none at all" is that 'disagreement leads to writing posts, agreement leads to silence', but another possibility is 'if I think X, I am encouraged to say it, and if I think Y, I am encouraged to be silent.'

My sense is it's more the latter, which makes this seem weirdly 'bad faith' to me. That is, suppose I know Alice doesn't want to talk about biological x-risk in public because of the risk that terrorist groups will switch to using biological weapons, but I think Alice's concerns are overblown and so write a post about how actually it's very hard to use biological weapons and we shouldn't waste money on countermeasures. Alice won't respond with "look, it's not hard, you just do A, B, C and then you kill thousands of people," because this is worse for Alice than public beliefs shifting in a way that seems wrong to her.

It is not obvious what the right path is here. Obviously, we can't let anyone hijack the group epistemology by having concerns about what can and can't be made public knowledge, but also it seems like we shouldn't pretend that everything can be openly discussed in a costless way, or that the costs are always worth it.

jessicata (5y):
Alice has the option of finding a generally trusted arbiter, Carol, who she tells the plan to; then, Carol can tell the public how realistic the plan is.

Do we have those generally trusted arbiters? I note that it seems like many people who I think of as 'generally trusted' are trusted because of some 'private information', even if it's just something like "I've talked to Carol and get the sense that she's sensible."

I don't think there are fully general trusted arbiters, but it's possible to bridge the gap with person X by finding person Y trusted by both you and X.

Tomáš Gavenčiak (5y):
I think that sufficiently universally trusted arbiters may be very hard to find, but Alice can also refrain from that option to prevent the issue gaining more public attention, believing more attention, or the attention of various groups, to be harmful. I can imagine cases where more credible people (Carols) saying they are convinced that e.g. "it is really easily doable" would disproportionately give more incentives for misuse than defense (by the groups the information reaches, the reliability signals those groups accept, etc.).

1. It sounds like we have a pretty deep disagreement here, so I'll write an SSC post explaining my opinion in depth sometime.

2. Sorry, it seems I misunderstood you. What did you mean by mentioning business's very short timelines and all of the biases that might make them have those?

3. I feel like this is dismissing the magnitude of the problem. Suppose I said that the Democratic Party was a lying scam that was duping Americans into believing it, because many Americans were biased to support the Democratic Party for various demographic reasons, or because their families were Democrats, or because they'd seen campaign ads, etc. These biases could certainly exist. But if I didn't even mention that there might be similar biases making people support the Republican Party, let alone try to estimate which was worse, I'm not sure this would qualify as sociopolitical analysis.

4. I was trying to explain why people in a field might prefer that members of the field address disagreements through internal channels rather than the media, for reasons other than that they have a conspiracy of silence. I'm not sure what you mean by "concrete criticisms". You ... (read more)

  1. Okay, that might be useful. (For a mainstream perspective on this I have agreement with, see The Scams Are Winning).

  2. The argument for most of the post is that there are active distortionary pressures towards short timelines. I mentioned the tech survey in the conclusion to indicate that the distortionary pressures aren't some niche interest, they're having big effects on the world.

  3. Will discuss later in this comment.

  4. By "concrete criticisms" I mean the Twitter replies. I'm studying the implicit assumptions behind these criticisms to see what it says about attitudes in the AI field.

I feel like you could have written an equally compelling essay “proving” bias in favor of long timelines, of Democrats, of Republicans, or of almost anything; if you feel like you couldn’t, I feel like the post didn’t explain why you felt that way.

I think this is the main thrust of your criticism, and also the main thrust of point 3. I do think lots of things are scams, and I could have written about other things instead, but I wrote about short timelines, because I can't talk about everything in one essay, and this one seems important.

I couldn't have written an equally compelling essay

... (read more)
dxu (5y):

I agree that it's difficult (practically impossible) to engage with a criticism of the form "I don't find your examples compelling", because such a criticism is in some sense opaque: there's very little you can do with the information provided, except possibly add more examples (which is time-consuming, and also might not even work if the additional examples you choose happen to be "uncompelling" in the same way as your original examples).

However, there is a deeper point to be made here: presumably you yourself only arrived at your position after some amount of consideration. The fact that others appear to find your arguments (including any examples you used) uncompelling, then, usually indicates one of two things:

  1. You have not successfully expressed the full chain of reasoning that led you to originally adopt your conclusion (owing perhaps to constraints on time, effort, issues with legibility, or strategic concerns). In this case, you should be unsurprised at the fact that other people don't appear to be convinced by your post, since your post does not present the same arguments/evidence that convinced you yourself to believe your position.

  2. You do, in fact, find the raw examp

... (read more)

Scott's post explaining his opinion is here, and is called 'Against Lie Inflation'.

Eli Tyre (3y):
Minor, unconfident, point: I'm not sure that this is true. It seems like it would result in people mostly fallacy-fallacy-ing the other side, each with their own "look how manipulative the other guys are" essays. If the target is thoughtful people trying to figure things out, they'll want to hear about both sides, no?
Dagon (5y):
I think courts spend a fair bit of effort not just on evaluating the strength of a case, but on the standing and impact of the case: not "what else could you argue?", but "why does this complaint matter, to whom?"

IMO, you're absolutely right that there's lots of pressure to make unrealistically short predictions for advances, and this causes a lot of punditry, and academia and industry, to ... what? It's annoying, but who is harmed and who has the ability to improve things?

Personally, I think a timeline for AGI is a poorly-defined prediction - the big question is what capabilities satisfy the "AGI" definition. I think we WILL see more and more impressive performance in aspects of problem-solving and prediction that would have been classified as "intelligence" 50 years ago, but that we probably won't credit with consciousness or generality.
countingtoten (5y):
Then perhaps you should start here.
I don't actually know the extent to which Bernie Madoff actually was conscious that he was lying to people. What I do know is that he ran a pyramid scheme.

The eponymous Charles Ponzi had a plausible arbitrage idea backing his famous scheme; it's not unlikely that he was already in over his head (and therefore desperately trying to make himself believe he'd find some other way to make his investors whole) by the time he found out that transaction costs made the whole thing impractical.

quanticle (5y):
Bernie Madoff pleaded guilty to running a pyramid scheme. As part of his guilty plea he admitted that he stopped trading in the 1990s and had been paying returns out of capital since then.

I think this is an important point to make, since the implicit lesson I'm reading here is that there's no difference between giving false information intentionally ("lying") and giving false information unintentionally ("being wrong"). I would caution that that is a dangerous road to go down, as it just leads to people being silent. I would much rather receive optimistic estimates from AI advocates than receive no estimates at all. I can correct for systematic biases in data. I cannot correct for the absence of data.
jessicata (5y):
Of course there's an important difference between lying and being wrong. It's a question of knowledge states. Unconscious lying is a case where someone says something they unconsciously know to be false/unlikely.

If the estimates are biased, you can end up with worse beliefs than you would by just using an uninformative prior. Perhaps some are savvy enough to know about the biases involved (in part because of people like me writing posts like the one I wrote), but others aren't, and get tricked into having worse beliefs than if they had used an uninformative prior. I am not trying to punish people, I am trying to make agent-based models.

(Regarding Madoff, what you present is suggestive, but it doesn't prove that he was conscious that he had no plans to trade and was deceiving his investors. We don't really know what he was conscious of and what he wasn't.)
Fluttershy (5y):
When someone is systematically trying to convince you of a thing, do not be like, "nice honest report", but be like, "let me think for myself whether that is correct".
but be like, "let me think for myself whether that is correct".

From my perspective, describing something as "honest reporting of unconsciously biased reasoning" seems much more like an invitation for me to think for myself whether it's correct than calling it a "lie" or a "scam".

Calling your opponent's message a lie and a scam actually gets my defenses up that you're the one trying to bamboozle me, since you're using such emotionally charged language.

Maybe others react to these words differently though.

Eli Tyre (3y):
This comment is such a good example of managing to be non-triggering in making the point. It stands out to me amongst all the comments above it, which are at least somewhat heated.
ESRogs (3y):
Thanks!
Fluttershy (5y):
It's a pretty clear way of endorsing something to call it "honest reporting".

Sure, if you just call it "honest reporting". But that was not the full phrase used. The full phrase used was "honest reporting of unconsciously biased reasoning".

I would not call trimming that down to "honest reporting" a case of honest reporting! ;-)

If I claim, "Joe says X, and I think he honestly believes that, though his reasoning is likely unconsciously biased here", then that does not at all seem to me like an endorsement of X, and certainly not a clear endorsement.

I agree with:

  • Most people trying to figure out what's true should be mostly trying to develop views on the basis of public information and not giving too much weight to supposed secret information.
  • It's good to react skeptically to someone claiming "we have secret information implying that what we are doing is super important."
  • Understanding the sociopolitical situation seems like a worthwhile step in informing views about AI.
  • It would be wild if 73% of tech executives thought AGI would be developed in the next 10 years. (And independent of the truth of that claim, people do have a lot of wild views about automation.)

I disagree with:

  • Norms of discourse in the broader community are significantly biased towards short timelines. The actual evidence in this post seems thin and cherry-picked. I think the best evidence is the a priori argument "you'd expect people to be biased towards short timelines, given that short timelines make their work seem more important." I think that's good as far as it goes, but the conclusion is overstated here.
  • "Whistleblowers" about long timelines are ostracized or discredited. Again, the evidence in your post seems thin and cherry-pic
... (read more)
Vika (5y):

Definitely agree that the AI community is not biased towards short timelines. Long timelines are the dominant view, while the short timelines view is associated with hype. Many researchers are concerned about the field losing credibility (and funding) if the hype bubble bursts, and this is especially true for those who experienced the AI winters. They see the long timelines view as appropriately skeptical and more scientifically respectable.

Some examples of statements that AGI is far away from high-profile AI researchers:

Geoffrey Hinton: https://venturebeat.com/2018/12/17/geoffrey-hinton-and-demis-hassabis-agi-is-nowhere-close-to-being-a-reality/

Yann LeCun: https://www.facebook.com/yann.lecun/posts/10153426023477143 https://futurism.com/conscious-ai-decades-away https://www.facebook.com/yann.lecun/posts/10153368458167143

Yoshua Bengio: https://www.lesswrong.com/posts/4qPy8jwRxLg9qWLiG/yoshua-bengio-on-ai-progress-hype-and-risks

Rodney Brooks: https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/ https://rodneybrooks.com/agi-has-been-delayed/

Hi Jessica. Nice post and I agree with many of your points. Certainly, I believe—as you do—that a number of bad actors are wielding the specter of AGI sloppily and irresponsibly, either to consciously defraud people or on account of buying into something that speaks more to the messianic than to science. Perhaps ironically, one frequent debate that I have had with Gary in the past is that while he is vocally critical of exuberance over deep learning, he is himself partial to speaking rosily of nearish-term AGI, and of claiming progress (or being on the verge of progress) towards it. On the other hand, I am considerably more skeptical.

While I enjoyed the post and think we agree on many points, if you don’t mind I would like to respectfully note that I’ve been quoted here slightly out of context and would like to supply that missing context. To be sure, I think your post is written well and with honest intentions, and I know how easy it is to miss some context in Twitter threads [especially as it seems that many tweets have been deleted from this thread].

Regarding my friend Gary Marcus. I like Gary and we communicate fairly regularly, but we don’t always agree on the science or ... (read more)

Thanks a lot for the clarification, and sorry I took the quote out of context! I've added a note linking to this response.

Basically, AI professionals seem to be trying to manage the hype cycle carefully.

Ignorant people tend to be more all-or-nothing than experts. By default, they'll see AI as "totally unimportant or fictional", "a panacea, perfect in every way" or "a catastrophe, terrible in every way." And they won't distinguish between different kinds of AI.

Currently, the hype cycle has gone from "professionals are aware that deep learning is useful" (c. 2013) to "deep learning is AI and it is wonderful in every way and you need some" (c. 2015?) to "maybe there are problems with AI? burn it with fire! Nationalize! Ban!" (c. 2019).

Professionals who are still working on the "deep learning is useful for certain applications" project (which is pretty much where I sit) are quite worried about the inevitable crash when public opinion shifts from "wonderful panacea" to "burn it with fire." When the public opinion crash happens, legitimate R&D is going to lose funding, and that will genuinely be unfortunate. Everyone savvy knows this will happen. Nobody knows exactly when. There are various strategie... (read more)

Eli Tyre (3y):
What? How exactly is this a way of dealing with the hype bubble bursting? It seems like if it bursts for AI, it bursts for "AI governance"? Am I missing something?
Eli Tyre (3y):
Never mind. It seems like I should have just kept reading.

[Note: this, and all comments on this post unless specified otherwise, is written with my 'LW user' hat on, not my 'LW Admin' or 'MIRI employee' hat on, and thus is my personal view instead of the LW view or the MIRI view.]

As someone who thinks about AGI timelines a lot, I find myself dissatisfied with this post because it's unclear what "The AI Timelines Scam" you're talking about is, and I'm worried that if I poke at the bits, it'll feel like a motte and bailey, where it seems quite reasonable to me that '73% of tech executives thinking that the singularity will arrive in <10 years is probably just inflated 'pro-tech' reasoning,' but also it seems quite unreasonable to suggest that strategic considerations about dual-use technology should be discussed openly (or should be discussed openly because tech executives have distorted beliefs). It also seems like there's an argument for weighting urgency in planning that could lead to 'distorted' timelines while being a rational response to uncertainty.

On the first point, I think the following might be a fair description of some thinkers in the AGI s... (read more)

But my simple sense is that openly discussing whether or not nuclear weapons were possible (a technical claim on which people might have private information, including intuitions informed by their scientific experience) would have had costs and it was sensible to be secretive about it. If I think that timelines are short because maybe technology X and technology Y fit together neatly, then publicly announcing that increases the chances that we get short timelines because someone plugs together technology X and technology Y. It does seem like marginal scientists speed things up here.

I agree that there are clear costs to making extra arguments of the form "timelines are short because technology X and technology Y will fit together neatly". However, you could still make public that your timelines are a given probability distribution D, and the reasons which led you to that conclusion are Z% object-level views which you won't share, and (100-Z)% base rate reasoning and other outside-view considerations, which you will share.

I think there are very few costs to declaring which types of reasoning you're most persuaded by. There are some costs to actually making the out... (read more)

I mostly agree with your analysis; especially the point about 1 (that the more likely I think my thoughts are to be wrong, the lower the cost of sharing them).

I understand that there are good reasons for discussions to be private, but can you elaborate on why we'd want discussions about privacy to be private?

Most examples here have the difficulty that I can't share them without paying the costs, but here's one that seems pretty normal:

Suppose someone is a student and wants to be hired later as a policy analyst for governments, and believes that governments care strongly about past affiliations and beliefs. Then it might make sense for them to censor themselves in public under their real name because of potential negative consequences of things they said when young. However, any statement of the form "I specifically want to hide my views on X" made under their real name has similar possible negative consequences, because it's an explicit admission that the person has something to hide.

Currently, people hiding their unpopular opinions to not face career consequences is fairly standard, and so it's not that damning to say "I think this norm is sensible" or maybe even "I follow this norm," but it seems like it would have been particularly awkward to be first person to explicitly argue for that norm.

Tangent:

...if you think both an urgent concern and a distant concern are possible, almost all of your effort goes into the urgent concern instead of the distant concern (as sensible critical-path project management would suggest).

This isn't obvious to me. And I would be interested in a post laying out the argument, in general or in relation to AI.

The standard cite is Owen CB’s paper Allocating Risk Mitigation Across Time. Here’s one quote on this topic:

Suppose we are also unsure about when we may need the problem solved by. In scenarios where the solution is needed earlier, there is less time for us to collectively work on a solution, so there is less work on the problem than in scenarios where the solution is needed later. Given the diminishing returns on work, that means that a marginal unit of work has a bigger expected value in the case where the solution is needed earlier. This should update us towards working to address the early scenarios more than would be justified by looking purely at their impact and likelihood.

[...]

There are two major factors which seem to push towards preferring more work which focuses on scenarios where AI comes soon. The first is nearsightedness: we simply have a better idea of what will be useful in these scenarios. The second is diminishing marginal returns: the expected effect of an extra year of work on a problem tends to decline when it is being added to a larger total. And because there is a much larger time horizon in which to solve it (and in a wealthier world), the problem of

... (read more)
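To make the diminishing-returns point in the quoted passage concrete, here is a minimal illustrative sketch (my own; the square-root value curve, the one-unit-per-year baseline, and the 10/50-year horizons are invented assumptions, not figures from the paper):

```python
# Toy model: the value of preparation has diminishing returns (sqrt) in the
# total work accumulated by the time the solution is needed.

def baseline_value(total_work: float) -> float:
    return total_work ** 0.5  # invented diminishing-returns curve

def marginal_value(years_until_needed: float, extra_work: float = 1.0) -> float:
    """Extra value from one additional unit of work now, assuming a baseline
    of 1 unit of work per year accumulating until the deadline."""
    baseline = years_until_needed * 1.0
    return baseline_value(baseline + extra_work) - baseline_value(baseline)

for horizon in (10, 50):
    print(f"solution needed in {horizon} years: "
          f"marginal value of one extra unit now = {marginal_value(horizon):.3f}")

# Output: roughly 0.154 for the 10-year scenario vs. 0.070 for the 50-year
# scenario, so the marginal unit of work is worth more in the scenarios where
# the solution is needed earlier.
```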

Specifically, 'urgent' is measured by the difference between the time you have and the time it will take to do. If I need the coffee to be done in 15 minutes and the bread to be done in an hour, but if I want the bread to be done in an hour I need to preheat the oven now (whereas the coffee only takes 10 minutes to brew start to finish) then preheating the oven is urgent whereas brewing the coffee has 5 minutes of float time. If I haven't started the coffee in 5 minutes, then it becomes urgent. See critical path analysis and Gantt charts and so on.

This might be worth a post? It feels like it'd be low on my queue but might also be easy to write.
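As a concrete illustration of the float-time calculation described above, here is a minimal sketch in Python (the task names and numbers simply mirror the coffee/bread example; nothing here comes from a real scheduling tool):

```python
# Float (slack) = time until the deadline minus the time the task takes.
# A task with zero or negative float is on the critical path and is urgent now.

tasks = {
    # task: (duration_minutes, deadline_minutes_from_now)
    "preheat oven and bake bread": (60, 60),
    "brew coffee": (10, 15),
}

for name, (duration, deadline) in tasks.items():
    slack = deadline - duration
    status = "urgent now" if slack <= 0 else f"{slack} minutes of float"
    print(f"{name}: {status}")

# Output:
# preheat oven and bake bread: urgent now
# brew coffee: 5 minutes of float
```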

It also seems like there's an argument for weighting urgency in planning that could lead to 'distorted' timelines while being a rational response to uncertainty.

It's important to do the "what are all the possible outcomes and what are the probabilities of each" calculation before you start thinking about weightings of how bad/good various outcomes are.

ESRogs (5y):
Could you say more about what you mean here? I don't quite see the connection between your comment and the point that was quoted.

I understand the quoted bit to be pointing out that if you don't know when a disaster is coming, you might want to prioritize preparing for it coming sooner rather than later (e.g. since there's a future you who will be available to prepare for the disaster if it comes in the future, but you're the only you available to prepare for it if it comes tomorrow). Of course you could make a counter-argument that perhaps you can't do much of anything in the case where disaster is coming soon, but in the long run your actions can compound, so you should focus on long-term scenarios. But the quoted bit is only saying that there's "an argument", and doesn't seem to be making a strong claim about which way it comes out in the final analysis.

Was your comment meaning to suggest the possibility of a counter-argument like this one, or something else? Did you interpret the bit you quoted the same way I did?
Fluttershy (5y):
Basically, don't let your thinking on what is useful affect your thinking on what's likely.
jessicata (5y):
While there are often good reasons to keep some specific technical details of dangerous technology secret, keeping strategy secret is unwise. In this comment, by "public" I mean "the specific intellectual public who would be interested in your ideas if you shared them", not "the general public". (I'm arguing for transparency, not mass-marketing.)

Either you think the public should, in general, have better beliefs about AI strategy, or you think the public should, in general, have worse beliefs about AI strategy, or you think the public should have exactly the level of epistemics about AI strategy that it does.

If you think the public should, in general, have better beliefs about AI strategy: great, have public discussions. Maybe some specific discussions will be net-negative, but others will be net-positive, and the good will outweigh the bad.

If you think the public should, in general, have worse beliefs about AI strategy: unless you have a good argument for this, the public has reason to think you're not acting in the public interest at this point, and are also likely acting against it.

There are strong prior reasons to think that it's better for the public to have better beliefs about AI strategy. To the extent that "people doing stupid things" is a risk, that risk comes from people having bad strategic beliefs. Also, to the extent that "people not knowing what each other is going to do and getting scared" is a risk, the risk comes from people not sharing their strategies with each other. It's common for multiple nations to spy on each other to reduce the kind of information asymmetries that can lead to unnecessary arms races, preemptive strikes, etc.

This doesn't rule out that there may come a time when there are good public arguments that some strategic topics should stop being discussed publicly. But that time isn't now.
dxu (5y):

There are strong prior reasons to think that it's better for the public to have better beliefs about AI strategy.

That may be, but note that the word "prior" is doing basically all of the work in this sentence. (To see this, just replace "AI strategy" with practically any other subject, and notice how the modified statement sounds just as sensible as the original.) This is important because priors can easily be overwhelmed by additional evidence--and insofar as AI researcher Alice thinks a specific discussion topic in AI strategy has the potential to be dangerous, it's worth realizing Alice probably has some specific inside view reasons to believe that's the case. And, if those inside view arguments happen to require an understanding of the topic that Alice believes to be dangerous, then Alice's hands are now tied: she's both unable to share information about something, and unable to explain why she can't share that information.

Naturally, this doesn't just make Alice's life more difficult: if you're someone on the outside looking in, then you have no way of confirming if anything Alice says is true, and you're forced to resort to just trusting Alice. If you don't have a whole lot

... (read more)
jessicata (5y):
If someone's claiming "topic X is dangerous to talk about, and I'm not even going to try to convince you of the abstract decision theory implying this, because this decision theory is dangerous to talk about", I'm not going to believe them, because that's frankly absurd.

It's possible to make abstract arguments that don't reveal particular technical details, such as by referring to historical cases, or talking about hypothetical situations. It's also possible for Alice to convince Bob that some info is dangerous by giving the info to Carol, who is trusted by both Alice and Bob, after which Carol tells Bob how dangerous the info is.

If Alice isn't willing to do any of these things, fine, there's a possible but highly unlikely world where she's right, and she takes a reputation hit due to the "unlikely" part of that sentence. (Note: the alternative hypothesis isn't just direct selfishness; what's more likely is cliquish inner ring dynamics.)

I haven't had time to write my thoughts on when strategy research should and shouldn't be public, but I note that this recent post by Spiracular touches on many of the points that I would touch on in talking about the pros and cons of secrecy around infohazards.

The main claim that I would make about extending this to strategy is that strategy implies details. If I have a strategy that emphasizes that we need to be careful around biosecurity, that implies technical facts about the relative risks of biology and other sciences.

For example, the US developed the Space Shuttle with a justification that didn't add up (ostensibly it would save money, but it was obvious that it wouldn't). The Soviets, trusting in the rationality of the US government, inferred that there must be some secret application for which the Space Shuttle was useful, and so developed a clone (so that when the secret application was unveiled, they would be able to deploy it immediately instead of having to build their own shuttle from scratch then). If in fact an application like that had existed, it seems likely that the Soviets could have found it by reasoning through "what do they know that I don't?" when they might not have found it by reasoning from scratch.

For reference, the Gary Marcus tweet in question is:

“I’m not saying I want to forget deep learning... But we need to be able to extend it to do things like reasoning, learning causality, and exploring the world .” - Yoshua Bengio, not unlike what I have been saying since 2012 in The New Yorker.

I think Zack Lipton objected to this tweet because it appears to be trying to claim priority. (You might have thought it's ambiguous whether he's claiming priority, but he clarifies in the thread: "But I did say this stuff first, in 2001, 2012 etc?") The tweet and his writings more generally imply that people in the field have recently changed their view to agree with him, but many people in the field object strongly to this characterization.

The tweet is mostly just saying "I told you so." That seems like a fine time for people to criticize him about making a land grab rather than engaging on the object level, since the tweet doesn't have much object-level content. For example:

"Saying it louder ≠ saying it first. You can't claim credit for differentiating between reasoning and pattern recognition." [...] is essentially a claim that everybody k
... (read more)

I also read OP as claiming that Yann LeCun is defending the field against critiques that AGI isn’t near. My current from-a-distance impression is indeed that LeCun wants to protect the field from aggressive/negative speculation in the news / on Twitter, but that he definitely cannot be accused of scamming people about how near AGI is. Quote:

I keep repeating this whenever I talk to the public: we’re very far from building truly intelligent machines. All you’re seeing now — all these feats of AI like self-driving cars, interpreting medical images, beating the world champion at Go and so on — these are very narrow intelligences, and they’re really trained for a particular purpose. They’re situations where we can collect a lot of data.

So for example, and I don’t want to minimize at all the engineering and research work done on AlphaGo by our friends at DeepMind, but when [people interpret the development of AlphaGo] as significant process towards general intelligence, it’s wrong. It just isn’t. it’s not because there’s a machine that can beat people at Go, there’ll be intelligent robots running round the streets. It doesn’t even help with that problem, it’s completely separate. Ot

... (read more)
I also read OP as claiming that Yann LeCun is defending the field against critiques that AGI isn’t near.

Same. In particular, I read the "How does the AI field treat its critics" section as saying that "the AI field used to criticize Dreyfus for saying that AGI isn't near, just as it now seems to criticize Marcus for saying that AGI isn't near". But in the Dreyfus case, he was the subject of criticism because the AI field thought that he was wrong and AGI was close. Whereas Marcus seems to be the subject of criticism because the AI field thinks he's being dishonest in claiming that anyone seriously thinks AGI to be close.

Whereas Marcus seems to be the subject of criticism because the AI field thinks he’s being dishonest in claiming that anyone seriously thinks AGI to be close.

Note, this looks like a dishonest "everybody knows" flip, from saying or implying X to saying "everybody knows not-X", in order to (either way) say it's bad to say not-X. (Clearly, it isn't the case that nobody believes AGI to be close!)

(See Marcus's medium article for more details on how he's been criticized, and what narratives about deep learning he takes issue with)

See Marcus's medium article for more details on how he's been criticized

Skimming that post it seems like he mentions two other incidents (beyond the thread you mention).

First one:

Gary Marcus: @Ylecun Now that you have joined the symbol-manipulating club, I challenge you to read my arxiv article Deep Learning: Critical Appraisal carefully and tell me what I actually say there that you disagree with. It might be a lot less than you think.
Yann LeCun: Now that you have joined the gradient-based (deep) learning camp, I challenge you to stop making a career of criticizing it without proposing practical alternatives.
Yann LeCun: Obviously, the ability to criticize is not contingent on proposing alternatives. However, the ability to get credit for a solution to a problem is contingent on proposing a solution to the problem.

Second one:

Gary Marcus: Folks, let’s stop pretending that the problem of object recognition is solved. Deep learning is part of the solution, but we are obviously still missing something important. Terrific new examples of how much is still to be solved here: #AIisHarderThanYouThink
Critic: Nobody is pretending it is solved. However, some people are claiming tha
... (read more)
jessicata (5y):
Ok, I added a note to the post to clarify this.

Hi Paul. Thanks for the lucid analysis, and for your generosity with your time to set the record straight here.

in part because I don't have much to say on this issue that Gary Marcus hasn't already said.

It would be interesting to know which particular arguments made by Gary Marcus you agree with, and how you think they relate to arguments about timelines.

In this preliminary doc, it seems like most of the disagreement is driven by saying there is a 99% probability that training a human-level AI would take more than 10,000x more lifetimes than AlphaZero took games of go (while I'd be at more like 50%, and have maybe 5-10% chance that it will take many fewer lifetimes). Section 2.0.2 admits this is mostly guesswork, but ends up very confident the number isn't small. It's not clear where that particular number comes from; the only evidence gestured at is "the input is a lot bigger, so it will take a lot more lifetimes", which doesn't seem to agree with our experience so far or have much conceptual justification. (I guess the point is that the space of functions is much bigger? But if comparing the size of the space of functions, why not directly count parameters?) And why is this a lower bound?

Overall this seems like a place you disagree confidently with many people who entertain shorter timelines, and it seems unrelated to anything Gary Marcus says.

I agree with essentially all of the criticisms of deep learning in this paper, and I think these are most relevant for AGI:

  • Deep learning is data hungry, reflecting poor abstraction learning
  • Deep learning doesn't transfer well across domains
  • Deep learning doesn't handle hierarchical structure
  • Deep learning has trouble with logical inference
  • Deep learning doesn't learn causal structure
  • Deep learning presumes a stable world
  • Deep learning requires problem-specific engineering

Together (and individually), these are good reasons to expect "general strategic action in the world on a ~1-month timescale" to be a much harder domain for deep learning to learn how to act in than "play a single game of Go", hence the problem difficulty factor.

Exercise for those (like me) who largely agreed with the criticism that the usage of "scam" in the title was an instance of the noncentral fallacy (drawing the category boundaries of "scam" too widely in a way that makes the word less useful): do you feel the same way about Eliezer Yudkowsky's "The Two-Party Swindle"? Why or why not?

I like this question.

To report my gut reaction, not necessarily endorsed yet. I am sharing this to help other people understand how I feel about this, not as a considered argument:

I have a slight sense of ickyness, though a much weaker one. "Swindle" feels less bad to me, though I also haven't really heard the term used particularly much in recent times, so its associations feel a lot less clear to me. I think I would have reacted less badly to the OP had it used "swindle" instead of "scam", but I am not super confident.

The other thing is that the case for the "two party swindle" feels a lot more robust than the case for the "AI timelines scam". I don't think you should never call something a scam or swindle, but if you do you should make really sure it actually is. Though I do still think there is a noncentral fallacy thing going on with calling it a swindle (though again it feels less noncentral for "swindle" instead of "scam").

The third thing is that the word "swindle" only shows up in the title of the post, and is not reinforced with words like "fraud" or trying to accuse ... (read more)

Ruby (5y):
Agree, good question. I was going to say much the same. I think it kind of is a noncentral fallacy too, but not one that strikes me as problematic. Perhaps I'd add that I feel the argument/persuasion being made by Eliezer doesn't really rest on trying to import my valence towards "swindle" over to this. I don't have that much valence to a funny obscure word. I guess it has to be said that it's a noncentral noncentral fallacy.
ESRogs (5y):
I see two links in your comment that are both linking to the same place -- did you mean for the first one (with the text: "the criticism that the usage of "scam" in the title was an instance of the noncentral fallacy") to link to something else?
Zack_M_Davis (5y):
Yes, thank you; the intended target was the immortal Scott Alexander's "Against Lie Inflation" (grandparent edited to fix). I regret the error.

I almost wrote a post today with roughly the following text. It seems highly related so I guess I'll write a short version.

The ability to sacrifice today for tomorrow is one of the hard skills all humans must learn. To be able to make long-term plans is not natural. Most people around me seem to be able to think about today and the next month, and occasionally the next year (job stability, housing stability, relationship stability, etc), but very rarely do I see anyone acting on plans with timescales of decades (or centuries). Robin Hanson has written about how humans naturally think in a very low-detail and unrealistic mode in far (as opposed to near) thinking, and I know that humans have a difficult time with scope sensitivity.
It seems to me that it is a very common refrain in the community to say "but timelines are short" in response to someone's long-term thinking or proposal, to suggest somewhere in the 5-25 year range until no further work matters because an AGI Foom has occurred. My epistemic state is that even if this is true (which it very well may be), most people who are thinking this way are not in fact making 10 year plans. They are continuing to mak
... (read more)
There is a two-step form of judo required to first learn to make 50 year plans and then secondarily restrict yourself to shorter-term plans. It is not one move, and I often see "but timelines are short" used to prevent someone from learning the first move.

Is there a reason you need to do 50 year plans before you can do 10 year plans? I'd expect the opposite to be true.

(I happen to currently have neither a 50 nor 10 year plan, apart from general savings, but this is mostly because it's... I dunno kinda hard and I haven't gotten around to it or something, rather than anything to do with timelines.)

Is there a reason you need to do 50 year plans before you can do 10 year plans?

No.

It’s often worth practising on harder problems to make the smaller problems second nature, and I think this is a similar situation. Nowadays I do more often notice plans that would take 5+ years to complete (that are real plans with hopefully large effect sizes), and I’m trying to push it higher.

Thinking carefully about how things are built that have lasted decades or centuries (science, the American constitution, etc) I think is very helpful for making shorter plans that still require coordination of 1000s of people over 10+ years.

Relatedly I don’t think anyone in this community working on AI risk should be devoting 100% of their probability mass to things in the <15 year scale, and so should think about plans that fail gracefully or are still useful if the world is still muddling along at relatively similar altitudes in 70 years.

Raemon (6 points, 5y)
Ah, that all makes sense.
Eli Tyre (2 points, 3y)
I think you do need to learn how to make plans that can actually work, at all, before you learn how to make plans with very limited resources. And I think that people fall into the habit of making "plans" that they don't inner-sim actually leading to success, because they condition themselves into thinking that things are desperate and that the best action will only be the best action "in expected value", e.g. that the "right" action should look like a moonshot. This seems concerning to me. It seems like you should be, first and foremost, figuring out how you can get any plan that works at all, and then secondarily, trying to figure out how to make it work in the time allotted. Actual, multi-step strategy shouldn't mostly feel like "thinking up some moonshots".
jessicata (8 points, 5y)
Strongly agreed with what you have said. See also the psychology of doomsday cults.

Thinking more, my current sense is that this is not an AI-specific thing, but a broader societal problem where people fail to think long-term. Peter Thiel very helpfully writes about it as a distinction between “definite” and “indefinite” attitudes to the future: in the former the future is understandable and lawful, and in the latter it will happen no matter what you do (fatalism). My sense is that when I have told myself to focus on short timelines, where it’s been unhealthy it’s been a general excuse for not having to look at hard problems.

[purely personal view]

It seems quite easy to imagine similarly compelling socio-political and subconscious reasons why people working on AI could be biased against short AGI timelines. For example:

  • short timeline estimates make the broader public agitated, which may lead to state regulation or similar interference [historical examples: industries trying to suppress info about risks]
  • researchers mostly want to work on technical problems, instead of thinking about nebulous future impacts of their work; putting more weight on short timelines would force some people to pause and think about responsibility, or suffer some cognitive dissonance, which may be unappealing/unpleasant for S1 reasons [historical examples: physicists working on nuclear weapons]
  • fears that claims about short timelines would get pattern-matched as doomsday fear-mongering / sensationalism / sci-fi movie material ...

While I agree motivated reasoning is a serious concern, I don't think it's clear how the incentives sum up. If anything, claims like "AGI is unrealistic or very far away, however practical applications of narrow AI will be profound" seem to capture most of the purported benefits (AI is important) and avoid the negatives (no need to think).


Planned summary:

This post argues that AI researchers and AI organizations have an incentive to predict that AGI will come soon, since that leads to more funding, and so we should expect timeline estimates to be systematically too short. Besides the conceptual argument, we can also see this in the field's response to critics: both historically and now, criticism is often met with counterarguments based on "style" rather than engaging with the technical meat of the criticism.

Planned opinion:

I agree with the conceptual argument, and I think it does hold in practice, quite strongly. I don't really agree that the field's response to critics implies that they are biased towards short timelines -- see these comments. Nonetheless, I'm going to do exactly what this post critiques, and say that I put significant probability on short timelines, but not explain my reasons (because they're complicated and I don't think I can convey them, and certainly can't convey them in a small number of words).

orthonormal (8 points, 5y)
Is there any group of people who reliably don't do this? Is there any indication that AI researchers do this more often than others?

¯\_(ツ)_/¯

Note that even if AI researchers do this similarly to other groups of people, that doesn't change the conclusion that there are distortions that push towards shorter timelines.

I see clear parallels with the treatment of Sabine Hossenfelder blowing the whistle on the particle physics community pushing for a new $20B particle accelerator. She has been going through the same adversity as any high-profile defector from a scientific community, and the arguments against her are the same ones you are listing.

This is a cogent, if sparse, high-level analysis of the epistemic distortions around megaprojects in AI and other fields.

It points out that projects like the Human Brain Project and the Fifth Generation Computer Systems project made massive promises, raised around a billion dollars, and totally flopped. I don't expect this was a simple error; I expect there were indeed systematic epistemic distortions involved, perpetuated at all levels.

It points out that similarly scaled projects are being evaluated today involving various major AI companies globally, and that the same sorts of distortionary anti-epistemic tendencies can still be observed. Critics of the ideas that are currently getting billions of dollars (deep learning leading to AGI) are met with replies that systematically exclude the possibility of 'stop, halt, and catch fire', and instead only include 'why are you talking about problems and not solutions' and 'do this through our proper channels within the field and not in this unconstrained public forum', which are clearly the sorts of replies you'd expect to see when a megaproject is protecting itself.

The post briefly also addresses why it's worth modeling the sociopolitical argume... (read more)

Hubert Dreyfus, probably the most famous historical AI critic, published "Alchemy and Artificial Intelligence" in 1965, which argued that the techniques popular at the time were insufficient for AGI.

That is not at all what the summary says. Here is roughly the same text from the abstract:

Early successes in programming digital computers to exhibit simple forms of intelligent behavior, coupled with the belief that intelligent activities differ only in their degree of complexity, have led to the conviction that the information processing underlying any cognitive performance can be formulated in a program and thus simulated on a digital computer. Attempts to simulate cognitive processes on computers have, however, run into greater difficulties than anticipated. An examination of these difficulties reveals that the attempt to analyze intelligent behavior in digital computer language systematically excludes three fundamental human forms of information processing (fringe consciousness, essence/accident discrimination, and ambiguity tolerance). Moreover, there are four distinct types of intelligent activity, only two of which do not presuppose these human forms of information processi

... (read more)
Benquo (4 points, 5y)
It seems to me like that's pretty much what those quotes say - that there wasn't, at that time, algorithmic progress sufficient to produce anything like human intelligence.
countingtoten (9 points, 5y)
Again, he plainly says more than that. He's challenging "the conviction that the information processing underlying any cognitive performance can be formulated in a program and thus simulated on a digital computer." He asserts as fact that certain types of cognition require hardware more like a human brain. Only two out of four areas, he claims, "can therefore be programmed." In case that's not clear enough, here's another quote of his: He does not say that better algorithms are needed for Area IV, but that digital computers must fail. He goes on to falsely predict that clever search together with "newer and faster machines" cannot produce a chess champion. AFAICT this is false even if we try to interpret him charitably, as saying more human-like reasoning would be needed.
Benquo (2 points, 5y)
The doc Jessicata linked has page numbers but no embedded text. Can you give a page number for that one? Unlike your other quotes, it at least seems to say what you're saying it says. But it appears to start mid-sentence, and in any case I'd like to read it in context.
countingtoten (5 points, 5y)
Assuming you mean the last blockquote, that would be the Google result I mentioned which has text, so you can go there, press Ctrl-F, and type "must fail" or similar. You can also read the beginning of the PDF, which talks about what can and can't be programmed while making clear this is about hardware and not algorithms. See the first comment in this family for context.
jessicata (2 points, 5y)
And also that the general methodology/assumptions/paradigm of the time was incapable of handling important parts of intelligence.

The missile gap was a lie by the US Air Force to justify building more nukes, by falsely claiming that the Soviet Union had more nukes than the US

This statement is not supported by the link used as a reference. Was it a lie? The reference speaks to failed intelligence and political manipulation using the perceived gap. The phrasing above suggests conspiracy.

jessicata (2 points, 1y)
This implies an intentional, coordinated falsehood ("kept the American public in the dark", plus denying relevant true information).
[anonymous] (1 point, 1y)
So, by presenting a "potential" number of missiles as "the Soviets had this many", what were the consequences? It led to more funding for the USA to build weapons, which in turn caused the Soviets to build more? Or did it deter a "sneak attack" in which the Soviets could secretly build far more missiles and win a nuclear war? Basically, until the USA had enough arms for "assured destruction", this was a risk. A more realistic view of how many missiles the Soviets probably had extends the number of years during which there isn't enough funding to pay for "assured destruction". Then again, maybe the scenario where, by the 1980s, each side possessed civilization-ending numbers of missiles could have been avoided. My point here is that a policy of honesty may not really work in a situation where the other side is a bad actor.
jessicata (4 points, 1y)
Read The Doomsday Machine. The US Air Force is way less of a defensive or utilitarian actor than you are implying, e.g. for a significant period of time the only US nuclear war plan (which was hidden from Kennedy) involved bombing as many Chinese cities as possible even if it was Russia who had attacked the US. (In general I avoid giving the benefit of the doubt to dishonest violent criminals even if they call themselves "the government", but here we have extra empirical data)
[anonymous] (1 point, 1y)
I am not arguing that. I know the government does bad things, and I have read other books on that era. I was really just noting that the consequences of an alternate policy might not have been any better.

This post was very helpful to me when I read it, in terms of engaging more with this hypothesis. The post isn't very rigorous and I think doesn't support the hypothesis very well, but nonetheless it was pretty helpful to engage with the perspective (I also found the comments valuable), so I'm nominating it for its positive effects for me personally.

I think a lot of the animosity that Gary Marcus drew was less because some of his points were wrong, and more because he didn't seem to have a full grasp of the field before criticizing it. Here's an r/machinelearning thread on one of his papers. Granted, r/ML is not necessarily representative of the AI community, especially now, but you see some people agreeing with some of his points, and others claiming that he's not up to date with current ML research. I would recommend people take a look at the thread to judge for themselves.

I'm also not inclined to take a

... (read more)

This was important to the discussions around timelines at the time, back when the talk about timelines felt central. This felt like it helped give me permission to no longer consider them as central, and to fully consider a wide range of models of what could be going on. It helped make me more sane, and that's pretty important.

It was also important for the discussion about the use of words and the creation of clarity. There's been a long issue of exactly when and where to use words like "scam" and "lie" to describe things - when is it accurate, when is it ... (read more)

I liked the comments on this post more than I liked the post itself. As Paul commented, there's as much criticism of short AGI timelines as there is of long AGI timelines; and as Scott pointed out, this was an uncharitable take on AI proponents' motives.

Without the context of those comments, I don't recommend this post for inclusion.

Ben Pace (4 points, 3y)
My guess is we agree that talk of being able to build AGI soon has led to substantially increased funding in the AGI space (e.g. it was involved in the acquisition of DeepMind and the $1 billion from Microsoft to OpenAI)? Naturally it's not the sole reason for funding, but I imagine it was a key part of the value proposition, given that both of them describe themselves as 'building AGI'. Given that, I'm curious to what extent you think that such talk, if it was responsible, has been open for scrutiny, or whether it's been systematically defended from skeptical analysis?
orthonormal (4 points, 3y)
I agree about the effects of deep learning hype on deep learning funding, though I think very little of it has been AGI hype; people at the top level had been heavily conditioned to believe we were/are still in the AI winter of specialized ML algorithms to solve individual tasks. (The MIRI-sphere had to work very hard, before OpenAI and DeepMind started doing externally impressive things, to get serious discussion on within-lifetime timelines from anyone besides the Kurzweil camp.) Maybe Demis was strategically overselling DeepMind, but I expect most people were genuinely over-optimistic (and funding-seeking) in the way everyone in ML always is.

At the time, I argued pretty strongly against parts of this post, and I still think my points are valid and important. That said, I think in retrospect this post had a large impact; I think it kicked off several months of investigation of how language works and what discourse norms should be in the presence of consequences. I'm not sure it was the best of 2019, but it seems necessary to make sense of 2019, or properly trace the lineage of ideas?

Which is not to say that modeling such technical arguments is not important for forecasting AGI. I certainly could have written a post evaluating such arguments, and I decided to write this post instead, in part because I don’t have much to say on this issue that Gary Marcus hasn’t already said.

Is he an AI researcher though? Wikipedia says he's a psychology professor, and his arXiv article criticizing deep learning doesn't seem to have much math. If you have technical arguments, maybe you could state them?

Yes he is, see his publications. (For technical arguments, see my response to Paul.)

The key sentiment of this post that I currently agree with:

  • There's a bit of a short timelines "bug" in the Berkeley rationalist scene, where short timelines have become something like the default assumption (or at least are not unusual). 
  • There don't seem to be strong, public reasons for this view. 
  • It seems like most people who are sympathetic to short timelines are sympathetic to them mainly as the result of a social proof cascade. 
  • But this is obscured somewhat, because some folks whose opinions are being trusted don't show their work (rightly or wrongly), because of info security considerations.
habryka (5 points, 3y)
I think Gwern has now made a relatively decent public case? Or at least I feel substantially less confused about the basic arguments, which I think I can relatively accurately summarize as "sure seems like there is a good chance just throwing more compute at the problem will get us there", along with, of course, a lot of detail about why that might be the case.
Daniel Kokotajlo (3 points, 3y)
Is it really true that most people sympathetic to short timelines are so mainly due to a social proof cascade? I don't know any such person myself; the short-timelines people I know are either people who have thought about it a ton and developed detailed models, or people who basically just got super excited about GPT-3 and recent AI progress. The people who like to defer to others pretty much all have medium or long timelines, in my opinion, because that's the respectable/normal thing to think.

This reminds me of related questions around slowing down AI, discussing AI with a mass audience, or building public support for AI policy (e.g. https://forum.effectivealtruism.org/posts/pm6Mn4a3h4oekCCay/two-strange-things-about-ai-safety-policy, http://www.zachgroff.com/2017/08/does-ai-safety-and-effective-altruist.html). A lot of the arguments against doing these things have this same motivation that we are concerned about the others for reasons that are somewhat abstruse. Where would these "sociopolitical" considerations get us on these questi... (read more)

Yeah, 10/10 agreement on this. Like it'd be great if you could "just" donate to some AI risk org and get the promised altruistic benefits, but if you actually care about "stop all the fucking suffering I can", then you should want to believe AI risk research is a scam if it is a scam.

At which point you go oh fuck, I don't have a good plan to save the world anymore. But not having a better plan shouldn't change your beliefs on whether AI risk research is effective.

Flaglandbase (1 point, 3y)
Quite a few folks "believe" in a rapid AI timeline because it's their only hope to escape a horrible fate. They may have some disease that's turning their brain into mush from the inside out, and know there is exactly zero chance the doctors will figure something out within the next century. Only superhuman intelligence could save them. My impression is that technological progress is MUCH slower than most people realize. 

Can you name any of these people? I can't think of anyone who's saying, "I'm dying, so let's cure death / create AGI now". Mostly what people do is get interested in cryonics.

Adele Lopez (6 points, 3y)
Really? My impression is that rapid AI timelines make things increasingly "hopeless" because there's less time to try to prevent getting paperclipped, and that this is the default view of the community.
Teerth Aloke (2 points, 3y)
I tilt towards a rapid timeline - but I promise, my brain is not turning into mush. I have no terminal disease.
Teerth Aloke (1 point, 3y)
Ageing?