After critical event W happens, they still won't believe you

by Eliezer Yudkowsky · 2 min read · 13th Jun 2013 · 107 comments


Forecasting & Prediction · Futurism

In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in which you reason, "After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z."

Example 1:  "After a 2-year-old mouse is rejuvenated to allow 3 years of additional life, society will realize that human rejuvenation is possible, turn against deathism as the prospect of lifespan / healthspan extension starts to seem real, and demand a huge Manhattan Project to get it done."  (EDIT:  This has not happened, and the hypothetical is mouse healthspan extension, not anything cryonic.  It's being cited because this is Aubrey de Grey's reasoning behind the Methuselah Mouse Prize.)

Alternative projection:  Some media brouhaha.  Lots of bioethicists acting concerned.  Discussion dies off after a week.  Nobody thinks about it afterward.  The rest of society does not reason the same way Aubrey de Grey does.

Example 2:  "As AI gets more sophisticated, everyone will realize that real AI is on the way and then they'll start taking Friendly AI development seriously."

Alternative projection:  As AI gets more sophisticated, the rest of society can't see any difference between the latest breakthrough reported in a press release and that business earlier with Watson beating Ken Jennings or Deep Blue beating Kasparov; it seems like the same sort of press release to them.  The same people who were talking about robot overlords earlier continue to talk about robot overlords.  The same people who were talking about human irreproducibility continue to talk about human specialness.  Concern is expressed over technological unemployment the same as today or Keynes in 1930, and this is used to fuel someone's previous ideological commitment to a basic income guarantee, inequality reduction, or whatever.  The same tiny segment of unusually consequentialist people are concerned about Friendly AI as before.  If anyone in the science community does start thinking that superintelligent AI is on the way, they exhibit the same distribution of performance as modern scientists who think it's on the way, e.g. Hugo de Garis, Ben Goertzel, etc.

Consider the situation in macroeconomics.  When the Federal Reserve dropped interest rates to nearly zero and started printing money via quantitative easing, we had some people loudly predicting hyperinflation just because the monetary base had, you know, gone up by a factor of 10 or whatever it was.  Which is kind of understandable.  But still, a lot of mainstream economists (such as the Fed) thought we would not get hyperinflation, the implied spread on inflation-protected Treasuries and numerous other indicators showed that the free market thought we were due for below-trend inflation, and then in actual reality we got below-trend inflation.  It's one thing to disagree with economists, another thing to disagree with implied market forecasts (why aren't you betting, if you really believe?), but you can still do it sometimes; but when conventional economics, market forecasts, and reality all agree on something, it's time to shut up and ask the economists how they knew.  I had some credence in inflationary worries before that experience, but not afterward...

So what about the rest of the world?  In the heavily scientific community you live in, or if you read econblogs, you will find that a number of people actually have started to worry less about inflation and more about sub-trend nominal GDP growth.  You will also find that right now these econblogs are having worry-fits about the Fed prematurely exiting QE and choking off the recovery, because the elderly senior people with power have updated more slowly than the econblogs.  And in larger society, if you look at what happens when Congresscritters question Bernanke, you will find that they are all terribly, terribly concerned about inflation.  Still.  The same as before.

Some econblogs are very harsh on Bernanke because the Fed did not print enough money, but when I look at the kind of pressure Bernanke was getting from Congress, he starts to look to me like something of a hero just for following conventional macroeconomics as much as he did.

That issue is a hell of a lot more clear-cut than the medical science for human rejuvenation, which in turn is far more clear-cut ethically and policy-wise than issues in AI.

After event W happens, a few more relatively young scientists will see the truth of proposition X, and the larger society won't be able to tell a damn difference.  This won't change the situation very much: there are probably already some scientists who endorse X, since X is probably pretty predictable even today if you're unbiased.  The scientists who see the truth of X won't all rush to endorse Y, any more than current scientists who take X seriously all rush to endorse Y.  As for people in power lining up behind your preferred policy option Z, forget it; they're old and set in their ways, and Z is relatively novel without a large existing constituency favoring it.  Expect W to be used as argument fodder to support conventional policy options that already have political force behind them, and for Z to not even be on the table.



I do tend to think that Aubrey de Grey's argument holds some water. That is, it's not so much general society that will be influenced as wealthy elites. Elites seem more likely to update when they read about a 2x mouse. I suppose the Less Wrong response to this argument would be: how many of them are signed up for cryonics? But cryonics is a lot harder to believe than life extension. You need to buy pattern identity theory and nanotechnology and Hanson's value of life calculations. In the case of LE, all you have to believe is that the techniques that worked on the mouse will, likely, be useful in treating human senescence. And anyway, Aubrey hopes to first convince the gerontology community and then the public at large. This approach has worked for climate science and a similar approach may work for AI risk.

I suppose the Less Wrong response to this argument would be: how many of them are signed up for cryonics?

LessWrongers, and high-karma LessWrongers, on average seem to think cryonics won't work, with mean odds of 5:1 or more against cryonics (although the fact that they expect it to fail doesn't stop an inordinate proportion from trying it for the expected value).

On the other hand, if mice or human organs were cryopreserved and revived without brain damage or loss of viability, people would probably become a lot more (explicitly and emotionally) confident that there is no severe irreversible information loss. Much less impressive demonstrations have been enough to create huge demand to enlist in clinical trials before.

5 · Paul Crowley · 7y · That number is the total probability of being revived, taking into account x-risk among other things. It would be interesting to know how many people think it's likely to be technically feasible to revive future cryo patients.
1 · Dentin · 7y · X-risk is a fairly unimportant factor in my survivability equation. Odds of dying due to accident and/or hardware failure trump it by a substantial margin. At my age, hardware failure is my most probable death mode. That's why I have the Alcor paperwork in progress even as we speak, and why I'm donating a substantial fraction of my income to SENS and not CFAR. It's not that X-risk is unimportant. It's that it's not of primary importance to me, and I suspect that a lot of LW people hold the same view.
1 · Paul Crowley · 7y · When you say "hardware failure", could you give an example of the sort of thing you have in mind?
2 · Leonhart · 7y · I imagine he means cancer, heart disease, &c.
1 · GeraldMonroe · 7y · Alas, cryonics may be screwed with regard to this. It simply may not be physically possible to freeze something as large and delicate as a brain without enough damage to prevent you from thawing it and having it still work. This of course is no big deal if you just want the brain for the pattern it contains. You can computationally reverse the cracks and, to a lesser extent, some of the more severe damage, the same way we can computationally reconstruct a shredded document. The point is, I think in terms of relative difficulty, the order is:
1. Whole brain emulation
2. Artificial biological brain/body
3. Brain/body repaired via MNT
4. Brain revivable with no repairs.
Note that even the "easiest" item on this list is extremely difficult.

For big jumpy events, look at the reactions to nuclear chain reactions, Sputnik, ENIGMA, penicillin, the Wright brothers, polio vaccine...

Then consider the process of gradual change with respect to the Internet, solar power, crop yields...

6 · jefftk · 7y · Some amount of bias (selection? availability?) there, in that part of why your first-paragraph examples come to mind is that they did make major news. There were probably others that were mostly ignored and so are much harder to think of. (Invention of the bifurcated needle, used for smallpox inoculations? What else has turned out to be really important in retrospect?)

You mention Deep Blue beating Kasparov. This sounds like a good test case. I know that there were times when it was very controversial whether computers would ever be able to beat humans in chess - Wikipedia gives the example of a 1960s MIT professor who claimed that "no computer program could defeat even a 10-year-old child at chess". And it seems to me that by the time Deep Blue beat Kasparov, most people in the know agreed it would happen someday even if they didn't think Deep Blue itself would be the winner. A quick Google search doesn't pull up enough data to allow me to craft a full narrative of "people gradually became more and more willing to believe computers could beat grand masters with each incremental advance in chess technology", but it seems like the sort of thing that probably happened.

I think the economics example is a poor analogy, because it's a question about laws and not a question of gradual creeping recognition of a new technology. It also ignores one of the most important factors at play here - the recategorization of genres from "science fiction nerdery" to "something that will happen eventually" to "something that might happen in my lifetime and I should prepare for it."

I know that there were times when it was very controversial whether computers would ever be able to beat humans in chess

Douglas Hofstadter being one on the wrong side: well, to be exact, he predicted (in his book GEB) that any computer that could play superhuman chess would necessarily have certain human qualities, e.g., if you ask it to play chess, it might reply, "I'm bored of chess; let's talk about poetry!" which IMHO is just as wrong as predicting that computers would never beat the best human players.

I thought you were exaggerating there, but I looked it up in my copy and he really did say that (pp. 684-686):

To conclude this Chapter, I would like to present ten "Questions and Speculations" about AI. I would not make so bold as to call them "Answers" - these are my personal opinions. They may well change in some ways, as I learn more and as AI develops more...

Question: Will there be chess programs that can beat anyone?

Speculation: No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. "Do you want to play chess?" "No, I'm bored with chess. Let's talk about poetry." That may be the kind of dialogue you could have with a program that could beat everyone. That is because real intelligence inevitably depends on a total overview capacity - that is, a programmed ability to "jump out of the system", so to speak - at least roughly to the extent that we have that ability. Once that is present, you can't contain the program; it's gone beyond that certain critical point, and you just have to face

... (read more)
7 · Nominull · 7y · I suspect the thermostat is closer to the human mind than his conception of the human mind is.
0 · Houshalter · 7y · To be fair, people expected a chess-playing computer to play chess the same way a human does: thinking about the board abstractly, learning from experience, and all that. We still haven't accomplished that. Chess programs work by inefficiently computing every possible move, so many moves ahead, which seemed impossible before computers got exponentially faster. And even then, Deep Blue was a specialized supercomputer and had to use a bunch of little tricks and optimizations to get it just barely past human grandmaster level.
4 · BlueSun · 7y · I was going to point that out too, as I think it demonstrates an important lesson. They were still wrong. Almost all of their thought processes were correct, but they still got to the wrong result because they looked at solutions too narrowly. It's quite possible that many of the objections to AI, rejuvenation, and cryonics are correct, but if there's another path they're not considering, we could still end up with the same result. Just like a chess program doesn't think like a human but can still beat one, and an airplane doesn't fly like a bird but can still fly.
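The "computing every possible move, so many moves ahead" approach Houshalter describes is minimax search. A hypothetical minimal sketch (not Deep Blue's actual code; real engines add alpha-beta pruning, transposition tables, and heavily tuned evaluation functions):

```python
# Minimal minimax sketch: exhaustively search a game tree to a fixed depth,
# assuming the opponent always replies with their best move.
# Toy dictionary-based tree, purely illustrative.

def minimax(node, depth, maximizing):
    children = node.get("children")
    if depth == 0 or not children:
        return node["value"]  # static evaluation at a leaf
    scores = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(scores) if maximizing else min(scores)

# The maximizing player picks the branch whose guaranteed (worst-case) score is best.
tree = {"children": [
    {"value": 0, "children": [{"value": 3}, {"value": 5}]},
    {"value": 0, "children": [{"value": -2}, {"value": 9}]},
]}
print(minimax(tree, 2, True))  # prints 3: branch one guarantees at least 3
```

The cost of this search grows exponentially with depth, which is why it "seemed impossible before computers got exponentially faster" and why real programs prune aggressively.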

Yes, people now believe that computers can beat people at chess.

6 · CarlShulman · 7y · I.e., they didn't update to expecting HAL immediately after, and they were right for solid reasons. But I think that the polls, and more so polls of experts, do respond to advancements in technology, e.g. on self-driving cars or solar power.
4 · Eliezer Yudkowsky · 7y · Do we have any evidence that they updated to expecting HAL in the long run? Normatively, I agree that ideal forecasters shouldn't be doing their updating on press releases, but people sometimes argue that press release W will cause people to update to X when they didn't realize X earlier.
2 · Thomas · 7y · It was on our national television a few months ago. Kasparov was here; he opened an international chess center for young players in Maribor. He gave an interview, and among other things he told us how fishy the Deep Blue victory was, and not real in fact. At least half of the population believed him.
7 · Eliezer Yudkowsky · 7y · I notice I am confused (he said politely). Kasparov is not stupid, and modern chess programs on a home computer (e.g. Deep Rybka 3.0) are overwhelmingly more powerful than Deep Blue; there should be no reasonable way for anyone to delude themselves that computer chess programs are not crushingly superior to unassisted humans.
6 · Vaniver · 7y · I seem to recall that there was some impoliteness surrounding the Deep Blue game specifically: basically, it knew every move Kasparov had ever played, but Kasparov was not given any record of Deep Blue's plays to learn how it played (like he would have had against any other human chessplayer who moved up the chess ranks); that's the charitable interpretation of what Kasparov meant by the victory being fishy. (This hypothetical Kasparov would want to play many matches against Deep Rybka 3.0 before the official matches that determine which of them is better, but would probably anticipate losing at the end of his training anyway.)
0 · [anonymous] · 7y · That's not everything he said. [http://en.wikipedia.org/wiki/Deep_Blue_%28chess_computer%29#Aftermath]
2 · fezziwig · 7y · Nowadays, sure, but Deep Blue beat Kasparov in 1997. Kasparov has always claimed that IBM cheated during the rematch, supplementing Deep Blue with human insight. As far as I know there's no evidence that he's right, but he's suspected very consistently for the last 15 years.
0 · [anonymous] · 7y · Well, for that matter he also believes this stuff [http://en.wikipedia.org/wiki/New_Chronology_%28Fomenko%29].
-1 · John_Maxwell · 7y · Hm [http://en.wikipedia.org/wiki/Rybka#Odds_matches_versus_grandmasters]?
5 · Eliezer Yudkowsky · 7y · Those were matches with Rybka handicapped (an odds match is a handicapped match), and Deep Rybka 3.0 is a substantial improvement over Rybka. The referenced "Zappa" which played Rybka evenly is another computer program. Read the reference carefully.
2 · John_Maxwell · 7y · Thanks.
-2 · Qiaochu_Yuan · 7y · Request that Thomas be treated as a troll. I'm not sure if he's actually a troll, but he's close enough. Edit: This isn't primarily based on the above comment; it's primarily based on this comment [http://lesswrong.com/r/discussion/lw/hph/link_the_selected_papers_network/960x].
2 · Kawoomba · 7y · Actually, starting at and around the 30-minute mark in this video [http://youtu.be/zccGK2wE9sY] (an interview with Kasparov done in Maribor, a couple months ago, no less) he whines about the whole human-versus-machine matchup a lot, suggests new winning conditions (the human just has to win one game of a series to show superiority, since the "endurance" aspect is the machine "cheating") which would redefine the result, etcetera. Honi soit qui mal y pense.
1 · Qiaochu_Yuan · 7y · I looked this up but I don't understand what it was intended to mean in this context.
5 · Kawoomba · 7y · "Shame on him who suspects illicit motivation" is given as one of the many possible translations. Don't take the "shame" part too literally, but there is some irony in pointing out someone as a troll when the one comment you use for doing so turns out to be true, and interesting to boot (Kasparov engaging in bad-loser, let's-warp-the-facts behavior). I'm not taking a stance on whether Thomas is or isn't a troll; you were probably mostly looking for a good-seeming place to share your opinion about him. (Like spotting a cereal thief in a supermarket, day after day. Then when you finally hold him and call the authorities, it turns out that single time he didn't steal.)
0 · [anonymous] · 7y · So why did you write that here rather than there? Ah, right, the karma toll.
2 · Qiaochu_Yuan · 7y · I thought it would be more likely to be seen by Eliezer if I responded to Eliezer.
0 · Eliezer Yudkowsky · 7y · Hm. A brief glance at Thomas's profile makes it hard to be sure. I will be on the lookout.

I don't find either example convincing about the general point. Since I'm stupid, I'll fail to spot that the mouse example uses fictional evidence and is best ignored.

We are all pretty sick of seeing a headline "Cure for Alzheimer's disease!!!" and clicking through to the article only to find that it is cured in mice: knock-out mice with a missing gene, and therefore suffering from a disease a little like human Alzheimer's. The treatment turns out to be injecting them with the protein that the missing gene codes for. Relevance to human health: zero.

Mice are very short lived. We expect big boosts in life span by invoking mechanisms already present in humans and already working to provide humans with much longer life spans than mice. We don't expect big boosts in the life span of mice to herald very much for human health. Cats would be different. If pet cats started living 34 years instead of 17, their owners would certainly be saying "I want what Felix is getting."

The sophistication of AI is a tricky thing to measure. I think that we are safe from unfriendly AI for a few years yet, not so much because humans suck at programming computers, but because they suck in a ... (read more)

Example 1: "After a 2-year-old mouse is rejuvenated to allow 3 years of additional life, society will realize that human rejuvenation is possible, turn against deathism as the prospect of lifespan / healthspan extension starts to seem real, and demand a huge Manhattan Project to get it done."

A quick and dirty Google search reveals:

Cost of the Manhattan Project in 2012 dollars: ~$30 billion

Pharma R&D budget in 2012: ~$70 billion

http://www.fiercebiotech.com/special-reports/biopharmas-top-rd-spenders-2012

http://nuclearsecrecy.com/blog/2013/05/17/the-price-of-the-manhattan-project/
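A back-of-envelope sketch of the comparison, using only the rounded figures cited above (both are approximations):

```python
# Back-of-envelope comparison, 2012 dollars, using the rounded figures above.
manhattan_cost = 30e9   # Manhattan Project, inflation-adjusted
pharma_rd_2012 = 70e9   # combined R&D budget of the top pharma spenders, 2012

# Annual pharma R&D spending amounts to roughly 2.3 Manhattan Projects per year.
print(pharma_rd_2012 / manhattan_cost)
```

In other words, a one-off Manhattan-Project-sized effort is smaller than what pharma already spends on R&D in a single year.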

5 · CarlShulman · 7y · I think this is a good point, but I'd mention that real U.S. GDP is ~7 times higher now than then, aging as such isn't the focus of most pharma R&D (although if pharma companies thought they could actually make working drugs for it they would), and scientists' wages are higher now due to Baumol's cost disease [http://en.wikipedia.org/wiki/Baumol's_cost_disease].
4 · Locaha · 7y · The point was, Pharma is spending enormous sums of money trying to fight individual diseases (and failing like 90% of the time), and people are proposing a Manhattan Project to do something far more ambitious. But a Manhattan Project is in the same order of magnitude as an individual drug. IOW, a Manhattan Project won't be enough. In any case, 3 years of additional life for a mouse won't be enough, because people can always claim that the intervention is not proportional to lifespan. What will do the trick is an immortalized mouse, as young at 15 years as it was at 0.5.

As far as I know, people have predicted every single big economic impact from technology well in advance, in the strong sense of making appropriate plans, making indicative utterances, etc. (I was claiming a few years' warning in the piece you are responding to, which is pretty minimal). Do you think there are counterexamples? You are claiming that something completely unprecedented will happen with very high probability. If you don't think that requires strong arguments to justify, then I am confused; and if you think you've provided strong arguments, I'm confused too.

I agree that AI has the potential to develop extremely quickly, in a way that only a handful of other technologies did. As far as I can tell the best reason to suspect that AI might be a surprise is that it is possible that only theoretical insights are needed, and we do have empirical evidence that sometimes people will be blindsided by a new mathematical proof. But again, as far as I know that has never resulted in a surprising economic impact, not even a modest one (and even in the domain of proofs, most of them don't blindside people, and there are strong arguments that AI is a harder problem than the problems that one per... (read more)

5 · Eliezer Yudkowsky · 7y · Is the thesis here that the surprisingness of atomic weapons does not count because there was still a 13-year delay from there until commercial nuclear power plants? It is not obvious to me that the key impact of AI is analogous to a commercial plant rather than an atomic weapon. I agree that broad economic impacts of somewhat-more-general tool-level AI may well be anticipated by some of the parties with a monetary stake in them, but this is not the same as anticipating a FOOM (X), endorsing the ideals of astronomical optimization (Y), and deploying the sort of policies we might consider wise for FOOM scenarios (Z).
5 · paulfchristiano · 7y · Regarding atomic weapons:
* Development took many years, and the prospect was widely understood amongst people who knew the field (I agree that massive wartime efforts to keep things secret are something of a special case, in terms of keeping knowledge from spreading from people who know what's up to other people).
* Once you can make nuclear weapons you still have a continuous increase in destructive power; did it start from a level much higher than conventional bombing?
I do think this example is good for your case and unusually extreme, but if we are talking about a few years I think it still isn't surprising (except perhaps because of military secrecy). I don't think people will suspect a FOOM in particular, but I think they are open to the possibility to the extent that the arguments suggest it is plausible. I don't think you have argued against that much. I don't think that people will become aggregative utilitarians when they think AI is imminent, but that seems like an odd suggestion at any rate. The policies we consider wise for a FOOM scenario are those that result in people basically remaining in control of the world rather than accidentally giving it up, which seems like a goal they basically share. Again, I agree that there is likely to be a gap between what I do and what others would do (e.g., I focus more on aggregate welfare, so am inclined to be more cautious). But that's a far cry from thinking that other people's plans don't matter, or even that my plans matter much more than everyone else's taken together.
5 · Wei_Dai · 7y · I think I may be missing a relevant part of the previous discussion between you and Eliezer. By "people" do you mean at least one person, at least a few people, most people, most elites, or something else? What are we arguing about here, and what's the strategic relevance of the question? Which piece? Would you consider Bitcoin to be a counterexample, at least potentially, if its economic impact keeps growing? (Although in general I think you're probably right, as it's hard to think of another similar example. There was some discussion about this here [http://lesswrong.com/lw/h8z/bitcoins_are_not_digital_greenbacks/8tga].)
3 · paulfchristiano · 7y · I mean if you suggested "Technology X will have a huge economic impact in the near future" to a smart person who knew something about the area, they would think that was plausible and have reasonable estimates for the plausible magnitude of that impact. The question is whether AI researchers and other elites who take them seriously will basically predict that human-level AI is coming, so that there will be good-faith attempts to mitigate impacts. I think this is very likely, and that improving society's capability to handle problems it recognizes (e.g. to reason about them effectively) has a big impact on improving the probability that it will handle a transition to AI well. Eliezer tends to think this doesn't much matter, and that if lone heroes don't resolve the problems then there isn't much hope. On my blog I made some remarks about AI, in particular saying that in the mainline people expect human-level AI before it happens. But I think the discussion makes sense without that.
* The economic impact of Bitcoin to date is modest, and I expect it to increase continuously over a scale of years rather than jumping surprisingly.
* I don't think people would have confidently predicted no digital currency prior to Bitcoin, nor that they would predict that now. So if e.g. the emergence of digital currency were associated with big policy issues which warranted a pre-emptive response, and this was actually an important issue, I would expect people arguing for that policy response would get traction.
* Bitcoin is probably still unusually extreme. If Bitcoin precipitated a surprising shift in the economic organization of the world, then that would count.
I guess this part does depend a bit on context, since "surprising" depends on timescale. But Eliezer was referring to predictions of "a few years" of warning (which I think is on the very short end, and he thinks is on the very long end).
2 · Wei_Dai · 7y · My own range would be a few years to a decade, but I guess unlike you I don't think that is enough warning time for the default scenario to turn out well. Does Eliezer think that would be enough time?
0 · jsteinhardt · 7y · For what it's worth, I think that (some fraction of) AI researchers are already cognizant of the potential impacts of AI. I think a much smaller number believe in FOOM scenarios, and might reject Hansonian projections as too detailed relative to the amount of uncertainty, but would basically agree that human-level AI changes the game.
1 · John_Maxwell · 7y · Could we get a link to this? Maybe EY could add it to the post?

My version of Example 2 sounds more like "at some point, Watson might badly misdiagnose a human patient, or a bunch of self-driving cars might cause a terrible accident, or more inscrutable algorithms will do more inscrutable things, and this sort of thing might cause public opinion to turn against AI entirely in the same way that it turned against nuclear power."

1 · CarlShulman · 7y · I think that people will react more negatively to harms than they react positively to benefits, but I would still expect the impacts of broadly infrahuman AI to be strongly skewed towards the positive. Accidents might lead to more investment in safety, but a "turn against AI entirely" situation seems unlikely to me.
3 · Eliezer Yudkowsky · 7y · You could say the same about nuclear power. It's conceivable that with enough noise about "AI is costing jobs," the broad positive impacts could be viewed as ritually contaminated a la nuclear power. Hm, now I wonder if I should actually publish my "Why AI isn't the cause of modern unemployment" writeup.
0 · Yosarian2 · 7y · I don't know about that; I think that a lot of the people who think that AI is "costing jobs" view that as a positive thing.
0 · [anonymous] · 7y · I don't think that's a good analogy. The Cold War had two generations of people living under the very real prospect of nuclear apocalypse. Grant Morrison wrote once about how, at like age five, he was concretely visualizing nuclear annihilation regularly. By his early twenties, pretty much everyone he knew figured civilization wasn't going to make it out of the Cold War; that's a lot of trauma, enough to power a massive ugh field. Vague complaints of "AI is costing jobs" just can't compare to the bone-deep terror that was pretty much universal during the Cold War.

At the Edge question 2009, 6 people spoke of immortality (de Grey not included), and 17 people spoke of superintelligence/humans 2.0.

This seems like evidence for Aubrey's point of view.

Of all the things that 151 top scientists could think they'd live to see, that more than 10% converged on that stuff without previous communication is perplexing for anyone who was a transhumanist in 2005.

5 · Douglas_Knight · 7y · No, these people all have long-term relationships with Brockman/Edge, which even holds parties bringing them together.
0 · gwern · 7y · Indeed, when I was looking at Edge's tax filings [http://lesswrong.com/r/discussion/lw/7gy/case_study_reading_edges_financial_filings/], it seemed to me that the entire point of Edge was basically funding their parties.
4 · [anonymous] · 7y

In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in which you reason, "After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z."

Sir Karl Popper came to the same conclusion in his 1963 book Conjectures and Refutations. So did Harold Walsby in his 1947 book The Domain of Ideologies. You're in good company.

I agree that almost no actual individual will change his or her mind. But humanity as a whole can change its mind, as young impressionable scientists look around, take in the available evidence, and then commit themselves to a position and career trajectory based on that evidence.

7 · jsteinhardt · 7y · Note that this would be a pretty slow change and is likely not fast enough if we only get, say, 5 years of prior warning.

I do think there is a lot of truth to that. It reminds me of the people who said in the 1990s, "Well, as soon as the arctic ice cap starts to melt, then the climate deniers will admit that climate change is real," but of course that hasn't happened either.

I do wonder, though, if that's equally true for all of those fields. For example, in terms of anti-aging technology, it seems to me that the whole status quo is driven by a very deep and fundamental sense that aging is basically unchangeable, and that that's the only thing that makes it accepta... (read more)

"When the author of the original data admits that he fabricated it, people will stop believing that vaccines cause autism."

[This comment is no longer endorsed by its author]
5 · satt · 7y · Did Wakefield ever admit his MMR & autism paper was a fraud [http://www.bmj.com/content/342/bmj.c7452]? I know he's acknowledged [http://news.bbc.co.uk/1/hi/health/7342618.stm] taking blood samples from children without complying with ethical guidelines, and failing to disclose a conflict of interest, but I don't recall him saying the paper's results were BS.
4 · Douglas_Knight · 7y · No, I was completely mistaken about this case. For some reason I thought that the 12 children didn't even exist.
4 · satt · 7y · I should add that although you were mistaken about the details, I basically agree with your example. Plenty of people still reckon [http://usatoday30.usatoday.com/yourlife/health/medical/autism/2011-01-22-poll-vaccine-autism_N.htm] vaccines cause autism.

Funny that I just hit this with my comment from yesterday:

http://lesswrong.com/lw/hoz/do_earths_with_slower_economic_growth_have_a/95hm

Idea: robots can be visibly unfriendly without being anywhere near a FOOM. This will help promote awareness.

I think this is different from your examples.

A mouse living to the ripe old age of 5? Well, everyone has precedent for animals living that long, and we also have precedent for medicine doing all sorts of amazing things for the health of mice that seem never to translate into treatments for people.

Economic meltdowns? ... (read more)

Huh, my first thought on seeing the title was that this would be about Richard Stallman.

The two examples here seem to not have alarming/obvious enough Ws. It seems like you are arguing against a straw-man who makes bad predictions, based on something like a typical mind fallacy.




Making a mouse live for 5 years isn't going to get anyone's attention. When they can make a housecat live to be 60, then we'll talk.

Paul Crowley: So we won't be talking for at least thirty years? That's quite a while to wait.

CronoDAS: I know. It's a real problem with this kind of research. For example, it took a long time before we got the results of the caloric restriction study on primates.

gwern: As far as I knew, the recent studies weren't even the final results; they were just interim reports about the primates which had died up to that point, hobbled by the small sample size that implies.

I wonder how a prolonged event W, something with enough marketability to capture the public's eye for some time, might change opinions on the truth of proposition X. Something along the lines of the calculus, Gutenberg's printing press, the advent of manoeuvrable cannons, flintlocks, quantum mechanics, electric stoves (harnessed electricity), the concept of a national debt, etc.

I'd be interested in what effects, if any, the American National Security Agency scandal, or a worldwide marketing campaign by Finland advertising their healthcare system, will or would have to this end.

Eliezer Yudkowsky: Prolonged events with no clearly defined moment of hitting the newspapers all at once seem to me to have lower effects on public opinion. Contrast the long, gradual, steady incline of chessplaying power going on for decades earlier, vs. the Deep Blue moment.

Presumably the best you can do solution-wise is to try to move policy options through a series of "middle stages" towards either optimal results or, more likely, the best result you can realistically get?

EDIT: Also, how DID the economists figure it out anyway? I would have thought that although circumstances can increase or reduce it, inflationary effects would be inevitable if you increased the money supply that much.

CronoDAS: When interest rates are virtually zero, cash and short-term debt become interchangeable. There's no incentive to lend your cash on a short-term basis, so people (and corporations) start holding cash as a store of value instead of lending it. (After all, you can spend cash - or, more accurately, checking account balances - directly, but you can't spend a short-term bond.) Prices don't go up if all the new money just ends up under Apple Computer's proverbial mattress instead of in the hands of someone who is going to spend it. See also [http://krugman.blogs.nytimes.com/2011/10/09/is-lmentary/].

PeterDonis: Sorry for the late comment, but I'm just running across this thread. As far as I know, the mainstream economists like the Fed did not predict that this would happen; they thought quantitative easing would start banks (and others with large cash balances) lending again. If banks had started lending again, by your analysis (which I agree with), we would have seen significant inflation because of the growth in the money supply. So it looks to me like the only reason the Fed got the inflation prediction right was that they got the lending prediction wrong. I don't think that counts as an instance of "we predicted critical event W".

Eliezer Yudkowsky: Demand for extremely safe assets increased (people wanted to hold more money), the same reason Treasury bonds briefly went to negative returns; demand for loans decreased, and this caused destruction of money via the logic of fractional reserve banking; the shadow banking sector contracted, so financial entities had to use money instead of collateral; etc.

PeterDonis: This is an interesting comment which I haven't seen talked about much on econblogs (or other sources of information about economics, for that matter). I understand the logic: fractional reserve banking basically uses loans as a money multiplier, so fewer loans means less multiplication, hence effectively less money supply. But it makes me wonder: what happens when loan demand goes up again? Do you then have to reverse quantitative easing and effectively retire money to keep things in balance? Do any mainstream economists talk about that?
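The fractional-reserve logic in this exchange can be sketched numerically. This is the textbook money-multiplier simplification, not a model any commenter here endorses; real-world multipliers are endogenous and messier, and all numbers are illustrative.

```python
# Textbook money-multiplier sketch: base money is deposited, a fraction is
# held in reserve, the rest is lent out and redeposited, repeatedly.
# Illustrative only; real banking systems are far messier.

def broad_money(base, reserve_ratio, rounds=1000):
    """Total deposits supported by `base` under repeated fractional-reserve lending."""
    total, deposit = 0.0, base
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # lent-out portion comes back as new deposits
    return total

# With a 10% reserve ratio, $100 of base money supports ~$1000 of deposits
# (geometric series: 100 / 0.10).
print(broad_money(100, 0.10))

# If lending collapses (banks effectively hold 50% reserves), the same base
# supports far less broad money -- the "destruction of money" effect.
print(broad_money(100, 0.50))
```

This is why, in the simplified picture, a large increase in base money need not be inflationary when the multiplier is simultaneously collapsing.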

What would be examples of a critical event in AI?

AlanCrowe: I'm not connected to the Singularity Institute or anything, so this is my idiosyncratic view. Think about theorem provers such as Isabelle [http://www.cl.cam.ac.uk/research/hvg/Isabelle/] or ACL2 [http://www.cs.utexas.edu/~moore/acl2/]. They are typically structured a bit like an expert system with a rule base and an inference engine. The axioms play the role of rule base and the theorem prover plays the role of the inference engine. While it is easy to change the axioms, this implies a degree of interpretive overhead when it comes to trying to prove a theorem.

One way to reduce the interpretive overhead is to use a partial evaluator [http://books.google.co.uk/books?id=B1UTK2j8rksC&pg=PA8&lpg=PA8&dq=partial+evaluation+to+specialise+a+theorem+prover+to+a+set+of+axioms&source=bl&ots=GI22ERycVf&sig=vfRc9Z3tAlYhBhGXFCn99iBtqF4&hl=en&sa=X&ei=u9y8Uaa8GIWZhQf2lIGgDg&ved=0CBsQ6AEwAA#v=onepage&q=partial%20evaluation%20to%20specialise%20a%20theorem%20prover%20to%20a%20set%20of%20axioms&f=false] to specialize the prover to the particular set of axioms. Indeed, if one has a self-applicable partial evaluator, one could use the second Futamura projection [http://blog.sigfpe.com/2009/05/three-projections-of-doctor-futamura.html] and, specializing the partial evaluator to the theorem prover, produce a theorem prover compiler. Axioms go in, an efficient theorem prover for those axioms comes out.

Self-applicable partial evaluators are bleeding-edge software technology, and current ambitions are limited to stripping out interpretive overhead. They only give linear speed-ups. In principle a partial evaluator could recognise algorithmic inefficiencies and, rewriting the code more aggressively, produce super-linear speed-ups. This is my example of a critical event in AI: using a self-applicable partial evaluator and the second Futamura projection to obtain a theorem prover compiler with a super-linear speed-up compared to proving theorems in interpretive mode. This would convince me.
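The specialization idea AlanCrowe describes can be shown in miniature. This toy sketch specializes an expression interpreter rather than a theorem prover, and it is hand-written rather than produced by a partial evaluator, so it illustrates only the first Futamura projection's payoff (paying the dispatch cost once, up front); all names are invented for the example.

```python
# Toy illustration of specialization: an interpreter for a tiny expression
# language, and a hand-written "specializer" that bakes a fixed expression
# in ahead of time, removing the per-call dispatch overhead.

def interpret(expr, env):
    """Interpret a nested-tuple expression tree against an environment."""
    op = expr[0]
    if op == "const":
        return expr[1]
    if op == "var":
        return env[expr[1]]
    if op == "add":
        return interpret(expr[1], env) + interpret(expr[2], env)
    if op == "mul":
        return interpret(expr[1], env) * interpret(expr[2], env)
    raise ValueError(f"unknown op: {op}")

def specialize(expr):
    """Specialize the interpreter to a fixed expr: dispatch on the tree
    happens once, here, and the returned closure only takes the env."""
    op = expr[0]
    if op == "const":
        c = expr[1]
        return lambda env: c
    if op == "var":
        name = expr[1]
        return lambda env: env[name]
    if op == "add":
        f, g = specialize(expr[1]), specialize(expr[2])
        return lambda env: f(env) + g(env)
    if op == "mul":
        f, g = specialize(expr[1]), specialize(expr[2])
        return lambda env: f(env) * g(env)
    raise ValueError(f"unknown op: {op}")

# (x + 2) * y
expr = ("mul", ("add", ("var", "x"), ("const", 2)), ("var", "y"))
compiled = specialize(expr)  # the analogue of "axioms go in, a prover comes out"
assert interpret(expr, {"x": 3, "y": 4}) == compiled({"x": 3, "y": 4}) == 20
```

Note that this only removes interpretive overhead (a linear speed-up), which is exactly the limitation the comment points to: the hypothetical critical event is a specializer clever enough to find super-linear improvements.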
LEmma: Vampire [http://en.wikipedia.org/wiki/Vampire_%28theorem_prover%29] uses specialisation, according to Wikipedia.

lukeprog: BTW, the term for this is AGI Sputnik moment [http://wiki.lesswrong.com/wiki/AGI_Sputnik_moment].

shminux: Neat. I guess Eliezer's point is that there will not be one until it's too late.

Halfwit: By "the term" do you mean something Ben Goertzel said once on SL4, or is this really a thing?

Kaj_Sotala: I don't know what you mean by "really a thing", but it has been used more than once [https://www.google.com/search?q=%22agi+sputnik%22], including in some academic papers.

[anonymous]: I just found it amusing that lukeprog was designating it a technical term, as I believe I first read the phrase in the SL4 archives. And it seemed a casual thing.

Eliezer Yudkowsky: I don't know, actually. I'm not the one making these forecasts. It's usually described as some broad-based increase of AI competence but not cashed out any further than that. I'll remark that if there isn't a sharp sudden bit of headline news, chances of a significant public reaction drop even further.

shminux: Sorry, what I meant is: what would you consider an event that ought to be taken seriously but won't be? Eh, that's not right; presumably that's long past, like Deep Blue or maybe the first quine. What would you consider an event that an AI researcher not sold on AI x-risks ought to take seriously but likely will not? A version of Watson which can write web apps from vague human instructions? A perfect simulation of C. elegans? A human mind upload?

Eliezer Yudkowsky: Even I think they'd take a mind upload seriously - that might really produce a huge public update, though probably not in any sane direction - though I don't expect that to happen before a neuromorphic UFAI is produced from the same knowledge base. They normatively ought to take a spider upload seriously. Something passing a restricted version of a Turing test might make a big public brouhaha, but even with a restricted test I'm not sure I expect any genuinely significant version of that before the end of the world (unrestricted Turing test passing should be sufficient unto FOOM). I'm not sure what you 'ought' to take seriously if you didn't take computers seriously in the first place. Aubrey was very specific in his prediction that I disagree with; people who forecast watershed opinion-changing events for AI are less so, at least as far as I can recall.

unrestricted Turing test passing should be sufficient unto FOOM

I don't think this is quite right. Most humans can pass a Turing test, even though they can't understand their own source code. FOOM requires that an AI has the ability to self-modify with enough stability to continue to (a) desire to self-modify, and (b) be able to do so. Most uploaded humans would have a very difficult time with this - just look at how people resist even modifying their beliefs, let alone their thinking machinery.

Eliezer Yudkowsky: The problem is that an AI which passes the unrestricted Turing test must be strictly superior to a human; it would still have all the expected AI abilities like high-speed calculation and so on. A human who was augmented to the point of passing the Pocket Calculator Equivalence Test would be superhumanly fast and accurate at arithmetic on top of still having all the classical human abilities; they wouldn't be just as smart as a pocket calculator.

novalis: High-speed calculation plus human-level intelligence is not sufficient for recursive self-improvement. An AI needs to be able to understand its own source code, and that is not a guarantee that passing the Turing test (plus high-speed calculation) includes.

TheOtherDave: If I am confident that a human is capable of building human-level intelligence, my confidence that a human-level intelligence cannot build a slightly-higher-than-human intelligence, given sufficient trials, becomes pretty low. Ditto my confidence that a slightly-higher-than-human intelligence cannot build a slightly-smarter-than-that intelligence, and so forth. But, sure, it's far from zero. As you say, it's not a guarantee.

Locaha: I thought a human with a pocket calculator is this augmented human already. Unless you want to implant the calculator in your skull and control it with your thoughts. Which will also soon be possible.

ShardPhoenix: The biggest reason humans can't do this is that we don't implement .copy(). This is not a problem for AIs or uploads, even if they are otherwise only of human intelligence.

novalis: Sure, with a large enough number of copies of you to practice on, you would learn to do brain surgery well enough to improve the functioning of your brain. But it could easily take a few thousand years. The biggest problem with self-improving AI is understanding how the mind works in the first place.

DanielVarga: I tend to agree, but I have to note the surface similarity with Hofstadter's disproved "No, I'm bored with chess. Let's talk about poetry." [http://lesswrong.com/r/discussion/lw/hp5/after_critical_event_w_happens_they_still_wont/95pa] prediction.

gjm: Consider first of all a machine that can pass an "AI-focused Turing test", by which I mean convincing one of the AI team that built it that it's a human being with a comparable level of AI expertise. I suggest that such a machine is almost certainly "sufficient unto FOOM", if the judge in the test is allowed to go into enough detail.

An ordinary Turing test doesn't require the machine to imitate an AI expert but merely a human being. So for a "merely" Turing-passing AI not to be "sufficient unto FOOM" (at least as I understand that term) what's needed is that there should be a big gap between making a machine that successfully imitates an ordinary human being, and making a machine that successfully imitates an AI expert. It seems unlikely that there's a very big gap architecturally between human AI experts and ordinary humans. So, to get a machine that passes an ordinary Turing test but isn't close to being FOOM-ready, it seems like what's needed is a way of passing an ordinary Turing test that works very differently from actual human thinking, and doesn't "scale up" to harder problems like the ordinary human architecture apparently does. Given that some machines have been quite successful in stupidly-crippled pseudo-Turing tests like the Loebner contest, I suppose this can't be entirely ruled out, but it feels much harder to believe than a "narrow" chess-playing AI was even at the time of Hofstadter's prediction.

Still, I think there might be room for the following definition: the strong Turing test consists of having your machine grilled by several judges, with different domains of expertise, each of whom gets to specify in broad terms (ahead of time) what sort of human being the machine is supposed to imitate. So then the machine might need to be able to convince competent physicists that it's a physicist, competent literary critics that it's a novelist, civil rights activists that it's a black person who's suffered from racial discrimination, etc.

Exactly. This is part of the reason I will win the bet, i.e. it is the reason the first super intelligent AI will be programmed without attention to Friendliness.

wedrifid: Unfortunately, being right isn't sufficient for winning a bet. You also have to not have been torn apart and used as base minerals for achieving whatever goals a uFAI happens to have.

Unknowns: True, that's why it's only a part of the reason.

wedrifid: This implies that you have some other strategy for winning a bet despite strong UFAI existing. At the very least this requires that some entity with which you are able to make a bet now will be around to execute behaviours that conditionally reward you after the event has occurred. This seems difficult to arrange.

Normal_Anomaly: Actually, all he needs is someone who believes the first AI will be friendly and who thinks they'll have a use for money after a FAI exists. Then they could make an apocalypse bet [http://lesswrong.com/lw/ie/the_apocalypse_bet/] where Unknowns gets paid now and then pays their opponent back if and when the first AI is built and it turns out to be friendly.

if you look at what happens when Congresscritters question Bernanke, you will find that they are all terribly, terribly concerned about inflation

I imagine that Congress is full of busy people who don't have time to follow blogs that cover every topic they debate. Did they get any testimony from expert economists when questioning Bernanke?

"After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z."

Isn't this just a standard application of Bayesianism? I.e. after event W happens, people will consider proposition X to be somewhat more likely, thereby making them more favorable to Y and Z. The stronger evidence event W is, the more people will update and the further they will update. But no one piece of evidence is likely to totally convince everyone immediately, nor should it.

For instance, if "a 2-y... (read more)
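The gradual updating this comment describes can be made concrete with a one-line application of Bayes' rule. The numbers below are purely illustrative, not drawn from the post.

```python
# Illustrative Bayes update: how much one event W should move a person's
# belief in proposition X. All probabilities are invented for the example.

def posterior(prior, p_w_given_x, p_w_given_not_x):
    """P(X | W) by Bayes' rule."""
    numerator = p_w_given_x * prior
    return numerator / (numerator + p_w_given_not_x * (1 - prior))

# A skeptic starts at P(X) = 0.05, and event W is 4x as likely if X is true
# (likelihood ratio 0.8 / 0.2). The skeptic should update, but nowhere near
# to certainty:
p = posterior(0.05, 0.8, 0.2)
assert 0.15 < p < 0.20  # p is roughly 0.17, not ~1.0
```

This is consistent with the comment's point: a single piece of evidence shifts a reasonable prior somewhat, and "everyone will see the truth of X after W" implicitly assumes a likelihood ratio far larger than most real-world events provide.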

This is the very definition of the status quo bias.

I'd missed the mouse "rejuvenation for 3 more years of life" result (did you mean cryo freeze -> revive, or something else?). Could you supply a cite?

ChristianKl: Aubrey de Grey thinks that it's worthwhile to fund a big prize for the first group that achieves that result. That's one of the main strategies he advocates to convince everyone to take aging seriously.

Jonathan_Graehl: Oh. IRCers point out that I misread, and this is only a hypothetical :( Too bad, I was about to perform a massive update :)