From the final chapter of his new book Cowards, titled "Adapt or Die: The Coming Intelligence Explosion."

The year is 1678 and you’ve just arrived in England via a time machine. You take out your new iPhone in front of a group of scientists who have gathered to marvel at your arrival.

“Siri,” you say, addressing the phone’s voice-activated artificial intelligence system, “play me some Beethoven.”

Dunh-Dunh-Dunh-Duuunnnhhh! The famous opening notes of Beethoven’s Fifth Symphony, stored in your music library, play loudly.

“Siri, call my mother.”

Your mother’s face appears on the screen, a Hawaiian beach behind her. “Hi, Mom!” you say. “How many fingers am I holding up?”

“Three,” she correctly answers. “Why haven’t you called more—”

“Thanks, Mom! Gotta run!” you interrupt, hanging up.

“Now,” you say. “Watch this.”

Your new friends look at the iPhone expectantly.

“Siri, I need to hide a body.”

Without hesitation, Siri asks: “What kind of place are you looking for? Mines, reservoirs, metal foundries, dumps, or swamps?” (I’m not kidding. If you have an iPhone 4S, try it.)

You respond “Swamps,” and Siri pulls up a satellite map showing you nearby swamps.

The scientists are shocked into silence. What is this thing that plays music, instantly teleports video of someone across the globe, helps you get away with murder, and is small enough to fit into a pocket?

At best, your seventeenth-century friends would worship you as a messenger of God. At worst, you’d be burned at the stake for witchcraft. After all, as science fiction author Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”

Now, imagine telling this group that capitalism and representative democracy will take the world by storm, lifting hundreds of millions of people out of poverty. Imagine telling them their descendants will eradicate smallpox and regularly live seventy-five or more years. Imagine telling them that men will walk on the moon, that planes, flying hundreds of miles an hour, will transport people around the world, or that cities will be filled with buildings reaching thousands of feet into the air.

They’d probably escort you to the madhouse.

Unless, that is, one of the people in that group had been a man named Ray Kurzweil.

Kurzweil is an inventor and futurist who has done a better job than most at predicting the future. Dozens of the predictions from his 1990 book The Age of Intelligent Machines came true during the 1990s and 2000s. His follow-up book, The Age of Spiritual Machines, published in 1999, fared even better. Of the 147 predictions that Kurzweil made for 2009, 78 percent turned out to be entirely correct, and another 8 percent were roughly correct. For example, even though every portable computer had a keyboard in 1999, Kurzweil predicted that most portable computers would lack a keyboard by 2009. It turns out he was right: by 2009, most portable computers were MP3 players, smartphones, tablets, portable game machines, and other devices that lacked keyboards.

Kurzweil is most famous for his “law of accelerating returns,” the idea that technological progress is generally “exponential” (like a hockey stick, curving up sharply) rather than “linear” (like a straight line, rising slowly). In nongeek-speak that means that our knowledge is like the compound interest you get on your bank account: it increases exponentially as time goes on because it keeps building on itself. We won’t experience one hundred years of progress in the twenty-first century, but rather twenty thousand years of progress (measured at today’s rate).
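
Kurzweil's compound-interest analogy is easy to check with a few lines of arithmetic. This is just an illustrative sketch (the dollar figures and the 7 percent rate are made up for the example), comparing linear growth with compounding growth:

```python
# Linear growth adds a fixed amount each year; compound growth
# multiplies by a fixed factor, so the gains build on themselves.
def linear(start, step, years):
    return start + step * years

def compound(start, rate, years):
    return start * (1 + rate) ** years

# $100 gaining a flat $7/year vs. $100 compounding at 7%/year:
for years in (10, 50, 100):
    print(years, linear(100, 7, years), round(compound(100, 0.07, years)))
```

Over ten years the two are close (170 vs. about 197), but over a century the compounding account is worth roughly a hundred times the linear one; that widening gap is the "hockey stick."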

Many experts have criticized Kurzweil’s forecasting methods, but a careful and extensive review of technological trends by researchers at the Santa Fe Institute came to the same basic conclusion: technological progress generally tends to be exponential (or even faster than exponential), not linear.

So, what does this mean? In his 2005 book The Singularity Is Near, Kurzweil shares his predictions for the next few decades:

  • In our current decade, Kurzweil expects real-time translation tools and automatic house-cleaning robots to become common.
  • In the 2020s he expects to see the invention of tiny robots that can be injected into our bodies to intelligently find and repair damage and cure infections.
  • By the 2030s he expects “mind uploading” to be possible, meaning that your memories and personality and consciousness could be copied to a machine. You could then make backup copies of yourself, and achieve a kind of technological immortality.



Age of the Machines?

“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us.”

—Jaan Tallinn, co-creator of Skype and Kazaa



If any of that sounds absurd, remember again how absurd the eradication of smallpox or the iPhone 4S would have seemed to those seventeenth-century scientists. That’s because the human brain is conditioned to believe that the past is a great predictor of the future. While that might work fine in some areas, technology is not one of them. Just because it took decades to put two hundred transistors onto a computer chip doesn’t mean that it will take decades to get to four hundred. In fact, Moore’s Law, which states (roughly) that computing power doubles every two years, shows how technological progress must be thought of in terms of “hockey stick” progress, not “straight line” progress. Moore’s Law has held for more than half a century already (we can currently fit 2.6 billion transistors onto a single chip) and there’s little reason to expect that it won’t continue to hold.
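
The doubling arithmetic can be checked directly. As a rough sketch (the starting point is the commonly cited figure of about 2,300 transistors on Intel's first microprocessor, the 4004, in 1971; exact counts vary by source):

```python
# Start from the Intel 4004's ~2,300 transistors (1971) and
# double every two years, per Moore's Law.
transistors, year = 2300, 1971
while year < 2011:
    transistors *= 2
    year += 2

# 20 doublings over 40 years:
print(year, transistors)
```

That lands at about 2.4 billion transistors by 2011, the same order of magnitude as the 2.6-billion-transistor chips mentioned above.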

But the aspect of his book that has the most far-ranging ramifications for us is Kurzweil’s prediction that we will achieve a “technological singularity” in 2045. He defines this term rather vaguely as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.”

Part of what Kurzweil is talking about is based on an older, more precise notion of “technological singularity” called an intelligence explosion. An intelligence explosion is what happens when we create artificial intelligence (AI) that is better than we are at the task of designing artificial intelligences. If the AI we create can improve its own intelligence without waiting for humans to make the next innovation, this will make it even more capable of improving its intelligence, which will . . . well, you get the point. The AI can, with enough improvements, make itself smarter than all of us mere humans put together.

The really exciting part (or the scary part, if your vision of the future is more like the movie The Terminator) is that, once the intelligence explosion happens, we’ll get an AI that is as superior to us at science, politics, invention, and social skills as your computer’s calculator is to you at arithmetic. The problems that have occupied mankind for decades— curing diseases, finding better energy sources, etc.— could, in many cases, be solved in a matter of weeks or months.

Again, this might sound far-fetched, but Ray Kurzweil isn’t the only one who thinks an intelligence explosion could occur sometime this century. Justin Rattner, the chief technology officer at Intel, predicts some kind of Singularity by 2048. Michael Nielsen, co-author of the leading textbook on quantum computation, thinks there’s a decent chance of an intelligence explosion by 2100. Richard Sutton, one of the biggest names in AI, predicts an intelligence explosion near the middle of the century. Leading philosopher David Chalmers is 50 percent confident an intelligence explosion will occur by 2100. Participants at a 2009 conference on AI tended to be 50 percent confident that an intelligence explosion would occur by 2045.

If we can properly prepare for the intelligence explosion and ensure that it goes well for humanity, it could be the best thing that has ever happened on this fragile planet. Consider the difference between humans and chimpanzees, which share 95 percent of their genetic code. A relatively small difference in intelligence gave humans the ability to invent farming, writing, science, democracy, capitalism, birth control, vaccines, space travel, and iPhones— all while chimpanzees kept flinging poo at each other.



Intelligent Design?

The thought that machines could one day have superhuman abilities should make us nervous. Once the machines are smarter and more capable than we are, we won’t be able to negotiate with them any more than chimpanzees can negotiate with us. What if the machines don’t want the same things we do?

The truth, unfortunately, is that every kind of AI we know how to build today definitely would not want the same things we do. To build an AI that does, we would need a more flexible “decision theory” for AI design and new techniques for making sense of human preferences. I know that sounds kind of nerdy, but AIs are made of math, and the math you choose determines the results you get from building an AI.

These are the kinds of research problems being tackled by the Singularity Institute in America and the Future of Humanity Institute in Great Britain. Unfortunately, our silly species still spends more money each year on lipstick research than we do on figuring out how to make sure that the most important event of this century (maybe of all human history)— the intelligence explosion— actually goes well for us.



Likewise, self-improving machines could perform scientific experiments and build new technologies much faster and more intelligently than humans can. Curing cancer, finding clean energy, and extending life expectancies would be child’s play for them. Imagine living out your own personal fantasy in a different virtual world every day. Imagine exploring the galaxy at near light speed, with a few backup copies of your mind safe at home on earth in case you run into an exploding supernova. Imagine a world where resources are harvested so efficiently that everyone’s basic needs are taken care of, and political and economic incentives are so intelligently fine-tuned that “world peace” becomes, for the first time ever, more than a Super Bowl halftime show slogan.

With self-improving AI we may be able to eradicate suffering and death just as we once eradicated smallpox. It is not the limits of nature that prevent us from doing this, but only the limits of our current understanding. It may sound like a paradox, but it’s our brains that prevent us from fully understanding our brains.


Turf Wars

At this point you might be asking yourself: “Why is this topic in this book? What does any of this have to do with the economy or national security or politics?”

In fact, it has everything to do with all of those issues, plus a whole lot more. The intelligence explosion will bring about change on a scale and scope not seen in the history of the world. If we don’t prepare for it, things could get very bad, very fast. But if we do prepare for it, the intelligence explosion could be the best thing that has happened since . . . literally ever.

But before we get to the kind of life-altering progress that would come after the Singularity, we will first have to deal with a lot of smaller changes, many of which will throw entire industries and ways of life into turmoil. Take the music business, for example. It was not long ago that stores like Tower Records and Sam Goody were doing billions of dollars a year in compact disc sales; now people buy music from home via the Internet. Publishing is currently facing a similar upheaval. Newspapers and magazines have struggled to keep subscribers, booksellers like Borders have been forced into bankruptcy, and customers are forcing publishers to switch to ebooks faster than the publishers might like.

All of this is to say that some people are already witnessing the early stages of upheaval firsthand. But for everyone else, there is still a feeling that something is different this time; that all of those years of education and experience might be turned upside down in an instant. They might not be able to identify it exactly but they realize that the world they’ve known for forty, fifty, or sixty years is no longer the same.

There’s a good reason for that. We feel it and sense it because it’s true. It’s happening. There’s absolutely no question that the world in 2030 will be a very different place than the one we live in today. But there is a question, a large one, about whether that place will be better or worse.

It’s human nature to resist change. We worry about our families, our careers, and our bank accounts. The executives in industries that are already experiencing cataclysmic shifts would much prefer to go back to the way things were ten years ago, when people still bought music, magazines, and books in stores. The future was predictable. Humans like that; it’s part of our nature.

But predictability is no longer an option. The intelligence explosion, when it comes in earnest, is going to change everything— we can either be prepared for it and take advantage of it, or we can resist it and get run over.

Unfortunately, there are a good number of people who are going to resist it. Not only those in affected industries, but those who hold power at all levels. They see how technology is cutting out the middlemen, how people are becoming empowered, how bloggers can break national news and YouTube videos can create superstars.

And they don’t like it.


A Battle for the Future

Power bases in business and politics that have been forged over decades, if not centuries, are being threatened with extinction, and they know it. So the owners of that power are trying to hold on. They think they can do that by dragging us backward. They think that, by growing the public’s dependency on government, by taking away the entrepreneurial spirit and rewards and by limiting personal freedoms, they can slow down progress.

But they’re wrong. The intelligence explosion is coming so long as science itself continues. Trying to put the genie back in the bottle by dragging us toward serfdom won’t stop it and will, in fact, only leave the world with an economy and society that are completely unprepared for the amazing things that it could bring.

Robin Hanson, author of “The Economics of the Singularity” and an associate professor of economics at George Mason University, wrote that after the Singularity, “The world economy, which now doubles in 15 years or so, would soon double in somewhere from a week to a month.”

That is unfathomable. But even if the rate were much slower, say a doubling of the world economy in two years, the shock-waves from that kind of growth would still change everything we’ve come to know and rely on. A machine could offer the ideal farming methods to double or triple crop production, but it can’t force a farmer or an industry to implement them. A machine could find the cure for cancer, but it would be meaningless if the pharmaceutical industry or Food and Drug Administration refused to allow it. The machines won’t be the problem; humans will be.
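
To put Hanson's numbers in perspective, here is a quick arithmetic sketch of the implied annual growth (the doubling times come from his quote above; everything else follows from them):

```python
# Annual growth factor implied by a given doubling time (in years).
def annual_factor(doubling_time_years):
    return 2 ** (1 / doubling_time_years)

# Today's economy doubles in about 15 years:
print(round((annual_factor(15) - 1) * 100, 1))  # ~4.7 percent growth per year

# A doubling every month is twelve doublings per year:
print(2 ** 12)  # the economy would grow 4096-fold annually
```

Going from roughly 5 percent annual growth to a 4,096-fold annual expansion is the gap between the world we know and the one Hanson describes.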

And that’s why I wanted to write about this topic. We are at the forefront of something great, something that will make the Industrial Revolution look in comparison like a child discovering his hands. But we have to be prepared. We must be open to the changes that will come, because they will come. Only when we accept that will we be in a position to thrive. We can’t allow politicians to blame progress for our problems. We can’t allow entrenched bureaucrats and power-hungry executives to influence a future that they may have no place in.

Many people are afraid of these changes— of course they are: it’s part of being human to fear the unknown— but we can’t be so entrenched in the way the world works now that we are unable to handle change out of fear for what those changes might bring.

Change is going to be as much a part of our future as it has been of our past. Yes, it will happen faster and the changes themselves will be far more dramatic, but if we prepare for it, the change will mostly be positive. But that preparation is the key: we need to become more well-rounded as individuals so that we’re able to constantly adapt to new ways of doing things. In the future, the way you do your job may change four, five, or fifty times over the course of your life. Those who cannot, or will not, adapt will be left behind.

At the same time, the Singularity will give many more people the opportunity to be successful. Because things will change so rapidly there is a much greater likelihood that people will find something they excel at. But it could also mean that people’s successes are much shorter-lived. The days of someone becoming a legend in any one business (think Clive Davis in music, Steven Spielberg in movies, or the Hearst family in publishing) are likely over. But those who embrace and adapt to the coming changes, and surround themselves with others who have done the same, will flourish.

When major companies, set in their ways, try to convince us that change is bad and that we must stick to the status quo, no matter how much human inquisitiveness and ingenuity try to propel us forward, we must look past them. We must know in our hearts that these changes will come, and that if we welcome them into our world, we’ll become more successful, more free, and more full of light than we could have ever possibly imagined.

Ray Kurzweil once wrote, “The Singularity is near.” The only question will be whether we are ready for it.

The citations for the chapter include:

  • Luke Muehlhauser and Anna Salamon, "Intelligence Explosion: Evidence and Import"
  • Daniel Dewey, "Learning What to Value"
  • Eliezer Yudkowsky, "Artificial Intelligence as a Positive and a Negative Factor in Global Risk"
  • Luke Muehlhauser and Louie Helm, "The Singularity and Machine Ethics"
  • Luke Muehlhauser, "So You Want to Save the World"
  • Michael Anissimov, "The Benefits of a Successful Singularity"


181 comments

Does anyone remember when that one commenter freaked out and declared he would be attempting to marginally increase existential risk by sending right-wingers information about the singularity?


Clearly Waitingforgodel was talking about, y'know conservative people - with a conservative general mindset that extends into politics - and not progress-loving, ultra-capitalist right-wingers that get lumped in as "Conservatives". The distinction looks obvious enough to me.

And I'm not at all convinced that we should prefer the latter's enthusiasm to the former's anger.

Clearly Waitingforgodel was talking about, y'know conservative people - with a conservative general mindset that extends into politics - and not progress-loving, ultra-capitalist right-wingers that get lumped in as "Conservatives". The distinction looks obvious enough to me.

That's because you're a Burkean socialist. That distinction is not obvious to a large part of the population.

We ought to be more savvy than the general public in most regards, though, shouldn't we?
My point is that the distinction might not have been obvious to Waitingforgodel.
Yeah, well, in that case it's more evidence that he wasn't acting very sensibly :)
Glenn Beck is hardly someone whose enthusiasm you should welcome. He has a creationist agenda that he has found a way to support with the ideas surrounding the topic of the Singularity.
Yeah, well, I view a compartmentalized belief in something like Creationism as practically harmless - unless its holder is directly involved with the relevant fields of science, ofc. No, I was expressing concern about his ideological stance and his apparent view of society, which I find much less wise and much more harmful than, say, traditional European-style conservatism. You can tell that a commie like me is deeply alarmed when I start praising the latter! (But hush, this is veering off.)

Glenn Beck also openly states that his books are ghost written. So maybe he's heard of Kurzweil and wants to talk about the singularity, but I'm guessing he's not the one reading SI publications.

Does it matter? People read Glenn Beck's books; this both raises awareness about the Singularity and makes it a more "mainstream" and popular thing to talk about.

It's surprising how much information relevant to the Singularity Glenn Beck is familiar with, including people like Luke, Anna, Eliezer, and Michael.

Could Glenn be part of Lesswrong? Is there a possibility that Glenn Beck has read the LW sequences?

"...smiling down from their huge piles of utility..." Oh my God.

"...smiling down from their huge piles of utility..."

I thought of that too. O.o

Glenn Beck being Eliezer's Finest Apprentice .... God must be a fanfic writer.

Yes, but fanfic of what? I'd hate to learn I was living in a Lovecraft pastiche... or worse, a Lifetime Original Movie.
Three relevant TV tropes. (Also keep in mind that by hypothesis the author is superintelligent. You shouldn't really expect to understand anything better than a caveman would grasp the subtleties of Dostoyevsky.)
Here are some relevant TV Tropes.

Say what you will about Glenn Beck, he is a fairly well read person. It doesn't surprise me so much that he knows of these people, from what I know of him.

I have been impressed on his understanding of Hayek. He knows his Road to Serfdom.

That's on my list of things I didn't expect to see today.

The question now becomes what is Glenn Beck's username on LessWrong?

Every Right Wing Neo Reactionary Lesswrongian is now under suspicion.

Damn straight, comrade! We'd better uphold our political vigilance!
You seem to be suggesting a Not-Glenn-Beck Turing-like Test.
I bet US$1000 that Glenn Beck is one of User:Will_Newsome's sock-puppets.

Hmm, wow. This poses all sorts of interesting questions. How should this make me update on

  • Beck's overall rationality?
  • Beck's overall sincerity and what his agenda actually is?
  • Whether the Singularity/existential risk is being taken more seriously by a mainstream audience?
  • Whether this is good or bad press for SI?

Too provincial. The Singularity isn't important because thinking about it clearly and doing something to make it come out well were things SI/LW were involved in early on and this is a feather in our cap that speaks well for our "rationality" and may speak well for others if "they" agree with "us"... because the really interesting things aren't the particular current players, but where the game is headed.

SI/LW were and are important to the degree that they are an educated and good faith source of optimization pressure on the outcome of the singularity. Think of a sort of consequentialist litany... If it brings about better aggregate time-integrated outcomes for mainstream audiences to take the singularity seriously then I want mainstream audiences to take the singularity seriously. If it brings about worse aggregate time-integrated outcomes for mainstream audiences to take the singularity seriously then I do not want mainstream audiences to take the singularity seriously.

The real question is how and when contact with "real politics" will affect the trajectory and outcome of the singularity. If "serious" attention is predicted by...

Not much to almost any of these except possibly the last. Singularity related ideas are getting a lot more press than they did a few years ago especially through a Kurzweillian lens. Kurzweil's book was a best seller.

Wow! the author has read a lot of SI material, including some not meant for a popular audience, and seems to understand it. That is the best mainstream write-up of SI I have seen so far.

I realize that Beck's job is political pontification (I had to look up whether he was 'right' or 'left', not that it matters), but this is a very good sign.

Bringing FAI into mass discussion is not valuable in itself, and may be harmful, but social acceptability for the ideas is important in 'allowing' the smart and rich people to get involved -- not every smart or rich person is willing to swim against the current.

I struggle to think of an idea that Beck's endorsement would improve the social acceptability of. He is a fringe political figure (to the point where he lost his Fox News show). Associating AGI with Beck makes the idea less socially acceptable in the communities most smart and rich people are found.

Edit: So this comment is getting downvotes. Which is interesting because I intended to be making the exact same argument that I made in this part of the thread in comments that are heavily upvoted. I'm guessing the rhetoric in this comment somehow comes off as politically mind-dead in a way those other comments don't. That's either accidental or leaking from cached thoughts I developed when I was heavily political. I don't care about the downvotes but I am really curious to know what word choices resulted in that overly-political impression since I don't think the point in this comment is denotationally different from those in the other two.

I think your post was entirely reasonable. At first I was like, "Why would Glenn Beck haters care if Glenn Beck endorses the SIAI?" But then I remembered that I wasn't a Glenn Beck hater. For me, the equivalent of Glenn Beck would be someone like Roissy in DC. If Roissy endorsed the SIAI... well, I'd probably keep giving, but it would take away some of the joy. Donors affiliate.
I don't have anything against Beck personally (although I think his specific views are hilariously silly & short-sighted & ethically blind), but I too would understand if many people about as left of center as me ideologically would be repelled by such a connection. We must tread carefully; LWers suck at politics.
LWers are great at politics. It's just that politics suck for LWers. Edit: Since my meaning wasn't clear: Mind-killing is a feature not a bug of politics. It is not a truth-seeking activity and getting caught up in the signaling, the motivated thinking and the tribalism is not "being bad at politics". It's the opposite.
I think the comment that LWers suck at politics is the more apt description. Politics is the art of the possible, and it deals with WHAT IS, regardless of whether that is "rational." Attempting to demand that it conform to rationality standards dictated by this community guarantees that this community will lack political clout, especially if it becomes known that the main beneficiaries and promoters of the Singularity have a particularly pathological politics. Peter Thiel may well be a Libertarian Hero, but his name is instant death in even mainstream GOP circles, and he is seen as a fascist by the progressives. Glenn Beck is seen as a dangerous and irrationally delusional ideologue by mainstream politicians. That sort of endorsement isn't going to help the cause if it becomes well known. It will tar the Singularity as an ideological enclave of techno-supremacists. NO ONE at Less Wrong seems to be aware of the stigma attached to the Singularity after the performance of David Rose at the "Human Being in an Inhuman World" conference at Bard College in 2010. I was there, and got to witness the reactions of academics and political analysts from New York and Washington DC (some very powerful people in policy circles) who sat, mouths hanging aghast, at what David Rose was saying. When these people discover that Glenn Beck is promoting the Singularity (and Glenn Beck has some very specific agendas in promoting it, which are very selfish and probably pretty offensive to the ideals of Less Wrong), they will be even more convinced that the Singularity is a techno-cult composed of some very dangerous individuals.
Peter Thiel, the current main donor, probably doesn't mind at all. I think potential very rich donors are more like Thiel and less like, say, Buffett or Gates.
Jack, I get your point and agree with you. I have seen reasonable positions get hijacked by the sincere endorsement of people whose politics is distasteful to others. Without reference to what we personally think about Beck, it is possible that his endorsement will polarize people as they decide to be for or against SI according to their politics.

Not sure this constitutes good publicity.

Why? The guy is far from stupid and he preaches to the segment of the US population not generally interested in browsing this forum.

He has an extremely poor reputation among a large pool of potential donors, employers, policy-makers, academics, and 'serious' people. If you want your ideas taken seriously, he is not the figure you want introducing them to the general public. This is true regardless of what his reputation ought to be.

But the vast majority of people reading his book will have a positive opinion of him.

Yes, but most people reading his book are considerably less important (at least, for my purposes) than people watching Lawrence O'Donnell when he spends five minutes next week making fun of Glenn Beck's book. Or Maureen Dowd's column when she does the same. Obviously if this is just a few pages in a little-noticed book there is no problem with it at all (it's basically neutral). If Beck decides to spend a lot of time talking about it, to the point that it becomes identifiable as "one of Beck's ideas", that would be bad. I don't have a well-calibrated idea of how much publicity Beck gets these days or how often he puts out books, which is why I'm unsure of the effect.

These seem like low-probability concerns:

  • Beck is on the air hours a day, and he has put out about 20 books, with no end in sight. He was a wild morning-radio disc jockey and still uses that bombastic style, and never scripts out what he is going to say. His opponents just cut out tiny samples, the least politically correct stuff, to slam him with. It's very unlikely that Blue team commentators would ever get around to something this serious, when there are far more juicy bits.

  • This section mentions Kurzweil often enough (over ten times), along with other names, to make it very clear to anyone that this isn't 'Beck's' idea.

  • Beck's main 'sciencey' project, which he constantly promotes, funds, and fundraises for, is a cancer treatment in which the patient is injected with metal nanoparticles that bond to the malignant tissue. The particles heat up under radio waves, bursting these cells while leaving the rest of the body untouched. I've seen no evidence of other media outlets connecting this project with him, or any mention of this association at all.

It is my prediction that this will be ignored, as his critics have more to gain by ignoring any pro-sciencey Beck association.

This evidence is sufficient for me to update to your level of concern.

He puts out books fairly frequently, I believe. I can't attest to anything else. It's been years since I paid attention to him.
It does, however, constitute good evidence of the success of other publicity.
I really enjoyed the debate following this comment; both in that I felt it was necessary, and also in that it was resolved very quickly and elegantly.
Is this "Oy, we're in trouble, just like Jack says," or "Oy, Jack just said something stupid"?
I don't know what "Oy" means. Can you clarify?
"Oy"-ignorance is what identifies you as a non-NY LessWrongian :-p
Traditionally, "an exclamation typically expressing mild frustration or expressing feelings of uncertainty or concern."
I had thought that exclamation was spelled "Oi!"
Nah, that means like "hey!", I think. You might've reversed the two.
I follow American politics pretty closely, but am tone-deaf to some of the responses to this post. Jack apparently is saying "Beck has a particular ideology and a mass audience and so this will embroil us in politics." Is Carl saying "Leave off the politics already, it's good to have the write-up," or "Yes, you're right"?

This breaks my model. I mean, Beck is a libertarian, and libertarians are more friendly than average to transhumanist ideas. Also he is a Mormon, another group memetically compatible with transhumanism.


I would like confirmation of this text; it may be a hoax.

I would have thought a LessWrongian would at least not provide a clear and obvious case of confirmation bias.

No, confirmation bias would be if the comment thread were full of people saying "of course Glenn Beck signs off on Singularitarian ideas, this makes perfect sense according to my model of politics" rather than "what the fuck, this breaks my model, I think I'm being trolled somehow".

(It might be worth reiterating that Glenn Beck is Mormon and into eschatology, so his singularitarianism is almost expected. I think the only big surprise is that SingInst's publication strategy has apparently been rather successful.)
Yes, this. It's also worth noting that because Glenn Beck is a Mormon, we have ample evidence that his belief in the singularity (if he believes in it at all -- this could still very well be a ghostwriter talking) is not a carefully considered belief. If he can convince himself of Mormonism, he is very well capable of convincing himself of just about anything.
Eh. Few "beliefs" and "belief-systems"—or more accurately, decision-policy-systems and social-signaling-systems—are as attractive as Mormonism. I don't think being Mormon is a sign of low epistemic standards so much as a sign of high instrumental rationality. Furthermore I think Glenn Beck or his ghostwriter understands the Singularity's political situation better than most LWers. That said I've never heard or read anything by Glenn Beck except the above excerpt.
Without a single exception, every convert to Mormonism I've met (and this includes several colleagues) professes to have converted because they find Mormons to be kind, conscientious, and positively directed people. I suspect this is how most people evaluate membership in a community. The "beliefs" and "belief-systems" are somewhat arbitrary to most, like the colors on the jerseys of sports teams. In fact, in a population where most professed "beliefs" and "belief-systems" are mainly signals of group affiliation and less subjects for serious thought, this is arguably a more rational way of evaluating the desirability of group membership.
I grew up Mormon and attended BYU for a few years, and a lot of descriptions of Mormons I read on here are completely foreign to me. Knowing that the LDS Church was literally true was always an extremely important aspect of the religion when I grew up--it wasn't just about the community. I suspect that the types of Mormons that people on LessWrong tend to come in contact with are very much outside the mainstream. While I can see that Mormon theology can be twisted to support a sort of trans-humanism, in my experience the typical Utah Mormon would find this very bizarre.
I think this is rather typical and also age and personal development related. As a child, I was a staunch believer of my Catholic Sunday school teachings, to the point where I found my parents to be alarmingly lax and contrary. This changes for lots of people as they get older. In my 30's I had a girlfriend who described herself as a "cultural Catholic" and basically went to church because it's what most of the people she knew did. In any case, at the local Hackerspace, I found that most everyone professed beliefs in the importance of science and rationality, but much of this is indistinguishable from professing a preference for a genre of music or a type of gaming. As far as people rigorously applying rationally grounded beliefs to their own lives, I don't think much was done which couldn't be comfortably explained as people generally doing what their peers do. My Mormon colleagues were all working for software companies. The typical human being finds trans-humanism bizarre. Most people are just doing what those around them are doing, saying what those around them are saying, and generally just getting on with their day to day lives. Perhaps we should hold our leaders to a higher standard.
You're claiming all religious beliefs reduce to decision policies and social signals? That's pretty cynical, even for you. It can't be both? (Not that I see its "high instrumental rationality" either.) Bleh. If I wanted to argue the merit of X's thought to people who hadn't read X, I'd go harass XiXiDu on G+. Consider me tapped out.
Not all, just a pretty big chunk, especially among Mormons. I guess I didn't think of it as "cynical". That's a weird word. 'Course it can. But the existence of two causal factors makes it hard to determine which of the two causal factors contributed most of the causal juices to our observation, such that "low epistemic standards" isn't quite as obviously a big factor. I edited my comment to "Singularity's political situation". I didn't mean to imply Beck has a good political model more generally. Priors say he doesn't.
How is Mormonism attractive? You don't even get multiple wives anymore. And most people think you're crazy.
But orthonormal, your example displays hindsight bias rather than confirmation bias! I interpret billswift's comment to mean: (Or possibly it was meant the other way around?) In any case, I agree that billswift's comment is off-base, because GLaDOS' comment does not actually show confirmation bias.

Hm. This is very interesting to me. I think I am more surprised by Beck coming out for this than I should be. I used to listen to him, when he was far less popular and before he started predicting the end times and being generally insane. I don't necessarily think that Beck believes all of the things that he says, but that he knows that being outlandish brings him more success. And, accurate or not, the singularity is considered outlandish to the average person right now.

But I'm glad for the publicity. It's not like he said anything in that excerpt that I particularly disagree with - which surprised me, given the topic and his religious proclivities. And the vast majority of people reading his book will have a positive opinion of him, as opposed to a negative one, which will lead to more respect and interest in SI, not less.

Let's just make sure that if anyone comes to LW referred by Glenn Beck, we point them to Politics is the Mindkiller ASAP.

Edited to add: It would have been really interesting to see people's opinions of this if you had withheld the author and not told us until later.

It would have been really interesting to see people's opinions of this if you had withheld the author and not told us until later.

Somehow, I read the excerpt without noting the author. I got about halfway through it and thought it must be a piece that some relatively new Less Wronger had gotten published in a local paper somewhere (i.e. half-decent but not really high-quality, and obviously edited for mainstream publication). Then I looked back at the top and thought,


Similarly. My previous beliefs about Glenn Beck pointed towards a devout Christian fundamentalist. I would not have considered that he would support the Singularity, much less take it seriously. It seems I have to update my beliefs quite a bit.

Beck is a Mormon, and Mormons generally seem a lot friendlier to transhumanist-type ideas than standard Christians.

That's definitely true.
He believes that the Singularity is proof that the Universe was created by an Intelligent Creator (who happens to be the Christian God), and that it is further evidence of YEC.

With self-improving AI we may be able to eradicate suffering and death just as we once eradicated smallpox.

It's one thing to be for technological progress - quite another to be for eradicating Death. Sign that man up for the Bayesian Conspiracy.

People talking about how low-status Glenn Beck is need to realize that numerically far more people take Glenn Beck seriously than Kurzweil. Just because the Brahmins (Moldbug's terminology) hate him doesn't mean he isn't influential and popular among the class of people who are vulnerable to being misled into reacting badly to the Singularity.

We realize that. We just care a lot more about the views of the Brahmins than your second class of people. Which class of people has the larger microphone? Which class has the money? Which class produces mathematicians? Etc.

I believe you mean "the Broheim."
Thank you for catching the typo. Also, I find it refreshingly honest that you acknowledge the reality of this. Yes, they by definition don't control the media or academia; their material resources aren't trivial, but it is impossible for them to coordinate, since any organization they create to defend their interests will be subverted or marginalized. The class produces its fair share of mathematicians. In absolute numbers I'd say more than the upper classes.
I'm not sure the Moldbuggian taxonomy carves reality at the joints but I have no problem employing it as needed. I doubt it (IQ, heredity), unless you're including Asian immigrant populations. But in any case mathematicians usually spend a good amount of time in university which means, wherever they grew up, by the time they're looking for jobs they're usually firmly entrenched in a brahmin social circle.
Ah I fear I misunderstood you and took "produces" rather too literally. Don't forget that STEM fields are attractive to those who want to climb socially but lack the graces because of their upbringing. Obviously they retain few of those who go to become mathematicians.
Which class has any incentive to change the status quo? When planning what amounts to a massive reorganization of society, a.k.a. a revolution, you want the support of "the people," since the current upper class likes things the way they are.
There may or may not be something here, but Brahmins are mostly progressives trying to continue ongoing work reorganizing society, while the second group (someone remind me what Moldbug calls white people who don't hate Glenn Beck?), insofar as they are political, are reactionary and would prefer to return to how society used to be organized.
Being influential is not necessarily a good thing, especially when Glenn Beck's influence is in delusional conspiracy theories, evangelical Christianity, and Young Earth Creationism.

The chapter begins with a pretty delightful infelicity, since in 1678 Beethoven's Fifth Symphony was still 130 years away from its premiere. Granted, this is very specialized knowledge available only to professional musicologists like myself and I doubt Beck's publisher can afford my consulting fees.

(I can just imagine the English scientists standing around wondering why this lunatic is inflicting this cacophony on them and looking at them so expectantly.)

The chapter begins with a pretty delightful infelicity, since in 1678 Beethoven's Fifth Symphony was still 130 years away from its premiere.

If Siri made the journey back in time, why are you surprised that an mp3 of Beethoven's 5th made the journey? Siri was created slightly later than Beethoven's 5th.

Somewhat more amazing is that this iPhone has cellular service in the 17th century, and can make video calls to future people. It must be on Verizon.

Well, at the risk of explaining my joke, I only meant to suggest that the opening of the chapter makes it sound like Beck thinks Beethoven's Fifth would have been "famous" and instantly recognizable to Englishmen in 1678. Maybe I should charitably assume that Beck originally had it as "the latest church anthem by Purcell" but his editors made him change it.

The wording seems to be ambiguous as to whether it's saying Beethoven's Fifth is "famous" to the readers or "famous" to people in 1678.
This wouldn't violate Einstein's relativity, given a wormhole with openings appropriately located in space-time. So Apple and Verizon have wormholes in this scenario.

Pretty sure Beethoven's Fifth would be impressive coming out of a phone whether or not you knew what it was.

I see it as being like the Chuck Berry scene in Back to the Future.

Or where the "scientists" would come from since that term wouldn't exist for another 150 years or so.
He should have brought Archimedes's Chronophone with him instead! (ugh, I'm sorry for that.)
Umm, actually he doesn't say that they recognized it, just that they would be amazed that it makes music. That seems pretty plausible to me. Seems like you got mind-killed pretty hard here.
I think we need something like Wikipedia:Assume_the_assumption_of_good_faith.

your seventeenth-century friends would worship you

“Your seventeenth-century friends”? My last-year self didn't fully realize how awesome it would be to have a high-res camera, a satellite navigator, a flashlight with a strobe effect, and an Internet connection always with me. I was so proud to hold onto my Nokia 3330. How silly of me.

(Wasn't this comment at -3 shortly after I published it? Or did I hallucinate that?)
It was. Many of my comments on this thread underwent a similar reversal.

Finally, a palpable sign of success! I'm so happy that you guys are finally getting your message across :o)

This is not exactly "success." There are some populations that will pervert the things they get in their hands.

This is interesting in what it suggests for the future.

Romney is a conservative mormon, for example.

Beck's rant proposing that the political left is aligned with a nebulous big-government/big-business anti-technological movement may be mostly rhetorical hot air, but it did make me wonder...

Well before AGI is superintelligent, weaker AGI and stronger narrow AI will likely lead to a hugely disruptive socio-economic transition. This isn't being discussed much (outside of perhaps a lone blog and a book or two).

Actually, this transition is already under way. ...

I get the same feeling with Thiel noticing our lack of techno-optimism in our culture compared with our optimism in 1950. Doesn't he largely blame entrenched interests putting up regulation to stop new startups?
I failed to spark good discussion of that subject with this post on "semi-general" AIs.
Meandering from that post, I came across this graph of productivity versus employment. I'm fairly convinced technology is the leading factor in the divergence, even though others mention the financial sector and probably politically motivated concerns about different presidents. Not sure if we will experience another industrial-revolution scenario of labor devaluation or whether this change will be qualitatively different. I wonder who will benefit fiscally from booms in non-human productivity, and whether monetary gain will still mean the same thing it recently has.
I have noticed it too. Right at the cusp of change, humans will have tremendous ability to exercise choices about where this all ends up. Often the choices of the masses in a crisis result in more crisis. I think it's wisest to completely avoid situations where the masses are making choices in crisis.

Beck probably has ghostwriters. That excerpt was probably written by one of them, and could just be something Beck signed off on because he's vaguely singularity-positive from having interviewed Kurzweil; it might be that Beck never read any of the references, and if the excerpt is surprisingly high quality, we shouldn't automatically attribute that to him.


These are the kinds of research problems being tackled by the Singularity Institute in America and the Future of Humanity Institute in Great Britain. Unfortunately, our silly species still spends more money each year on lipstick research than we do on figuring out how to make sure that the most important event of this century (maybe of all human history)— the intelligence explosion— actually goes well for us.

Woo woo!

I wonder if this content of Glenn Beck's book with so much SI stuff in it is what sent Will Newsome into the deep end without a paddle recently?

Despite the obviously low base rate, I intuitively felt it was more plausible that Will Newsome is Glenn Beck's ghostwriter than that Beck wrote that excerpt himself.

Many years ago, a friend shared with me the putative fact that the Vatican was, at the time, the largest single stockholder in Johnson and Johnson, which was at the time the world's largest manufacturer of prophylactics. (I have no idea if either of these putative facts were ever actually true.) He went on to explain that, when he learned this, everything else began to make sense: whenever he encountered some X that might otherwise seem senseless, he could assure himself that in a world where the Vatican was the world's largest supplier of prophylactics, X made perfect sense.

I feel similarly about the world where Will Newsome is Glenn Beck's ghostwriter.

No causal connection as such, but you never know, there might be a latent node.

A (dated) montage on AI from Beck's Radio show. Much less Singularity-friendly, but perhaps he's changed since then.

Unfortunately Kurzweil’s vision, as with that of other adherents of the transhumanist cult, is grossly distorted.

His efforts to underline the very clear exponential pattern of technological development (which has been apparent to many of us for more than half a century and even quantified by Gordon Moore 40 years back) are certainly to be commended.

Nevertheless, Kurzweil remains completely oblivious to the clear and inevitable extension of biological evolution which, by a process of self-assembly rather than direct human design, will, within decades...

Glenn Beck was one of the first TV personalities to interview Ray.

The interview is on YouTube, and is very informative as to Glenn's objectives and agenda.

Primarily, he wishes to use the ideology behind the Singularity as support for "Intelligent Design." In the interview, he makes an explicit statement to that effect.

Glenn Beck is hardly "rational" as per the definition of "Less Wrong."

Intelligent Design?

The thought that machines could one day have superhuman abilities should make us nervous. Once the machines are smarter and more capable than we are, we won’t be able to negotiate with them any more than chimpanzees can negotiate with us.

Not the best formulated example. From what I've read in accounts of chimpanzee owners and minders, chimpanzees do negotiate with people. From what I've read and heard about dog owners, it seems to me that dogs also negotiate with their owners.

I suspect that the ability to negotiate requires less int...

Also, don't forget that humans will be improving just as rapidly as the machines. My own studies (Cognitive Science and Cybernetics at UCLA) tend to support the conclusion that machine intelligence will never be a threat to humanity. Humanity will have become something else by the time that machines could become an existential threat to current humans.
So the real threat to humanity is the machines that humanity will become. (Is in the process of becoming.)

This is a long quote. Excerpt moar.

I'd prefer he write a summary of it - I am quite glad that I was able to read the whole thing.
