A monthly feature. Note that I generally don’t include very recent writing here, such as the latest blog posts (for those, see my Twitter digests); this is for my deeper research.

AI

First, various historical perspectives on AI, many of which were quite prescient:

Alan Turing, “Intelligent Machinery, A Heretical Theory” (1951). A short, informal paper, published posthumously. Turing anticipates the field of machine learning, speculating on computers that “learn by experience” through a process of “education” (which we now call “training”). This line could describe current LLMs:

They will make mistakes at times, and at times they may make new and very interesting statements, and on the whole the output of them will be worth attention to the same sort of extent as the output of a human mind.

Like many authors before and after him, Turing speculates that the machines may eventually replace us:

… it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.

(I excerpted Butler’s “Darwin Among the Machines” in last month’s reading update.)

Irving John Good, “Speculations Concerning the First Ultraintelligent Machine” (1965). Good defines an “ultraintelligent machine” as “a machine that can far surpass all the intellectual activities of any man however clever,” roughly our current definition of “superintelligence.” He anticipates that machine intelligence could be achieved through artificial neural networks. He foresees that such machines would need language ability, and that they could generate prose and even poetry.

Like Turing and others, Good thinks that such machines would replace us, especially since he foresees the possibility of recursive self-improvement:

… an ultra-intelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind…. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

(See also Vernor Vinge on the Singularity, below.)

Commenting on human-computer symbiosis in chess, he makes this observation on imagination vs. routine, which applies to LLMs today:

… a large part of imagination in chess can be reduced to routine. Many of the ideas that require imagination in the amateur are routine for the master. Consequently the machine might appear imaginative to many observers and even to the programmer. Similar comments apply to other thought processes.

He also has a fascinating theory on meaning as an efficient form of compression—see also the article below on Solomonoff induction.

The Edge 2015 Annual Question: “What do you think about machines that think?” with replies from various commenters. Too long to read in full, but worth skimming. A few highlights:

  • Demis Hassabis and a few other folks from DeepMind say that “the ‘AI Winter’ is over and the spring has begun.” They were right.
  • Bruce Schneier comments on the problem of AI breaking the law. Normally in such cases we hold the owners or operators of a machine responsible; what happens as the machines gain more autonomy?
  • Nick Bostrom, Max Tegmark, Eliezer Yudkowsky, and Jaan Tallinn all promote AI safety concerns; Sam Harris adds that the fate of humanity should not be decided by “ten young men in a room… drinking Red Bull and wondering whether to flip a switch.”
  • Peter Norvig warns against fetishizing “intelligence” as “a monolithic superpower… reality is more nuanced. The smartest person is not always the most successful; the wisest policies are not always the ones adopted.”
  • Steven Pinker gives his arguments against AI doom, but also thinks that “we will probably never see the sustained technological and economic motivation that would be necessary” to create human-level AI. (Later that year, OpenAI was founded.) If AI is created, though, he thinks it could help us study consciousness itself.
  • Daniel Dennett says it’s OK to have machines do our thinking for us as long as “we don’t delude ourselves” about their powers and we don’t grow too cognitively weak as a result; he thinks the biggest danger is “clueless machines being ceded authority far beyond their competence.”
  • Freeman Dyson believes that thinking machines are unlikely in the foreseeable future and begs off entirely.

Eliezer Yudkowsky, “A Semitechnical Introductory Dialogue on Solomonoff Induction” (2015). How could a computer process raw data and form explanatory theories about it? Is such a thing even possible? This article argues that it is, and explains an algorithm that would do it. The algorithm is completely impractical, because it requires roughly infinite computing power, but it helps formalize concepts in epistemology such as Occam’s Razor. Pair with I. J. Good’s article (above) for the idea that “meaning” or “understanding” could emerge as a consequence of seeking efficient, compact representations of information.
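To make the idea concrete, here is a toy sketch in Python. This is my own illustration, not anything from the article: the hypothesis class, the made-up description lengths, and the predict_next function are all invented for the example. Real Solomonoff induction enumerates all Turing-machine programs and weights each by 2^(-length in bits), which is uncomputable; the sketch substitutes a tiny hand-picked hypothesis class to show the mechanics.

    from fractions import Fraction

    # Each hypothesis: (name, description length in bits, rule mapping index -> bit).
    # The lengths are invented for illustration; shorter = simpler = higher prior.
    HYPOTHESES = [
        ("all zeros",      2, lambda i: 0),
        ("all ones",       2, lambda i: 1),
        ("alternating 01", 3, lambda i: i % 2),
        ("period-3 001",   5, lambda i: 1 if i % 3 == 2 else 0),
    ]

    def predict_next(observed):
        """Return P(next bit = 1) given an observed bit sequence.

        The prior weight of a hypothesis is 2^(-description length);
        hypotheses that contradict any observed bit are discarded.
        """
        weight_one = Fraction(0)
        weight_total = Fraction(0)
        n = len(observed)
        for _name, length, rule in HYPOTHESES:
            if all(rule(i) == bit for i, bit in enumerate(observed)):
                w = Fraction(1, 2 ** length)
                weight_total += w
                if rule(n) == 1:
                    weight_one += w
        return weight_one / weight_total if weight_total else None

    # After observing 0, 0 both "all zeros" (weight 1/4) and "period-3 001"
    # (weight 1/32) survive. The simpler hypothesis dominates, so the
    # predicted probability of a 1 next is only 1/9: Occam's Razor in action.
    print(predict_next([0, 0]))  # 1/9

Exact Fractions keep the arithmetic transparent; the point is just that weighting hypotheses by description length makes the simplest consistent explanation dominate the prediction.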

Ngo, Chan, and Mindermann, “The alignment problem from a deep learning perspective” (2022). A good overview of current thinking on AI safety challenges.

The pace of change

Alvin Toffler, “The Future as a Way of Life” (1965). Toffler coins the term “future shock,” by analogy with culture shock, and claims that the future is rushing upon us so fast that most people won’t be able to cope. Rather than calling for everything to slow down, however, he calls for improving our ability to adapt: his suggestions include offering courses on the future, training people in prediction, creating more literature about the future, and generally making speculation about the future more respectable.

Vernor Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era” (1993). Vinge speculates that when greater-than-human intelligence is created, it will cause “change comparable to the rise of human life on Earth.” This might come about through AI, the enhancement of human intelligence, or some sort of network intelligence arising among humans, computers, or a combination of both. In any case, he agrees with I. J. Good (see above) on the possibility of an “intelligence explosion,” but unlike Good he sees no hope for us to control or confine it:

Any intelligent machine of the sort he describes would not be humankind’s “tool”—any more than humans are the tools of rabbits, robins, or chimpanzees.

I mentioned both of these pieces in my recent essay on adapting to change.

Early automation

A Twitter thread on labor automation gave me some good reading recommendations, including:

Van Bavel, Buringh, and Dijkman, “Mills, cranes, and the great divergence” (2017). Investigates the divergence in economic growth between western Europe and the Middle East by looking at investments in mills and cranes as capital equipment. (h/t Pseudoerasmus)

John Styles, “Re-fashioning Industrial Revolution. Fibres, fashion and technical innovation in British cotton textiles, 1600-1780” (2022). Claims that mechanization in the cotton industry was driven in significant part by changes in the market, in particular the demand for certain high-quality cotton goods. “That market, moreover, was a high-end market for variety, novelty and fashion, created not by Lancastrian entrepreneurs, but by the English East India Company’s imports of calicoes and muslins from India.” (h/t Virginia Postrel)

Other

Ross Douthat, The Decadent Society (2020). “Decadent” not in the sense of “overly indulging in hedonistic sensual pleasures,” but in the sense of (quoting from the intro) “economic stagnation, institutional decay, and cultural and intellectual exhaustion at a high level of material prosperity and technological development.” Douthat says that the US has been in a period of decadence since about 1970, which seems about right and matches observations of technological stagnation. He quotes Jacques Barzun (From Dawn to Decadence) as saying that a decadent society is “peculiarly restless, for it sees no clear lines of advance,” which I think describes the US today.

Richard Cook, “How Complex Systems Fail” (2000). “Complex systems run as broken systems”:

The system continues to function because it contains so many redundancies and because people can make it function, despite the presence of many flaws. After accident reviews nearly always note that the system has a history of prior ‘proto-accidents’ that nearly generated catastrophe. Arguments that these degraded conditions should have been recognized before the overt accident are usually predicated on naïve notions of system performance. System operations are dynamic, with components (organizational, human, technical) failing and being replaced continuously.

Therefore:

ex post facto accident analysis of human performance is inaccurate. The outcome knowledge poisons the ability of after-accident observers to recreate the view of practitioners before the accident of those same factors. It seems that practitioners “should have known” that the factors would “inevitably” lead to an accident.

And:

This dynamic quality of system operation, the balancing of demands for production against the possibility of incipient failure is unavoidable. Outsiders rarely acknowledge the duality of this role. In non-accident filled times, the production role is emphasized. After accidents, the defense against failure role is emphasized.

Ed Regis, “Meet the Extropians” (1994), in WIRED magazine. A profile of a weird, fun community that used to advocate “transhumanism” and far-future technologies such as cryonics and nanotech. I’m still researching this, but from what I can tell, the Extropian community sort of disbanded without directly accomplishing much, although it inspired a diaspora of other groups and movements, including the Rationalist community and the Foresight Institute.
