Follow me as I read through an essay on AI theology and give my comments: Good Shepherds (O’Gieblyn, 2019).
“It’s not a human move,” said one former champion. “I’ve never seen a human play this move.” Even AlphaGo’s creator could not explain the algorithm’s choice. But it proved decisive.
Obligatory reference to AlphaGo. Could have mentioned AlphaZero, but it was not nearly as popular as AlphaGo.
While Elon Musk and Bill Gates maunder on about AI nightmare scenarios—self-replication, the singularity—the smartest critics recognize that the immediate threat these machines pose is not existential but epistemological.
By "immediate" the author means "now". Reasonable. The AI apocalypse is not scheduled until perhaps 100 years later.
Yuval Noah Harari has argued that the religion of “Dataism” will soon undermine the foundations of liberalism. “Just as according to Christianity we humans cannot understand God and His plan,” he writes, “so Dataism declares that the human brain cannot fathom the new master algorithms.”... Our role as humans is not to question the algorithmic logic but to submit to it.
"Dataism" does not necessarily undermine liberalism. It is possible to augment human thought to keep up with the data deluge and so preserve a (trans)human liberalism, though this is not guaranteed.
Job assumes the role of a prosecuting attorney and demands a cosmic explanation for his suffering. God dutifully appears in court, but only to humiliate Job with a display of divine supremacy. Where were you when I laid the foundation of the earth? he thunders, then poses a litany of questions that no human can possibly answer. Job is so flummoxed, he denounces his own ability to reason. “Therefore I have declared that which I did not understand,” he says, “Things too wonderful for me, which I did not know.”
The problem of theodicy is amusing, and trials of God can be total dark humor. My favorite:
In a concentration camp, one evening after work, a rabbi called together three of his colleagues and convoked a special court. Standing with his head held high before them, he spoke as follows: “I intend to convict God of murder, for he is destroying his people and the law he gave to them ... I have irrefutable proof in my hands. Judge without fear or sorrow or prejudice. Whatever you have to lose has long since been taken away.” The trial proceeded in due legal form, with witnesses for both sides, with pleas and deliberations. The unanimous verdict: “Guilty.”
Back to the essay.
Throughout the Middle Ages, Christians viewed him in quite a different light. Theology was still inflected with Platonism and rested on the premise that both God and the natural world were comprehensible to human reason. “All things among themselves possess an order, and this is the form that makes the universe like God,”... It wasn’t until the fourteenth century that theologians began to argue that God was not limited by rational laws; he was free to command whatever he wanted, and whatever he decreed became virtuous simply because he decreed it. This new doctrine, nominalism, reached its apotheosis in the work of the Reformers. Like Calvin, Martin Luther believed that God’s will was incomprehensible. Divine justice, he wrote, is “entirely alien to ourselves.”
I do think that if a God made this world, then yes, They are clearly entirely alien and probably more interested in quantum mechanics than justice. Also, one should note that the Reformation probably neither helped nor suppressed science. See History: Science and the Reformation (David Wootton, 2017).
I often felt myself to be an ant in a network of vast global structures—the market, technology—that exceeded my powers of comprehension. To my mind, even contemporary physics (the little I’d read), with its hypotheses on multiverses and other dimensions, echoed Calvin’s view that our bodies were faulty instruments ill-equipped to understand the absolute.
No need to invoke the more speculative parts: the verified Standard Model is strange enough. Or the endless technology stacks in modern electronics...
The same year he received his sentence, a ProPublica report found that the software was far more likely to incorrectly assign higher recidivism rates to black defendants than to white defendants. The algorithm suffers from a problem that has become increasingly common in these models—and that is, in fact, inherent to them. Because the algorithms are trained on historical data (for example, past court decisions), their outcomes often reflect human biases.
- There are many standards of statistical fairness, and they are mutually incompatible. COMPAS is unfair in one statistical sense but (almost) fair in another. The reason is extremely simple and involves base rates.
- Essentially, the charge of unfairness boils down to: "Base rates (Blacks recidivate more than Whites) are biases, and biases are morally wrong."
- Human judges do "training on historical data" too, and call it "common law". From what I know, English law is a giant tarball of history, really hard to change, just like an old code base. Despite this, it is still used and has been defended for centuries as really good for various reasons. I wonder whether the defenses of the case-law system could be turned against the author.
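The incompatibility in the first bullet can be made concrete with a little confusion-matrix arithmetic (a toy sketch with made-up numbers, following the identity from Chouldechova's analysis of COMPAS): once calibration-style fairness (equal positive predictive value) and equal true-positive rates are fixed, differing base rates force differing false-positive rates.

```python
def fpr(prevalence, ppv, tpr):
    """False-positive rate implied by the base rate (prevalence),
    positive predictive value (calibration), and true-positive rate.
    From confusion-matrix identities:
    FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * TPR."""
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * tpr

# Equal PPV and TPR for both groups ("fair" in the calibration sense)...
fpr_high = fpr(prevalence=0.5, ppv=0.7, tpr=0.6)  # higher-base-rate group
fpr_low = fpr(prevalence=0.3, ppv=0.7, tpr=0.6)   # lower-base-rate group

# ...yet the higher-base-rate group necessarily gets more false positives.
print(fpr_high, fpr_low)
```

The numbers (0.5, 0.3, 0.7, 0.6) are purely illustrative; the point is that when base rates differ, no classifier can equalize all three quantities at once.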
From what I've observed, "bias" and "prior" are distinguished entirely by moral judgment: a "bias" is an immoral base-rate belief, while a "prior" is a moral one. As such, whether a base-rate belief counts as "bias" or "prior" can easily be manipulated by framing it in moral language.
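That the label is moral rather than mathematical can be seen from Bayes' rule, where a base-rate belief enters simply as the prior (a minimal sketch; all numbers are made up):

```python
def posterior(prior, tpr, fpr):
    # Bayes' rule: P(trait | positive signal).
    # The base rate enters as `prior`; the arithmetic is identical
    # whether one labels that number a "prior" or a "bias".
    return prior * tpr / (prior * tpr + (1 - prior) * fpr)

# A signal that fires on 90% of true cases and 20% of false ones
# still yields only about 1/3 confidence at a 10% base rate.
print(posterior(prior=0.1, tpr=0.9, fpr=0.2))
```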
According to Jonathan Haidt's research, moral feelings come in six kinds:
- Care / Harm
- Fairness / Cheating
- Liberty / Oppression
- Loyalty / Betrayal
- Authority / Subversion
- Sanctity / Degradation
For the distinction between "bias" and "prior", the most relevant kinds are the first three. For example, the persistent denial of the effectiveness of IQ tests is motivated reasoning, based on moral rejection of how the tests could be used: to justify oppressing low-IQ people, to unfairly let high-IQ children into elite schools, and to cause harms of many kinds. Through this moral tainting, any prior based on IQ test results becomes an immoral prior, thus a "bias".
For more on the moralization of base-rate beliefs, see for example The base rate principle and the fairness principle in social judgment (Cao & Banaji, 2016) and The Psychology of the Unthinkable (Tetlock et al., 2000).
Or just contemplate how strange it is that sexual orientation is not considered discrimination, but "friendship orientation" might be. I can only be friends with females, and a male acquaintance (who really wants to be my friend) once wondered whether that counts as discrimination.
Some have developed new methods that work in reverse to suss out data points that may have triggered the machine’s decisions. But these explanations are, at best, intelligent guesses. (Sam Ritchie, a former software engineer at Stripe, prefers the term narratives, since the explanations are not a step-by-step breakdown of the algorithm’s decision-making process but a hypothesis about reasoning tactics it may have used.)
A toy example is how WolframAlpha manages to show you how to solve an integral "step by step". What it actually does is internally use a general algorithm that is too hard for humans to understand, then separately run an expert system that looks at the problem and the result and makes up a chain of integration tricks that a human could plausibly have thought up.
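The same pattern (an opaque decision procedure plus a separate explainer that guesses at reasons) can be sketched in a few lines. Everything here is hypothetical toy code, not how WolframAlpha or any real explainability tool is actually implemented:

```python
def blackbox(x):
    # Stand-in for an opaque model; the explainer never reads this rule.
    return x[0] * 2 + x[1] * x[2] > 5

def narrative(x, model):
    # Perturbation-based guess at what mattered: zero out each feature
    # and report the ones whose removal flips the decision.
    base = model(x)
    flips = []
    for i in range(len(x)):
        perturbed = [0 if j == i else x[j] for j in range(len(x))]
        if model(perturbed) != base:
            flips.append(i)
    return flips

print(narrative([3, 1, 2], blackbox))  # feature 0 alone flips the decision
```

The output is a story about the model, not a trace of it: the explainer only sees input-output behavior under perturbation, which is why "narrative" is a fairer word than "explanation".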
Humans are prone to the same kind of guessing. When people introspect on a decision, they feel as if they simply know how they arrived at it, but in reality they are inferring it using folk theories of psychology. This is the lesson of (Nisbett & Wilson, 1977) and The Unbearable Automaticity of Being (Bargh & Chartrand, 1999).
As Yuval Noah Harari points out in his book Homo Deus, humanism has always rested on the premise that people know what’s best for themselves and can make rational decisions about their lives by listening to their “inner voice.” If we decide that algorithms are better than we are at predicting our own desires, it will compromise not only our autonomy but also the larger assumption that individual feelings and convictions are the ultimate source of truth. ... “Whereas humanism commanded: ‘Listen to your feelings!’” Harari argues, “Dataism now commands: ‘Listen to the algorithms! They know how you feel.’”
The inner voice, as noted above, is a social voice. It exists to explain a human's actions to other humans; it does not perceive accurately. An asocial species probably has no inner voice, for there would be no benefit to evolving one.
If this Dataism prediction comes to pass, then the inner voice would simply come from the outside. Like, I would think, "What do I like to do today?" [datastream comes from some hidden decision module located somewhere overseas] "Oh yes, write an essay!"
Instead of experiencing a kind of "me listening to the clever robot", it would be like "me listening to me", except the "me" would be weird and spill outside the skull.
The effect would be the same, but the first framing raises the moral alarm: it can be judged immoral under the "Liberty / Oppression" rule.
Kaczynski argues [in Unabomber Manifesto] that the common sci-fi scenarios of machine rebellion are off base; rather, humans will slowly drift into a dependence on machines, giving over power bit by bit in gradual acquiescence. “As society and the problems that face it become more and more complex and machines become more and more intelligent,” he predicts, “people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones.”
Kaczynski writes with the precision of a mathematician (he did complex analysis back in school), and his manifesto sets out his primitive humanist philosophy clearly.
This vision seems unlikely to me, simply because it is too benign. People might adapt and survive, but I doubt they would be humans. I believe the future is vomitingly strange.
Kurzweil, who is now a director of engineering at Google, claims he agrees with much of Kaczynski’s manifesto but that he parts ways with him in one crucial area. While Kaczynski feared these technologies were not worth the gamble, Kurzweil believes they are: “Although the risks are quite real, my fundamental belief is that the potential gains are worth the risk.”
Science, as Fromm said, cannot give humanity the sense of meaning it ultimately craves. He believed that ethical and moral systems must be rooted in human nature and human interests, and that science, which searches for objective reality, is merely a way of evading this responsibility.
There is a third way. Science can reveal what humans feel as meaningful, and using that knowledge, meaning can be mass-produced at an affordable price. Positive psychology, for example, has shown that the feeling of meaning has three cores (Martela & Steger, 2016):
- Coherence: is what I observe about the world understandable?
- Purpose: is there a valued and clearly defined goal for the world and me?
- Significance: can I make a difference in achieving that goal?
It has also, incidentally, found that people generally feel their lives are pretty meaningful (Heintzelman & King, 2014).
The problem of "Yes, it is the feeling of meaning, but is it really meaning?" is of no practical significance. Presumably humans, with their habit of self-denial (they hate to become predictable, even to themselves), would rebel against guaranteed meaning if they recognized it. That seems unlikely (most people welcome guaranteed health and shelter as human rights), but if it does happen, the meaning-manufacturing industry can simply become invisible and employ artists who subscribe to existentialism (meaning can only be constructed).
a truly humanistic faith demands a deity with such limits. This doctrine requires that humans relinquish their need for certainty—and for an intelligence who can provide definitive answers—and instead accept life as an irreducible mystery. If science persists in its quest for superintelligence, it might learn much from this tradition.
Typical of these essays to insert a meaningful conclusion at the end. It does not occur to the author that they could also accept the algorithms as irreducible mysteries.
I prefer posthumanism. I do not have much sympathy with humanists' rigid attempts to circumscribe what is human, anyway. If I somehow end up becoming posthuman, okay.
This was a pretty enjoyable read. Meandering, but in a nice relaxing way, without being overly dogmatic. More like musing.
Thanks. I hoped it would be informative and entertaining. Think of it as a "let's play", but for nerds.
Excellent post! (I do not entirely agree with your stance on posthumanism, but that is a secondary matter…)
Re: the externalization of the inner voice: I am reminded of the parable of the Whispering Earring.
Re: manufactured meaning as a pervasive mode of human existence: Karl Schroeder’s excellent novel Lady of Mazes has a lot to say on this topic.
The Whispering Earring is interesting. It appears that the earring provides a kind of slow mind-uploading, more noninvasive than most other approaches. The author of the story seems to consider it bad for some reason, perhaps because it triggers the "Liberty / Oppression" and "Sanctity / Degradation" (of the inside-the-skull self) moral alarms.
Unfortunately I dislike reading novels. Would you kindly summarize the relevant parts?
This is only true if whatever (hyper)computation the earring is using to make recommendations contains a model of the wearer. Such a model could be interpreted as a true upload, in which case it would be true that the wearer's mind is not actually destroyed.
However, if the earring's predictions are made by some other means (which I don't think is impossible even in real life: predictions are often made without consulting a detailed, one-to-one model of the thing being predicted), then there is no upload, and the user has simply been taken over like a mindless puppet.
This wades deep into the problem of what makes something conscious. I believe (and Scott Aaronson has written about this too) that to have such a detailed understanding of a consciousness, one must contain a consciousness-generating process. That is, to fully understand a mind, it is necessary to recreate the mind.
If the Earring merely makes the most satisfactory decisions according to some easy-to-compute universal standard (such as acting morally under some computationally efficient system), then the takeover makes sense to me; otherwise it seems like a refusal to admit multiple realizations of a mind.
Part of the story is that !> it tells you you are better off taking it off. Given that it's always as good/better than you at making decisions, leaving it on is a bad idea. <!
I think it admits the possibility that such a thing may be to your detriment. (Perhaps it only contains one model (a human mind?), and uses that knowledge to destroy, rather than upload, human minds.)
EDIT: How does one add spoilers here?
Re spoiler tags: https://www.lesswrong.com/posts/xWrihbjp2a46KBTDe/editor-mini-guide
After reading the story, I don't believe it is a bad idea to leave the earring on; I just think the author introduced an inconsistency into the story.