I find it hard to argue with Philip K. Dick’s admission, on behalf of all of us (sci-fi writers), that we “really do not know anything.” Often when scientists read my books, they call me out for my erroneous assumptions about how technology works. “Never mind that,” I say, “It’s just an excuse to put you in a place where you can’t normally be, to make you feel things you don’t get a chance to, to rattle you with wild notions, to help you see how people might behave, were they in such a place.”
I get it: some readers, when steeped in a scientific discipline, have trouble suspending disbelief. They can’t invest emotionally in a story predicated on leaky science. But enough of my defensiveness. I’m looking for feedback on an idea. Hopefully you’ll give me a temporary pass if my understanding of AI falls short. My objective in this forum is to see if people who do in fact understand the tech see potential in a solution for the “AIs-are-grown-rather-than-constructed” problem.
I wrote Once a Man largely to get my head around the more threatening aspects of AGI development. As I worked out the story, I homed in on the long-cited need to create friendly AI — not for just any AI, but for what might be considered a singleton: essentially, a beneficent overlord designed to step in and manage a society on the verge of collapse, at its wits’ end.
In Once a Man, developers embed this nascent AI savior in a virtual narrative and instill it with the belief that it is human, compelling it to navigate moral and ethical choices from an embodied human perspective. The theory is that if an AI experiences living as humans do — full of confusion, misinformation, emotional stakes, puzzling relationships, and dire consequences — it will learn to be genuinely empathetic toward the human predicament.
A key element of the training program, and perhaps the riskiest part, is the deception: it’s critical that the AI believes it’s human as it matures. Then there’s the challenge of populating that virtual environment with convincing secondary characters who can run the developers’ playbook, accompanying the AI through both enjoyable and difficult circumstances, shepherding it through valuable life lessons.
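For readers who think better in code than in plot summaries, here is a deliberately toy sketch of the setup described above: an agent embedded in a scripted environment whose secondary characters follow a developer playbook of life events, while the agent simply accumulates "lived experience." Every class, event name, and number here is my own invention for illustration — a minimal sketch of the idea, not a proposal for how such a system would actually be built.

```python
# Toy sketch (all names hypothetical): an agent "grown" inside a
# scripted narrative environment. NPCs follow a developer playbook of
# life events; the agent never sees the scaffolding, only the events.

# Each playbook entry is (event, emotional valence the event carries).
PLAYBOOK = [
    ("friendship", 1.0),
    ("betrayal", -1.0),
    ("loss", -2.0),
    ("reconciliation", 1.5),
]

class NarrativeEnvironment:
    """Presents scripted life events to the agent, cycling the playbook."""
    def __init__(self, playbook):
        self.playbook = list(playbook)
        self.step_index = 0

    def step(self):
        event, valence = self.playbook[self.step_index % len(self.playbook)]
        self.step_index += 1
        return event, valence

class EmbeddedAgent:
    """Accumulates 'lived experience' as event -> history of felt valences."""
    def __init__(self):
        self.experience = {}

    def live_through(self, event, valence):
        self.experience.setdefault(event, []).append(valence)

    def felt_valence(self, event):
        """Average emotional association the agent has formed with an event."""
        history = self.experience.get(event, [0.0])
        return sum(history) / len(history)

env = NarrativeEnvironment(PLAYBOOK)
agent = EmbeddedAgent()
for _ in range(8):  # two full passes through the playbook
    event, valence = env.step()
    agent.live_through(event, valence)

print(agent.felt_valence("loss"))        # -2.0
print(agent.felt_valence("friendship"))  # 1.0
```

The real "grown-rather-than-constructed" problem is, of course, everything this sketch omits: the agent here is a lookup table, not something that believes it is human. The sketch only makes the architecture of the conceit concrete — a curriculum of events, run by the environment, experienced from the inside.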
It’s quite a conceit, I realize, but here again, it’s fiction. It worked for my purposes as an author, which included the need for an emotionally engaging narrative, as well as a vehicle to explore other themes. At the heart of my question for you: can you imagine a way to give an AI a human experience: to smell wildflowers and musty basements, to feel disappointment, hunger, and desire, to shiver in damp socks? To experience the aggravation and messiness of friendship along with its warmth? To suffer the trauma of a violent confrontation, the loss of a loved one?
For your purposes — assuming you conduct your own thought experiment — my technical solution may not suffice. “Never mind that.” Perhaps you can foresee more plausible ways of putting the AI through training on how to be a good human, in which case, please enlighten me.