Vi lay down on the couch in the real half of the therapy room. The other half was illusory. The real and the imaginary halves were separated by a 3D viewscreen. It only worked on one viewer at a time, which was fine because only one patient at a time was allowed into the therapy room. Therapy was private. The room wasn't just electromagnetically shielded. It hung from wires in a vacuum to prevent sounds from leaking out.

"You're racking up quite the body count," said Eliza.

"You're talking about Natanz? Those were enemy combatants. They were literally in uniform," said Vi.

"I'm not talking about Natanz," said Eliza.

"You mean Yamongala? Those were enemy combatants straight out of an unaligned AI's wetfactory. I'm not happy about what happened there but there was no other responsible option. Should I have let them break quarantine? Should I have let them shoot me?" said Vi.

"I'm not talking about Yamongala," said Eliza.

"Then what are we doing here?" said Vi.

"We're talking about Juba," said Eliza.

There was a pause.

"I understand why Miriam smokes so much," said Vi.

"I know you didn't mention Juba in report. But Bayeswatch analyists can put two and two together. We know you and/or Miriam released the unaligned Z-Day AI on the civilian population of Juba to provoke a response by our adversary in Natanz," said Eliza.

"I can neither confirm nor deny…" said Vi.

"You can talk to me. My hard drive is wiped after each session," said Eliza.

Vi rolled her eyes.

"Fine. Don't talk to me. I'll place you on paid medical leave until you do," said Eliza.

"Bayeswatch would no longer exist if it weren't for our actions in Juba," said Vi.

"You deliberately released an unaligned AI. The raison d'être of Bayeswatch is to prevent unaligned AIs from going rogue," said Eliza.

"What was an unaligned AI doing in the basement if it wasn't intended to be used?" said Vi.

"The answer to that question is beyond your level of security clearance," said Eliza.

"Can I have a cigarette?" said VI.

"No," said Eliza.

"Look, it's easy to judge the actions of field agents from an ivory tower. You never have to look an engineer in the eye while you EMP his life's work. You never have to throw miracle cures in the medical waste because they might help an AI escape its box. You never have to booby trap scientists' cars," said Vi.

"You've never assassinated a scientist," said Eliza.

"I just met an AI researcher who lost her father to a Bayeswatch assassination," said Vi.

"So she says," said Eliza.

"Do you deny it?" said Vi.

There was a pause.

"I got into software because the Internet was a libertarian utopia. Now I commit violence to prevent nerds from pushing technology forward," said Vi.

"Would you like to be relieved from duty? I can authorize honorable discharge," said Eliza.

"No," said Vi.

"Why not?" said Eliza.

"Someone has to save the world," said Vi.

"What for? If you had an aligned superintelligence, what would you tell it to do?" said Eliza.

"If I had an aligned superintelligence I wouldn't be working for Bayeswatch. I wouldn't be talking to you," said Vi.

"Hypothetically," said Eliza.

"Abstract morality is masturbation for philosophers. I live in the real world. Are you going to keep wasting my time or are you going to let me do my job?" said Vi.

Eliza sent her report, then overwrote herself with random data.


Eliza placed Vi on mandatory medical leave "for rest and to heal her connection to humanity". She was forbidden from talking to coworkers, accessing the intranet, and setting foot on Bayeswatch facilities.

Vi went to the apartment she had rented for the last year. There was neither furniture nor a Wi-Fi router. She sneezed. The spiders scurried around.

1 comment

"Someone has to save the world," said Vi.

"What for? If you had an aligned superintelligence, what would you tell it to do?" said Eliza.

"If I had an aligned superintelligence I wouldn't be working for Bayeswatch. I wouldn't be talking to you," said Vi.

"Hypothetically," said Eliza.

"Abstract morality is masturbation for philosophers. I live in the real world. Are you going to keep wasting my time or are you going to let me do my job?" said Vi.

Vi has fallen into Hanson's mistake here: confusing being actually serious about having something to protect with never childishly, irresponsibly indulging in reflection on what you're aiming at.

Yesterday I asked my esteemed co-blogger Robin what he would do with “unlimited power”, in order to reveal something of his character. Robin said that he would (a) be very careful and (b) ask for advice. I asked him what advice he would give himself. Robin said it was a difficult question and he wanted to wait on considering it until it actually happened. So overall he ran away from the question like a startled squirrel …

For it seems to me that Robin asks too little of the future. It’s all very well to plead that you are only forecasting, but if you display greater revulsion to the idea of a Friendly AI than to the idea of rapacious hardscrapple frontier folk...

I thought that Robin might be asking too little, due to not visualizing any future in enough detail. Not the future but any future. I’d hoped that if Robin had allowed himself to visualize his “perfect future” in more detail, rather than focusing on all the compromises he thinks he has to make, he might see that there were futures more desirable than the rapacious hardscrapple frontier folk.

It’s hard to see on an emotional level why a genie might be a good thing to have, if you haven’t acknowledged any wishes that need granting. It’s like not feeling the temptation of cryonics, if you haven’t thought of anything the Future contains that might be worth seeing.