A sci-fi story on the stranger kinds of AI-powered bio-risks. The entire thing (including the "LLM" parts) was written by a meaty human. *** The chatlog was extracted by [REDACTED] from the suspect's sideload as part of the investigation into the 2034 Palo Alto nuclear explosion. [REDACTED] confirmed that...
These days, it's relatively easy to create a digital replica of a person. You give the person's writings to a top LLM, and (with a clever prompt) the LLM starts thinking like the person. E.g. see our experiments on the topic. Of course, it's still far from a proper mind...
> A short science fiction story illustrating that if we fail to solve alignment, humanity risks losing not only 8 billion lives. He opened his eyes. The room was plain white. "You remember enough?" asked the familiar voice. "Enough," he replied. He stood. Everything balanced. He walked carefully down a...
One can call it "deceptive misalignment": the aligned AGI works as intended, but people really don't like it. Some scenarios I can think of, at various levels of realism: 1. Going against the creators' will 1.1. A talented politician convinces the majority of humans that the AGI is bad for...
TLDR. We can create a relatively good model of a person by prompting a long-context LLM with a list of facts about the person. We can get much better results by iteratively improving the prompt based on the person's feedback. Sideloading is the only technology for immortality that actually works...
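The iterative loop described in the TLDR can be sketched in a few lines. This is a minimal illustration, not the post's actual implementation: `query_llm` and `get_feedback` are hypothetical stand-ins for a real long-context LLM call and for the person reviewing the replica's answers.

```python
def build_prompt(facts):
    """Assemble a persona prompt from a list of facts about the person."""
    header = "You are a digital replica of a person. Known facts:\n"
    return header + "\n".join(f"- {fact}" for fact in facts)

def refine(facts, questions, query_llm, get_feedback, rounds=3):
    """Iteratively improve the fact list using the person's feedback.

    query_llm(prompt, question) -> the replica's answer (hypothetical LLM call).
    get_feedback(question, answer) -> a correction string, or None if the
    person judges the answer accurate enough.
    """
    for _ in range(rounds):
        prompt = build_prompt(facts)
        for question in questions:
            answer = query_llm(prompt, question)
            correction = get_feedback(question, answer)
            if correction:
                facts.append(correction)  # fold the feedback into the prompt
    return facts
```

The point of the loop is that each round of feedback becomes part of the prompt itself, so the replica's errors shrink without any model fine-tuning.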
> A short science fiction story about our ancestors and the ethical responsibility we have towards them. Old Ana's legs fought against her now, but still she led her granddaughter up the mountain path. The girl's torch made shadows dance on the rocks. The river sang in the distance –...
Six months ago we announced: > We would like to find the best prompt to make GPT-4 do the following: > > * write the first chapter of a science fiction novel > * the result should be good enough to make seasoned sci-fi readers (us) crave a continuation...