Capybasilisk

Comments

O-risk, in deference to Orwell.

I do believe Huxley's Brave New World is a far more likely future dystopia than Orwell's; 1984 is too tied to its time of writing.

the project uses atomic weapons to do some of the engineering

Automatic non-starter.

Even if by some thermodynamic-tier miracle the government permitted nuclear weapons for civilian use, I'd much rather they be used for Project Orion.

Isn't that what Eliezer referred to as opti-meh-zation?

Previously on Less Wrong:

Steve Byrnes wrote a couple of posts exploring this idea of AGI via self-supervised, predictive models minimizing loss over giant, human-generated datasets:

Self-Supervised Learning and AGI Safety

Self-supervised learning & manipulative predictions

I'd especially like to hear your thoughts on the above proposal of loss-minimizing a language model all the way to AGI.

I hope you won't mind me quoting your earlier self, as I strongly agree with your previous take on the matter:

If you train GPT-3 on a bunch of medical textbooks and prompt it to tell you a cure for Alzheimer's, it won't tell you a cure, it will tell you what humans have said about curing Alzheimer's ... It would just tell you a plausible story about a situation related to the prompt about curing Alzheimer's, based on its training data. Rather than a logical Oracle, this image-captioning-esque scheme would be an intuitive Oracle, telling you things that make sense based on associations already present within the training set.

What am I driving at here, by pointing out that curing Alzheimer's is hard? It's that the designs above are missing something, and what they're missing is search. I'm not saying that getting a neural net to directly output your cure for Alzheimer's is impossible. But it seems like it requires there to already be a "cure for Alzheimer's" dimension in your learned model. The more realistic way to find the cure for Alzheimer's, if you don't already know it, is going to involve lots of logical steps one after another, slowly moving through a logical space, narrowing down the possibilities more and more, and eventually finding something that fits the bill. In other words, solving a search problem.

So if your AI can tell you how to cure Alzheimer's, I think either it's explicitly doing a search for how to cure Alzheimer's (or worlds that match your verbal prompt the best, or whatever), or it has some internal state that implicitly performs a search.
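
To make that distinction concrete, here's a deliberately toy sketch in plain Python. Everything in it is hypothetical illustration, not anything from the posts above: the `score` function stands in for a learned evaluator, and the hidden `TARGET`, `direct_output`, and `search` are made-up names. It contrasts a single "most plausible guess" pass with an explicit loop that proposes local edits and keeps only improvements, narrowing the space step by step.

```python
import random

random.seed(0)

# Hidden answer the search must discover (purely illustrative).
TARGET = (3, 1, 4, 1, 5)

def score(candidate):
    """Stand-in for a learned evaluator: higher means more promising.
    In reality this would be a trained model, not a known formula."""
    return -sum(abs(a - b) for a, b in zip(candidate, TARGET))

def direct_output():
    """One forward pass: emit a single plausible-looking answer.
    With no 'answer' dimension already in the model, it's a guess."""
    return tuple(random.randrange(10) for _ in range(5))

def search(steps=300):
    """Iterative search: propose a local edit, keep it only if the
    evaluator says it's an improvement, and repeat. Each accepted
    step narrows the remaining possibilities."""
    current = direct_output()
    for _ in range(steps):
        i = random.randrange(5)
        candidate = current[:i] + (random.randrange(10),) + current[i + 1:]
        if score(candidate) > score(current):
            current = candidate
    return current

guess = direct_output()
print("direct guess:", guess, "score:", score(guess))
found = search()
print("after search:", found, "score:", score(found))
```

The direct pass can only be as good as whatever its one guess happens to encode, while the loop reliably closes in on the target because each step eliminates possibilities. That gap is the "search" the argument above is pointing at.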

"Story of our species. Everyone knows it's coming, but not so soon."

-Ian Malcolm, Jurassic Park by Michael Crichton.

LaMDA hasn’t been around for long

Yes, in time as perceived by humans.

why has no one corporation taken over the entire economy/business-world

Anti-trust laws?

Without them, this could very well happen.

I've got uBlock Origin. The hover preview works in private/incognito mode but not in a regular window, even with uBlock turned off or uninstalled. For what it's worth, uBlock doesn't affect the hover preview on Less Wrong, just Greater Wrong.

I'm positive the issue is with Firefox, so I'll keep fiddling with the settings to see if anything helps.
