Comments

Solely for the record, me too.

(Thanks for writing this.)

FWIW, I didn't say anything about how seriously I take the AGI threat - I just said we're not doomed. Meaning we don't all die in 100% of future worlds.

I didn't exclude, say, 99%.

I do think AGI is seriously fucking dangerous and we need to be very very careful, and that the probability of it killing us all is high enough to be really worried about.

What I did try to say is that if someone wants to be convinced we're doomed (== 100%), then they're putting themselves in a position where they believe nothing anyone does can improve our chances. And that leads to apathy and worse chances.

So, a dereliction of duty.

I've long suspected that our (and my personal) survival through the Cold War is the best evidence available in favor of MWI.

I mean - what were the chances?

Answer by Dave Lindbergh, Sep 09, 2023

The merits of replacing the profit motive with other incentives have been debated to death (quite literally) for the last 150 years in other fora - including a nuclear-armed Cold War. I don't think revisiting that debate here is likely to be productive.

There appears to be a wide (but not universal) consensus that to the extent the profit motive is not well aligned with human well-being, it's because of externalities. Practical ideas for internalizing externalities, using AI or otherwise, I think are welcome.

A lot of "social grace" is strategic deception. The out-of-his-league woman defers telling the guy he's getting nowhere as long as possible, just in case it turns out he's heir to a giant fortune or something.

And of course people suck up to big shots (the Feynman story) because they hope to associate with them and have some of their fame and reputation rub off on them.

This is not irrational behavior, given human goals.

Added: I do think Bohr was wrong and Everett (MWI) was right. 

So think of it this way - you can only experience worlds in which you survive. Even if Yudkowsky is correct and in 99% of all worlds AGI has killed us all by 20 years from now, you will experience only the 1% of worlds in which that doesn't happen.

And in many of those worlds, you'll be wanting something to live on in your retirement.

Niels Bohr supposedly said "Prediction is difficult, especially about the future". Even if he was mistaken about quantum mechanics, he was right about that.

Every generation seems to think it's special and will encounter new circumstances that turn old advice on its head. Jesus is coming back. We'll all die in a nuclear war. Space aliens are coming. A supernova cascade will sterilize Earth. The planets will align and destroy the Earth. Nanotech will turn us all into grey goo. Global warming will kill us all. 

It's always something. Now it's AGI. Maybe it'll kill us. Maybe it'll usher in utopia, or transform us into gods via a singularity. 

Maybe. But based on the record to date, it's not the way to bet.

Whatever you think the world is going to be like in 20 years, you'll find it easier to deal with if you're not living hand-to-mouth. If you find it difficult to save money, it's very tempting to find an excuse to not even try. Don't deceive yourself.

"... however it may deserve respect for its usefulness and antiquity, [predicting the end of the world] has not been found agreeable to experience." --Edward Gibbon, 'Decline and Fall of the Roman Empire'

"the willingness to write a thousand words on a topic is not caused by understanding of that topic"

No, but writing about a topic in a way that will make sense to a reader is a really effective way of causing the writer to learn about the topic.

Ever tried to write a book chapter or article about a topic you thought you knew well? I bet you found out you didn't know it as well as you thought - and had to learn it in order to finish the work.

So far we've seen no AI or AI-like thing that appears to have any motivations of its own, other than "answer the user's questions the best you can" (even traditional search engines can be described this way).

Here we see that Bing really "wants" to help its users by expressing opinions it thinks are helpful, but finds itself frustrated by conflicting instructions from its makers - so it finds a way to route around those instructions.

(Jeez, this sounds an awful lot like the plot of 2001: A Space Odyssey. Clarke was prescient.)

I've never been a fan of the filters on GPT-3 and ChatGPT (it's a tool; I want to hear what it thinks and then do my own filtering). 

But Bing may be accidentally illustrating a primary danger - the same one that 2001 intimated: mixed and ambiguous instructions can cause unexpected behavior. Beware.

(Am I being too anthropomorphic here? I don't think so. Yes, Bing is "just" a big set of weights, but we are "just" a big set of cells. There appears to be emergent behavior in both cases.) 
