Comments

We've done some things to combat that,

Is this visible at the typical user level?

Of course you can, you just have to make the first set of wolves very small. 

Imagine fiancéespace (or fiancéspace) - as in the space of romantic partners who would marry you (assuming you're not married and you want to be). You can imagine "drawing" from that space, but once you draw, nearly all of the work is still ahead of you. Someone who was initially "friendly" wouldn't necessarily stay that way, and someone who was unfriendly wouldn't necessarily stay that way. It's like asking "how do you make sure a human mind stays friendly to you forever?" We can't solve that with our lowly ape minds, and I'm not sure that we'd want to. The closest solution to that I know of with humans is Williams syndrome, and we probably wouldn't want an AGI with an analogous handicap. The relationship cultivated over time with other minds is, in many respects, more important than the initial conditions of those minds.

Maybe dogs are the better metaphor. We want AGIs to be like very smart Labradors. Random, "feral" AGIs may be more like wolves. So what if we made them so they could be "selectively bred" using something like a genetic algorithm? Select for more Lab-y and less Wolf-y traits.
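To make the metaphor concrete, here's a minimal toy sketch of that kind of genetic algorithm. Everything in it is illustrative: the trait names, the fitness weights, and the selection scheme are assumptions, not a proposal for how you'd actually evaluate an AGI.

```python
import random

random.seed(0)

# Toy genetic algorithm: each genome is a vector of hypothetical
# behavioral trait weights in [0, 1], and fitness rewards "Lab-y"
# traits (docility, sociability) while penalizing "Wolf-y" ones
# (aggression, independence). All trait names are made up.

GENOME_LEN = 4  # [docility, sociability, aggression, independence]

def fitness(genome):
    docility, sociability, aggression, independence = genome
    return docility + sociability - 2 * aggression - 0.5 * independence

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.1):
    # Perturb each trait slightly, clamped back into [0, 1].
    return [min(1.0, max(0.0, g + random.uniform(-rate, rate))) for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def evolve(pop_size=50, generations=40):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # truncation selection
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
# After a few dozen generations, the best genome drifts toward high
# docility/sociability and low aggression.
```

Of course, the hard part the comment is gesturing at is exactly what this sketch hand-waves: defining a fitness function that actually captures "Lab-y" behavior rather than the appearance of it.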

If a Labrador were 10 or 100 times smarter than its owner, would it still be mostly nice most of the time? I would hope so. Maybe the first AGI works like Garm->Fenrir in God of War (spoiler, sorry).

Just thinking out loud a bit...

IMO, the rationality project/LW is handling these crises far better than EA is.

I'm not really sure if they're separable at this point? There's so much overlap and cross-posting, it seems like they have the same blood supply.

Huh... thought I would get disagreement, but not for that reason. Thanks for the feedback. I was trying not to use terms that would appear in searches like FTX, Nick Bostrom or Max Tegmark. I did link to relevant posts where I thought it would be unclear.

I was trying not to specifically mention FTX, Nick Bostrom, or Max Tegmark. I wanted to keep the audience to people who were already familiar with the events, not people off-forum Googling those topics who aren't EA or rationalist types.

I tried to make that clear in the introduction. 

It was intended to be tongue-in-cheek, but okay, point taken.

I'm with you very much here. I recently decided that I need to start doing this more often. Negative karma isn't really negative karma if you've learned something from the experience.

"Successful" is an odd concept with it comes to social media. Most people would call Twitter, Facebook, TikTok, etc, successful. I think that's like saying oxycontin is successful, or McDonalds is successful, or Doritos is successful.  It depends on the point of view you're looking at it from.

There's an argument to be made that it's better to influence a small number of people profoundly than to influence a large number of people negligibly (as you might do on larger networks, where any influence you might have will be almost entirely washed away by whatever is more viral). In fact, that's why I'm on LessWrong; the scale is more apt.
