Brevity is the soul of wit. Why is LW so obviously biased towards long-windedness?

this secret area contains hundreds of times as much content as the actual game.

How can a part be bigger than the whole? You probably want to say "as the rest of the game" instead; it took me a bit of effort to understand what you meant.

All these arguments for the danger of AGI are worthless if the team that creates it doesn't heed the warnings.

I had known about this site for years, but only recently noticed that it has a "discussion" section (this was before the front-page redesign), and that the dangers of AGI are even on-topic here.

Not that I'm about to create an AGI. The team that is will probably be even busier, and even less willing to be talked down to with "you need to learn to think" and the like.

Just my 2e-2

I can confidently say that many of the ideas in this community have done much to better my life

Could you give some examples?

I'm an atheist, and believe that my mind can be seen as simply "software" running on my brain. However, that "software" also believes that "I" is not just the software, but the brain and perhaps even the rest of the body.

If someone cloned my body atom for atom, "I" feel like it wouldn't really be me, just an illusion fooling outside observers. Same for mind uploads.

Do any other atheists feel the same way?

As to cryonics, that's obviously not quite the same as a mind upload, but it feels like a greyish area if the original cells are destroyed.

Another thing: if my world is just a simulation (even the NYT wrote about this theory; I have no way of knowing), then cloning myself and killing the original is still suicide, with a very negative utility.

What do others think? I know that Kurzweil can't wait to upload his mind, and Goertzel wants multiple copies of himself to hedge his bets.

AGI will only be Friendly if its goals are the kinds of goals that we would want it to have

At the risk of losing my precious karma, I'll play the devil's advocate and say I disagree.

First, some definitions: a "Friendly" AI, according to Wikipedia, is one that is beneficial to humanity (not a human buddy or pet). "General" in AGI means not problem-specific (narrow AI).

My counterexample is an AI system that lacks any motivations, goals, or actuators. Think of an AIXI system (or, realistically, a system that approximates it), and subtract any reward mechanism. It just models its world, looking for short programs that describe its input. You could use it to make (super-intelligent) predictions about the future. This seems clearly beneficial to humanity (until it falls into malicious human hands, but that's beside the point of the argument you are making).
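To make the "short programs" idea concrete, here is a toy sketch in Python. It is my own illustration, nothing like a real AIXI approximation: the only "programs" it considers are repeating bit-patterns, each weighted by 2^(-pattern length), and it predicts the next bit by a weighted vote among the patterns consistent with the input so far. (Actual Solomonoff induction quantifies over all computable programs and is uncomputable.)

```python
def predict_next_bit(history, max_pattern_len=8):
    """Weighted vote over repeating-pattern hypotheses consistent with
    `history`, each hypothesis weighted by 2**(-pattern length)."""
    weights = {0: 0.0, 1: 0.0}
    for n in range(1, max_pattern_len + 1):
        for p in range(2 ** n):  # enumerate every bit-pattern of length n
            pattern = [(p >> i) & 1 for i in range(n)]
            # Keep only hypotheses that reproduce the observed history.
            if all(bit == pattern[t % n] for t, bit in enumerate(history)):
                prediction = pattern[len(history) % n]
                weights[prediction] += 2.0 ** (-n)  # shorter = more weight
    total = weights[0] + weights[1]
    if total == 0:
        return 0.5  # no hypothesis fits; fall back to ignorance
    return weights[1] / total  # probability that the next bit is 1

# The predictor picks up the period-2 structure and expects it to continue:
print(predict_next_bit([0, 1, 0, 1, 0, 1]))  # ~0.04, i.e. it predicts 0 next
```

Even this crippled predictor has no goals or actuators; it only assigns probabilities, which is the sense in which a pure world-modeler could be useful without being an agent.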

SPOILER ALERT: don't read on if you have yet to see Eagle Eye.

I doubt that The Terminator introduced any important new ideas. Its notability, like that of David Chalmers' recent paper, lies in bringing old ideas to the attention of the broader public.

Eagle Eye was spoofing its own sensors at some point. Again, not a novel idea per se, but pretty great for a movie. At the beginning of the movie, IIRC, there was some Bayesian updating going on, based on different sources of evidence.
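For anyone who hasn't seen that kind of updating written out, here is roughly what combining independent evidence sources looks like (a generic sketch with made-up numbers, not anything from the movie):

```python
# Generic Bayesian updating on two independent evidence sources.
# All the probabilities below are invented for illustration.
def update(p, likelihood_if_true, likelihood_if_false):
    """One Bayes update: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = likelihood_if_true * p
    return numerator / (numerator + likelihood_if_false * (1 - p))

p = 0.01                 # prior P(H), e.g. "this person is a threat"
p = update(p, 0.9, 0.1)  # first source: evidence 9x likelier if H is true
p = update(p, 0.6, 0.3)  # second, weaker independent source (2x)
print(p)                 # ~0.15: two imperfect sources still move a 1% prior a lot
```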

Since you are including works of fiction, I think The Terminator (1984) is worth mentioning. It is what most people think of when it comes to AI risk.

By the way, my personal favorite, when it comes to an AI doing what it wasn't intended to do, would have to be Eagle Eye (2008). It's got everything: a hard take-off, wireheading of sorts, second-guessing humans, decent acting.

The embedded YouTube video seems to end rather abruptly. Did the iPhone battery run out?
