Intelligence Amplification and Friendly AI

(2) looks awfully hard, unless we can find a powerful IA technique that also, say, gives you a 10% chance of cancer. Then some EAs devoted to building FAI might just use the technique, and maybe the AI community in general doesn’t.

Using early IA techniques is probably risky in most cases. Committed altruists might have a general advantage here.

Help us name a short primer on AI risk!

Risky Machines: Artificial Intelligence as a Danger to Mankind

Writing Style and the Typical Mind Fallacy

I like your non-fiction style a lot (I don't know your fictional stuff). I often get the impression you're in total control of the material. Very thorough yet original, witty and humble. The exemplary research paper. Definitely more Luke than Yvain/Eliezer.

The noncentral fallacy - the worst argument in the world?

Navigating the LW rules is not intended to require precognition.

Well, it was required when (negative) karma for Main articles increased tenfold.

So You Want to Save the World

To be more specific:

I live in Germany, so my timezone is GMT+1. My preferred time would be on a workday sometime after 8 pm (my time). Since I'm a German native speaker, and the AI has the harder job anyway, I offer: 50 dollars for you if you win, 10 dollars for me if I do.

Simple theory of IMDB bias

I agree in large part, but it seems likely that value drift plays a role, too.

So You Want to Save the World

Well, I'm somewhat sure (80%?) that no human could do it, but... let's find out! The original terms are fine.
