On "real" choice: technology doesn't change your decision-making algorithm (yet), but it does change evaluated outcomes and, by consequence, your utility function is reshaped. You might choose the same, but you will choose differently (unless the change is small or you are stubborn enough). Free will explanation comes to mind.

I want a medium-sized team of talented, rational people who seriously care, not every AI programmer in the world who smells money.

I'd rather pay the rest of the AI researchers to play games and browse the web than have them work independently on their seed AIs without a friendliness component. As for "to date", I meant "up to the point when I receive the money".

Also, even after years of studying for it, I wouldn't trust myself, or anyone else for that matter, to make that switch-on decision alone.

Me neither. I, however, would trust myself to make the "don't switch on yet" decision alone.

So if you had $10 trillion, what would you do with it?

My worries, in order of priority, would be:

  1. Someone manipulating or forcing me into giving a substantial amount of money away. After all, my decision-making process is the weakest link here.
  2. Existential risks.

I don't know what I'd do about 1., and I won't waste my time thinking about the proper course of action for such a low-probability scenario. For 2., I'd hire all AI researchers to date to work under Eliezer and start seriously studying, so that I could evaluate for myself whether flipping the "on" switch would result in a friendly singularity.