Liso

Comments

2016 LessWrong Diaspora Survey Analysis: Part One (Meta and Demographics)

One child can have two parents (and both could answer the survey), so 598 is a questionable number.

Oracle AI: Human beliefs vs human values

I would like to add some values which I see as not so static, and which are probably not purely questions of morality:

Privacy and freedom vs. security and power.

Family, society, tradition.

Individual equality (disparities of wealth, the right to work, ...).

Intellectual property (the right to own?).

Oracle AI: Human beliefs vs human values

I think we need a better definition of the problem we want to study here. Beliefs and values are probably not so indistinguishable.

From this page:

Human values are, for example:

  • civility, respect, consideration;
  • honesty, fairness, loyalty, sharing, solidarity;
  • openness, listening, welcoming, acceptance, recognition, appreciation;
  • brotherhood, friendship, empathy, compassion, love.

  1. I think none of these could be called a belief.

  2. If these define the axes of a virtual space of moral values, then I am not sure an AI could occupy a much bigger region of that space than humans do. (How selfish, or unwelcoming, or dishonest could an AI or a human be?)

  3. On the contrary: because we are selfish (is that one of the moral values we are trying to analyze?), we want the AI to be more open, more listening, more honest, more friendly, etc., than we want or plan to be ourselves. Or at least than we are now. (So do we really want the AI to be like us?)

  4. I see a question about the optimal level of these values. For example, would we want an agent that is maximally honest, welcoming, and sharing toward anybody? (An AI in your house that welcomes thieves, tells them whatever they ask, and shares everything?)

And last but not least: if we eventually have many AI agents, then some kind of selfishness and laziness could help, for example to prevent the creation of a singleton or a fanatical mob of such agents. In the evolution of humankind, selfishness and laziness may have helped human groups survive. And a lazy paperclip maximizer could save humankind.

We need a good mathematical model of laziness, selfishness, openness, brotherhood, friendship, etc. We have hard philosophical tasks with a deadline. (The singularity is coming, and the "dead" in the word "deadline" could be very real.)
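One toy way to picture the "space of moral values" from point 2 above (a minimal sketch with made-up axis names and scores, not anyone's actual proposal): represent each agent as a vector of bounded value scores, so questions like "how much bigger a region could an AI occupy?" become questions about distances in that space.

```python
# Toy sketch (hypothetical): agents as points in a bounded "value space".
# The axis names and the [0.0, 1.0] bounds are illustrative assumptions,
# not an established model of human values.

VALUE_AXES = ("honesty", "openness", "sharing", "selfishness", "laziness")

def make_agent(**scores):
    """Build a value vector, clamping every score into [0.0, 1.0].

    Axes not given a score default to a neutral 0.5.
    """
    return {axis: min(1.0, max(0.0, scores.get(axis, 0.5)))
            for axis in VALUE_AXES}

def distance(a, b):
    """Euclidean distance between two agents in value space."""
    return sum((a[k] - b[k]) ** 2 for k in VALUE_AXES) ** 0.5

# A rough "typical human" and the maximally honest, welcoming, sharing
# agent from point 4 (all numbers are invented for illustration):
typical_human = make_agent(honesty=0.7, openness=0.6, sharing=0.5,
                           selfishness=0.6, laziness=0.5)
naive_oracle = make_agent(honesty=1.0, openness=1.0, sharing=1.0,
                          selfishness=0.0, laziness=0.0)

print(distance(typical_human, naive_oracle))
```

Because every axis is clamped to a fixed interval, the whole space is bounded, which is one way to read the claim that an AI could not sit arbitrarily far from humans on these particular dimensions.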

Oracle AI: Human beliefs vs human values

Stuart, is it really your implicit axiom that human values are static and fixed?

(Were they fixed historically? Is humankind mature now? Is humankind homogeneous with respect to values?)

Oracle AI: Human beliefs vs human values

more of a question of whether values are stable.

Or the question of whether human values are (objective and) independent of humans (as subjects who can develop).

Or the question of whether we are brave enough to ask questions whose answers could change us.

Or (for example) the question of whether it is necessarily good for us to ask questions whose answers will give us more freedom.

I need a protocol for dangerous or disconcerting ideas.

I am not an expert. But it has to be based on facts about your nervous system. So you could start with some tests (blood tests, etc.). You could change your diet, sleep more, etc.

About rationality and LessWrong: could you focus your fears on one thing? For example, forget the quantum world and focus on superintelligence? I mean, could you harness the power you have in your brain?

I need a protocol for dangerous or disconcerting ideas.

You are talking about rationality and about fear. Your protocol could have several independent layers. You seem to think that your ideas produce your fear, but it could also be the opposite: your fear could produce your ideas (and it is very probable that fear affects your ideas, at least their content). So you could analyze the rational questions on LessWrong and, independently, work on the irrational part (the fear, etc.) with a therapist. There could be physical or chemical reasons why you worry more than other people. Your protocol for dangerous ideas needs not only discussion but also a way to handle your emotional responses. If you want to sleep well, that may depend more on your emotional stability than on rational knowledge.

Superintelligence 26: Science and technology strategy

Jared Diamond wrote that North America had no good animals for domestication (sorry, I don't remember in which book). That could have been a showstopper for widespread use of the wheel.
