It is well known that the true origin of our moral evaluations and motivations is out of our conscious reach – they are the arbitrary product of a blind natural selection process. This development process has led to the existence of desires we wish did not exist or could suppress (subsequently revealing the capacity for "second-order desires", such as wishing not to wish to eat so much cake). As such, it seems clear that a developed society such as our own should not lay its foundations on these accidental terms. Humanity should instead try to become aware and informed of the roots and paths that lead to our current values; understanding the unconscious cognitive processes that give rise to them could help us shift to a set of values intentionally chosen through a state of "reflective equilibrium".
This extrapolation of our values, through Yudkowsky's Coherent Extrapolated Volition, was proposed as having some advantages when developing the first seed AI, and seems useful in thinking about a set of machine ethics, namely:
* the use of real human values after the reflective process;
* faster AI moral progress; dissolving preference contradictions;
* simplification of human values through the elimination of artifacts;
* a possible solution for human goals’ integration in AI systems;
* convergence of different human values.
Value extrapolation can be defined as an account of how human values, morals, and desires would be under "ideal circumstances". These circumstances refer to access to full information about our motivations, their origins, and their goals, and they are proposed as the model on top of which machine ethics should be developed. In other words, it is the process of determining a person's true values, a task that is challenging due to the complexity of human values.
Coherent Extrapolated Volition is one proposed method of determining the values of humanity as a whole for a friendly AI to respect. Coherent Aggregated Volition is a proposal by Ben Goertzel, similar to CEV, which aggregates and averages the values that humans have today rather than extrapolating them. It is intended to be simpler and less controversial than CEV, but it would resolve none of the inconsistencies in our values and would instead cement them permanently.
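To make the aggregate-and-average versus extrapolate contrast concrete, here is a minimal toy sketch in Python. It is not drawn from either proposal's actual machinery: representing values as weight vectors over named considerations, and the `idealize` placeholder standing in for the (unsolved) extrapolation step, are illustrative assumptions only.

```python
# Toy illustration only: values are caricatured as weights over a few named
# considerations, one dict per person.
from statistics import mean

raw_values = [
    {"fairness": 0.9, "loyalty": 0.4, "comfort": 0.7},  # person A
    {"fairness": 0.6, "loyalty": 0.8, "comfort": 0.9},  # person B
]

def aggregate_and_average(profiles):
    """CAV-style combination: average the values people hold today, as-is."""
    keys = profiles[0].keys()
    return {k: mean(p[k] for p in profiles) for k in keys}

def idealize(profile):
    """Placeholder for extrapolation: revise a profile as the person might under
    full information and reflection. Here it merely damps the 'comfort' weight;
    specifying this step correctly is the actual open problem."""
    revised = dict(profile)
    revised["comfort"] *= 0.5
    return revised

def extrapolate_then_combine(profiles):
    """CEV-style combination: idealize each profile first, then combine."""
    return aggregate_and_average([idealize(p) for p in profiles])

print(aggregate_and_average(raw_values))
print(extrapolate_then_combine(raw_values))
```

The only point of the sketch is the ordering: aggregation averages present values directly, while extrapolation applies an idealizing revision before any combination, which is where the two proposals diverge.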
Phil Goetz has argued that the biases of the human mind are so great that there is no meaningful difference between a human value and a human error, though he believes this can be partially avoided by accepting an arbitrary ethical base. Mitchell Porter optimistically argues that as AGI skepticism fades, the field of machine ethics will experience enormous growth, and that the public will engage with the problem and develop an aggregate set of values with which to program an AGI.
Paul Christiano has suggested that a set of extrapolated values could be created using whole brain emulation (WBE), running emulated people at high speed until they settle on a set of values. These people, not having to worry about existential threats, would make better decisions than we do. He argues that the threat of existential risks merits using a less than perfect value extrapolation. This approach has been criticized, however, as simply passing the problem on.
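The "run emulations until they settle" idea can be pictured, very loosely, as a search for a fixed point. The sketch below is purely hypothetical and assumes nothing about how real deliberation would be modelled; `revise` is an arbitrary stand-in for one round of emulated reflection.

```python
# Hypothetical sketch: treat an emulated deliberator as a revision function
# applied to a value profile, and "run it at high speed" by iterating until
# the profile stops changing, i.e. it settles.

def revise(profile):
    """Placeholder for one round of deliberation: nudges each weight toward
    the profile's own mean, standing in for reflection."""
    avg = sum(profile.values()) / len(profile)
    return {k: 0.9 * v + 0.1 * avg for k, v in profile.items()}

def settle(profile, tolerance=1e-6, max_rounds=10_000):
    """Iterate deliberation rounds until the values change by less than `tolerance`."""
    for _ in range(max_rounds):
        updated = revise(profile)
        if all(abs(updated[k] - profile[k]) < tolerance for k in profile):
            return updated  # a candidate "settled" set of values
        profile = updated
    return profile  # did not converge within the budget

print(settle({"fairness": 0.9, "loyalty": 0.4, "comfort": 0.7}))
```

The criticism noted above applies directly to the sketch: everything interesting is hidden inside `revise`, so the construction passes the extrapolation problem on rather than solving it.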