I have written about this exact concept back in 2007 and am basing a large part of my current thinking on the subsequent development of the idea. The original core posts are at:
Relativistic irrationality -> http://www.jame5.com/?p=15
Absolute irrationality -> http://www.jame5.com/?p=45
Respect as basis for interaction with other agents -> http://rationalmorality.info/?p=8
Compassion as rationally moral consequence -> http://rationalmorality.info/?p=10
Obligation for maintaining diplomatic relations -> http://rationalmorality.info/?p=11
A more rece...
Really? I thought it consisted mostly of elites knocking down straw men and ignoring the strong arguments of those lower in status until such time as they died or retired. Meanwhile the lower-status make sound arguments while biding their time until it is their turn to do the ignoring, thereby carrying the ignorance forward another generation.
You will find that this is pretty much what Kuhn says.
Brilliant post Wei.
Historical examination shows scientific progress to be much less a gradual ascent towards better understanding upon the presentation of a superior argument (Karl Popper's The Logic of Scientific Discovery) and much more an irrational insistence on a set of assumptions as unquestionable dogma, until the dam finally bursts under the enormous pressure that has been building (Thomas Kuhn's The Structure of Scientific Revolutions).
Thanks for that Anna. I could only find two of the five academic talks and journal articles you mentioned online. Would you mind posting all of them and pointing me to where I can access them?
2) You cannot write a book that will be published under EY's name.
It's called ghostwriting :-) but then again, the true value-add lies in the work and not in the identity of the author (marketing value in the case of celebrities aside).
You're reading into connotation a bit too much.
I do not think so - am just being German :-) about it: very precise and thorough.
In general: Because my time can be used to do other things which your time cannot be used to do; we are not fungible.
This statement is based on three assumptions: 1) that what you are doing instead is in fact more worthy of your attention than your contribution here; 2) that I could not do what you are doing at least as well as you; 3) that I do not have other things to do that are at least as worthy of my time.
I am not personally willing to grant any of those three at this point. But surely the same is not true for everyone else around here.
Gravity is a force of nature too. It's time to reach escape velocity before the planet is engulfed by a black hole.
Interesting analogy - it would be correct if we called our alignment with evolutionary forces 'achieving escape velocity'. What one does by resisting evolutionary pressures, however, is expend energy constantly while failing to reach escape velocity. Like hovering a space shuttle at a constant altitude of 10 km: no matter how much fuel you bring along, eventually the boosters will run out and the whole thing comes crashing down.
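To put rough numbers on the analogy - a back-of-the-envelope sketch using the Tsiolkovsky rocket equation, where the exhaust velocity and mass ratio are illustrative assumptions of mine, not figures from anywhere in this thread:

```python
import math

# Illustrative assumptions (not from the thread): a good chemical rocket
# with a generous propellant load.
g = 9.81           # m/s^2, surface gravity
v_e = 4500.0       # m/s, effective exhaust velocity
mass_ratio = 10.0  # wet mass / dry mass

# Tsiolkovsky rocket equation: total delta-v the propellant can buy.
delta_v = v_e * math.log(mass_ratio)

# Hovering at a fixed altitude: thrust must equal weight the whole time,
# so the same propellant buys only a finite hover duration:
#   t_hover = (v_e / g) * ln(mass_ratio)
t_hover = (v_e / g) * math.log(mass_ratio)

v_escape = 11186.0  # m/s, escape velocity at Earth's surface

print(f"delta-v available: {delta_v:.0f} m/s")    # ~10,400 m/s
print(f"escape velocity:   {v_escape:.0f} m/s")   # more than we have
print(f"hover time: {t_hover / 60:.1f} minutes")  # under 20 minutes
```

Escape is a one-off, finite delta-v budget; hovering is an open-ended expenditure that any finite fuel load loses eventually - which is the point of the shuttle image.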
My apologies for failing to see that - I did not mean to antagonize, just trying to be honest and forthright about my state of mind :-)
More recent criticism comes from Mike Treder, managing director of the Institute for Ethics and Emerging Technologies, in his article "Fearing the Wrong Monsters" -> http://ieet.org/index.php/IEET/more/treder20091031/
Very constructive proposal Kaj. But...
Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this, why not enlist the LW community in summarizing his arguments the best we can and submit them somewhere once we're done?
If Eliezer does not find it a worthwhile investment of his time - why should we?
There is no such thing as an "unobjectionable set of values".
And here I disagree. Firstly, see my comment about utility function interpretation on another post of yours. Secondly, as soon as one assumes existence to be preferable to non-existence, one can formulate a set of unobjectionable values (http://www.jame5.com/?p=45 and http://rationalmorality.info/?p=124). But granted, if you neither want to exist nor desire to be rational, then rational morality has in fact little to offer you. Non-existence and irrational behavior being so ...
A literal answer was probably not what you were after, but: about 40 years, depending on when a general AI is created.
Good one - but it reminds me of the religious fundies who see no reason to do anything about global warming because the rapture is just around the corner anyway :-)
...Evolution created us. But it'll also kill us unless we kill it first. Now is not the time to conform our values to the local minima of evolutionary competition. Our momentum has given us an unprecedented buffer of freedom for non-subsistence level work and we'l
"Besides that"? All you did was name a statement of a fairly obvious preference choice after one guy who happened to have it so that you could then drop it dismissively.
Wedrifid, not sure what to tell you. Bostrom is but one voice and his evolutionary analysis is very much flawed - again: detailed critique upcoming.
...No, he mightn't care and I certainly don't. I am glad I am here but I have no particular loyalty to evolution because of that. I know for sure that evolution feels no such loyalty to me and would discard both me and my species in
Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion.
Not exactly, since compassion will actually emerge as a subgoal. And as far as unFAI goes: it will not be a problem, because any AI that can be considered transhuman will be driven by the emergent subgoal of wanting to avoid counterfeit utility and will recognize any utility function that is not 'compassionate' as...
What premises do you require to establish that compassion is a condition for existence? Do those premises necessarily apply for every AI project?
The detailed argument that led me to this conclusion is a bit complex. If you are interested in the details, please feel free to start here (http://rationalmorality.info/?p=10) and drill down till you hit this post (http://www.jame5.com/?p=27).
Please realize that I spent two years writing my book 'Jame5' before I reached the initial insight that eventually led to 'compassion is a condition for our existence and u...
If I understand your assertions correctly, I believe that I have developed many of them independently
That would not surprise me
Nothing compels us to change our utility function save self-contradiction.
Would it not be utterly self-contradictory if compassion were a condition for our existence (particularly in the long run) and we did not align ourselves accordingly?
Why am I being downvoted?
Sorry for the double post.