Wonderful post, I agree very much. I have also encountered this: being accused of overconfidence when I was actually talking about things of which I am quite uncertain (strange, isn't it?).
And the people who "accuse" indeed usually have only one (their favourite) alternative model, enshrouded in a language of "mystery, awe, and humbleness".
I have found out (the hard way) that being a rationalist will force you into fighting an uphill battle even in an academic setting (your post "Science Isn't Strict Enough" addresses this proble... (read more)
Thanks for pointing me to that post. I must admit that I don't have the time to read all of Eli's posts at the moment, so maybe he has indeed addressed the issues I thought were missing.
The title of the post, at least, sounds very promising grin.
I side with Caledonian and Richard on these things: CEV is actually just begging the question. You start with human values and end up with human values.
Well, human values have given us war, poverty, cruelty, oppression, what have you... and yes, it was "values" that gave us these things. Very few humans want to do evil things; most actually think they are doing good when they do harm to others. (See for instance: Baumeister, Roy F. Evil: Inside Human Violence and Cruelty.)
Apart from that, I have to plug Nietzsche again: he has criticized moralit... (read more)
We now agree on nearly all points grin, except the part about AIs not "wanting" to change their goals: through meditation (in the Buddhist tradition, for instance) I know that you can "see through" goals and no longer be enslaved to them (and if that is accessible to humans, why shouldn't it be accessible to introspecting AIs?).
That line of thought is also strongly related to the concept of avidya, which ascribes "desires" and "wanting" to not having completely grasped certain truths about r... (read more)
Thanks for your answers and questions. As to the distinction between intelligence and sentience: my point was exactly that it cannot be waved away that easily, and you have failed to give reasons why it can be. I don't think that intelligence and sentience must go hand in hand (read Peter Watts' "Blindsight" for some thoughts in this direction, for instance). I think the distinction is quite essential.
As to the goal-function modification: what if a super-intelligent agent suddenly incorporates goals such as modesty, respect for other beings, maybe e... (read more)
Already the abstract reveals two flaws. Excerpt from the abstract of the paper "The Basic AI Drives" by Omohundro:
This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted.
First of all, no distinction whatsoever is made between "intelligent" and "sentient"... (read more)
Hmm, I've read through Roko's UIV and disagree (with Roko), and read Omohundro's Basic AI drives and disagree too, but Quasi-Anonymous mentioned Richard Hollerith in the same breath as Roko and I don't quite see why: his goal zero system seems to me a very interesting approach.
In a nutshell (from the linked site):
(1) Increasing the security and the robustness of the goal-implementing process. This will probably entail the creation of machines which leave Earth at a large fraction of the speed of light in all directions and the creation of the ability to p
If all you have is a gut feeling of uncertainty, then you should probably stick with those algorithms that make use of gut feelings of uncertainty, because your built-in algorithms may do better than your clumsy attempts to put things into words.
I would like to add something to this. Your gut feeling is, of course, the sum of the experience you have had in this life plus your evolutionary heritage. It may not be verbalizable because your gut feeling (as an example) also includes single neurons firing which don't necessarily contribute to the stability of a conce... (read more)
Alexandre Passos, Unknown,
You can believe in any manner of things; why not in intelligent falling while you're at it? http://en.wikipedia.org/wiki/Intelligent_falling
The question is not what one can or can't believe; the question is: where does the evidence point? And where are you ignoring evidence because you would prefer one answer to another?
Let evidence guide your beliefs, not beliefs guide your appraisal of evidence.
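That slogan is, in effect, just Bayes' theorem applied consistently. A minimal sketch of the update it prescribes (the numbers below are illustrative assumptions, not drawn from any real case):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: updated belief in hypothesis H after observing evidence E.

    prior            -- P(H) before seeing the evidence
    p_e_given_h      -- P(E | H), likelihood of the evidence if H is true
    p_e_given_not_h  -- P(E | not H), likelihood if H is false
    """
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Evidence twice as likely under H as under not-H moves a 50% prior to ~67%.
print(round(posterior(0.5, 0.8, 0.4), 2))  # 0.67
```

The point of the sketch: the direction and size of the belief change are fixed by the likelihood ratio of the evidence, not by which answer you would prefer.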
Well, actually I did read Cicero in school, and I like Socrates' attitude; but I don't quite see in what way you are responding to my post.
I just wanted to clarify that the skill of oratory may be a valuable asset for people, but being a good orator does not make you a good truth-seeker.
The aspiring orator or public intellectual is someone who wants to impress people; he is engaging in a power game, a vanity game, etc.
A truth-seeker does not want to impress people, he or she or ve wants to know. Reason, as Eli said, is not a game.
Good post, Eli. Contrary to some other comments above, I think your post is important because this insight is not yet general knowledge. I've talked to university physics professors in their fifties who spoke of Einstein as if he were superhuman.
I think, apart from luck and being in the right place at the right time, there were other factors in why Einstein is so popular: he had an air of showmanship about him, which is probably rare in scientists. That was what appealed to the public and made him an interesting figure to report on.
And, probably even more important, ... (read more)
Your comments on Barbour (non-academic, etc.) are ad hominem; I say: so what? Being an academic may be an indicator of good work, but no more. And he did his Ph.D. in physics anyway.
Julian Barbour's work is unconventional.
Yes! Fine. Lovely. Science needs more unconventional thinkers. Let the evidence sort them out, but let's not be against "unconventional" theories, especially not when they are explanatorily powerful.
Many of his papers border on philosophy
There are two kinds of philosophy: the bad kind (Essay by Paul Graham criticising philoso... (read more)
I'm well aware that SQ is not a measure of intelligence, but I thought it would be a nice heuristic (metaphor, whatever...) for intuiting possible superintelligences. I was presupposing that they have an agent structure (sensors, actuators) and the respective cognitive algorithms (AIXI, maybe?).
With this organizational backdrop, SQ becomes very interesting: after all, intelligent agents are bounded in space and time, and other things being equal (especially optimal cognitive algorithms), SQ is the way to go.
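For concreteness, Freitas' sentience quotient is the base-10 log of an agent's information-processing rate per unit mass, SQ = log10(I/M) with I in bits per second and M in kilograms. A minimal sketch (the numerical inputs below are order-of-magnitude assumptions, not measured values):

```python
import math

def sentience_quotient(bits_per_second, mass_kg):
    """Freitas' SQ: log10 of information-processing rate (bit/s) per kg of mass."""
    return math.log10(bits_per_second / mass_kg)

# Assuming a brain processing rate of ~1e15 bit/s at 1.4 kg (a rough guess,
# not Freitas' own figure) gives an SQ in the mid-teens:
print(round(sentience_quotient(1e15, 1.4)))  # 15
```

The exact value is dominated by the assumed processing rate, which is why SQ works better as a log-scale intuition pump for the plant/human/superintelligence range than as a precise measurement.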
@Eli: thanks for a great post again, you speak to my heart's content :-)) I have also encountered hero worship of Einstein in science (especially in physics circles). This is not a good thing, as it hinders progress: people think "I can't contribute anything important because I'm not a genius like Einstein" instead of sitting down, starting to think and solving some problems.
@Shane: I think the sentience quotient is a nice scale/quantification which can lend some precision to otherwise vague talk about plant/chimp/human/superhuman intelligence.
ht... (read more)
@Caledonian: If it is an old and trivial insight, why do most scientists and nearly all non-scientists ignore it?
As Eli said in his post, there is a difference between saying the words and knowing, on a gut level, what they mean; only then have you truly incorporated the knowledge, and only then will it aid you in your quest to understand the world.
Also, you say:
but from your personal tendency to treat the method as a revelation that people have an emotional investment in
Of course people have an emotional investment in this stuff! Do not make the old mist... (read more)
Your concerns about the vagueness of the "world" concept are addressed here:
Everett and Structure (David Wallace)
Also, the ontology proposed here fits very nicely with the currently most promising strand of scientific realism (also referred to in the Wallace paper), in its ontic variant.
There is another argument for many-worlds (indeed, for all possible worlds, which of course raises interesting new questions about what is possible; certainly not everything that is imaginable): specifying one universe with many random events requires lots of information, while if everything exists the information content is zero, which fits nicely with ex nihilo nihil fit :-)
Structure and concreteness only emerges from the inside view, which gives the picture of a single world. Max Tegmark has paraphrased this idea nicely with ... (read more)
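The information-content point has a Kolmogorov-complexity flavour that can be illustrated with a toy example: a tiny program enumerates every finite bitstring, whereas singling out one particular random n-bit string costs roughly n bits of description. A purely illustrative sketch:

```python
from itertools import islice, product

def all_bitstrings():
    """Enumerate every finite bitstring in order of length.

    The whole ensemble is generated by these few lines, whereas naming
    one specific random 256-bit member would take ~256 bits of description.
    """
    n = 0
    while True:
        for bits in product("01", repeat=n):
            yield "".join(bits)
        n += 1

print(list(islice(all_bitstrings(), 7)))  # ['', '0', '1', '00', '01', '10', '11']
```

In that sense the ensemble of all worlds is cheap to specify, and "our" world only looks expensive from the inside view that picks it out.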