All of Günther_Greindl's Comments + Replies

Say It Loud

Eli,

wonderful post; I agree very much. I have also encountered this - being accused of overconfidence when I was actually talking about things of which I am quite uncertain (strange, isn't it?).

And the people who "accuse" indeed usually only have one (their favourite) alternative model enshrouded in a language of "mystery, awe, and humbleness".

I have found out (the hard way) that being a rationalist will force you into fighting an uphill battle even in an academic setting (your post Science isn't strict enough addresses this proble... (read more)

Mirrors and Paintings

Vladimir,

thanks for pointing me to that post, I must admit that I don't have the time to read all of Eli's posts at the moment so maybe he has indeed addressed the issues I thought missing.

The title of the post at least sounds very promising *grin*.

Thanks again, Günther

Mirrors and Paintings

I side with Caledonian and Richard on these things - CEV is actually just begging the question: you start with human values and end up with human values.

Well, human values have given us war, poverty, cruelty, oppression, what have you... and yes, it was "values" that gave us these things. Very few humans want to do evil; most actually think they are doing good when they do harm to others. (See for instance: Baumeister, Roy F., Evil: Inside Human Violence and Cruelty.)

Apart from that, I have to plug Nietzsche again: he has criticized moralit... (read more)

Invisible Frameworks

Tim,

we agree now on nearly all points *grin*, except for the part about AIs not "wanting" to change their goals - through meditation (in the Buddhist tradition, for instance) I know that you can "see through" goals and no longer be enslaved to them (and if that is accessible to humans, why shouldn't it be accessible to introspecting AIs?).

That line of thought is also strongly related to the concept of avidya, which ascribes "desires" and "wanting" to not having completely grasped certain truths about r... (read more)

Invisible Frameworks

Tim,

thanks for your answers and questions. As to the distinction between intelligence and sentience: my point was exactly that it cannot be waved away that easily, and you have failed to give reasons why it can be. And I don't think that intelligence and sentience must go hand in hand (read Peter Watts's "Blindsight" for some thoughts in this direction, for instance). I think the distinction is quite essential.

As to the goal-function modification: what if a super-intelligent agent suddenly incorporates goals such as modesty, respect for other beings, maybe e... (read more)

Invisible Frameworks

Tim,

the abstract alone already reveals two flaws:

Excerpt from the abstract of the paper "Basic AI drives" by Omohundro:

This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted.

First of all, no distinction whatever is made between "intelligent" and "sentient"... (read more)

Invisible Frameworks

Hmm, I've read through Roko's UIV and disagree (with Roko), and read Omohundro's "Basic AI Drives" and disagree too, but Quasi-Anonymous mentioned Richard Hollerith in the same breath as Roko and I don't quite see why: his goal system zero seems to me a very interesting approach.

In a nutshell (from the linked site):

(1) Increasing the security and the robustness of the goal-implementing process. This will probably entail the creation of machines which leave Earth at a large fraction of the speed of light in all directions and the creation of the ability to p
... (read more)
When (Not) To Use Probabilities

If all you have is a gut feeling of uncertainty, then you should probably stick with those algorithms that make use of gut feelings of uncertainty, because your built-in algorithms may do better than your clumsy attempts to put things into words.

I would like to add something to this. Your gut feeling is of course the sum of experience you have had in this life plus your evolutionary heritage. This may not be verbalized because your gut feeling (as an example) also includes single neurons firing which don't necessarily contribute to the stability of a conce... (read more)

CynicalOptimist (5y): This is excellent advice. I'd like to add, though, that the original phrase was "algorithms that make use of gut feelings...". This isn't the same as saying "a policy of always submitting to your gut feelings".

I'm picturing a decision tree here: something that tells you how to behave when your gut feeling is "I'm utterly convinced" {act on the feeling immediately}, vs. how you might act if you had feelings of "vague unease" {continue cautiously, delay taking any steps that constitute a major commitment, while you try to identify the source of the unease}. Your algorithm might also involve assessing the reliability of your gut feeling; experience and reason might allow you to know that your gut is very reliable in certain matters, and much less reliable in others.

The details of the algorithm are up for debate, of course. For the purposes of this discussion, I place no importance on the details of the algorithm I described. The point is just that these procedures are helpful for rational thinking, they aren't numerical procedures, and a numerical procedure wouldn't automatically be better just because it's numerical.
Grasping Slippery Things

Alexandre Passos, Unknown,

you can believe in any matter of things, why not in intelligent falling when you're at it? http://en.wikipedia.org/wiki/Intelligent_falling

The question is not what one can or can't believe; the question is: where does the evidence point? And where are you ignoring evidence because you would prefer one answer to another?

Let evidence guide your beliefs, not beliefs guide your appraisal of evidence.

Against Devil's Advocacy

@Frelkins,

well, actually I did read Cicero in school, and I like Socrates' attitude; but I don't quite see in what way you are responding to my post.

I just wanted to clarify that the skill of oratory may be a valuable asset for people, but being a good orator does not make you a good truth-seeker.

Against Devil's Advocacy

Frelkins, the aspiring orator or public intellectual is someone who wants to impress people; he is engaging in a power game or vanity game etc.

A truth-seeker does not want to impress people, he or she or ve wants to know. Reason, as Eli said, is not a game.

Einstein's Superpowers

Good post, Eli; and contrary to some earlier comments, I think your post is important because this insight is not yet general knowledge. I've talked to university physics professors in their fifties who talked of Einstein as if he were superhuman.

I think that, apart from luck and being in the right place at the right time, there were other factors in why Einstein is so popular: he had an air of showmanship about him, which is probably rare in scientists. That was what appealed to the public and made him an interesting figure to report about.

And, probably even more important, ... (read more)

David Althaus (10y): Now this is a bit harsh, don't you think?
Timeless Physics

@Jess

Your comments on Barbour (non-academic, etc.) are ad hominem; I say: so what? Being an academic may be an indicator of good work, but no more. And he did his Ph.D. in physics anyway.

"Julian Barbour's work is unconventional."

Yes! Fine. Lovely. Science needs more unconventional thinkers. Let the evidence sort them out, but let's not be against "unconventional" theories. Especially not when they are explanatorily powerful.

"Many of his papers border on philosophy"

There are two kinds of philosophy: the bad kind (Essay by Paul Graham criticising philoso... (read more)

My Childhood Role Model

Shane,

I'm well aware that SQ is not a measure of intelligence, but I thought that it would be a nice heuristic (metaphor, whatever...) to intuit possible superintelligences. I was presupposing that they have an agent structure (sensors, actuators) and the respective cognitive algorithms (AIXI maybe?).

With this organizational backdrop, SQ becomes very interesting - after all, intelligent agents are bounded in space and time, and other things being equal (especially optimal cognitive algorithms) SQ is the way to go.

My Childhood Role Model

@Eli: thanks for a great post again, you speak to my heart's content :-)) I have also encountered hero worship of Einstein in science (especially in physics circles) - this is not a good thing, as it hinders progress: people think "I can't contribute anything important because I'm not a genius like Einstein" instead of sitting down, starting to think, and solving some problems.

@Shane: I think the sentience quotient is a nice scale/quantification which can give some precision to otherwise vague talk about plant/chimp/human/superhuman intelligence.

ht... (read more)

No Safe Defense, Not Even Science

@Caledonian: If it is an old and trivial insight, why do most scientists and near all non-scientists ignore it?

As Eli said in his post, there is a difference between saying the words and knowing, on a gut level, what they mean - only then have you truly incorporated the knowledge, and it will aid you in your quest to understand the world.

Also, you say: "but from your personal tendency to treat the method as a revelation that people have an emotional investment in"

Of course people have an emotional investment in this stuff!! Do not make the old mist... (read more)

Many Worlds, One Best Guess

Mitchell,

your concerns about the vagueness of the "world" concept are addressed here:

Everett and Structure (David Wallace) http://arxiv.org/abs/quant-ph/0107144v2

Also, the ontology proposed here fits very nicely with the currently most promising strand of Scientific Realism (also referred to in the Wallace paper) - in its ontic variant.

http://plato.stanford.edu/entries/structural-realism/

Cheers, Günther

Many Worlds, One Best Guess

Mitchell,

there is another argument speaking for many-worlds (indeed, even for all possible worlds - which raises interesting new questions of what is possible, of course - certainly not everything that is imaginable): to specify one universe with many random events requires a lot of information, while if everything exists the information content is zero - which fits nicely with ex nihilo nihil fit :-)

Structure and concreteness only emerges from the inside view, which gives the picture of a single world. Max Tegmark has paraphrased this idea nicely with ... (read more)
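The description-length intuition behind this argument can be sketched with a toy Kolmogorov-style comparison (my own illustration, not from the original comment): a constant-size program suffices to enumerate every n-bit string, while pinning down one particular random n-bit string costs roughly n bits - the string itself is (for almost all strings) its own shortest description.

```python
import random

def all_strings(n):
    """Enumerate every n-bit string.

    The 'description' of this whole ensemble is just this fixed,
    constant-size piece of source code, independent of n.
    """
    return [format(i, f'0{n}b') for i in range(2 ** n)]

n = 16

# One specific random n-bit string: for almost all such strings,
# the shortest description is the string itself, i.e. ~n bits.
one_string = ''.join(random.choice('01') for _ in range(n))

print(len(all_strings(n)))  # 65536 strings from a tiny, fixed program
print(len(one_string))      # 16 bits needed to single out this one
```

This is only a sketch of the intuition, not a proof: it mirrors the standard observation in algorithmic information theory that "everything" can have lower description complexity than one arbitrary member of it.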

EHeller (8y): Sure, but why is the information content of the current state of the universe something that we would want to minimize? In both many-worlds and alternatives, the complexity of the ALGORITHM is roughly the same.
Rob Bensinger (8y): But MWI is not the doctrine 'everything exists'. This is a change of topic. Yes, if we live in a Tegmark universe and MWI is the simplest theory, then it's likely we live in one of the MWI-following parts of the universe. But if we don't live in a Tegmark universe and MWI is the simplest theory, then it's still likely we live in one of the MWI-following possible worlds. It seems to me that all the work is being done by Ockham, not by Tegmark.
naasking (8y): Now THAT's an interesting argument for MWI. It's not a final nail in the coffin for de Broglie-Bohm, but the naturalness of this property is certainly compelling.