
Stuart -- Yeah, the line of theoretical research you suggest is worthwhile....

However, it's worth noting that I and the other OpenCog team members are pressed for time, and have a lot of concrete OpenCog work to do. It would seem none of us really feels like taking a lot of time, at this stage, to carefully formalize arguments about what the system is likely to do in various situations once it's finished. We're too consumed with trying to finish the system, which is a long and difficult task in itself...

I will try to find some time in the near term to sketch a couple example arguments of the type you request... but it won't be today...

As a very rough indication for the moment... note that OpenCog has explicit GoalNode objects in its AtomSpace knowledge store, and one can look at the explicit probabilistic ImplicationLinks pointing to these GoalNodes from various combinations of contexts and actions. So one can actually inspect, in principle, the probabilistic relations between (context, action) pairs and goals that OpenCog is using to choose actions.
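To make that structure concrete, here is a toy sketch in plain Python of the kind of explicit goal-directed knowledge described above. It deliberately does not use the actual OpenCog API (whose details differ); the class names, contexts, actions, and numbers are made up purely for illustration.

```python
from dataclasses import dataclass

# Toy stand-ins for the AtomSpace structures described above; the real
# OpenCog API differs, this only illustrates the shape of the data.

@dataclass(frozen=True)
class GoalNode:
    name: str

@dataclass(frozen=True)
class ImplicationLink:
    context: str        # situation the system believes it is in
    action: str         # action it could take
    goal: GoalNode      # goal the action is expected to serve
    strength: float     # estimated P(goal | context, action)
    confidence: float   # how much evidence backs that estimate

# A young system's "AtomSpace" might contain entries like these
# (illustrative values only):
atomspace = [
    ImplicationLink("owner_nearby", "greet_owner", GoalNode("PleaseOwner"), 0.9, 0.6),
    ImplicationLink("battery_low", "seek_charger", GoalNode("StayOperational"), 0.8, 0.7),
]

# Because the links are explicit, one can enumerate exactly which
# (context, action) pairs the system believes lead to which goals:
for link in atomspace:
    print(f"({link.context}, {link.action}) -> {link.goal.name} "
          f"[p={link.strength}, conf={link.confidence}]")
```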

Now, for a quite complex OpenCog system, it may be hard to understand what all these probabilistic relations mean. But for a young OpenCog doing simple things, it will be easier. So one would want to validate, for a young OpenCog doing simple things, that the information in the system's AtomSpace is compatible with 1 rather than 2-4.... One would then want to validate that, as the system gets more mature and does more complex things, there is not a trend toward more of 2-4 and less of 1....
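As a rough, hedged sketch of the kind of monitoring this suggests, one could track the fraction of goal-directed implications judged to fall under 1 (rather than 2-4) across development snapshots and check that it does not trend downward. The classify() function below is hypothetical; building it reliably is of course the hard part.

```python
from typing import Callable, Iterable, List

# Hypothetical helper: label a goal-directed implication as category 1
# (the benign case) or as one of categories 2-4. How to implement this
# reliably is exactly the open problem, so it is left abstract here.
Classifier = Callable[[object], int]

def fraction_benign(links: Iterable[object], classify: Classifier) -> float:
    """Fraction of goal-directed implications judged to be category 1."""
    labels = [classify(link) for link in links]
    return sum(1 for label in labels if label == 1) / max(len(labels), 1)

def no_drift_toward_2_4(snapshots: List[List[object]],
                        classify: Classifier,
                        tolerance: float = 0.05) -> bool:
    """Check that successive AtomSpace snapshots (young system -> mature
    system) show no downward trend in the category-1 fraction."""
    fractions = [fraction_benign(links, classify) for links in snapshots]
    return all(later >= earlier - tolerance
               for earlier, later in zip(fractions, fractions[1:]))
```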

Interesting line of thinking indeed! ...

Thanks for sharing your personal feeling on this matter. However, I'd be more interested if you had some sort of rational argument in favor of your position!

The key issue is the tininess of the hyperbubble you describe, right? Do you have some sort of argument regarding some specific estimate of the measure of this hyperbubble? (And do you have some specific measure on mindspace in mind?)

To put it differently: What are the properties you think a mind needs to have, in order for the "raise a nice baby AGI" approach to have a reasonable chance of effectiveness? Which are the properties of the human mind that you think are necessary for this to be the case?

Bgoertzel · 11y · 130

Stuart: The majority of people proposing the "bringing up baby AGI" approach to encouraging AGI ethics are NOT making the kind of naive cognitive error you describe here. This approach to AGI ethics is not founded on naive anthropomorphism. Rather, it is based on the feeling of having a mix of intuitive and rigorous understanding of the AGI architectures in question, the ones that will be taught ethics.

For instance, my intuition is that if we taught an OpenCog system to be loving and ethical, then it would very likely be so, according to broad human standards. This intuition is NOT based on naively anthropomorphizing OpenCog systems, but rather based on my understanding of the actual OpenCog architecture (which has many significant differences from the human cognitive architecture).

No one, so far as I know, claims to have an airtight PROOF that this kind of approach to AGI ethics will work. However, the intuition that it will work is based largely on understanding of the specifics of the AGI architectures in question, not just on anthropomorphism.

If you want to argue against this approach, you should argue about it in the context of the specific AGI architectures in question, or else present some kind of principled counter-argument. Just claiming "anthropomorphism" isn't very convincing.

Bgoertzel · 13y · 210

So, are you suggesting that Robin Hanson (who is on record as not buying the Scary Idea) -- the current owner of the Overcoming Bias blog, and Eli's former collaborator on that blog -- fails to buy the Scary Idea "due to cognitive biases that are hard to overcome"? I find that a bit ironic.

Like Robin and Eli and perhaps yourself, I've also read the heuristics and biases literature. I'm not so naive as to make judgments about huge issues that I think about for years of my life based strongly on well-known cognitive biases.

It seems more plausible to me to assert that many folks who believe the Scary Idea are having their judgment warped by plain old EMOTIONAL bias -- i.e. stuff like "fear of the unknown", "the satisfying feeling of being part of a self-congratulatory in-crowd that thinks it understands the world better than everyone else", and the well-known "addictive chemical high of righteous indignation", etc.

Regarding your final paragraph: Is your take on the debate between Robin and Eli about "Foom" that everything Robin was saying boils down to "la la la I can't hear you"? If so, I would suggest that maybe YOU are the one with the (metaphorical) hearing problem ;p ....

I think there's a strong argument that the truth value of "Once an AGI is at the level of a smart human computer scientist, hard takeoff is likely" is significantly above zero. No assertion stronger than that seems to me to be convincingly supported by any of the arguments made on Less Wrong or Overcoming Bias or in any of Eli's prior writings.

Personally, I actually do strongly suspect that once an AGI reaches that level, a hard takeoff is extremely likely unless the AGI has been specifically inculcated with goal content working against this. But I don't claim to have a really compelling argument for this. I think we need a way better theory of AGI before we can frame such arguments compellingly. And I think that theory is going to emerge after we've experimented with some AGI systems that are fairly advanced, yet well below the "smart computer scientist" level.

Bgoertzel · 13y · 110

I agree that a write-up of SIAI's argument for the Scary Idea, in the manner you describe, would be quite interesting to see.

However, I strongly suspect that when the argument is laid out formally, what we'll find is that

-- given our current knowledge about the pdfs of the premises in the argument, the pdf on the conclusion is verrrrrrry broad, i.e. we can hardly conclude anything with much confidence (a toy illustration of this is sketched below) ...

So, I think that the formalization will lead to the conclusion that

-- "we can NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly lead to bad consequences for humanity"

-- "we can also NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly NOT lead to bad consequences for humanity"

I.e., I strongly suspect the formalization

-- will NOT support the Scary Idea

-- will also not support complacency about AGI safety and AGI existential risk

I think the conclusion of the formalization exercise, if it's conducted, will basically be to reaffirm common sense, rather than to bolster extreme views like the Scary Idea....
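As a toy numerical illustration of that suspicion (not a model of the actual argument), suppose a made-up three-premise argument whose conclusion requires all three premises, where each premise's probability is only known to lie in a wide interval; sampling then shows that the implied probability of the conclusion is itself spread over a wide range.

```python
import random

# Toy illustration only: three made-up premises whose probabilities are
# known only to lie in wide intervals, and a conclusion treated as
# requiring all three premises (independently). Nothing here models the
# actual content of the Scary Idea argument.
premise_intervals = [(0.2, 0.9), (0.1, 0.8), (0.3, 0.95)]

def sample_conclusion_probability() -> float:
    # Draw one "world-view": a probability for each premise, then the
    # implied probability that the conjunction (hence the conclusion) holds.
    ps = [random.uniform(lo, hi) for lo, hi in premise_intervals]
    prob = 1.0
    for p in ps:
        prob *= p
    return prob

samples = sorted(sample_conclusion_probability() for _ in range(100_000))
low = samples[int(0.05 * len(samples))]
high = samples[int(0.95 * len(samples))]
print(f"90% of sampled conclusion probabilities lie in [{low:.2f}, {high:.2f}]")
```

With premise intervals this broad, the sampled range comes out wide, which is the sense in which the conclusion's pdf supports neither extreme confidence nor complacency.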

-- Ben Goertzel

I have thought a bit about these decision theory issues lately, and my ideas seem somewhat similar to yours, though not identical; see

http://goertzel.org/CounterfactualReprogrammingDecisionTheory.pdf

if you're curious...

-- Ben Goertzel