Bgoertzel

Thanks for sharing your personal feeling on this matter. However, I'd be more interested if you had some sort of rational argument in favor of your position!
The key issue is the tininess of the hyperbubble you describe, right? Do you have some sort of argument regarding some specific estimate of the measure of this hyperbubble? (And do you have some specific measure on mindspace in mind?)
To put it differently: What are the properties you think a mind needs to have, in order for the "raise a nice baby AGI" approach to have a reasonable chance of effectiveness? Which are the properties of the human mind that you think are necessary for this to be the case?
Stuart: The majority of people proposing the "bringing up baby AGI" approach to encouraging AGI ethics are NOT making the kind of naive cognitive error you describe here. This approach to AGI ethics is not founded on naive anthropomorphism. Rather, it is based on having a mix of intuitive and rigorous understanding of the AGI architectures in question, the ones that will be taught ethics.
For instance, my intuition is that if we taught an OpenCog system to be loving and ethical, then it would very likely be so, according to broad human standards. This intuition is NOT based on naively anthropomorphizing OpenCog systems, but... (read more)
So, are you suggesting that Robin Hanson (who is on record as not buying the Scary Idea) -- the current owner of the Overcoming Bias blog, and Eli's former collaborator on that blog -- fails to buy the Scary Idea "due to cognitive biases that are hard to overcome"? I find that a bit ironic.
Like Robin and Eli and perhaps yourself, I've also read the heuristics and biases literature. I'm not so naive as to make judgments about huge issues that I think about for years of my life based strongly on well-known cognitive biases.
It seems more plausible to me to assert that many folks who believe the Scary Idea are having... (read more)
I agree that a write-up of SIAI's argument for the Scary Idea, in the manner you describe, would be quite interesting to see.
However, I strongly suspect that when the argument is laid out formally, what we'll find is that
-- given our current knowledge about the pdfs of the premises in the argument, the pdf on the conclusion is verrrrrrry broad, i.e. we can hardly conclude anything with much of any confidence ...
So, I think that the formalization will lead to the conclusion that
-- "we can NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly lead to bad consequences for humanity"
-- "we can... (read more)
I have thought a bit about these decision theory issues lately and my ideas seem somewhat similar to yours though not identical; see
http://goertzel.org/CounterfactualReprogrammingDecisionTheory.pdf
if you're curious...
-- Ben Goertzel
Stuart -- Yeah, the line of theoretical research you suggest is worthwhile....
However, it's worth noting that I and the other OpenCog team members are pressed for time, and have a lot of concrete OpenCog work to do. It would seem none of us really feels like taking a lot of time, at this stage, to carefully formalize arguments about what the system is likely to do in various situations once it's finished. We're too consumed with trying to finish the system, which is a long and difficult task in itself...
I will try to find some time in the near term to sketch a couple example arguments of the... (read more)