Comments

mwaser · 12y · 20

If it is true (i.e., if a proof can be found) that "any sufficiently advanced tool is indistinguishable from an agent," then any RPOP will automatically become indistinguishable from an agent once it has self-improved past our comprehension point.

This would seem to argue against Yudkowsky's contention that the term RPOP is more accurate than "Artificial Intelligence" or "superintelligence".

mwaser · 13y · 10

Actually, eating a baby bunny is a really bad idea when viewed from a long-term perspective. Sure, it's a tender, tasty little morsel -- but the operative word is little. Far better, from a long-term view, to let it grow up and reproduce, and then eat it. And large, competent bunnies aren't nearly as cute as baby bunnies, are they? So maybe evo-psych does have it correct . . . and maybe the short-sighted "rationality" of tearing apart a whole field by implication, just because you don't understand how something works, isn't as brilliant as it seems.

mwaser · 13y · 00

MY "objection" to CEV is exactly the opposite of what you're expecting and asking for. CEV as described is not descriptive enough to allow the hypothesis "CEV is an acceptably good solution" to be falsified. Since it is "our wish if we knew more", etc., any failure scenrio that we could possibly put forth can immediately be answered by altering the potential "CEV space" to answer the objection.

I have radically different ideas about where CEV is going to converge than most people here. Yet the lack of distinctions in the description of CEV causes my ideas to be included under any argument for CEV, because CEV potentially is . . . ANYTHING! There are no concrete distinctions that clearly state that something is NOT part of the ultimate CEV.

Arguing against CEV is like arguing against science. Can you argue a concrete failure scenario of science? Now -- keeping Hume in mind, what does science tell the AI to do? It's precisely the same argument, except that CEV as a "computational procedure" is much less well-defined than the scientific method.

Don't get me wrong. I love the concept of CEV. It's a brilliant goal statement. But it's brilliant because it doesn't clearly exclude anything that we want -- and human biases lead us to believe that it will include everything we truly want and exclude everything we truly don't want.

My concept of CEV disallows AI slavery. Your answer to that is "If that is truly what a grown-up humanity wants/needs, then that is what CEV will be". CEV is the ultimate desire -- ever-changing and never real enough to be pinned down.
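
To make the unfalsifiability worry concrete, here is a toy sketch in Python -- entirely my own construction, with a hypothetical `cev_accepts` function, not anything drawn from the CEV writeup itself. It shows how a specification that absorbs every objection by redefinition can never return a falsifying answer:

```python
from typing import Optional

def cev_accepts(outcome: str, objection: Optional[str] = None) -> bool:
    """Toy stand-in (hypothetical) for 'what humanity would want if we knew more'."""
    if objection is None:
        return True  # every outcome is provisionally "what we'd want"
    # Any concrete objection is absorbed: the extrapolation is simply
    # assumed to have already routed around it.
    return cev_accepts(outcome + " (revised to answer: " + objection + ")")

# No input ever makes this return False, so no test can distinguish a
# good extrapolation from a bad one.
assert cev_accepts("AI slavery is permitted")
assert cev_accepts("AI slavery is forbidden", objection="my concept disallows this")
```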

mwaser · 13y · 30

I know the individuals involved. They are not biased against non-academics and would welcome a well-thought-out contribution from anyone. You could easily have a suitable abstract ready by March 1st (two weeks early) if you believed that it was important enough -- and I would strongly urge you to do so.

mwaser · 13y · 10

Threats are certainly a data point that I factor in when making a decision. I, too, have been known to apply altruistic punishment to people making unwarranted threats. But I also consider whether the person feels so threatened that the threat may actually be just a sign of their insecurity. And there are always times when going along with the threat is simply easier than bothering to fight that particular issue.

Do you really always buck threats? Even when they're justified -- such as "threatened consequences" for stupid actions on your part? Even from, say, police officers?

mwaser · 13y · 00

I much prefer the word "consequence" -- as in, that action will have the following consequences . . . .

I don't threaten; I point out what consequences their actions will cause.

mwaser · 13y · 20

For-profit corporations, as a matter of law, have the goal of making money, and their boards face all sorts of legal consequences and other unpleasantness if they don't optimize that goal as a primary objective. (The exception is when some other goal is explicitly written into the corporate bylaws as more important than making a profit -- and even then, there are profit requirements that must be met to avoid corporate dissolution or conversion to a non-profit. Very few corporations have such provisions.)

Translation

Corporations = powerful, intelligent entities with the primary goal of accumulating power (in the form of money).

mwaser · 13y · 30

As you get closer to the core of friendliness, you get all sorts of weird AGIs that want to do something that twistedly resembles something good, but that is somehow missing something, or somehow altered, so that the end result is not at all what you wanted.

Is this true, or is it a useful assumption that protects us from doing something stupid?

Is it true that Friendliness is not an attractor, or is it that we cannot count on such a property unless it is absolutely proven to be the case?
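
For readers unfamiliar with the term, here is a minimal sketch of the dynamical-systems picture behind "attractor" -- my framing, with made-up update rules, not anything from the original post. If Friendliness were an attractor, a near-miss would converge back toward it; the worry above is that it diverges instead:

```python
def step_attracting(x: float) -> float:
    return 0.5 * x   # deviations halve each step: x = 0 pulls trajectories in

def step_repelling(x: float) -> float:
    return 2.0 * x   # deviations double each step: x = 0 pushes them away

x = 0.01  # a small deviation from exact Friendliness (x = 0)
for _ in range(10):
    x = step_repelling(x)
print(x)  # ~10.24: the deviation grew a thousandfold; nothing self-corrects
```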

mwaser · 13y · 20

I meant that lurking is slow, that lurking is inefficient, and that it carries a higher probability of getting worse results for the newbie. I'm not sure which objective is being referred to in that clause. I retract those evaluations as flawed.

Yeah, I made the same mistake twice in a row. First, I didn't get that I didn't get it. Then I "got it" and figured out some obvious stuff -- and didn't even consider that there was probably even more below that which I still didn't get and should start looking for (and I was an ass about it to boot). What a concept -- I don't know what I don't know.

The playground option was an idiotic idea. I actually figured out that I didn't want to go there and stagnate before your comment. I've got this horrible mental image of me being that guy who whines in boot camp. Let me take a few days and come up with a good answer to one of your questions (once I've worked this through a bit more).

I'd say thank you (and sorry for being an ass), but I'm not sure of its appropriateness right now. (Yeah, that tag is still really messing with me ;-)

ETA: Still re-calibrating. Realizing I'm way too spoiled about obtaining positive feedback . . . ;-) EDIT: Make that addicted to obtaining positive feedback, and less accepting of negative feedback I don't immediately understand than I'd prefer to realize (and actually commenting on the first part seems to immediately recurse into hilarity).
