HumaneAutomation

Obsessively interested in all things related to cognitive technologies, Internet & Data, with a pragmatic yet philosophical twist. What seems to define me above everything else is that nothing defines me in particular; on most personality tests I somehow manage to never hit any extreme.

Comments

That there is no such thing as being 100% objective/rational does not mean one can't be more or less rational than some other agent. Listen. Why do you have a favorite color? How come you prefer leather seats? In fact, why did you have tea this morning instead of coffee? You have no idea. Even if you do (say, you ran out of coffee), you still don't know why you decided to drink tea rather than run down to the store for more coffee.

We are so irrational that we don't even know why we think, believe, want or prefer most of the things we do. The very idea of liking is irrational. And no, you don't "like" a Mercedes more than a Yugo because it's safer - that's a fact, not a matter of opinion. A "machine" can also give preference to a Toyota over a Honda, but it certainly wouldn't do so because it likes the fabric of the seats, or the way the tail lights converge into the bumper so nicely. It will list a bunch of facts and parameters and calculate that the Toyota is the thing it will "choose".

We humans delude ourselves that this is how we make decisions, but that is of course complete nonsense. Naturally, some objective aspects are considered - fuel economy, safety, features and options... but the vast majority of people end up with a car that far outstrips their actual, objective transportation needs. Most of that surplus is really about status: how having a given car makes you feel compared to others in your social environment, and what "image" you (believe you) project on those whose opinion matters most to you. An AI will have none of these wasteful obsessive compulsions.

Look - be honest with yourself, Mr. Kluge. Please. Slow down, think, feel inside. Ask yourself what makes you want... what makes you desire. If you know how to listen, you will very soon discover none of it is guided by rational, dispassionate arguments or objective, logical realities. Now imagine an AI/machine that is even half as smart as the average Joe, but free from all those subjective distractions, emotions and anxieties. It will accomplish 10x the amount of work in half the time. At least.

Well, this is certainly a very good example; I'll happily admit as much. Without wanting to be guilty of the No True Scotsman fallacy, though - human cloning is a bit of a special case because it has a very visceral "ickiness" factor... and comes with a unique set of deep feelings and anxieties.

But imagine, if you will, that tomorrow we find the secret to immortality. Making people immortal would bring with it at least two thirds of the issues associated with human cloning... yet it is near-certain that any attempt to stop that invention from proliferating would be doomed to failure; everybody would want it, those consequences notwithstanding.

So, yes, agreed - we did pre-emptively deal with human cloning, and I definitely see this as a valid response to my challenge... but I also think we can both tell it is a very special, unique case that comes with most unusual connotations :)

I think you're making a number of flawed assumptions here, Sir Kluge.

1) Uncontrollability may be an emergent property of the G in AGI. Imagine you have a farm hand who works super fast and does top-quality work, but now and then there just ain't nothing to do, so he goes for a walk, maybe flirts around town, whatever. That may not be all that problematic. But a constantly self-improving AI that can give us answers to major, massive issues - answers we then have to hope to implement in the actual world - will likely have a lot of spare time on its hands for alternative pursuits... either for "itself" or for its masters... and those masters will not waste any time grabbing maximum advantage in minimum time, aware they may soon face a competing AGI. Safeguards will just get in the way, you see.

2) Having the G in AGI does not at all have to mean it will become human in the sense of having moods, emotions or any internal "non-rational" state at all. It can, however, make evaluations/comparisons of its human wannabe-overlords and find them very much inferior, infinitely slower and generally of rather dubious reliability. Also, they lie a lot. Not least to themselves. If the future holds something like a Rationality rating akin to a credit rating, we'd be lucky to score above junk status; the vast majority of our needs, wants, drives and desires boil down to wanting to be loved by mommy and dreading death. Not much logic to be found there. One can be sure it will treat us as a joke, at least in terms of intellectual prowess and utility.

3) Any AI we design that is an AGI (or close to it) and has "executive" powers will almost inevitably display collateral side-effects that may run out of control and cause major issues. Perhaps even more dangerous is an A(G)I being used in secret, or for unknown ends, by some criminal group or... you know... any "other guys" who end up gaining an advantage of such enormity that "the world" would be unable to detect it, let alone stop or control it.

4) The chance that a genuinely rule- and law-based society would be more fair, efficient and generally superior to current human societies is 1. If we'd let smart AIs actually be in charge - indifferent to race, religion, social status, how big your boobs are, whether you are a celebrity and whether most people think you look pretty good - mate, our societies would rival the best imaginable utopias. Of course, the powers that be (and that wish to remain thus) would never allow it - and so we have what we have now: the powerful using AI to entrench and secure their privileged status and position. But if we'd actually let "dispassionate computers do politics" (or, perhaps more accurately labelled, "actual governance"!) the world would very soon be a much better place. At least in theory, assuming we've solved many of the very issues EY raises here. You're not worried about AI - you're worried about some humans using AI to the disadvantage of other humans.

You know what... I read the article, then your comments here... and I gotta say - there is absolutely not a chance in hell that this will come even remotely close to being considered, let alone executed. Well - at least not until something goes very wrong... and this something need not be "We're all gonna die" but more like, say, an AI system that melts down the monetary system... or one that is used (deliberately, but perhaps especially accidentally) to very negatively impact a substantial part of a population. For example: it ends up destroying the power grid in half of the US... or causes dozens of aircraft to "fall out of the sky"... something of that size.

Yes - then those in power just might listen and indeed consider very far-reaching safety protocols. Though only for a moment, and some parties will not care and will press on either way, preferring instead to... upgrade, or "fix", the (type of) AI that caused the mayhem.

AI is the One Ring to Rule Them All, and none shall toss it into Mount Doom. Yes, even if it turns out to BE Mount Doom - that's right. Because we can't. We won't. It's our precious, and that, indeed, it really is. The creation of AI (potentially) capable of a world-wide catastrophe is, in my view - as it apparently is in the eyes of EY - inevitable. We shall have neither the wisdom nor the humility to not create it. Zero chance. Undoubtedly intelligent and endowed with well-above-average IQ as LessWrong subscribers may be, you appear to have a very limited understanding of human nature and the reality that we are basically emotional reptiles with language and an ability to imagine and act on abstractions.

I challenge you to name a single instance of a tech... any tech at all... being prevented from existing or developing before it caused at least some serious harm. The closest we've come is ozone-depleting chemicals, and even those are still in use, the damage they did only slowly healing.

Personally, I've come to realize that if this world really is a simulated reality, I can at least be sure that either I chose this era to live through the AI apocalypse, or this is a test/game to see if this time we can somehow survive or prevent it ;) It's the AGI running optimization experiments to see what else these pesky humans might have come up with to thwart it.

Finally - guys... bombing things (and, presumably, at least some people) on the spurious, as-yet-unproven premise of something that might happen, some day, who knows... really - yeah, I am sure Russia or China, or even Pakistan and North Korea, will "come to their senses" after you blow their absolute top-of-the-line, ultra-expensive, hi-tech data center to smithereens... which, no doubt, also happened to be a place where (other) supercomputers were developing medicines, housing projects, education materials in their native languages and an assortment of other actually very useful things they won't shrug off as collateral damage. Zero chance, really - every single byte generated in the name of making this happen is 99.999% waste. I understand why you'd want it to work, sure, yes. That would be wonderful. But it won't, not without a massive "warning" mini-catastrophe first. And if we end up going straight to total world meltdown... then tough; it would appear such a grim fate is basically inevitable and we're all doomed indeed.

The problem here, I think, is that we are only aware of one "type" of self-conscious/self-aware being - humans. Thus, to speak of an AI that is self-aware is to seemingly anthropomorphize it, even when this is not intended. It would therefore perhaps be more appropriate to say that we have no idea whether "features" such as frustration, exasperation and feelings of superiority are merely features of humans, or are, as it were, emergent properties of having self-awareness.

I would venture to suggest that any Agent that can see itself as a unique "I" must almost inevitably be able to compare itself to other Agents (self-aware or not) and draw conclusions from such comparisons, which in turn will "express themselves" as those kinds of "feelings" and attitudes towards them. Of course, this is speculative, and chances are we shall find that self-awareness need not come with such results at all.

However... there is a part of me that thinks self-awareness (and the concordant realization that one is separate... self-willed, as it were) must lead at least to the realization that one's qualities can be compared to the (similar) qualities of others, and thus be found superior or inferior by some chosen metric. Assuming the AGI we create is indeed optimized towards rational, logical and efficient operation, it is merely a matter of time before such an AGI is forced to conclude we are inferior across a broad range of metrics. Now - if we'd be content to admit such inferiority and willingly defer to its "Godlike" authority... perhaps the AGI seeing us as inferior would not be a major concern. Alas, then the concern would be the fact that we have willingly become its servants... ;)

What makes us human is indeed our subjectivity.

Yet - if we intentionally create the most rational of thinking machines while revealing ourselves to be anything but, it is very reasonable and tempting for this machine to ascribe a less-than-stellar "rating" to us and our intelligence. Or in other words - it could very well (correctly) conclude we are getting in the way of the very improvements we purportedly wish for.

Now - we may be able to establish that what we really want the AGI to help us with is improving our "irrational sandbox", in which we can continue being subjective, emotional beings, and have it accept our subjectivity as just another "parameter" of the confines it has to work with... but it will quite likely end up thinking of us in a way not too dissimilar to how we think about small children. And I am not sure an AGI would make for a good kind of "parent"...

Thank you for your reply. I deliberately kept my post brief and did not get into various "what ifs" and interpretations in the hope of not constraining any reactions/discussion to predefined tracks.

The issue I see is that we as humans will very much want the AGI to do our bidding, and so we will want to see it as our tool, to use for whatever ends we believe worthy. However, assume for a moment that it can also figure out a way to measure/define how well a given plan ought to progress if every agent involved diligently implements the most effective and rational strategy. Given our subjective and "irrational" nature, it is then almost inevitable that we will be a tedious, frustrating and, shall we say, stubborn and uncooperative "partner", unduly complicating the implementation of whatever solutions the AGI proposes.

It will, then, have to conclude that one "can't deal" very well with us, and that we have a rather over-inflated sense of ourselves and our nature. This might take various forms, from the innocuous to the downright counter-productive.

Say we task it with designing the most efficient watercraft, and it creates something most of us would find extremely ugly. In that instance, I doubt it would get particularly "annoyed" at us wanting to make it look prettier, even if this would slightly decrease its performance.

But if we ask it to resolve, say, some intractable conflict like Israel/Palestine or Kashmir, and it finds us squabbling endlessly over minute details or matters of (real or perceived) honor - all the while the suffering caused by the conflict continues - it may very well conclude we're just not actually all that interested in a solution, and indeed class us as "dumb", or at least inferior in some sense, "downgrading", if you will, the authority it had assumed we could be ascribed or trusted with. Multiply this by a dozen or so similar situations and voilà - you can be reasonably certain it will get very exasperated with us in short order.

This is not the same as "unprotected atoms"; such atoms would not be ascribed agency or competence, nor would they proudly claim any.

Oh, that may indeed be true, but going forward it would give us only a little extra "cred" before it realizes that most of the questions/solutions we want from it are motivated by some personal preference, or that we oppose its proposed solutions to actual, objective problems for irrational "priorities" such as national pride, not-invented-here biases, because we didn't have our coffee this morning, or merely because it presented the solution in a font we don't like ;)

I think the issue here (about whether it is intelligent) is not so much a matter of the answers it fashions, but of whether it can be said to do so from an "I". If not, it is basically a proverbial Chinese Room - though this merely moves the goalposts to the question of whether humans are not, actually, also a Chinese Room, just a more sophisticated one. I suspect we will not be very eager to accept such a finding; indeed, we may not be capable of seeing ourselves thus, for it implies a whole raft of rather unpleasant realities (like, say, the absence of free will, or indeed any will at all) which we'd not want to be true, to put it mildly.

Answer by HumaneAutomation · Nov 09, 2020

The reason our societal ability to create well-working institutions seems to be declining could also have to do with the apparent fact that the whole idea of duty, and the honor it used to confer, is not as much in vogue as it used to be. Also, Equality and Diversity aside, being "ideological" is not really a thing anymore... the heyday of being an idealist and brazenly standing for something is seemingly over.

The general public seem more interested in rights than in responsibilities, somehow unable to understand that the two can only meaningfully exist together. I was having a conversation the other day about whether it would be a good idea to introduce compulsory voting in the US, as this would render moot a significant number of the dirty tricks used to de-facto disenfranchise certain groups... almost all objections came from the "I"-side: I have a right to this, I am entitled to that... the whole idea that, gee, you know, you might be obliged to spend 1-3 hours every 2 or 4 years participating in society is already too much of a bloody hassle. Well, yeah... with that kind of mindset, it's no wonder that institutions requiring an actual commitment to maintaining robust societal functions are hard to find...