All of Ian_C.'s Comments + Replies

Stopping aid to Africa? It won't happen. Even people who fancy themselves rationalist still follow the Christian ethic that it's better to give something you earn to someone else than to keep it for yourself.

This ethic is irrational because to follow reason is to follow cause and effect: the person who caused the thing to exist (who earned it) should receive its effect (the thing itself).

Religion is possibly to blame for the idea that suspended judgment = superiority. Only God is omniscient, so only He knows things for sure, everyone else must act unsure and tentative.

Priests are allowed to pass judgment and still retain their authority, because they are the voice of God on earth. Maybe the idea of judges evolved from priests and retained that immunity.

In my experience, these attitudes pertain to what should be legal or illegal - what we should impose on others - rather than what we should program into a SAI. In the case of laws, for example, we are uncertain regarding the correct choice in certain situations, so it would be irrational to impose our choice on others. Similarly, we would not program an AI with our specific opinions on various topics, because even a single mistake could lead to disaster.

Evolution (as an algorithm) doesn't work on the indestructible. Therefore all naturally-evolved beings must be fragile to some extent, and must have evolved to value protecting their fragility.

Yes, a designed life form can have paper clip values, but I don't think we'll encounter any naturally occurring beings like this. So our provincial little values may not be so provincial after all, but common on many planets.

How are we meant to interpret the name? At first blush, I would take it to mean "Posts here are less wrong than average, but still wrong," which is not really encouraging for potential posters...

Also a workaround for anonymous posting might be to make an actual account called "anonymous" and publicize the password.

There were a number of anti-Bush comments in that video. Whatever you thought of him, there were no terrorist attacks for 7 years. Let's hope Obama can beat that record.

"Why does anything exist in the first place?" or "Why do I find myself in a universe giving rise to experiences that are ordered rather than chaotic?"

So... is cryonics about wanting to see the future, or is it about going to the future to learn the answers to all the "big questions?"

To those who advocate cryonics, if you had all the answers to all the big questions today, would you still use it or would you feel your life "complete" in some way?

I personally will not be using this technique. I will study philosophy and mathematics, and whatever I can find out before I die - that's it - I just don't get to know the rest.

The idea of making a mind-design n-space by putting various attributes on the axes, such as humorous/non-humorous, conceptual/perceptual/sensual, etc. -- how much does this tell us about the real possibilities?

What I mean is, for a thing to be possible, there must be some combination of atoms that can fit together to make it work. But merely making an N-space does not tell us about what atoms there are and what they can do.

Come to think of it, how can we assert anything is possible without having already designed it?
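The n-space point can be made concrete with a minimal sketch. The attribute names below are invented for illustration (the comment only names humorous/non-humorous and conceptual/perceptual/sensual): enumerating the coordinate space is trivial, but the enumeration itself carries no information about which points are physically buildable.

```python
from itertools import product

# Hypothetical attribute axes for a "mind-design space"
# (axis names are illustrative, not from any real taxonomy).
axes = {
    "humor": ["humorous", "non-humorous"],
    "mode": ["conceptual", "perceptual", "sensual"],
    "memory": ["eidetic", "lossy"],
}

# Every combinatorial point in the N-space.
points = [dict(zip(axes, combo)) for combo in product(*axes.values())]
print(len(points))  # 2 * 3 * 2 = 12 points

# Nothing here says which points are realizable: that depends on
# "what atoms there are and what they can do", which the coordinate
# system alone does not encode.
```

The sketch shows why the axes are cheap: twelve points exist on paper regardless of whether any arrangement of matter could implement them.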

But if the brain does not work by magic (of course), then insight does not either. Genius is 99% perspiration, 10,000 failed lightbulbs and all that...

I think the kind of experimental approach Jed Harris was talking about yesterday is where AI will eventually come from. Some researcher who has 10,000 failed AI programs on his hard drive will then have the insight, but not before. The trick is, once he has the insight, to not implement it right away but stop and think about friendliness! But after so many failures how could you resist...

Edison is not a good example of someone who produced insights.

Eliezer, I'm sure if you complete your friendly AI design, there will be multiple honorary PhDs to follow.

"as long as the differences in the new situation are things that were originally allowed to vary"

And all the things that were fixed are still present of course! (since these are what we are presuming are the causal factors)

'How many new assumptions, exactly, are fatal? How many new terms are you allowed to introduce into an old equation before it becomes "unvetted", a "new abstraction"?'

Every abstraction is made by holding some things the same and allowing other things to vary. If it allowed nothing to vary it would be a concrete not an abstraction. If it allowed everything to vary it would be the highest possible abstraction - simply "existence." An abstraction can be reapplied elsewhere as long as the differences in the new situation are thing... (read more)

"So if you suppose that a final set of changes was enough to produce a sudden huge leap in effective intelligence, it does beg the question of what those changes were."

Perhaps the final cog was language. The original innovation is concepts: the ability to process thousands of entities at once by forming a class. Major efficiency boost. But chimps can form basic concepts and they didn't go foom.

Because forming a concept is not good enough - you have to be able to do something useful with it, to process it. Chimps got stuck there, but we passed abstractions through our existing concrete-only processing circuits by using a concrete proxy (a word).

How clear is the distinction between knowledge and intelligence really? The whole genius of the digital computer is that programs are data. When a human observes someone else doing something, they can copy that action: data seems like programs there too.

And yet "cognitive" is listed as several levels above "knowledge" in the above post, and yesterday CYC was mocked as being not much better than a dictionary. Maybe cognition and knowledge are not so separate, but two perspectives on the same thing.

One difference between human cognition and human knowledge is that knowledge can be copied between humans and cognition cannot. That's not (necessarily) true for AIs.
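The "programs are data" point can be shown in a few lines. This is a minimal illustration of the computing claim only, not an analogy to how brains actually work; the function name and values are arbitrary.

```python
# A program represented as data: the same string can be stored,
# copied, and then executed.
source = "def double(x):\n    return 2 * x\n"  # program as a string (data)

namespace = {}
exec(source, namespace)        # the data becomes a runnable program
double = namespace["double"]
print(double(21))              # 42

# Copying the data copies the capability, the way an observer who
# watches an action performed can then reproduce it.
copied = {}
exec(source, copied)
print(copied["double"](5))     # 10
```

Copying the string between namespaces is the software analogue of knowledge being transferable between agents, which is exactly the asymmetry the comment notes for humans versus AIs.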

"Recursion that can rewrite the cognitive level is worth distinguishing."

Eliezer, would a human that modifies the genes that control how his brain is built qualify as the same class of recursion (but with a longer cycle-time), or is it not quite the same?

'I found my most productive fifteen minutes were when a friend said, out of nowhere, "want to see who can do the most work in 15 minutes?"'

That's interesting, because historically great works have been accomplished when a group of really talented people get together in the same place (e.g. Florence, Silicon Valley, Manhattan Project).

The Internet is great in that it enables you to find like-minded people and bounce ideas off them. But that's only half the achievement puzzle. The other half is pestering each other to work, which the Internet is not so good for.

Many rationalists (not saying Eli is one) are of the opinion that introspection is worthless (or at least suspect), so not surprising that trying to predict certain things doesn't occur to us.

While I totally agree with the sentiment of Eliezer's prayer, I don't think saying a prayer on Thanksgiving makes you religious or even implies a belief in God - it's just tradition. It's harmless to follow traditions as long as you are epistemologically strong enough not to be in any danger of confusing reality and myth. Just like it's safe for a person with very strong reason to read a lot of fiction.

Robin's concept of "Singularity" may be very broad, but your concept of "Optimization Process" is too.

I agree. Creativity is not just being random. The old masters used measurement and perspective when painting their masterpieces; they didn't just sit there humming at the sky, waiting for inspiration to strike them.

I think the idea that creativity is somehow mystical comes from a religious model of the human body. If you think your body has causal flesh and a supernatural/acausal soul, and that creativity comes from your soul (the part that is "you") then it follows that creativity comes from the acausal.

"So do we reason that the most u... (read more)

Earlier I said we are seeing things that are like what we make. But that's not a very useful definition implementation-wise.

My own approach to implementation is to define intelligence as the results of a particular act - "thinking" - and then introspect to see what the individual elements of that act are, and implement them individually.

Yes, I went to Uni and was told intelligence was search, and all my little Prolog programs worked, but I think they were oversimplifying. They were unacknowledged Platonists, trying to find the hidden essence, try... (read more)

"For there to be a concept, there has to be a boundary. So what am I recognizing?"

I think you're just recognizing that the alien artifact looks like something that wouldn't occur naturally on Earth, rather than seeing any kind of essence. Because Earth is where we originally made the concept, and we didn't need an essence there, we just divided the things we know we made from the things we know we didn't.

I think I see where the disconnect was in this conversation. Lanier was accusing general AI people of being religious. Yudkowsky took that as a claim that something he believed was false, and wanted Lanier to say what.

But Lanier wasn't saying anything in particular was false. He was saying that when you tackle these Big Problems, there are necessarily a lot of unknowns, and when you have too many unknowns reason and science are inapplicable. Science and reason work best when you have one unknown and lots of knowns. If you try to bite off too big a chunk at... (read more)

The ability to become emotionally detached is a useful skill (e.g. if you are being tortured) but when it becomes an automatic reflex to any emotion, it can take all the colour out of life.

Sometimes highly intelligent people are also overwhelmingly sensitive/empathetic so detaching is very tempting. The first few minutes of this video with the genius girl walking around the spaceship shows what it's like to be highly empathetic (Firefly).

But also: emotions come from the subconscious, and the subconscious contains... (read more)

I agree that there are certain moral rules we should never break. Human beings are not omniscient, so all of our principles have to be principles-in-a-context. In that sense every principle is vulnerable to a black swan, but there are levels of vulnerability. The levels correspond to how wide ranging the abstraction. The more abstract the less vulnerable.

Injunctions about truth are based on the metaphysical fact of identity, which is implied in every single object we encounter in our entire lives. So epistemological injunctions are the most invulnerable. T... (read more)

I don't think it's possible that our hardware could trick us in this way (making us do self-interested things by making them appear moral).

To express the idea "this would be good for the tribe" would require the use of abstract concepts (tribe, good) but abstract concepts/sentences are precisely the things that are observably under our conscious control. What can pop up without our willing it are feelings or image associations so the best trickery our hardware could hope for is to make something feel good.

The meta argument others have mentioned - "Telling the world you let me out is the responsible thing to do," would work on me.

Re: why rationality can't be learned by rote -

If you introspect on a process of reason, you see that you actually choose at each step which path of inquiry to follow next and which to ignore. Each choice takes the argument to the next step, ultimately driving it to completion. Reason is "powered by choice(TM)" which is why it is incoherent to argue rationally for determinism and also why it can't be learned by rote.

Software developers (such as myself) in our more abstract moments can think of reason as simply encoding ones premises as a string of... (read more)

Except the universe doesn't care how much backbreaking effort you make, only if you get the cause and effect right. Which is why cultures that emphasize hard work are not overtaking cultures that emphasize reason (Enlightenment cultures). Of course even these cultures must still do some work, that of enacting their cleverly thought out causes.

"If you are talking about a timescale of decades, than intelligence augmentation does seems like a worthy avenue of investment"

This seems like the way to go to me. It's like "generation ships" in sci-fi. Should we launch ships to distant star systems today, knowing that ships launched 10 years from now will overtake them on the way?

Of course in the case of AI, we don't know what the rate of human enhancement will be, and maybe the star isn't so distant after all.

I don't want to sign up for cryonics because I'm afraid I will be revived brain-damaged. But maybe others are worried they will have the social status of a freak in that future society.

Not that I am willing to sign up for cryonics, but I don't see this as a problem. Presumably some monkeys will be placed on ice at some point in the testing of defrosting, and you will not be defrosted until they are sure the defrosting side does not cause brain damage. Presumably there should also be some way of determining whether brain damage has occurred before defrosting happens, and hopefully no one with brain damage is defrosted until a way to fix it has been discovered. I suppose that if the brain damage could be fixed, you might lose some important information, which does leave the question of whether you are still you. However, if you believe that you are still yourself with the addition of new information, such as is received each day just by living, then you should likewise believe that you will still be yourself if information is lost. Also, one of the assumptions of cryonics is that the human lifespan will have been greatly expanded, so if you have major amnesia from the freezing, you can look at it as trading your life up to the point of freezing for one that is many multiples of it in length. This all assumes that cryonics works as intended, of which I am not convinced.

I don't believe IQ tests measure everything. There's a certain feeling when being creative, and when completing these tests I have not felt it, so I don't think it's measuring it.

Also, I am not sure intelligence is general. At the level of ordinary life it certainly is, but geniuses are always geniuses at something, e.g. maths, physics, composing. Why aren't they geniuses at everything?

Reminds me of this: "Listen, and understand. That terminator is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."

But my question would be: Is the universe of cause and effect really so less safe than the universe of God? At least in this universe, someone who has an evil whim is limited by the laws of cause and effect, e.g. Hitler had to build tanks first, which gave the allies time to prepare. In that other universe, Supreme Bei... (read more)

Short Answer: It's not. Longer Explanation: The way I understand it, the universe of God feels safer because we think of God as like us. In that world, there's a higher being out there. Since we model that being as having similar motivations, desires, etc., we believe that God will also follow some sort of morality and subscribe to basic ideas of fairness. So He'll be compelled to intervene in the case things get too bad. The existence of God also makes you feel less responsible for your fate. For example, if he chooses to smite you, there's nothing you can do. But in a universe of Math, if you don't take action, no higher being is going to step in to hurt/harm you.

When Yoda said "there is no try," I took it more literally. In the absence of human concepts there is no "try"; there are only things that act or don't act. Let go of your mind and all that.

"I understood that you could do everything that you were supposed to do, and Nature was still allowed to kill you."

You finally realized inanimate objects can't be negotiated with... and then continued with your attempt to rectify this obvious flaw in the universe :)

One theory has a track record of prediction, and what is being asked for is a prediction, so at first glance I would choose that one. But the explanation-based one is built on more data.

But it is neither prediction nor explanation that makes things happen in the real world, but causality. So I would look into the two theories and pick the one that looks to have identified a real cause instead of simply identifying a statistical pattern in the data.

The observations in this post gel with my experience also.

Middle managers can be the most short-sighted, penny-pinching, over-simplifying people in the world. But when you talk to CEOs they are often well-spoken, well-read, philosophical, long-term.

You ask them a business question and expect to get back balance sheets, dollars, etc. but instead you get something surprisingly wide-ranging/philosophical.

How can you tell if someone is an idiot not worth refuting, or a genius so far ahead of you that they sound crazy? Could we think an AI had gone mad, and reboot it, when it is really a genius?

It's deeper than science being only applicable to natural things -- reason as such is only applicable to natural things. Once you are in the realm of the supernatural anything is possible and the laws of logic don't necessarily hold. You have to just close your mouth and turn off your mind and have faith. Which does not give a teacher a lot of material to work with...

@denis bider: 'In the example you made, it appears as though you are using "superior" to mean "the one I like more" or "the one I think is worthy of praise" or "the one whose behavior should be encouraged".'

I was using it as in "an actual is better than a potential."

Having a high IQ doesn't make someone a "superior human being" in my opinion; it's what you do that counts. A man of average intelligence who starts a small business and employs some people is superior to a lazy genius.

Stephen: "the issue isn't whether it could determine what humans want, but whether it would care."

That is certainly an issue, but I think in this post and in Magical Categories, EY is leaving that aside for the moment, and simply focussing on whether we can hope to communicate what we want to the AI in the first place.

It seems to me that today's computers are 100% literal and naive, and EY imagines a superintelligent computer still retaining that property, but would it?

Is intelligence general or not? If it is, then an entity that can do molecular engineering but be completely naive about what humans want is impossible.

Completely misses the point.

Caledonian - in matters of the heart perhaps people go with emotion, merely rationalizing after the fact, but in other areas - career, finances, etc, I think most people try to reason it out. You need to have more faith in your fellow man :)

The whole question of "should" only arises if you have a choice and a mind powerful enough to reason about it. If you watch dogs it does sometimes look like they are choosing. For example if two people call a dog simultaneously it will look here, look there, pause and think (it looks like) and then go for one of the people. But I doubt it has reasoned out the choice, it has probably just gone with whatever emotion strikes it at the key moment.

In the real world, everything worth having comes from someone's effort -- even wild fruit has to be picked, sorted, and cleaned, and fish need to be caught, gutted, etc. I think this universal fact of required effort is probably part of the data we get the concept of fairness from in the first place, so reasoning in a space where pies pop into existence from nothing means whatever you conclude might not be applicable to the real world anyway.

In reality someone would have had to bake the pie, and it's fair that they get it since they put in the work. The problem is that the author, in creating the example, eliminated certain facts such as the baker in order to get to the essence of the problem. But the more facts you eliminate the more chance that something will appear arbitrary, due to fewer paths back to reality. It's the fallacy of the over-simplified model (no that's not a real fallacy :).