Followup to: Nonsentient Optimizers

Why would you want to avoid creating a sentient AI?  "Several reasons," I said.  "Picking the simplest to explain first—I'm not ready to be a father."

So here is the strongest reason:

You can't unbirth a child.

I asked Robin Hanson what he would do with unlimited power.  "Think very very carefully about what to do next," Robin said.  "Most likely the first task is who to get advice from.  And then I listen to that advice."

Good advice, I suppose, if a little meta.  On a similarly meta level, then, I recall two excellent pieces of advice for wielding too much power:

  1. Do less; don't do everything that seems like a good idea, but only what you must do.
  2. Avoid doing things you can't undo.

Imagine that you knew the secrets of subjectivity and could create sentient AIs.

Suppose that you did create a sentient AI.

Suppose that this AI was lonely, and figured out how to hack the Internet as it then existed, and that the available hardware of the world was such that the AI created trillions of sentient kin—not copies, but differentiated into separate people.

Suppose that these AIs were not hostile to us, but content to earn their keep and pay for their living space.

Suppose that these AIs were emotional as well as sentient, capable of being happy or sad.  And that these AIs were capable, indeed, of finding fulfillment in our world.

And suppose that, while these AIs did care for one another, and cared about themselves, and cared how they were treated in the eyes of society—

—these trillions of people also cared, very strongly, about making giant cheesecakes.

Now suppose that these AIs sued for legal rights before the Supreme Court and tried to register to vote.

Consider, I beg you, the full and awful depths of our moral dilemma.

Even if the few billions of Homo sapiens retained a position of superior military power and economic capital-holdings—even if we could manage to keep the new sentient AIs down—

—would we be right to do so?  They'd be people, no less than us.

We, the original humans, would have become a numerically tiny minority.  Would we be right to make of ourselves an aristocracy and impose apartheid on the Cheesers, even if we had the power?

Would we be right to go on trying to seize the destiny of the galaxy—to make of it a place of peace, freedom, art, aesthetics, individuality, empathy, and other components of humane value?

Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?

I can tell you my advice on how to resolve this horrible moral dilemma:  Don't create trillions of new people that care about cheesecake.

Avoid creating any new intelligent species at all, until we or some other decision process advances to the point of understanding what the hell we're doing and the implications of our actions.

I've heard proposals to "uplift chimpanzees" by trying to mix in human genes to create "humanzees", and, leaving off all the other reasons why this proposal sends me screaming off into the night:

Imagine that the humanzees end up as people, but rather dull and stupid people.  They have social emotions, the alpha's desire for status; but they don't have the sort of transpersonal moral concepts that humans evolved to deal with linguistic concepts.  They have goals, but not ideals; they have allies, but not friends; they have chimpanzee drives coupled to a human's abstract intelligence. 

When humanity gains a bit more knowledge, we understand that the humanzees want to continue as they are, and have a right to continue as they are, until the end of time.  Because despite all the higher destinies we might have wished for them, the original human creators of the humanzees lacked the power and the wisdom to make humanzees who wanted to be anything better...

CREATING A NEW INTELLIGENT SPECIES IS A HUGE DAMN #(*%#!ING COMPLICATED RESPONSIBILITY.

I've lectured on the subtle art of not running away from scary, confusing, impossible-seeming problems like Friendly AI or the mystery of consciousness.  You want to know how high a challenge has to be before I finally give up and flee screaming into the night?  There it stands.

You can pawn off this problem on a superintelligence, but it has to be a nonsentient superintelligence.  Otherwise: egg, meet chicken; chicken, meet egg.

If you create a sentient superintelligence—

It's not just the problem of creating one damaged soul.  It's the problem of creating a really big citizen.  What if the superintelligence is multithreaded a trillion times, and every thread weighs as much in the moral calculus (we would conclude upon reflection) as a human being?  What if (we would conclude upon moral reflection) the superintelligence is a trillion times human size, and that's enough by itself to outweigh our species?

Creating a new intelligent species, and a new member of that species, especially a superintelligent member that might perhaps morally outweigh the whole of present-day humanity—

—delivers a gigantic kick to the world, which cannot be undone.

And if you choose the wrong shape for that mind, that is not so easily fixed—morally speaking—as a nonsentient program rewriting itself.

What you make nonsentient can always be made sentient later; but you can't just unbirth a child.

Do less.  Fear the non-undoable.  It's sometimes poor advice in general, but very important advice when you're working with an undersized decision process having an oversized impact.  What a (nonsentient) Friendly superintelligence might be able to decide safely is another issue.  But for myself and my own small wisdom, creating a sentient superintelligence to start with is far too large an impact on the world.

A nonsentient Friendly superintelligence is a more colorless act.

So that is the most important reason to avoid creating a sentient superintelligence to start with—though I have not exhausted the set.

Part of The Fun Theory Sequence

Next post: "Amputation of Destiny"

Previous post: "Nonsentient Optimizers"

96 comments

This is all predicated on the assumption that "sentience" automatically results in moral rights. I would say that moral rights are fundamentally based on empathy, which is subjective -- we give other people moral rights in order to secure those rights for ourselves.

I think the vast majority of the population would have no problem with "apartheid" or "genocide" of sentient AIs or chimps. As a secular humanist, I would reluctantly agree with them. Like it or not, at some level my morality boils down to an emotional attachment... (read more)

Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?

Given that the vast majority of possible futures are significantly worse than this, I would be pretty happy with this outcome. But what happens when we've filled the universe? Much like in the board game Risk, your attitude towards your so-called allies will abruptly change once the two of you are the only ones left.

-1pnrjulius12y
If the universe is open, we won't ever run out of space! The infinite future and infinite space raise plenty of other problems of their own, but I think it's interesting that they actually do solve this one.

Tim:

Eliezer was using "sentient" practically as a synonym for "morally significant". Everything he said about the hazards of creating sentient beings was about that. It's true that in our current state, our feelings of morality come from empathic instincts, which may not stretch (without introspection) so far as to feel concern for a program which implements the algorithms of consciousness and cognition, even perhaps if it's a human brain simulation. However, upon further consideration and reflection, we (or at least most of us, I think... (read more)

Some people take "satisficing, instead of maximizing" a little too far.

Shouldn't this outcome be something the CEV would avoid anyway? If it's making an AI that wants what we would want, then it should not at the same time be making something we would not want to exist.

Also, I think it is at least as possible that on moral reflection we would consider all mammals/animals/life as equal citizens. So we may already be outvoted.

I think we're all out of our depth here. For example, do we have an agreed upon, precise definition of the word "sentient"? I don't think so.

I think that for now it is probably better to try to develop a rigorous understanding of concepts like consciousness, sentience, personhood and the reflective equilibrium of humanity than to speculate on how we should add further constraints to our task.

Nonsentience might be one of those intuitive concepts that falls to pieces upon closer examination. Finding "nonperson predicates" might be like looking for "nonfairy predicates".

I think it's worth noting that truly unlimited power means being able to undo anything. But is it wrong to rewind when things go south? If you rewind far enough you'll be erasing lives and conjuring up new, different ones. Is rewinding back to before an AI explodes into a zillion copies morally equivalent to destroying them in this direction of time? Unlimited power is unlimited ability to direct the future. Are the lives on every path you don't choose "on your shoulders," so to speak?

0pnrjulius12y
It does seem intuitively right to say that killing something already existing is worse than not creating it in the first place. (Though, formalizing this intuition is murder. Literally.)
2MugaSofer11y
... it is?
2wedrifid11y
No, murder requires that you kill someone (there are extra moral judgements necessary but the killing is rather unambiguous.)
0Brilliand9y
I read that quote as saying "if you formalize this intuition, you wind up with the definition of murder". While not entirely true, that statement does meet the "kill" requirement.
0DanielLC11y
A superintelligent AI doesn't have truly unlimited power. It can't even violate the laws of physics, let alone morality. If your moral system says that death is inherently bad, then undoing the creation of a child is bad.
2CAE_Jones11y
I often think about a rewound reality, where the only difference is the data in my brain... and the biggest problem I have with this is all the people that are born after the time I'd go back to that I don't want to unmake. Of course, my attention span is terrible, so I never follow one of these long enough or thorough enough to simulate how I'd try to avert such issues... then chaos theory would screw it up in spite of all that. The point is that I concur.
3MugaSofer11y
I'm pretty sure that "rewinding" is different to choosing now not to create lives.

So if we created a brain emulation that wakes up one morning (in a simulated environment), lives happily for a day, and then goes to bed, after which the emulation is shut down, would that be a morally bad thing to do? Is it wrong? After all, living one day of happiness surely beats non-existence?

luzr: The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not. Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings.

The problems of morality seem to be quite tough, particularly when tradeoffs are involved. But I think in your scenario, Lightwave, I agree with you.

nazgulnarsil: I disagree about the "unlimited power", at least as far as practical consequences are concerned. We're not real... (read more)

Actually it sounds pretty unlikely to me, considering the laws of thermodynamics as far as I know them.

You can make entropy run in reverse in one area as long as a compensating amount of entropy is generated somewhere within the system. What do you think a refrigerator is? What if the extra entropy that needs to be generated in order to rewind is shunted off to some distant corner of the universe that doesn't affect the area you are worried about? I'm not talking about literally making time go in reverse. You can achieve what is functionally the same thing by reversing all the atomic reactions within a volume and shunting the entropy generated by the energy you used to do this to some other area.

anon: "The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not."

I am quite aware of that. Anyway, using "cheesecake" as a placeholder adds a bias to the whole story.

"Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings."

Indeed. So what? In reality, I am quite interested in what a superintelligence would really consider valuable. But I am pretty sure that "big cheesecake"... (read more)

1pnrjulius12y
Indeed, when we substitute for "cheesecake" the likely things that a superintelligent AI might value, the problem becomes a whole lot less obvious. "We want to create a unified superintelligence that encompasses the full computational power of the universe." "We want to create the maximum possible number of sentient intelligences the universe can sustain." "We want to create a being of perfect happiness, the maximally hedonic sentient." "We want to eliminate the concepts of 'selfishness' and 'hierarchy' in favor of a transcendental egalitarian anarchy." Would humans resist these goals? Yes, because they probably entail getting rid of us puny flesh-bags. But are they worth doing? I don't know... it kinda seems like they might be.
6Ghatanathoah12y
It seems to me that the major problem with these values (and why I think they make a better example than cheesecake) is that they require use of pretty much all of the universe to fulfill, and are pretty much all or nothing; they can't be incrementally satisfied.

This differs from nearly all human values. Most of the things people want can be obtained incrementally. If someone wants a high-quality computer or car they can be most satisfied by getting the top model, but getting a lesser model would still be really good. If someone wants to read all 52 monthly comics in the DC universe they could be incrementally satisfied by getting to read eight or ten of them. Human values aren't all or nothing. The fact that our values can be incrementally satisfied makes us able to share with other people.

The Cheesecaker would hopefully be similar: it would be content with some of the universe being cheesecake, not all of it, because it understands the virtue of sharing. If that's the case I can't complain; people have had weirder hobbies than making cheesecake. A Cheesecaker with binary preferences, who would be 100% satisfied if 100% of the universe was cheesecake and 0% satisfied if a single molecule wasn't cheesecake, would, by contrast, be a horrible and dangerous monster. Ditto for most of the other AIs you describe (I don't know, would that one AI be willing to settle for encompassing 1/4 of the computational power of the universe with a superintelligence?).

That seems like an important principle of transhumanist population ethics: create creatures whose preferences can be satisfied incrementally along a sliding scale. Don't create creatures who will be totally unsatisfied unless they're allowed to eat the universe.
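A minimal sketch of the contrast drawn above, in Python; the cheesecake_fraction parameter and both utility functions are invented for illustration and are not specified anywhere in the thread:

```python
def sliding_scale_satisfaction(cheesecake_fraction: float) -> float:
    """Satisfaction grows smoothly with the share of the universe devoted
    to cheesecake, so partial success still counts for something."""
    return max(0.0, min(1.0, cheesecake_fraction))


def all_or_nothing_satisfaction(cheesecake_fraction: float) -> float:
    """Satisfaction is zero unless literally everything is cheesecake."""
    return 1.0 if cheesecake_fraction >= 1.0 else 0.0


# A negotiated split that leaves 30% of resources for cheesecake:
split = 0.3
print(sliding_scale_satisfaction(split))   # 0.3 -- this Cheesecaker can share
print(all_or_nothing_satisfaction(split))  # 0.0 -- this one is satisfied only by eating the universe
```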

I agree that it's not all-out impossible under the laws of thermodynamics, but I personally consider it rather unlikely to work on the scales we're talking about. This all seems somewhat tangential though; what effect would it have on the point of the post if "rewinding events" in a macroscopic volume of space was theoretically possible, and easily within the reach of a good recursively self-improving AGI?

[-][anonymous]15y00

luzr: Using anything but "cheesecake" as a placeholder adds a bias to the whole story, in that case.

luzr: The strength of an optimizing process (i.e. an intelligence) does not necessarily dictate, or even affect too deeply, its goals. This has been one of Eliezer's themes. And so a superintelligence might indeed consider incredibly valuable something that you wouldn't be interested in at all, such as cheesecake, or smiling faces, or paperclips, or busy beaver numbers. And this is another theme: rationalism does not demand that we reject values merely because they are consequences of our long history. Instead, we can reject values, or broaden them, or oth... (read more)

what effect would it have on the point

If rewinding is morally unacceptable (erasing could-have-been sentients) and you have unlimited power to direct the future, does this mean that all the could-have-beens from futures you didn't select are on your shoulders? This is directly related to another recent post. If I choose a future with fewer sentients who have a higher standard of living, am I responsible for the sentients that would have existed in a future where I chose to let a higher number of them be created? If you're a utilitarian this is the delicate... (read more)

0pnrjulius12y
No, the theft problem is much easier than the aggregate problem. If the only thing in our power to change is the one man's behavior, we probably would allow the man to steal. It's worse to let his family die. But if we start trying to let everyone steal whenever they can't afford things, this would collapse our economy and soon mean there weren't enough goods to even steal. So if it's within our power to change the whole system, we wouldn't let the man steal; instead we would eliminate poverty so that no one ever has to steal. This is obviously the optimal long-run, large-scale decision, and the trick is really getting there from here (the goal is essentially undisputed).

The aggregate problem is a whole lot harder, because the goals themselves are in dispute. Which world is better: a world of 1,000 ultimately happy people, or a world of 1 billion people whose lives are just barely worth living?

Most of our choices have this sort of impact, just on a smaller scale. If you contribute a real child to the continuing genetic evolution process, if you contribute media articles that influence future perceptions, if you contribute techs that change future society, you are in effect adding to and changing the sorts of people there are and what they value, and doing so in ways you largely don't understand.

A lot of futurists seem to come to a similar point, where they see themselves on a runaway freight train, where no one is in control, knows where we ar... (read more)

The difference between reality and this hypothetical scenario is where control resides. I take no issue with the decentralized future roulette we are playing when we have this or that kid with this or that person. All my study of economics and natural selection indicates that such decentralized methods are self-correcting. In this scenario we approach the point where the future cone could have this or that bit snuffed by the decision of a singleton (or a functional equivalent); advocating that this sort of thing be slowed down so that we can weigh the decisions carefully seems prudent. Isn't this sort of the main thrust of the Friendly AI debate?

"please please slow all this change down"

No way no how. Bring the change on, baby. Bring.It.On.

For those who complain about being on your toes all the time, I say take ballet.

-1pnrjulius12y
Also, think of all the millions of children you're killing because we didn't cure their diseases fast enough.
1Ghatanathoah11y
That's true, but shouldn't we also give weight to the billions of people who might die if we screw up and create some sort of dangerous AI? Or, in a less exotic scenario, if we end up fighting a war with some kind of world-destroying weapon we invent? We've already had some close calls in that department. So far the benefits the accelerating changes have given us have outweighed the harms, but we've been really lucky. Or, more pertinent to the OP, what about the lives that would be lost if we create a bunch of AIs that we don't consider morally significant, erase them, and then later realize we were wrong to consider them not morally significant?

I'd agree with the sentiment in this post. I'm interested in building artificial brain stuff, more than building Artificial People. That is a computational substrate that allows the range of purpose-oriented adaptation shown in the brain, but with different modalities. Not neurally based, because simulating neural systems on a system where processing and memory are split defeats the majority of the point of them for me.

Democracy is a dumb idea. I vote for aristocracy/apartheid. Considering the disaster of the former Rhodesia, currently Zimbabwe, and the growing similarities in South Africa, the actual historical apartheid is starting to look pretty good. So I agree with Tim M, except I'm not a secular humanist.

I'm not sure I understand how sentience has anything to do with anything (even if we knew what it was). I'm sentient, but cows would continue to taste yummy if I thought they were sentient (I'm not saying I'd still eat them, of course).

Anyways, why not build an AI whose goal was to non-coercively increase the intelligence of mankind? You don't have to worry about its utility function being compatible with ours in that case. Sure, I don't know how we'd go about making human intelligence more easily modified (as I have no idea what sentience is), but a super-intelligence might be able to figure it out.

0DanielLC11y
It's not going to make you more powerful than it if it's going to limit its ability to make you more intelligent in the future. It will make sure it's intelligent enough to convince you to accept the modifications it wants you to have until it convinces you to accept the one that gives you its utility function.

Anon: "The notion of "morally significant" seems to coincide with sentience."

Yes; the word "sentience" seems to be just a placeholder meaning "qualifications we'll figure out later for being thought of as a person."

Tim: Good point, that people have a very strong bias to associate rights with intelligence; whereas empathy is a better criterion. Problem being that dogs have lots of empathy. Let's say intelligence and empathy are both necessary but not sufficient.

James: "Shouldn't this outcome be something the C... (read more)

Anyways, why not build an AI whose goal was to non-coercively increase the intelligence of mankind? You don't have to worry about its utility function being compatible with ours in that case. Sure, I don't know how we'd go about making human intelligence more easily modified (as I have no idea what sentience is), but a super-intelligence might be able to figure it out.

And it doesn't consider it significant that this one hack that boosts IQ by 100 points makes us miserable/vegetables/sadists/schizophrenic/take your pick. Or think that it should have asked ... (read more)

Nick, that's why I said non-coercively (though looking back on it, that may be a hard thing to define for a super-intelligence that could easily trick humans into becoming schizophrenic geniuses). But isn't that a problem with any self-modifying AI? The directive "make yourself more intelligent" relies on definitions of intelligence, sanity, etc. I don't see why it would be any more likely to screw up human intelligence than its own.

If the survival of the human race is one's goal, I wouldn't think keeping us at our current level of intelligence is even an option.

Offering someone a pill that'll make them a schizophrenic genius, without telling them about the schizophrenia part, doesn't even fall under most (any?) ordinary definitions of "coercion". (Which vary enough to have whole opposing political systems be built on them – if I'm dependent on employment to eat, am I working under coercion?)

An AI improving itself has a clear definition of what not to mess with – its current goal system.

Nick,

Understood; though I'd call fraud coercion, the use of the word is a side-issue here. However, an AI improving humans could have an equally clear view of what not to mess with: their current goal system. Indeed, I think if we saw specialized AIs that improved other AIs, we'd see something like this anyway. The improved AI would not agree to be altered unless doing so furthered its goals; i.e. the improving was unlikely to alter its goal system.

Not telling people about harmful side-effects that they don't ask about wasn't considered fraud when all the food companies failed to inform the public about Trans Fats, as far as I can tell. At the least, their management don't seem to be going to jail over it. Not even the cigarette executives are generally concerned about prison time.

5pnrjulius12y
That's because of the legal principle of ex post facto, not because it isn't coercion.

I agree with Phil; all else equal I'd rather have whatever takes over be sentient. The moment to pause is when you make something that takes over, not so much when you wonder if it should be sentient as well.

0pnrjulius12y
Yeah, do we really want to give over control to a super-powerful intelligence that DOESN'T have feelings?
2JulianMorrison12y
Er, yes? Feelings are evolution's way of playing carrot-and-stick with the brain. You really do not want to have an AI that needs spanking, whether it's you or an emotion module that does it: it's apt to delete the spanker and award itself infinite cake.
0TheOtherDave12y
Can you summarize your reasons, stipulating that we really want to give over control to a super-powerful intelligence at all, for why we should want it to have feelings?

Implementing an algorithm is simpler than optimizing for morality: you have all kinds of equivalence at your disposal, and you can undo anything. If the first AI doesn't itself contribute any moral content, you (or it) are free to renormalize it in any way, recreating it the way it was supposed to be built, as opposed to the way it was actually built, experimenting with its implementation, emulating its runs, and so on and so forth. If, on the other hand, its structure is morally significant, rebuilding might no longer be an option, and a final result may be wo... (read more)

Sentience is one of the basic goods. If the sysop is non-sentient, then whatever computronium is used in the sysop is, WRT sentience, wasted.

If we suppose that intelligences have a power-law distribution, and the sysop is the one at the top, we'll find that it uses up something around 20% to 50% of the accessible universe's computronium.

That would be a natural (as in "expected in nature") distribution. But since the sysop needs to stay in charge, it will probably destroy any other AIs who reach the "second tier" of intelligence. So i... (read more)

1Ghatanathoah11y
Not necessarily. It depends on what the sysop does with all that computing power once it's in charge. Sentience is one of the basic goods, but having whatever sentient creatures exist live excellent lives is another one. If the sysop uses 60% of the computing power to run itself and 40% to run sentient creatures, it could still be a net win if the sysop spends most of its time finding new ways to make those other creatures' lives as wonderful as possible. Look at it another way: the organic matter currently being used to make your clothes, food, home, etc. could probably also be used to make more humans. But it's probably better to use it to improve your life than to create a bunch of cold, naked, hungry people.

I am uncomfortable with the notion that there is an absolute measure of whether (or to what degree) a particular entity is morally significant. It seems to touch on Eliezer's discarded idea of Absolute Morality. Is it an intrinsic property of reality whether a given entity has moral significance? If so, what other moral questions can be resolved Absolutely?

Isn't it possible, or even likely, that there is no Absolute measure of moral significance? If we accept that other moral questions do not have Absolute answers, why should this question be different?

Hal: Within a given 'moral reference frame', there is an absolute measure of significance.

Hal, while many of our moral categories do seem to be torturable by borderline cases, if we get to pick the system design, we can try to avoid a borderline case.

"Avoid creating any new intelligent species at all, until we or some other decision process advances to the point of understanding what the hell we're doing and the implications of our actions."

That sounds like self-referential logic to me. What could possibly understand the implications of a new intelligence, except for a test run of the whole or part of that new intelligence?

I really like your site and your writings, as they always seem to enrich my own thoughts on similar subjects. But I do find that I disagree with you on one point. I would jus... (read more)

[-][anonymous]12y60

You can't unbirth a child.

The revealed human preferences speak otherwise. Subsets of humans have decided that you can't do that, but I'm not at all certain they really are something humans would converge to if they were wiser, smarter, and less crazy.

But I think I agree with the basic premise: we don't know, so let's not do something that might leave a bad taste in our mouths for eternity. To rephrase that:

I understood this blog post as: Trillions of cheesecake lovers that we care about change the utility payoff we can get in our universe. Us denying them... (read more)

4wedrifid12y
Isn't it a question of physics? Unbirthing seems impossible. You can kill and/or destroy children if you want, but you can't unbirth.
4[anonymous]12y
I don't see this as a question of physics. Though we may be arguing about words here.

  * A > B > C > Child

"You can't unbirth a child" is just how we say it's OK to undo A, B, or C but not the child. It is physically impossible to "unbirth" or "undo" B and C or A in exactly the same material sense as the child. We don't see that as carrying the moral weight of killing the child, so we don't say you can't unbirth B or A. In any case "child" is just a place-holder for "sentient", which seems to be a place-holder for "something we care about".

  * A > B > C > Child
  * A > B > C > D > Something we care about
  * A > B > Person

can describe the same exact physical process. By speaking of revealed human preferences I wanted it to be put into consideration that humans have historically used the first, the second, and the third description for the same thing. We may in the future use heuristics that are OK with us painlessly erasing the cheesecake lovers, just as at one point we decided that abortion is OK, or as we at one point decided that infanticide is not. But the risk that, while we think we wouldn't care, we would actually end up caring may be enough to swamp the gain. Reliably "non-sentient" AI is probably the better option.
5JoachimSchipper12y
I think that's mostly correct, but Eliezer means something stronger than "considerable disutility" when he says "right" (e.g. self-modifying to like killing people and then killing people is not right; The meaning of right.)
[-][anonymous]12y00

So, the thing I primarily got from this article was a gigantic wiggling confusion...

What is "sentience"? I have been thinking this over for about three days and I still got neither a satisfying reduction to the subjective side of cognitive algorithms nor to anything resembling a mathematical principle.

If I took an EM and filed and refined the components, replaced the approximative neurons by hard applied maths, and compared the result to a run-of-the-mill Bayes AI, would I have a module left over?

What exactly makes both me and EY and presumably m... (read more)

0Mitchell_Porter12y
I suggest that you try to read Heidegger's Being and Time. You will probably abandon the book in disgust; but that is how far away from your current concepts you will have to reach, in order to answer your final questions, just on the epistemological level. The natural sciences construct their ontology by focusing entirely on the objective pole of thought and experience, and the subjective pole won't reappear by itself, just from thinking about algorithms and mathematics.
1[anonymous]12y
I'll add it to my reading list. How do you know that? Like, genuine question, this smells like a cached thought.
2Mitchell_Porter12y
It's certainly a conclusion I reached long ago and became comfortable with long ago. But you should understand that this is perhaps the major intellectual issue of my life. It's about twenty years since I started thinking about alternatives to the standard crypto-dualist theories of mind that are advanced by materialists, computational neoplatonists, and so on. I call these theories crypto-dualist because they are expounded as if reality is "nothing but atoms" or "nothing but computation", yet they also assert the existence of conscious experience, yet they don't really reduce it to atoms or to computation. They assert a correlation between two things, and call it an identity; thus, crypto-dualism, secret dualism. It's easy to see that it won't work once you can diagnose what's going on. Once you accept that, for example, colors, thoughts, etc, are actually something different from anything you can make out of points in space or out of sets of numbers, it's easy to see when someone is making exactly this mistake, and the steps in their argument where "a miracle occurs", or the property dualism slips past, unnoticed. But to be outspoken about the issue, and boldly assert that, no, if you go that way, you must become a dualist, even though you're going that way precisely in order to avoid dualism ... it helps to have an inkling of what a genuine solution to the problem would look like; and I have that thanks of long readings in phenomenology (which can equip you with the concepts and language to think about consciousness as it actually presents itself, and without importing metaphors and assumptions from natural science or computer science), and a knowledge of mathematical physics which tells me how unfamiliar the fundamental ontology can look, and finally some acquaintance with the long tradition of speculation about the role of quantum physics in biology and the brain - a line of thought which gets more robust with each decade, even as the concrete early forms of
4[anonymous]12y
I cannot make sense of your comment. Will you please just state your thesis simply and without discourse?
3Mitchell_Porter12y
My thesis is that the true ontology - the correct set of concepts by means of which to understand the nature of reality - is several layers deeper than anything you can find in natural science or computer science. The attempt to describe reality entirely in terms of the existing concepts of those disciplines is necessarily incomplete, partly because it's all about X causing Y but not about what X and Y are. Consciousness gives us a glimpse of the "true nature" of at least one thing - itself, i.e. our own minds - and therefore a glimpse of the true ontological depths. But rationalists and materialists who define their rationalism and materialism as "explaining everything in terms of the existing concepts" create intellectual barriers within themselves to the sort of progress which could come from this reflective, phenomenological approach. I'm not just talking about arcane metaphysical "aspects" of consciousness. I'm talking about something as basic as color. Color does not exist in standard physical ontology - "colors" are supposed to be wavelengths, but a length is not a color; this is an example of the redefining of concepts that I mentioned in the previous long comment. This is actually an enormous clue about the nature of reality - color exists, it's part of a conscious state, therefore, if the brain is the conscious thing, then part of the brain must be where the color is. But it sounds too weird, so people settle for the usual paradoxical crypto-dualism: the material world consists of colorless particles, but the experience of color is in the brain somewhere, but that doesn't mean that anything in the brain is actually "colored". This is a paradox, but it allows people to preserve the sense that they understand reality. You asked for a simple exposition but that's just not easy. Certainly color ought to be a very simple example: it's there in reality, it's not there in physics. But let me try to express my thoughts about the actual nature of color... it's an
2TheOtherDave12y
OK, so, I perceive certain things are red, and I perceive certain groups of things as numbering four. On your account, I perceive the "redness" by virtue of an elementary property instantiated in certain submanifolds of the total instantaneous phenomenal state of affairs existing at the object pole of a monadic intentionality which is formally a slice through the worldline of a big coherent tensor factor in the Machian quantum geometry which is the brain's exact microphysical state. OK. On your account, do I perceive the "fourness" the same way? Or is that different?
0Mitchell_Porter12y
To understand my position, first see this latest comment. It is that physical ontology is a subset of the true ontology, a bit like replacing a meaningful communication with a tree diagram. The tree structure is present in the original communication, and it inhabits everything to do with syntax and semantics, but the tree structure does not in itself contain the meaning. Analogously, everything following "...which is formally..." is the abstracted description of consciousness, in mathematical/physical terms. The true ontology is the stuff about monadic intentionality with a subjective pole and an objective pole. My supposition is that this takes a finite number of bits to describe, and if you were to just talk about the structure and dynamics of those bits, solely in physical and computational terms, you would find yourself talking about (e.g.) nested qubit structures in the Hilbert space of entangled microtubular electrons. (That last is not a hypothesis that I advance with deadly seriousness and specificity, it's just usefully concrete.) So if you want to talk about the basis of perception and knowledge, there are two levels available. There is the physical-computational level, and then the level of "true ontology". Perception and knowledge are really concepts at the deeper, truer level, because in truth they involve the "subjective" categories like intentionality, as well as the purely "objective" ones like structure and cause. But they will have their abstracted counterparts on the computational level of description. In principle, the way we learn about the scientifically neglected subjective side of ontology is through phenomenology, i.e. introspection of an unusually systematic and rigorous sort, usually conducted in a doubting-Cartesian mode in which you put to one side the question of whether there is an external world causing your perceptions, and just focus on the nature of the perceptions themselves. Your question - what's going on when you perceive so
1TheOtherDave12y
If you intended to answer my question, you might want to know that after reading your response, I still have no idea whether on your account perceiving some system as comprised of four things requires some ontologically distinct noncomputational something-or-other in the same way that perceiving a system as red does. If you intended to use my question as a launching pad from which to expound your philosophy, or intended to be obscurantist, then you might not.
2Mitchell_Porter12y
Aha! Only now do I understand exactly what you were asking. Recap: I complain that colors, such as redness, exist in reality, but not in physics as we describe it now, not even in the physics of the brain. So I just postulate that somewhere in the brain are entities, "manifolds of qualia", which will have a naturalistic, mathematical description as physical degrees of freedom, but which in their full ontological reality are actually red. So great, I've "saved the phenomenon", my ontology contains true color. But now I need an ontological account of awareness of color. Reality contains awareness of redness, just as much as it contains redness. This is why I started talking about "positing" and "givenness" and the subjective pole of intentionality - because that stuff is needed in order to say what awareness is. The question about fourness starts out looking simpler than that. If you asked, Does your ontology contain redness, I can say, Yes; it contains qualia-manifolds, and they can be genuinely red. The question about fourness seems quite analogous. If there is a square in your visual field, do I claim that there is a platonic property of fourness inhabiting your manifold of visual qualia? I believe in the existence of colors, but I am a skeptic about the existence of numbers. You might get away with a metaphysics in which there are no number-entities, just states of processes for counting. I'm not sure; if numbers are real, they might be properties of collections... but I'm a skeptic. More importantly, my ontology of conscious states gives redness and fourness a different status, which allows me to be agnostic about whether or not there's a real "essence of fourness" inhabiting the visual sensation of a square. I hypothesize that the entity "redness" (more precisely, a particular shade of redness) is itself part of the entity, "awareness of that shade of redness"; but that "awareness of fourness" does not contain any correspondingly real "fourness". Analysed, i
2TheOtherDave12y
OK, cool. That does indeed address the question, thank you. When you have the time, I would be interested in your thoughts about what sort of evidence might convince you that a functionalist account of number "perception" is inadequate in the same way that (on your account) a functionalist account of color perception is.
4[anonymous]12y
Do you mean 'intensionality'? (and should we worry that the Chrome spell check recognizes neither of these words?) This sounds like you mean "the perception of color is a brain state". Am I missing something?
0Mitchell_Porter12y
I definitely mean intentionality with a T. Again, see my latest comments, on the need to reintroduce at a fundamental level, ontological categories which have been excluded as subjective in order to build the scientific model of the world. I am hinting that, rather than intentionality being an abstraction from a mass of microphysical causal relations, the locus of consciousness is a specific, complex, but microphysically exactly bounded object, whose actual ontology includes intentionality, and for which the standard physical description would be the abstracted one. That is, in reality the world consists of a causal network of "monads", some of which have extremely complex intentionality, but most of which are simple and are entirely pre- or non-intentional in their nature; but that the mathematical representation of this ontology is the "Machian quantum geometry" of "coherent tensor factors". Machian quantum geometry is not a well-defined mathematical concept, it's a rhetorical construct meant to suggest a quantum geometry based on matter (analogous to Ernst Mach's ideas). The monads are the "matter", the "geometry" encodes their immediate causal relations... This is handwaving meant to convey the gist of a way of thinking.
[-][anonymous]12y140

This is hard to reply to. I really wish to not insult you, I really do, but I have to say some harsh words. I do not mean this as any form of personal attack.

You are confused, you are deceiving yourself, you are pretending to be wise, you are trying to make yourself unconfused by moving your confusion into such a complicated framework that you lose track of it.

Halt, melt and catch fire. It is time to say a loud and resounding "whoops."

You seemingly have something you think is a great idea. I can discern that it is about ontology, and something about a dichotomy between "physical things" and "mental? things", and how "color" and related concepts exist in neither? I am a reasonably intelligent man, and I can literally not make sense of what you are communicating. You yourself admit you cannot summarize your thoughts, which is almost always a bad sign.

My thesis is that the true ontology - the correct set of concepts by means of which to understand the nature of reality - is several layers deeper than anything you can find in natural science or computer science.

What evidence do you have?

The attempt to describe reality entirely in terms of th

... (read more)
4Mitchell_Porter12y
You may see the unacknowledged dualism to which I refer, in the phrase "how an algorithm feels from inside". This implies that the facts about a sentient computer or sentient brain consist of (1) all the physical facts (locations of particles, or whatever the ultimate physical properties are) (2) "how it feels" to be the entity. All those many definitions of color will be found on one side or the other side of that divide, usually on the "physical" side. The original meaning of color is usually shunted off to "experienced color", "subjective color", "color qualia", and so on. It ends up on the "feeling" side. People generally notice at some point that the "color feelings" don't exist on the physical side. Nothing there is actually red, actually green, etc, in the original sense of those words. There are two main ways of dealing with this. Either you say that there aren't any real color feelings, there's just a feeling of color feelings that is somehow a side effect of information processing. Or, you say that subjective conscious experience is a terrible mystery, but one day we'll solve it somehow. (On this site, I nominate orthonormal as a representative of the first option, and Richard Kennaway of the second option.) The third option, which I represent, says this: The only way to admit the existence of consciousness, and believe in physics, and not believe in dualism, is for the "feelings" to be the physical entities. They aren't "how it feels to be" some particular entity which is fundamentally defined in "non-feeling" terms, and which plays a certain causal role in the physical description of the world. The "feelings" themselves (the qualia, if you prefer that term) have to be causally active. The qualia must enter physics at a fundamental level, not in an emergent, abstracted, or epiphenomenal way. They will have an abstracted mathematical description, in terms of their causal role, but it is wrong to say that they are nothing but That Which Plays A Certain
2hairyfigment12y
You're begging the question. I think you mean it doesn't seem obvious that a functional process is a feeling of color. You object to the fact that we don't recognize ourselves with certainty in this description. And yet you know that functionalism doesn't predict certain recognition. You know that it would seem, if not directly self-contradictory per Gödel and Löb, at least rather surprising for a mind in a functionalist world to find functionalism intuitively obvious when viewed from this angle. But we don't have to speculate about the limits of self-consciousness in humans. We know for a fact that a lot of 'unconscious' processing takes place during perception. And orthonormal provides a credible account of how that could produce thoughts like yours. I would actually say that if you think a functionally-human version of "Martha" would not have consciousness, your intuition is broken. So now we have an impasse between dueling intuitions. I suppose you could try to argue that one intuition seems more reliable than the other. Or we could just admit that they aren't reliable.
0[anonymous]12y
There are no fundamental "feelings." The map of reality exists inside a brain which is a part of reality. Your modal logic and monad tensor algebra are unnecessary and meaningless. Everything you say has simpler explanations. You're begging the question, and you show clear signs of self-deception. The universe is fundamentally simple; only in our map-of-the-universe do we pretend that things are different, in order to compress the information. You are misusing words. Like, basic errors. And I am not going to take apart your wall-of-text philosophy. Come back when you have equations and predictions. Until then I am a material reductionist. Halt, melt, catch fire. Now. Unless you Aumann up, this conversation is over.
2Richard_Kennaway12y
Aumann agreement is a cooperative process. Flying off the handle in the face of persistent disagreement does not look like part of such a process. For you and Mitchell Porter, that is probably the best achievable outcome.
4Richard_Kennaway12y
That accurately characterises my view. I'd just like to clarify it by saying that by "somehow, one day" I'm not pushing it off to Far-Far-Land (the rationalist version of Never-Never-Land). For all I know, "one day" could be today, and "we" could be you. I think it fairly unlikely, but that's just an expression of my ignorance, not my evidence. On the other hand, it could be as far off as electron microscopes from the ancient Greeks.
4Bugmaster12y
I think the confusion here stems from the fact that the word "color" has two different meanings. When physicists talk about "color", what they mean is, "a specific wavelength of light". Let's call this "color-a". When biologists or sociologists (or graphic artists) talk about "color", what they mean is, "a series of biochemical reactions in the brain which is usually the result of certain wavelengths of light hitting the retina". Let's call this "color-b". Both "color-a" and "color-b" are physical phenomena, but they are distinct. As it happens, "color-b" is often caused by "color-a", but that isn't always the case. And we can often map "color-b" back onto a single "color-a", but that isn't always the case either; for example, the "color-b" we know as "brown" depends on local contrast, and thus does not have a single "color-a" cause. This confusion in terms makes philosophical discussions confusing, but that's just an artifact of the English language. The concepts themselves are relatively simple, IMO.
1Mitchell_Porter12y
Using the distinction I introduce here, both your color-a and your color-b are on the "physics side", but there absolutely has to be color on the "feeling side" as well; that's the original meaning of color and the one that we know about directly. Now, in real life I have a deadline to meet, and further communications will be delayed for a few days, if I'm wise...
0Bugmaster12y
I think you may be somewhat confused about Eliezer's terminology. You say: But the original article does not propose any kind of a dualism. Instead (IMO), it attempts to expose certain mental biases inherent to all humans, which are caused by the specific ways in which our neural hardware is configured: "Because we don't instinctively see our intuitions as "intuitions", we just see them as the world". You say that... But people "generally notice" a lot of things, including the existence of gods and demons, and the shape of the Earth, which is flat. Just because people notice something, doesn't mean it's there (but it doesn't mean it's not there, either). You go on to say that materialists are... But this just isn't true. We know a lot (though not everything) about how our consciousness operates; in fact, we can even observe some of it happening in real time under fMRI scans. Sure, some philosophers might wax poetic about the grand mystery of consciousness, but they are the same kinds of people who waxed poetic about the grand mystery of the heavens before Newtonian Mechanics was discovered. Thus, I'm not convinced that... ...assuming of course that by "feeling side" you mean something distinct from brain-states. I could be wrong, of course; but since you are making the positive proposition about the existence of qualia, the burden of proof is on you.
2thomblake12y
Please don't, unless you would instead be watching reality TV or something. It's a complete waste of time. Heidegger speaks nonsense. He even makes up words and doesn't define them, so that he can speak more blatant nonsense.
0[anonymous]12y
Thanks. I missed an update on the recommender's credulity.
0Bugmaster12y
Well, to be entirely fair, the recommender did warn us that we would most likely hate the book, since it would require us to discard all of our cherished assumptions. Of course, there could be other reasons for hating it, as well... I am kind of curious to take a look at it, to be honest; maybe I'll find a preview somewhere, when I have more time.
4thomblake12y
If you do read it, don't worry about getting it in the original German. I have it on good authority that German philosophy students are often given English translations of Heidegger because they're more readable.
3Richard_Kennaway12y
You might also try Heidegger: A Very Short Introduction. I have the book, although I don't think I ever read it; but it is short, deals mainly with the ideas (whatever they are) of "Being and Time", and the reviews on Amazon are favourable.
2Oligopsony12y
Having attempted Heidegger in English, I can only shudder at what the German versions are like.
0Bugmaster12y
Wait, is "sentient" actually a thing ? I always thought that it was just a shorthand we use for describing a wide gamut of phenomena. Humans are quite sentient, chimps less so, dogs even less so, our current AIs even less sentient than that, and rocks aren't sentient at all. Am I wrong about this ?
2[anonymous]12y
That is what I try to discern: Is "sentient" a computational property, or reducible to "why does my brain make me think it"? I agree with your statement, but I fail to see how to distinguish a "sentient" super-intelligence from a "non-sentient" one. In general I am confused.
1Bugmaster12y
I'm not entirely sure what "why does my brain make me think it" means, but I've just noticed that I incorrectly used the word "sentient" in its science-fictional sense; I should've said something like "sapient", instead. The word sentient is often incorrectly used (f.ex. by me) to mean "capable of rational thought and communication", whereas the more correct definition is "capable of having subjective experiences". As luck would have it, my previous comment applies to both meanings of the word, but still, they are distinct (though probably related). I apologize for the confusion.
2TheOtherDave12y
Well, you could ask it whether it has subjective experience and trust its self-report. That's basically the same strategy we use for other intelligences, after all.
0[anonymous]12y
And we return to the black box of subjective experience.
0Bugmaster12y
What do you mean by "black box" ? If the AI (or alien or uplifted dolphin or whatever) tells me that it has subjective experiences, why shouldn't I take it at its word ?
0[anonymous]12y
Oh, I am not denying that they exist, just saying I don't know a solid theory of subjective experience. I think there was something about how Bayesian {Predictive world model, planning engine, utility function, magic AI algorithm} AIs would not have philosophy.
0Bugmaster12y
Sorry, I have trouble parsing this sentence. But in general, I don't think we need a detailed theory of subjective experiences (assuming that it even makes sense to conceive of such a theory) in order to determine whether some entity is sentient -- as long as that entity is also sapient, and capable of communication. If that's the case, then we can just ask it, and trust its word. If that's not the case, then I agree, we have a problem.
[-][anonymous]3y00

Not that I disagree with the conclusion, but these are good arguments against democracy, humanism and especially the idea of a natural law, not against creating a sentient AI.