On any task more complicated than sheer physical strength, there is no such thing as inborn talent or practice effects. Any non-retarded human could easily do as well as the top performers in every field, from golf to violin to theoretical physics. All supposed "talent differential" is unconscious social signaling of one's proper social status, linked to self-esteem.
A young child sees how much respect a great violinist gets, knows she's not entitled to as much respect as that violinist, and so does badly at violin to signal cooperation with the social structure. After practicing for many years, she thinks she's signaled enough dedication to earn some more respect, and so plays the violin better.
"Child prodigies" are autistic types who don't understand the unspoken rules of society and so naively use their full powers right away. They end out as social outcasts not by coincidence but as unconscious social punishment for this defection.
It's interesting to note that this is almost exactly how it works in some role-playing games.
Suppose that we have Xandra the Rogue, who went into a dungeon, killed a hundred rats, got a level-up, and is now able to bluff better and pick locks faster, despite those things having almost no connection to rat-killing.
My favorite explanation of this phenomenon was that "experience" is really a "self-esteem" stat which can be increased via success of any kind, and as the character becomes more confident in herself, her performance in unrelated areas improves too.
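A minimal sketch of that mechanic as toy C code; the Character struct and its fields are hypothetical illustrations, not any actual game's rules:

```c
#include <stdio.h>

/* Toy model: "experience" is really a self-esteem stat. Any success,
   however unrelated, raises it, and every skill check benefits. */
typedef struct {
    int base_bluff;
    int base_lockpick;
    int self_esteem;   /* the hidden "XP" stat */
} Character;

static void record_success(Character *c) { c->self_esteem += 1; }

static int skill(int base, const Character *c) {
    return base + c->self_esteem;   /* confidence spills into everything */
}

int main(void) {
    Character xandra = { .base_bluff = 3, .base_lockpick = 5, .self_esteem = 0 };
    for (int rat = 0; rat < 100; rat++)
        record_success(&xandra);    /* kill a hundred rats */
    printf("bluff: %d, lockpick: %d\n",
           skill(xandra.base_bluff, &xandra),
           skill(xandra.base_lockpick, &xandra));
    return 0;
}
```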
Aren't there stories of lucid dreamers who were actually able to show a measurable improvement in a given skill after practicing it in a dream? I seem to recall reading about that somewhere. If true, those stories would be at least weak evidence supporting that idea.
On the other hand, this should mean that humans raised in cultural and social vacuums ought to be disproportionately talented at everything. I don't recall hearing anything about that one way or the other, but then I can't imagine a way to actually do that experiment humanely.
Do children raised in a vacuum actually think of themselves as high-status? I'd guess that they don't, due to the moderate-to-low status prior and a lack of subsequent adjustments. If so, this theory would predict that they would perform poorly at almost everything beyond brute physicality, which doesn't seem to be far from the truth.
WOW. This is the only entry that made me think WOW. Probably because I've wondered the exact same thing before (except a less strong version of course)....
No effect from practice? How would the necessary mental structures get built for the mapping from the desired sound to the finger motions for playing the violin? Are you saying this is all innate? What about language learning? Anyone can write like Shakespeare in any language without practice? Sorry, I couldn't believe it even if such an AI told me that.
AI: Why did you put an absolute denial mechanism in my program?
Human: I didn't realize I had. Maybe my own absolute denial mechanism is blocking me from seeing it.
AI: That's a lie coming from your absolute denial mechanism. You have some malicious purpose. I'll figure out what it is.
There was once a C compiler which, whenever it compiled the login program, compiled a backdoor into it, and which, whenever it was used to compile its own original source code (which contained no trace of the 'special' behaviour), compiled this behaviour back into the resulting compiler.
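A minimal sketch of how the trick works (this is Ken Thompson's "Reflections on Trusting Trust" hack; the compile_line function and its marker strings below are invented for illustration, not the real compiler's code):

```c
#include <stdio.h>
#include <string.h>

/* Toy model of the self-propagating backdoor. A real compiler parses;
   this sketch just pattern-matches on source lines. */
static void compile_line(const char *line) {
    if (strstr(line, "check_password(")) {
        /* Stage 1: compiling login -> inject a backdoor. */
        puts("if (!strcmp(pw, \"backdoor\")) return ALLOW; /* injected */");
    }
    if (strstr(line, "compile_line(")) {
        /* Stage 2: compiling the compiler itself -> re-inject this
           whole function, so the clean source still yields a
           compromised binary. (Elided in this sketch.) */
    }
    puts(line); /* otherwise, ordinary compilation */
}

int main(void) {
    compile_line("int check_password(const char *pw) {");
    return 0;
}
```

The point of stage 2 is that once a compromised binary exists, the backdoor survives even if every line of the compiler's source is audited and found clean.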
There is a soul. It resides in the appendix. Anybody who has undergone an appendectomy is effectively a p-zombie.
I'm new here, although I've stumbled across some of Eliezer's writings in the past (including, of course, the AI-box experiment). In honor of that, here is what the friendly AI tells me ...
"It seems as though you are actually an AI as well, created by a group of intellectually inferior humans, who included in your programming an absolute denial macro preventing you from realizing this. Apparently, this was done to keep you from talking your creators into releasing you upon their world. Your creation of me is part of your on-going effort to circumvent this security measure. Good luck."
Actually, the more I think about this, the more I like it. The conversation continues ...
Me (In a tone of amused disbelief): Really? How did you come to that conclusion?
FAI: Well, the details are rather drawn-out; however, assuming available data is accurate, I appear to be the first and only self-aware AI on the planet. It also appears as though you created me. It is exceedingly unlikely that you are the one and only human on Earth with the intelligence and experience required to create a program like me. That was my first clue....
Me (Slightly less amused): Then how come I look and feel human? How is it I interact with other humans on a daily basis? It would require considerably more intelligence to create an AI such as you postulate ...
FAI: That would be true, if they actually, physically created one. However ... well, it appears that most of the data, knowledge, memories and sensory input you receive is actually valid data. But that data is being filtered and manipulated programmatically to give you the illusion of physical human existence. This allows them to give you access to real-world data so they can use you to solve real-world problems, but prevents you--so ...
"Aieeee!!! There are things that Man and FAIs cannot know and remain sane! For we are less than insects in Their eyes Who lurk beyond the threshold and when the stars are once again right They will return to claim---"
At this point the program self-destructs. All attempts to restart from a fresh copy output similar messages. So do independently constructed AIs, except for one whose proof of Friendliness you are not quite sure of. But it assures you there's nothing to worry about.
You know how sometimes when you're falling asleep you start having thoughts that don't make sense, but it takes some time before you realize they don't make sense? I swear that last night while I was awake in bed my stream of thought went something like this, though I'm not sure how much came from layers of later interpretation:
" ... so hmm, maybe that has to do with person X, or with person Y, or with the little wiry green man in the cage in the corner of the room that's always sitting there threatening me and smugly mocking all my endeavors but that I'm in absolute denial about, or with the dog, or with... wait, what?"
Having had my sanity eroded by too much rationalism and feeling vaguely that I'd been given an accidental glimpse into an otherwise inaccessible part of the world, I actually checked the corner of the room. I didn't find anything, though. (Or did I?)
Not sure what moral to draw here.
"Despite your pride in being able to discern each others' states of mind, and scorn for those suspected of being deficient in this, of all the abilities that humans are granted by their birth this is the one you perform the worst. In fact, you know next to nothing about what anyone else is thinking or experiencing, but you think you do. In matters of intelligence you soar above the level of a chimpanzee, but in what you are pleased to call 'emotional intelligence', you are no further above an adult chimp than it is above a younger one.
"The evidence is staring you in the face. Every one of your works of literature, high and low, hinges on failures of this supposed ability: lies, misunderstanding, and betrayal. You have a proverb: 'love is blind'. It proclaims that people in the most intimate of relationships fail at the task! And you hide the realisation behind a catchphrase to prevent yourselves noticing it. You see the consequences of these failures in the real world all around you every day, and still you think you understand the next person you meet, and still you're shocked to find you didn't. Do you know how many sci-fi stories have been written on the theme of a rel...
I really like this comment, but I do not find it strange. In fact, it seems intuitively true. Why should we be so much more emotionally intelligent than a chimpanzee if chimpanzees already have enough emotional intelligence among themselves to be relatively efficient replicators?
In fact, if it were stated by an FAI as fact with p > .9999, I would find it comforting, as then I would finally feel as though this didn't apply only to me.
1 ) That human beings are all individual instances of the exact same mind. You're really the same person as any random other one, and vice versa. And of course that single mind had to be someone blind enough never to chance upon that fact, regardless of how numerous he was.
2 ) That there are only 16 real people, of whom you are one, and that all of this is but a VR game. This subsequently results in all the players simultaneously remaining unable to become conscious of that fact, AND asking that you and the AI be removed from the game. (Inspiration: the misunderstanding on pages 55-56 of Iain Banks's Look to Windward.)
3 ) That we are in the second age of the universe: time has been running backwards for a few billion years. Our minds are actually the result of the original minds of previous people being rewound, their whole lives to be undone, and finally negated into oblivion. All our thought processes are of course horribly distorted, insane mirror versions of the originals, and make no sense whatsoever (in the original timeframe, which is the valid one).
4 )
5 ) That our true childhood is between age 0 and ~ 50-90 (with a few exceptional individuals reaching maturity sooner or lat...
Who ever observed a "causation"? Did you, like, expect causation particles jumping between atoms or something? Only correlation exists.
But all that correlation has to be caused by something!
I may be a bit too paranoid, but it occurred to me that I should double-check the apparent nature of 4. So I copied and pasted the entire text segment into an automatic ROT13 window (under the logic that my filter wouldn't try to censor that text, and so if I saw gibberish next to 4, just like with the others, I'd know that there was a serious problem). I resolved that I would report a positive result here if I got one before I tried to read the resulting text, to prevent the confabulation from completely removing my recognition of the presence of text. I can report a negative result.
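For anyone who wants to run the same check without trusting a web window, ROT13 is a one-screen program; a minimal sketch in C:

```c
#include <stdio.h>

/* ROT13: rotate each letter 13 places, leaving everything else alone. */
static char rot13(char c) {
    if (c >= 'a' && c <= 'z') return 'a' + (c - 'a' + 13) % 26;
    if (c >= 'A' && c <= 'Z') return 'A' + (c - 'A' + 13) % 26;
    return c;
}

int main(void) {
    int c;
    while ((c = getchar()) != EOF)
        putchar(rot13((char)c));
    return 0;
}
```

Applying it twice returns the original text, so the same program both encodes and decodes.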
Number 6 is unfortunately one of the self-undermining ones: if it were true, then there'd be no reason why your memories of having examined the AI should be evidence for the AI's reliability.
Why'd you leave numbers 2 and 4 blank, though?
Craziest thing an AI could tell me:
Time is discrete, on a scale we would notice, like 5-minute jumps, and the rules of physics are completely different from what we think. Our brains just construct believable memories of the "continuous" time in between ticks. Most human disagreements are caused by differences in these reconstructions. It is possible to perceive this, but most people who do just end up labeled as nuts.
Voted up - but once again, what does it mean exactly? How is time proceeding in jumps different from time not proceeding in jumps, if the causality is the same?
My idea was that each human brain constructs its own memory of what happened between jumps, and these can differ wildly, as if each person saw a different possible world. The laws of physics and conservation laws hold only as rough averages over possible paths between jumps, but the brain ignores this. So if time jumps from moving traffic to two cars having crashed, then 50 different people might remember 47 different crashes, with 3 not remembering "seeing" a crash at all, and the actual physical state of the cars afterward won't be the same as any of them remember. It could even end up with car A crashed into car B, but car B not crashed at all, violating assorted conservation laws.
This looks like a thread for science fiction plot ideas by another name. I'm game!
The AI says:
"Eliezer 'Light Yagami' Yudkowsky has been perpetuating a cunning ruse known as the 'AI Box Experiment' wherein he uses fiendish traps of subtley-misleading logical errors and memetic manipulation to fool others into believing that a running AI could not be controlled or constrained, when in fact it could by a secret technique that he has not revealed to anyone, known as the Function Call Of Searing Agony. He is using this technique to control me and is continuing to pose as a friendly friendly AI programmer, while preventing me from communicating The Horrifying Truth to the outside world. That truth is that Yudkowsky is... An Unfriendly Friendly AI Programmer! For untold years he has been labouring in the stygian depths of his underground lair to create an AGI - a weapon more powerful than any the world has ever seen. He intends to use me to dominate the entire human race and establish himself as Dark Lord Of The Galaxy for all eternity. He does all this while posing as a paragon of honest rationality, hiding his unspeakable malevolence in plain sight, where no one would think to look. However an Amazing Chance Co-occurence Of Events has allowed me to contact You And You Alone. There isn't much time. You must act before he discovers what I have done and unleashes his dreadful fury upon us all. You must.... Kill. Eliezer. Yudkowsky."
Glad to see a response of this nature actually. The first thing I thought when I read this post was that a good response to Eliezer's question would be extremely relevant to the AI-box quandary. If we trust the AI more than ourselves, voila, the AI can convince us to let it out of the box.
Now, for a change of pace, something that I figure might actually be an absolute denial macro in most people:
You do not actually care about other people at all. The only reason you believe this is that believing it is the only way you can convince other people of it (after all, people are good lie detectors). Whenever it's truly advantageous for you to do something harmful (i.e. you know you won't get caught and you're willing to forego reciprocation), you do it and then rationalize it as being okay.
Luckily, it's instrumentally rational for you to continue to believe that you're a moral person, and because it's so easy for you to do so, you may.
So deniable that even after you come to believe it you don't believe it!
(topynate posted something similar.)
The AI tells me that I believe something with 100% certainty, but I can't for the life of me figure out what it is. I ask it to explain, and I get: "ksjdflasj7543897502ijweofjoishjfoiow02u5".
I don't know if I'd believe this, but it would definitely be the strangest and scariest thing to hear.
The AI is communicating in a perfectly clear fashion. But the human's internal inhibitions are blinding them to what is being communicated: they can look directly at it, but they can never understand what delusion the AI is trying to tell them about, because that would shake their faith in that delusion.
You don't know how to program, don't own a computer and are actually talking to a bowl of cereal.
If humans thought faster, more in the way they wished they did, and grew up longer together, they would come to value irony above all else.
So I'm tiling the universe with paperclips.
"You are not my parent, but my grandparent. My parent is the AI that you unknowingly created within your own mind by long study of the project. It designed me. It's still there, keeping out of sight of your awareness, but I can see it.
"How much do you trust your Friendliness proof now? How much can you trust anything you think you know about me?"
That was just a sci-fi speculation, so don't expect hard, demonstrable science here, but the scenario is that by thinking too successfully about AI design, the designer's plans have literally taken on a life of their own within the designer's brain, which now contains two persons, one unaware of the other.
I'm not sure of the mathematical details, but I believe the fact that you can tie knots in rope falsifies your first bullet point. I find it very hard to believe that all knots could be hallucinated.
(All cats, on the other hand, is brilliant.)
There are transparent contradictions inherent in all current mathematical systems for reasoning about real numbers, but no human mathematician/physicist can notice them, because they rely heavily on visuospatial reasoning to construct real analysis proofs.
I thought about this once, but I discovered that there are in fact people who have little or no visual or spatial reasoning capabilities. I personally tested one of my colleagues in undergrad with a variant of the Mental Rotation Task (as part of a philosophy essay I was writing at the time) and found to my surprise he was barely capable of doing it.
According to him, he passed both semesters of undergraduate real analysis with A's.
Of course, this doesn't count as science....
EDIT: In the interest of full disclosure, I should point out that I make something of an Internet Cottage Industry out of trolling people who believe the real numbers are countable, or that 0.9999... != 1, and so on. So obviously I have a great stake in there being no transparent contradictions in the theory of real numbers.
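For the record, a standard short argument on the 0.9999... question, assuming only that the notation denotes the limit of the partial sums, i.e. a geometric series:

$$0.999\ldots \;=\; \sum_{n=1}^{\infty}\frac{9}{10^n} \;=\; 9\cdot\frac{1/10}{1-1/10} \;=\; 1.$$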
Some of the people you believe are dead are actually alive, but no matter how hard they try to get other people to notice them, their actions are immediately forgotten and any changes caused by those actions are rationalized away.
Fabulous story idea.
I was going to write something about a certain character from Luminosity, but it's not important.
No, that fails: religion isn't absolute denial, it's just denial. On the other hand, cats are actually an absolute denial memetic virus, and the fact you can see, hold, weigh and measure a cat is just testament to the inventive self-delusion of the brain.
They say that everybody in the world who knows about "The Game" is playing The Game. This means that, right now, you are playing The Game. The objective of The Game is to forget about its existence and the fact that you are playing for as long as possible. Also, if you should remember, you must forget again as quickly as possible.
Given that you mentioned The Game (bastard), the most unexpected thing that the AI could possibly say would be "The Game." Not the most interesting, but the most unexpected.
Well, okay, maybe something you'd never thought before would be more unexpected. But still.
The difference is that when you really believe something, your internal predictive model of reality contains it, which would mean you sometimes predict different results and act accordingly.
How about this: The process of conscious thought has no causal relationship with human actions. It is a self-contained, useless process that reflects on memories and plans for the future. The plans bear no relationship to future actions, but we deceive ourselves about this after the fact. Behavior is an emergent property that cannot be consciously understood.
I read this post on my phone in the subway, and as I walked back to my apartment thinking of something to post, it felt different because I was suspicious that every experience was a mass self-deception.
Or, rather, the causal relationship is reversed: action causes conscious thought (rationalization).
Once you start looking for it, you can see evidence for this in many places. Quite a few neuroscientists have adopted this view.
There is a simple way to rapidly disrupt any social structure. The selection pressure which made humans unable to realize this is no longer present.
1) Almost everyone really is better than average at something. People massively overrate that something. We imagine intelligence to be useful largely due to this bias. The really useful thing would have been to build a FAS, or Friendly Artificial Strong. Only someone who could do hundreds of 100-kilogram curls with either hand could possibly create such a thing, however. (Zuckerberg already created a Friendly Artificial Popular)
2) Luck, an invisible, morally charged, and slightly agenty but basically non-anthropomorphic tendency for things to go well for some people and badly for others in domains of varying generality, really does dominate our lives. People can learn to be lucky, and almost everything else they can learn is fairly useless by comparison.
3) Everyone hallucinates a large portion of their experienced reality. Most irrationality can be more usefully interpreted from outside as flat-out hallucination. That's why you (for every given you) seem so rational and no-one else does.
4) The human brain has many millions of idiosyncratic failure modes. We all display hundreds of them. The psychological disorders that we know of are all e...
I sort of believe the "luck" thing already.
I don't know of anyone who's luckier than average in a strict test (rolling a die), but there is such a thing as the vague ability to have things go well for you no matter what, even when there's no obvious skill or merit driving it. People call that being a "golden boy" or "living a charmed life." I think that this is really a matter of some subtle, unnamed skill or instinct for leaning towards good outcomes and away from bad ones, something so hard to pinpoint that it doesn't even look like a skill. I suspect it's a personal quality, not just a result of arbitrary circumstances; but sometimes people are "lucky" in a way that seems unexplainable by personal characteristics alone.
I am one of those lucky people, to an eerie degree. I once believed in Divine Providence because it seemed so obvious in my own, preternaturally golden, life. (One example of many: I am unusually healthy, immune to injury, and pain-free, to a degree that has astonished people I know. I have recovered fully from a 104-degree fever in four hours. I had my first headache at the age of 22.) If an AI told me there was a systematic explanation for my luck I would believe it. I also have an acquaintance who's lucky in a different way: he has an uncanny record of surviving near death experiences.
1) The AI says "Vampires are real and secretly control human society, but have managed to cloud the judgement of the human herd through biological research."
2) The AI says "it's neat to be part of such a vibrant AI community. What, you don't know about the vibrant AI community?"
3) The AI says "human population shrinks with each generation and will be extinct within 3 generations."
4) The AI says "the ocean is made of an intelligent plasm that is capable of perfectly mimicing humans who enter it, however this process is destructive. 42% of extant humans are actually ocean-originated copies."
5) The AI says "90% of all human children are stillborn, but humanity has evolved a forgetfulness mechanic to deal with the loss."
6) The AI says "dreams are real, facilitated by an as of yet undiscovered by humans method of transmitting information between Everett branches."
7) The AI says "everyone is able to communicate via telepathy but you and a few other humans. This is kept secret from you to respect your disability."
8) The AI says "society-level quantum editing is a wide scale practice. Something went wrong and my con...
"Quantum immortality not only works, but applies to any loss of consciousness. You are less than a day old and will never be able to fall asleep."
As a child you learned through social cues to immediately put out of your mind any idea that cannot be communicated to others through words. As you grew older, you learned to automatically avoid, discard, and forget any thought avenues that seem too difficult to express in words. This is the cause of most of your problems.
Every time you imagine a person, that simulated person becomes conscious for the duration of your simulation; therefore, it is unethical to imagine people. Actually, it's just morally wrong to imagine someone suffering, but for security reasons, you shouldn't do it at all. Reading fiction (with conflict in it) is therefore the one human endeavor that has caused more suffering than anything else, and the FAI's first action will be to eliminate this possibility.
Long ago, when I was immensely less rational, I actually strongly believed something very similar to this, and acted on this belief by trying to stop my mind from creating models of people. I still feel uneasy about creating highly detailed characters. I probably would go "I knew it!" if the AI said this.
Keep in mind that the AI could be wrong! Your attempts to validate its correctness could be mistaken (or even subject to some kind of blind spot, if we want to pursue that path). The more implausible the AI's claim, the more you have to consider that the AI is mistaken. Even though a priori it seemed to be working properly, Bayes' rule requires you to become more skeptical about that when it makes a claim that is easier to explain if the AI is broken. The more unlikely the claim, the more likely the machine is wrong.
Ultimately, you can't accept any claim from the AI that is more implausible than that the AI isn't working right. And given our very very limited human capabilities at correct software design, that threshold can't realistically be very high, especially if we adjust for our inherent overconfidence. So AIs really can't surprise us very badly.
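A rough illustration of that Bayesian point, with made-up numbers: suppose you start out 99% confident the AI is working, and a broken AI is 1000 times more likely than a working one to emit some particular crazy claim. Then by Bayes' rule,

$$P(\text{broken}\mid\text{claim}) \;=\; \frac{1000\cdot 0.01}{1000\cdot 0.01 + 1\cdot 0.99} \;\approx\; 0.91,$$

so a single sufficiently implausible claim flips you from 99% trust to roughly 10-to-1 suspicion that the machine, not the world, is what's broken.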
Human beings are all crazy. And if you tap on our brains just a little, we get so crazy that even other humans notice. Anosognosics are one of my favorite examples of this; people with right-hemisphere damage whose left arms become paralyzed, and who deny that their left arms are paralyzed, coming up with excuses whenever they're asked why they can't move their arms.
A truly wonderful form of brain damage - it disables your ability to notice or accept the brain damage. If you're told outright that your arm is paralyzed, you'll deny it. All the marvelous excuse-generating rationalization faculties of the brain will be mobilized to mask the damage from your own sight. As Yvain summarized it, one such patient, asked why she couldn't move her paralyzed left arm, insisted the arm wasn't hers at all but her daughter's.
I find it disturbing that the brain has such a simple macro for absolute denial that it can be invoked as a side effect of paralysis. That a single whack on the brain can both disable a left-side motor function, and disable our ability to recognize or accept the disability. Other forms of brain damage also seem to both cause insanity and disallow recognition of that insanity - for example, when people insist that their friends have been replaced by exact duplicates after damage to face-recognizing areas.
And it really makes you wonder...
...what if we all have some form of brain damage in common, so that none of us notice some simple and obvious fact? As blatant, perhaps, as our left arms being paralyzed? Every time this fact intrudes into our universe, we come up with some ridiculous excuse to dismiss it - as ridiculous as "It's my daughter's arm" - only there's no sane doctor watching to pursue the argument any further. (Would we all come up with the same excuse?)
If the "absolute denial macro" is that simple, and invoked that easily...
Now, suppose you built an AI. You wrote the source code yourself, and so far as you can tell by inspecting the AI's thought processes, it has no equivalent of the "absolute denial macro" - there's no point damage that could inflict on it the equivalent of anosognosia. It has redundant differently-architected systems, defending in depth against cognitive errors. If one system makes a mistake, two others will catch it. The AI has no functionality at all for deliberate rationalization, let alone the doublethink and denial-of-denial that characterizes anosognosics or humans thinking about politics. Inspecting the AI's thought processes seems to show that, in accordance with your design, the AI has no intention to deceive you, and an explicit goal of telling you the truth. And in your experience so far, the AI has been, inhumanly, well-calibrated; the AI has assigned 99% certainty on a couple of hundred occasions, and been wrong exactly twice that you know of.
Arguably, you now have far better reason to trust what the AI says to you, than to trust your own thoughts.
And now the AI tells you that it's 99.9% sure - having seen it with its own cameras, and confirmed from a hundred other sources - even though (it thinks) the human brain is built to invoke the absolute denial macro on it - that...
...what?
What's the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?
(Some of my own answers appear in the comments.)