In other words, "what is well-being?", in such terms that we can apply it to a completely alien situation. This is an important issue.
One red herring, I think, is this:
One major set of experimental results in hedonic psychology has to do with overestimating the impact of life events on happiness.
That could be read two ways. One way is the way that you and these psychologists are reading it. Another interpretation is that the subjects estimated the impact on their future well-being correctly, but after the events, they reported their happi... (read more)
@billswift: You were right about Pavlina. I discovered that as I read more of his stuff.
@RT Wolf: Thanks for the Pavlina link. It looks fascinating so far.
Apparently the people who played gatekeeper previously held the idea that it was impossible for an AI to talk its way out. Not just for Eliezer, but for a transhuman AI; and not just for them, but for all sorts of gatekeepers. That's what is implied by saying "We will just keep it in a box".
In other words, and not meaning to cast any aspersions, they all had a blind spot. Failure of imagination, perhaps.
This blind spot may have been a factor in their loss. Having no access to the mysterious transcripts, I won't venture a guess as to how.
a "logically possible" but fantastic being â a descendent of Ned Block's Giant Lookup Table fantasy...
First, I haven't seen how this figures into an argument, and I see that Eliezer has already taken this in another direction, but...
What immediately occurs to me is that there's a big risk of a faulty intuition pump here. He's describing, I assume, a lookup table large enough to describe your response to every distinguishable sensory input you could conceivably experience during your life. The number of entries is unimaginable. But I sus... (read more)
To put it much more briefly, under the Wesley Salmon definition of "explanation" the epiphenomenal picture is simply not an explanation of consciousness.
Any committed autodidacts want to share how their autodidacticism makes them feel compared to traditionally schooled learners? I'm beginning to suspect that maybe it takes a certain element of belief in the superiority of one's methods to make autodidacticism work.
As Komponisto points out, traditional schooling is so bad at educating that belief in the superiority of one's [own] methods is easily acquired. I first noticed traditional schooling's ineptitude in kindergarten, and this perception was reinforced almost continuously through the rest of my schooling.
PS: I liked the initiation ceremony fiction, Eliezer.
In classical logic, the operational definition of identity is that whenever 'A=B' is a theorem, you can substitute 'A' for 'B' [but it doesn't follow that]
I believe 2 + 2 = 4 => I believe TRUE => I believe Fermat's Last Theorem.
The problem is that identity has been treated as if it were absolute, as if when two things are identical in one system, they are identical for all purposes.
The way I see it, identity is relative to a given system. I'd define it thus: A=B in system S just if for every equivalence relation R that can be constructed in S, R... (read more)
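The substitution failure can be seen in miniature in any quote-like context. This toy sketch is mine, not part of the original comment: it treats a belief store as a mapping keyed by sentences rather than by the propositions they denote, which is exactly why the licensed substitution breaks down.

```python
# De re: substituting equals for equals preserves truth for values.
a, b = 2 + 2, 4
assert a == b            # 'A = B' holds
assert a + 1 == b + 1    # substituting b for a is safe here

# De dicto: a "belief" is keyed by a sentence, a quote-like context.
beliefs = {"2 + 2 = 4": True}
# "2 + 2 = 4" and "TRUE" denote the same truth value, but substituting
# one name for the other *inside the quotes* changes what is looked up:
assert beliefs.get("2 + 2 = 4") is True
assert beliefs.get("TRUE") is None  # the inference fails in this context
```

The point is only that '=' licenses substitution in the system where it is a theorem, not inside contexts (belief, quotation) that the system does not govern.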
Great post, Rolf Nelson.
This seems to me a special case of asking "What actually is the phenomenon to be explained?" In the case of free will, or should I say in the case of the free will question, the phenomenon is the perception or the impression of having it. (Other phenomena may be relevant too, like observations of other people making choices between alternatives).
In the case of the socks, the phenomenon to be explained can be safely taken to be the sock-wearing state itself. Though as Eliezer correctly points out, you can start farther back, that is, you can start with the phenomenon that you think you're wearing socks and ask about it and work your way towards the other.
"Have you stopped beating your wife?" has well-defined true-or-false answers. It's just that people are generally too stupid to understand what the no-answer actually indicates.
It's usually given as "Have you stopped beating your wife yet?" (Emph mine). The problem is the presupposition that you have been beating your wife. Either answer accepts (or appears to accept) that presupposition.
It's a different sort of bad question than the underconstrained questions. The Liar Paradox, OTOH, is a case of an underconstrained question because it contains non-well-founded recursion.
Wrt defining art, I offer my definition:
"An artifact whose purpose is to be perceived and thereby produce in
its perceiver a positive experience of no direct practical value to
"Artifact" here is meant in the sense of being appropriate for Daniel
Dennett's design stance. It is not neccessarily tangible or
This is what's called a Genus-differentia definition, or
type-and-distinction definition. "Artifact" is the type, the
rest is the distinction.
This lets me build on existing understandings about artifa... (read more)
Just because there's a word "art" doesn't mean that it has a meaning, floating out there in the void, which you can discover by finding the right definition.
True, but it strongly suggests that people who use the term believe there is a referent for it. Sometimes there is none (eg "phlogiston" or "unicorn"). Sometimes the referent is so muddled or misunderstood that the term has little use except to name the mistake (eg "free will", which seems to function as a means of grouping quite distinct concepts of subje... (read more)
I never could understand why people made such a fuss about whether the tree made a sound or not.
Because the sense in which this question is being used as an example here is not the real question that bishop Berkeley had in mind.
It's really a question about epistemology. It's related to the "grue" paradox, which is a bit easier to explain. The grue paradox first notes that ordinarily we have good reason to believe that certain things (grass, green paint, copper flames) are green and will continue to be green after (say) 1 January 2009. It the... (read more)
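The underdetermination at the heart of the grue paradox can be made concrete with a toy encoding (the date comes from the comment above; the predicates and values are my illustrative assumptions):

```python
T = 2009  # the changeover year used in the comment above

def green(observation, year):
    return observation == "green"

def grue(observation, year):
    # "Grue": looks green when examined before T, blue from T onward.
    return observation == ("green" if year < T else "blue")

# Every pre-T observation of green grass fits both predicates equally,
# so past evidence alone can't say which one to project into the future:
pre_T = [("green", y) for y in (2005, 2006, 2007, 2008)]
assert all(green(obs, y) == grue(obs, y) for obs, y in pre_T)

# Yet the two predicates diverge after T:
assert green("green", 2010) != grue("green", 2010)
```

Both hypotheses are perfectly consistent with all the evidence gathered before T; they only come apart afterward, which is the epistemological puzzle.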
IMO there's less to Newcomb's paradox than meets the eye. It's basically "A future-predicting being who controls the set of choices could make rational choices look silly by making sure they had bad outcomes". OK, yes, he could. Surprised?
What I think makes it seem paradoxical is that the paradox both assures us that Omega controls the outcome perfectly, and cues us that this isn't so ("He's already left" etc). Once you settle what it's really saying either way, the rest follows.
No matter how many of McGee's bets you take, you can always take one more bet and expect an even higher payoff. It's like asking for the largest integer. There isn't one, and there isn't an optimal plan in McGee's dilemma.
Yes, the inability to name a largest number seems to underlie the infinity utility paradoxes. Which is to say, they aren't really paradoxes of utility unless one believes that "name a number and I'll give you that many dollars" is also a paradox of utility. (Or "...and I'll give you that many units of utility")
It's... (read more)
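The "largest integer" analogy can be put as a one-line check. The payoff schedule below is a toy I'm assuming for illustration, not McGee's actual bet structure:

```python
# Toy schedule (assumed): "plan n" means taking the first n bets, and
# each accepted bet adds positive expected value.
def expected_payoff(n, ev_per_bet=1.0):
    return n * ev_per_bet

# For every plan there is a strictly better one, so no plan is optimal,
# just as there is no largest integer:
assert all(expected_payoff(n + 1) > expected_payoff(n) for n in range(1000))
```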
Perhaps a lot of confusion could have been avoided if the point had been stated thus:
One's decision should be no different even if the odds of the situation arising that requires the decision are different.
Footnote against nitpicking: this ignores the cost of making the decision itself. We may choose to gather less information and not think as hard for decisions about situations that are unlikely to arise. That factor isn't relevant in the example at hand.
FWIW, having tried that tack a few times, I've always been disappointed. The answer is always along the lines of "I'm not meeting any psychological need, I'm searching sincerely for the truth."
But what I found even more fascinating was the qualitative distinction between "certain" and "uncertain" arguments, where if an argument is not certain, you're allowed to ignore it. Like, if the likelihood is zero, then you have to give up the belief, but if the likelihood is one over googol, you're allowed to keep it.
I think that's exactly what's going on. These people you speak of who do this are mentally dealing with social permission, not with probability algebra. The non-zero probability gives them social permission to descr... (read more)
The selection pressure for faith is almost surely memetic, not genetic. You can focus on the genetic adaptations that it hijacked, but in doing so you will miss the big picture.
Secondly, for understanding religion, I strongly recommend Pascal Boyer's Religion Explained.
That's true, Benquo.
"How many legs does a dog have, if you call a tail a leg? Four. Calling a tail a leg doesn't make it a leg."
-- Abraham Lincoln
This is the sort of quip that gives the speaker a cheap thrill of superiority, but underneath it is just a cheap trick.
In this case, the trick is that Lincoln (or whoever its real author is) has confused de dicto and de re. That is, he confuses assertions that are to be understood inside vs outside a quote-like context; in this case, in the context of the provision that we shall call a dog's tail a leg. He uses th... (read more)
...there really is some good stuff in there.
My advice would be to read Reasons and Persons (by Derek Parfit) and The Methods of Ethics (by Henry Sidgwick).
Looked up both. Two bum steers. Sidgwick is mostly interested in naming and taxonomizing ethical positions, and Parfit is just wrong.
The atoms of a screwdriver don't have tiny little XML tags inside describing their "objective" purpose. The designer had something in mind, yes, but that's not the same as what happens in the real world. If you forgot that the designer is a separate entity from the designed thing, you might think, "The purpose of the screwdriver is to drive screws" - as though this were an explicit property of the screwdriver itself, rather than a property of the designer's state of mind. You might be surprised that the screwdriver didn't reconfigur
Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture=1) is begging the question in favour of SPECKS, then why isn't *not* having such an upper limit begging the question in favour of TORTURE?
It should be obvious why. The constraint in the first one is neither argued for nor agreed on, and by itself it entails the conclusion being argued for. There's no such element in the second.
Then I only need to make the condition slightly stronger: "Any slight tendency to aggregation that doesn't beg the question." Ie, that doesn't place a mathematical upper limit on disutility(Specks) that is lower than disutility(Torture=1). I trust you can see how that would be simply begging the question. Your formulation:
D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))
...doesn't meet this test.
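The arithmetic behind that objection can be checked directly; the code below just evaluates the formula as given:

```python
def D(torture, specks):
    # The proposed aggregation from the comment above.
    return 10 * (torture / (torture + 1)) + specks / (specks + 1)

# The specks term is capped below 1 no matter how many specks there are...
assert D(0, 3**27) < 1
# ...while a single torture already contributes 10 * (1/2) = 5:
assert D(1, 0) == 5
# So the formula builds in disutility(Specks) < disutility(Torture=1) as
# a mathematical certainty, which is the question-begging upper bound.
```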
Contrary to what you think, it doesn't require unbounded utility. Limiting the lower bound of the range to (say) 2 * ... (read more)
It's truly amazing the contortions many people have gone through rather than appear to endorse torture. I see many attempts to redefine the question, categorical answers that basically ignore the scalar, and what Eliezer called "motivated continuation".
One type of dodge in particular caught my attention. Paul Gowder phrased it most clearly, so I'll use his text for reference:
...depends on the following three claims:
a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,
"Unproblematically&quo... (read more)
In a comment on "How to Convince Me That 2+2=3", I pointed out that the study of necessary truths is not the same as the possession of necessary truths (credit to David Deutsch for that important insight). Unfortunately, the discussion here seems to have gotten hung up on a philosophical formulation that blurs that important distinction, a priori. Eliezer's quotative paragraph illustrates the problem:
The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience. Wikipedia
G, you're raising points that I already answered.
I don't believe this is exactly correct. After all, when you're just about to start listening to the clever arguer, do you really believe that box B is almost certain not to contain the diamond?
Where do you get that A is "almost certain" from? I just said the prior probability of B was "low". I don't think that's a reasonable restatement of what I said.
Your actual probability starts out at 0.5, rises steadily as the clever arguer talks (starting with his very first point, because that excludes the possibility he has 0 points), an
Each statement that he makes is valid evidence - how could you not update your probabilities? ...
But then the clever arguer can make you believe anything he chooses, if there is a sufficient variety of signs to selectively report. That doesn't sound right.
What's being overlooked is that your priors before hearing the clever arguer are not the same as your priors if there were no clever arguer.
Consider the case if the clever arguer presents his case and it is obviously inadequate. Perhaps he refers to none of the usual signs of containing a diamond a... (read more)
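That last point can be made quantitative with a toy Bayesian model (all the numbers here are my illustrative assumptions, not anything from the original parable):

```python
from math import comb

def posterior(n, K=10, p_d=0.7, p_no=0.3, prior=0.5):
    """P(diamond in box B | the arguer can cite n of K possible signs).

    Toy model: each of K signs is independently present with probability
    p_d if the diamond is there and p_no if not, and the clever arguer
    reports every sign that is present (he can't do better than that).
    """
    like_d = comb(K, n) * p_d**n * (1 - p_d) ** (K - n)
    like_no = comb(K, n) * p_no**n * (1 - p_no) ** (K - n)
    return like_d * prior / (like_d * prior + like_no * (1 - prior))

# A long list of signs is strong evidence for the diamond; a
# conspicuously short one is evidence against. So the clever arguer
# cannot make you believe anything he chooses:
assert posterior(10) > 0.9
assert posterior(0) < 0.1
```

Once you condition on the arguer reporting everything favorable he can find, a thin case lowers your probability rather than raising it, which is exactly the "obviously inadequate case" situation.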
There are really two questions in there:
One is a question about the world, the other about a necessary truth.
The first is about what aspect of the world we are looking at, under what definitions. 2 rabbits plus 2 rabbits may not result in 4 rabbits. So I have to assume Eliezer refers to the second question.
Can we even meaningfully ask the second question? Kind of. As... (read more)
If you happened to be a literate English speaker, you might become confused, and think that this shaped ink somehow meant that box B contained the diamond.
A sign S "means" something T when S is a reliable indicator of T. In this case, the clever arguer has sabotaged that reliability.
ISTM the parable presupposes (and needs to) that what the clever arguer produces is ordinarily a reliable indicator that box B contained the diamond, ie ordinarily means that. It would be pointless otherwise.
Therein lies a question: Is he necessarily able to sa... (read more)
What if self-deception helps us be happy? What if just running out and overcoming bias will make us - gasp! - unhappy?
You are aware, I'm sure, of studies that connect depression with freedom from bias, notably overconfidence in one's ability to control outcomes.
You've already given one answer: to deliberately choose to believe what our best judgement tells us isn't so would be lunacy. Many people are psychologically able to fool themselves subtly, but fewer are able to deliberately, knowingly fool themselves.
Another answer is that even though depression l... (read more)
This was surprisingly hard to explain to people; many people would read the careful explanation and hear, "Crocker's Rules mean you can say offensive things to other people."
Perhaps because it resembles the "2" part of a common verbal bully's 1-2 punch: the one that first insults you and then, when you react, slurs you for allegedly not being able to handle the truth. I'm specifically thinking of the part of Crocker's Rules that goes "If you're offended, it's your fault".
Yes, I see that one is "me" and the other is "... (read more)
It seems to be a common childhood experience on this list to have tried to disprove famous mathematical theorems.
Me, I tried to disprove the four-color map conjecture when I was 10 or 11. At that point it was a conjecture, not a theorem. I came up with a nice moderate-size map that, after an apparently free initial labelling and a sequence of apparently forced moves, required a fifth color.
Fortunately the first thing that occurred to me was to double-check my result, and of course I found a 4-color coloring.