All of kip1981's Comments + Replies

Although I'm a lawyer, I've developed my own pet meta-approach to philosophy. I call it the "Cognitive Biases Plus Semantic Ambiguity" approach (CB+SA). Both prongs (CB and SA) help explain the amazing lack of progress in philosophy.

First, cognitive biases - or (roughly speaking) cognitive illusions - are persistent by nature. The fact that cognitive illusions (like visual illusions) are persistent, and the fact that philosophy problems are persistent, is not a coincidence. Philosophy problems cluster around those that involve cognitive illus...

At least when it comes to the concepts "Good," "Morality" and "Free Will," I'm familiar with some fairly prominent suggestions that they are in dire need of redefinition, and with other attempts to narrow or eliminate discussions of such loose ideas altogether.
And they never expend any effort in establishing clear meanings for such terms. Oh wait... they expend far too much effort arguing about far too much. OK: the problem with philosophers is that they are contradictory.

My biggest criticism of SI is that I cannot decide between:

A. promoting AI and FAI issues awareness will decrease the chance of UFAI catastrophe; or

B. promoting AI and FAI issues awareness will increase the chance of UFAI catastrophe.

This criticism seems distinct from the ones that Holden makes. But it is my primary concern. (Perhaps the closest is Holden's analogy that SI is trying to develop Facebook before the Internet.)

A seems intuitive. Basically everyone associated with SI assumes that A is true, as far as I can tell. But A is not obvious...

I like your gun safety analogy. Actually, it seems to me that a significant portion of LW shares your doubts, or even favors view B. I second your call for some (more?) direct discussion of the question.

He didn't count on the stupidity of mankind.

"Two things are infinite: the universe and human stupidity; and I'm not sure about the the universe."

But why/how is it better spent that way?

"I consistently refuse to be drawn into running the Singularity Institute. I have an overwhelming sense of doom about what happens if I start going down that road."

This strikes me as pretty strange. I would like to hear more about it.

Certainly, one can obtain status by other means, such as by posting at OB and LW, and presenting at conferences, etc. Are there other reasons why you don't want to "run" the Singularity Institute?

I assume Eliezer thinks that his energy is much better spent on doing the things he does, not dealing with that specific leadership position.

I think these are great predictions.

I agree that that's one reasonable interpretation.

I just want to emphasize that that standard is very different from the weaker "if I had to guess, I would say that the person actually committed the crime." The first standard is higher. Also, the law might forbid you from considering certain facts or evidence, even if you know in the back of your mind that the evidence is there and suggestive. There are probably other differences between the standards that I'm not thinking of.

By guilty, do we mean "committed or significantly contributed to the murder"?

Or do we mean "committed or significantly contributed to the murder AND there is enough evidence showing that to satisfy the beyond-a-reasonable-doubt (or Italian equivalent) standard of proof for murder"?

The comments don't seem to make that distinction, but I think it could make a big difference.

Eliezer Yudkowsky (13y):
probability = probability of having committed murder, not probability of sufficient evidence
I interpreted it as 'how would you vote if you were on the jury', which implies 'guilty beyond reasonable doubt' under the legal systems I'm familiar with. I don't know if the standard is any different in Italy.

I would be surprised if Eliezer would cite Joshua Greene's moral anti-realist view with approval.

Eliezer Yudkowsky (14y):
Correct. I'm a moral cognitivist; "should" statements have truth-conditions. It's just that very few possible minds care whether should-statements are true or not; most possible minds care about whether alien statements (like "leads-to-maximum-paperclips") are true or not. They would agree with us on what should be done; they just wouldn't care, because they aren't built to do what they should. They would similarly agree with us that their morals are pointless, but would be concerned with whether their morals are justified-by-paperclip-production, not whether their morals are pointless. And under ordinary circumstances, of course, they would never formulate - let alone bother to compute - the function we name "should" (or the closely related functions "justifiable" or "arbitrary").

GEB has always struck me as more clever than intelligent.


Some points.

  1. The typical mind fallacy sounds just like the "Mind Projection Fallacy," or the empathy gap. It's a fascinating issue.

  2. You sound like you have Asperger's tendencies: introverted, geeky, cerebral, sensitive to loud noise. Interestingly, people with Asperger's are famously bad at empathizing, i.e. more likely to commit the Mind Projection Fallacy. This may be one reason why we find the fallacy so fascinating: we've been burned by it before (as you relate in your post), and seem uniquely vulnerable to it.

Every time I have heard the phrase "mind projection fallacy" before, it has been with an entirely different meaning, namely the error of mistaking bits of your mental processes for aspects of the external world. It's unfortunate that it sounds so similar both to "typical mind fallacy" and "projection".