PhilGoetz


I think it would be more graceful of you to just admit that there may be more than one reason for people to be in terror of the end of the world, and likewise to qualify your other claims to certainty and universality.

That's the main point of what gjm wrote.  I'm sympathetic to the view you're trying to communicate, Valentine; but you used words that claim that what you say is absolute, immutable truth, and that's the worst mind-killer of all.  Everything you wrote just above seems to me to be just equivocation trying to deny that technical yet critical point.

I understand that you think that's just a quibble, but it really, really isn't.  Claiming privileged access to absolute truth on LessWrong is like using the N-word in a speech to the NAACP.  It would do no harm to what you wanted to say to use phrases like "many people" or even "most people" instead of the implicit "all people", and it would eliminate a lot of pushback.

I say that knowing particular kinds of math, the kind that lets you model the world more precisely and gives you a theory of error, isn't like knowing another language.  It's like knowing language at all.  Learning these types of math gives you as much of an effective intelligence boost over people who don't know them as learning a spoken language gives you over people who know no language at all (e.g., many deaf-mutes in earlier times).

The kinds of math I mean include:

  • how to count things in an unbiased manner; the methodology of polls and other data-gathering
  • how to actually make a claim, as opposed to what most people do, which is to make a claim that's useless because it lacks quantification or quantifiers
    • A good example of this is the claims in the IPCC 2015 report that I wrote some comments on recently.  Most of them say things like, "Global warming will make X worse", where you already know that OF COURSE global warming will make X worse, but you only care how much worse.
    • More generally, any claim of the type "All X are Y" or "No X are Y", e.g., "Capitalists exploit the working class", shouldn't be considered a claim at all, and can accomplish nothing except to foment arguments.
  • the use of probabilities and error measures
  • probability distributions: flat, normal, binomial, Poisson, and power-law
  • entropy measures and other information theory
  • predictive error-minimization models like regression
  • statistical tests and how to interpret them
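
The first two bullets can be made concrete in a few lines of code.  Here is a minimal sketch (in Python, with made-up poll numbers) of what a quantified claim looks like: a point estimate with a margin of error, rather than a bare "most people support X."

```python
import math

# Hypothetical poll: 520 "yes" answers out of 1000 respondents.
# The numbers are invented for illustration, not from any real poll.
n, k = 1000, 520
p = k / n                          # point estimate of the proportion
se = math.sqrt(p * (1 - p) / n)    # standard error of the proportion
margin = 1.96 * se                 # 95% normal-approximation margin of error
low, high = p - margin, p + margin
print(f"{p:.3f} ± {margin:.3f}  (95% CI: {low:.3f} to {high:.3f})")
# → 0.520 ± 0.031  (95% CI: 0.489 to 0.551)
```

The point is the form of the output: it says how much and how sure, instead of just "more than half."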

These things are what I call the correct Platonic forms.  The Platonic forms were meant to be perfect models for things found on earth.  These kinds of math actually are.  The concept of "perfect" actually makes sense for them, as opposed to for Earthly categories like "human", "justice", etc., for which believing that the concept of "perfect" is coherent demonstrably drives people insane and causes them to come up with things like Christianity.

They are, however, like Aristotle's Forms, in that the universals have no existence on their own, but are (like the circle, but even more like the normal distribution) perfect models which arise from the accumulation of endless imperfect instantiations of them.

There are plenty of important questions that are beyond the capability of the unaided human mind to ever answer, yet which are simple to give correct statistical answers to once you know how to gather data and do a multiple regression.  Also, the use of these mathematical techniques will force you to phrase the answer sensibly, e.g., "We cannot reject the hypothesis that the average homicide rates under strict gun control and liberal gun control are the same with more than 60% confidence" rather than "Gun control is good."
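
As an illustration of the "gather data and do a multiple regression" step, here is a self-contained ordinary-least-squares fit via the normal equations.  The data are synthetic (not real homicide or gun-control figures); in practice you would use a statistics package, which also reports the error measures and tests mentioned above.

```python
# Ordinary least squares via the normal equations (X^T X) b = (X^T y).
# A sketch on synthetic data only, not a real analysis.
def ols(X, y):
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    # Solve the k x k system by Gaussian elimination with partial pivoting.
    A = [r[:] + [b] for r, b in zip(XtX, Xty)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (A[r][k] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Two predictors plus an intercept column of 1s; y is generated from
# known coefficients, so the fit should recover them exactly.
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 3], [1, 3, 1]]
y = [1 + 2 * x1 - 0.5 * x2 for _, x1, x2 in X]
b0, b1, b2 = ols(X, y)
print(round(b0, 3), round(b1, 3), round(b2, 3))  # 1.0 2.0 -0.5
```

Because the synthetic y is an exact linear function of the predictors, the recovered coefficients match the generating ones; with noisy real data you would also want standard errors on each coefficient.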

Agree.  Though I don't think Turing ever intended that test to be used.  I think what he wanted to accomplish with his paper was to operationalize "intelligence".  When he published it, if you asked somebody "Could a computer be intelligent?", they'd have responded with a religious argument about it not having a soul, or free will, or consciousness.  Turing sneakily got people to look past their metaphysics, and ask the question in terms of the computer program's behavior.  THAT was what was significant about that paper.

It's a great question.  I'm sure I've read something about that, possibly in some pop book like Thinking, Fast and Slow.  What I read was an evaluation of the relationship of IQ to wealth, and the takeaway was that your economic success depends more on the average IQ in your country than it does on your personal IQ.  It may have been an entire book rather than an article.

Google turns up this 2010 study from Science.  The summaries you'll see there are sharply self-contradictory.

First comes an unexplained box called "The Meeting of Minds", which I'm guessing is an editorial commentary on the article, and it says, "The primary contributors to c appear to be the g factors of the group members, along with a propensity toward social sensitivity."

Next is the article's abstract, which says, "This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group."

These summaries directly contradict each other: Is g a primary contributor, or not a contributor at all?

I'm guessing the study of group IQ is strongly politically biased, with Hegelians (both "right" and "left") and other communitarians wanting to show that individual IQs are unimportant, and individualists and free-market economists wanting to show that they're important.

But what makes you so confident that it's not possible for subject-matter experts to have correct intuitions that outpace their ability to articulate legible explanations to others?

That's irrelevant, because what Richard wrote was a truism. An Eliezer who understands his own confidence in his ideas will "always" be better at inspiring confidence in those ideas in others.  Richard's statement leads to a conclusion of import (Eliezer should develop arguments to defend his intuitions) precisely because it's correct whether Eliezer's intuitions are correct or incorrect.

The way to dig the bottom deeper today is to get government bailouts, like bailing out companies or lenders, and like Biden's recent student-loan forgiveness plan.  Bailouts are especially perverse because they give people who get into debt a competitive advantage over people who don't, in an unpredictable manner that encourages people to see taking out a loan as a lottery ticket.

Finding a way for people to make money by posting good ideas is a great idea.

Saying that it should be based on the goodness of the people and how much they care is a terrible idea.  Privileging goodness and caring over reason is the most well-trodden path to unreason.  This is LessWrong.  I go to fimfiction for rainbows and unicorns.

No; most philosophers today do, I think, believe that the alleged humanity of 9-fingered instances of *Homo sapiens* is a serious philosophical problem.  It comes up in many "intro to philosophy" or "philosophy of science" texts or courses.  Post-modernist arguments rely heavily on the belief that any sort of categorization which has any exceptions is completely invalid.

I'm glad to see Eliezer addressed this point.  This post doesn't get across how absolutely critical it is to understand that {categories always have exceptions, and that's okay}.  Understanding this demolishes nearly all Western philosophy since Socrates (who, along with Parmenides, Heraclitus, Pythagoras, and a few others, corrupted Greek "philosophy" from the natural science of Thales and Anaximander, who studied the world to understand it, into a kind of theology, in which one dictates to the world what it must be like).

Many philosophers have recognized that Aristotle's conception of categories fails; but most still assumed that that's how categories must work in order to be "real", and so proving that categories don't work that way proved that categorizations "aren't real".  They then became monists, like the Hindus / Buddhists / Parmenides / post-modernists.  The way to avoid this is to understand nominalism, which dissolves the philosophical understanding of that quoted word "real", and which I hope Eliezer has also explained somewhere.
