Another month has passed and here is a new rationality quotes thread. The usual rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.

And one new rule:

  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
659 comments

The mathematician and Fields medalist Vladimir Voevodsky on using automated proof assistants in mathematics:

[Following the discovery of some errors in his earlier work:] I think it was at this moment that I largely stopped doing what is called “curiosity driven research” and started to think seriously about the future.


A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.


It soon became clear that the only real long-term solution to the problems that I encountered is to start using computers in the verification of mathematical reasoning.


Among mathematicians computer proof verification was almost a forbidden subject. A conversation started about the need for computer proof assistants would invariably drift to the Goedel Incompleteness Theorem (which has nothing to do with the actual problem) or to one or two cases of verification of already existing proofs, which were used only to demonstrate how impractical the whole idea was.


I now do my mathematics with a proof assistant and do not have to worry all the time about mistakes in my arguments or about ho

[...]
A video of the whole talk is available here [].
And his textbook on the new univalent foundations of mathematics in homotopy type theory is here [].
It is misleading to attribute that book solely to Voevodsky.
Yes. But it's forgiveably misleading to attribute it non-exclusively to him, in a thread of comments which was started about him.
Computer scientists seem much more ready to adopt the language of homotopy type theory than homotopy theorists at the moment. It should be noted that there are many competing new languages for expressing the insights garnered by infinity groupoids. Though Voevodsky's language is the only one that has any connection to computers, the competing language of quasi-categories is more popular.
I know you're not supposed to quote yourself, but I came up with a cool saying about this a while back and I just want to share it. Computer proof verification is like taking off and nuking the whole site from orbit: it's the only way to be sure.

"It is one thing for you to say, ‘Let the world burn.' It is another to say, ‘Let Molly burn.' The difference is all in the name."

-- Uriel, Ghost Story, Jim Butcher

I love the character of Uriel in the Dresden Files. I find his interpretation of the Fallen very interesting also.

It is easier to fight for one's principles than to live up to them.

-- Alfred Adler

ADDED: Source:

Quoted in: Phyllis Bottome, Alfred Adler: Apostle of Freedom (1939), ch. 5

Problems of Neurosis: A Book of Case Histories (1929)

Comedian Simon Munnery:

Many are willing to suffer for their art; few are willing to learn how to draw.

[anonymous]:

Philosophers often behave like little children who scribble some marks on a piece of paper at random and then ask the grown-up "What's that?"- It happened like this: the grown-up had drawn pictures for the child several times and said "this is a man," "this is a house," etc. And then the child makes some marks too and asks: what's this then?

  • Wittgenstein, Culture and Value

Slartibartfast: Perhaps I'm old and tired, but I think that the chances of finding out what's actually going on are so absurdly remote that the only thing to do is to say, "Hang the sense of it," and keep yourself busy. I'd much rather be happy than right any day.

Arthur Dent: And are you?

Slartibartfast: Well... no. That's where it all falls down, of course.

Douglas Adams, Hitchhiker's Guide to the Galaxy

Thanks for this one. It's been some time since I re-read Douglas Adams, and I'd forgotten how good he can be. It makes so much sense reading this right after reading "Bind yourself to Reality" []. Had a good long guffaw out of this one. :-)

Now, one basic principle in all of science is GIGO: garbage in, garbage out. This principle is particularly important in statistical meta-analysis: because if you have a bunch of methodologically poor studies, each with small sample size, and then subject them to meta-analysis, what can happen is that the systematic biases in each study — if they mostly point in the same direction — can reach statistical significance when the studies are pooled. And this possibility is particularly relevant here, because meta-analyses of homeopathy invariably find an inverse correlation between the methodological quality of the study and the observed effectiveness of homeopathy: that is, the sloppiest studies find the strongest evidence in favor of homeopathy. When one restricts attention only to methodologically sound studies — those that include adequate randomization and double-blinding, predefined outcome measures, and clear accounting for drop-outs — the meta-analyses find no statistically significant effect (whether positive or negative) of homeopathy compared to placebo.

A bigger danger is publication bias: collect 10 well-run trials without knowing that 20 similar well-run ones exist but weren't published because their findings weren't convenient, and your meta-analysis ends up distorted from the outset.
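The distortion is easy to see in a toy simulation. Here's a minimal Python sketch (all numbers invented for illustration): thirty small trials of a treatment with zero true effect, where only the trials with a positive point estimate get "published".

```python
import random
import statistics

random.seed(0)  # deterministic for illustration

def run_trial(n=50, true_effect=0.0):
    """One small trial: mean of n noisy measurements of a treatment
    whose true effect is zero."""
    return statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))

# 30 small, well-run trials of a treatment that does nothing.
all_trials = [run_trial() for _ in range(30)]

# Publication bias: only trials with a positive point estimate get published.
published = [t for t in all_trials if t > 0]

pooled_all = statistics.mean(all_trials)       # close to the true effect, 0
pooled_published = statistics.mean(published)  # biased upward
```

Pooling only the published half manufactures a positive "effect" out of pure noise, which is exactly the from-the-outset distortion the comment describes.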

This principle is particularly important in statistical meta-analysis: because if you have a bunch of methodologically poor studies, each with small sample size, and then subject them to meta-analysis, what can happen is that the systematic biases in each study — if they mostly point in the same direction — can reach statistical significance when the studies are pooled.

Does anyone know how often this happens in statistical meta-analysis?

Fairly often. One strategy I've seen is to compare meta-analyses to a later very-large study (rare for obvious reasons when dealing with RCTs) and see how often the confidence interval is blown; usually much more often than it should be. (The idea is that the larger study will give a higher-precision result which is a 'ground truth' or oracle for the meta-analysis's estimate, and if it's later, it will not have been included in the meta-analysis and also cannot have led the meta-analysts into Millikan-style distorting their results to get the 'right' answer.)

For example: LeLorier J, Gregoire G, Benhaddad A, Lapierre J, Derderian F. "Discrepancies between meta-analyses and subsequent large randomized, controlled trials". N Engl J Med 1997;337:536-42

Results: We identified 12 large randomized, controlled trials and 19 meta-analyses addressing the same questions. For a total of 40 primary and secondary outcomes, agreement between the meta-analyses and the large clinical trials was only fair (kappa = 0.35; 95% confidence interval, 0.06-0.64). The positive predictive value of the meta-analyses was 68%, and the negative predictive value 67%. However, the difference in point est

[...]
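For readers unfamiliar with the kappa statistic quoted above: Cohen's kappa measures agreement corrected for chance, kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is the agreement expected by chance. A minimal sketch with invented 2x2 counts (not the paper's actual data):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table.

    a: both methods positive, d: both methods negative,
    b, c: the two kinds of disagreement.
    """
    n = a + b + c + d
    p_o = (a + d) / n                      # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)  # chance both say "positive"
    p_no = ((c + d) / n) * ((b + d) / n)   # chance both say "negative"
    p_e = p_yes + p_no                     # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Illustrative counts for 40 outcomes: 27 agreements, 13 disagreements.
k = cohens_kappa(17, 8, 5, 10)
```

With these made-up counts k comes out near 0.33 — in the "fair agreement" range, like the paper's 0.35, even though the two methods agree on two-thirds of outcomes.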
I'm not sure how much to trust these meta-meta analyses. If only someone would aggregate them and test their accuracy against a control.

As a percentage? No. But qualitatively speaking, "often."

The most recent book I read discusses this particularly with respect to medicine, where the problem is especially pronounced because a majority of studies are conducted or funded by an industry with a financial stake in the results, with considerable leeway to influence them even without committing formal violations of procedure. But even in fields where this is not the case, issues like non-publication of data (a large proportion of all studies conducted are not published, and those which are not published are much more likely to contain negative results) will tend to make the available literature statistically unrepresentative.

We can't know for certain; that's the idea of systematic biases. There is no way to tell whether all your trials are slanted in a specific fashion, if the biases also appear in your high quality studies. On the other hand, we have fields such as homeopathy [] or telepathy [] (Ganzfeld experiments) where meta-analyses that treat all studies mostly equally find that homeopathy works and telepathy exists, while meta-analyses that try to filter out low quality studies conclude that homeopathy doesn't work and telepathy doesn't exist.
Sokal's hoax was heroic []
See also Jaynes's comments on sampling error vs systematic biases ('Emperor of China fallacy') which I quote in []

It is, in fact, a very good rule to be especially suspicious of work that says what you want to hear, precisely because the will to believe is a natural human tendency that must be fought.

- Paul Krugman


"Throughout the day, Stargirl had been dropping money. She was the Johnny Appleseed of loose change: a penny here, a nickel there. Tossed to the sidewalk, laid on a shelf or bench. Even quarters.

"I hate change," she said. "It's so . . . jangly."

"Do you realize how much you must throw away in a year?" I said.

"Did you ever see a little kid's face when he spots a penny on a sidewalk?"

Jerry Spinelli, Stargirl

So as to keep the quote on its own, my commentary:

This passage (read at around age 10) may have been my first exposure to an EA mindset, and I think that "things you don't value much anymore can still provide great utility for other people" is a powerful lesson in general.

Specifically, [these recent books that deal with parallel universes] argue that if some scientific theory X has enough experimental support for us to take it seriously, then we must take seriously also all its predictions Y, even if these predictions are themselves untestable (involving parallel universes, for example).

As a warm-up example, let's consider Einstein's theory of General Relativity. It's widely considered a scientific theory worthy of taking seriously, because it has made countless correct predictions -- from the gravitational bending of light to the time dilation measured by our GPS phones. This means that we must also take seriously its prediction for what happens inside black holes, even though this is something we can never observe and report on in Scientific American. If someone doesn't like these black hole predictions, they can't simply opt out of them and dismiss them as unscientific: instead, they need to come up with a different mathematical theory that matches every single successful prediction that general relativity has made -- yet doesn't give the disagreeable black hole predictions.

-- Max Tegmark, Scientific American guest blog, 2014-02-04

I would think the first objection to that line of reasoning would be that we know General Relativity is an incomplete theory of reality and expect to find something that supersedes it and gives better answers regarding black holes.
Better answers, yes, but I'd expect the new answers to be at least quite like the GR answers. I mean, probably no singularities in the real theory, but lots of time-warping and space-whirling, surely. He only says 'take seriously', not 'swallow whole including the self-contradictory bits'.
Well... Einstein didn't need a complete theory of quantum electrodynamics to predict the coefficients of spontaneous emission from thermodynamical arguments; I don't think Bekenstein and Hawking need a complete theory of quantum gravity to make predictions other than those of classical GR either.

How much of a disaster is this? Well, it’s never a disaster to learn that a statement you wanted to go one way in fact goes the other way. It may be disappointing, but it’s much better to know the truth than to waste time chasing a fantasy. Also, there can be far more to it than that. The effect of discovering that your hopes are dashed is often that you readjust your hopes. If you had a subgoal that you now realize is unachievable, but you still believe that the main goal might be achievable, then your options have been narrowed down in a potentially useful way.

-Timothy Gowers, on finding out a method he’d hoped would work, in fact would not.

I had been planning to post this (as in, had copied it from a text file saved for the purposes of this thread), saw it here, noted the fact, and then didn't bother to upvote until just now. How odd.

Richard Feynman claimed that he wasn't exceptionally intelligent, but that he focused all his energies on one thing. Of course he was exceptionally intelligent, but he makes a good point.

I think one way to improve your intelligence is to actually try to understand things in a very fundamental way. Rather than just accepting the kind of trite explanations that most people accept - for instance, that electricity is electrons moving along a wire - try to really find out and understand what is actually happening, and you'll begin to find that the world is very different from what you have been taught and you'll be able to make more intelligent observations about it.

reddit user jjbcn on trying to improve your intelligence

If you're not a student of physics, The Feynman Lectures on Physics is probably really useful for this purpose. It's free for download!

It seems like the Feynman lectures were a bit like the Sequences for those Caltech students:

The intervening years might have glazed their memories with a euphoric tint, but about 80 perce

[...]

Trying to actually understand what equations describe is something I'm always trying to do in school, but I find my teachers positively trained in the art of superficiality and dark-side teaching. Allow me to share two actual conversations with my Maths and Physics teachers from school:

(Teacher derives an equation, then suddenly makes it into an iterative formula, with no explanation of why)

Me: Woah, why has it suddenly become an iterative formula? What's that got to do with anything?

Teacher: Well, do you agree with the equation when it's not an iterative formula?

Me: Yes.

Teacher: And how about if I make it an iterative formula?

Me: But why do you do that?

Friend: Oh, I see.

Me: Do you see why it works?

Friend: Yes. Well, no. But I see it gets the right answer.

Me: But sir, can you explain why it gets the right answer?

Teacher: Ooh Ben, you're asking one of your tough questions again.

(Physics class)

Me: Can you explain that sir?

Teacher: Look, Ben, sometimes not understanding things is a good thing.

And yet to most people, I can't even vent the ridiculousness of a teacher actually saying this; they just think it's the norm!

For every EY quote, there exists an equal and opposite P.C. Hodgell quote:
(That was P.C. Hodgell, not EY.)
Good point, I'll correct it.
Ben Pace:
Amusing, although I'll point out that there are some subtle differences between a physics classroom and the MOR!universe. Or at least, I think there are...

I will only say that when I was a physics major, there were negative course numbers in some copies of the course catalog. And the students who, it was rumored, attended those classes were... somewhat off, ever after.

And concerning how I got my math PhD, and the price I paid for it, and the reason I left the world of pure math research afterwards, I will say not one word.

Were there tentacles involved? Strange ethereal piping? Anything rugose or cyclopean in character?

I think we can safely say there were non-Euclidean geometries involved.

Were there also course numbers with a non-zero complex part?
What level of school?
Ben Pace:
Secondary school.

A visit to wikipedia suggests that "secondary school" can refer to either what we in the U.S. call "middle school / junior high school", or what we call "high school". That's a fairly wide range of grade levels. In which year of pre-university education are you?

Ben Pace:
Oh, okay. After I finish this year, I'll study at school for one final year, and then go to university. Edit: I am confused that this got five upvotes, and would be interested in hearing an explanation from someone who upvoted it.
I didn't upvote you but I would have if you hadn't mentioned it; it would have been because I appreciate people answering questions and finishing comment threads rather than leaving them hanging forever unresolved.
Ben Pace:
So you wanted to know not how to derive the solution but how to derive the derivation? I wouldn't blame the teacher for not going there. There's not enough time in class to do something like that. Bringing the students to understand the presented math is hard enough, and describing the process by which this math was found would take too long, because especially for harder problems there were probably dozens of mathematicians who studied the problem for centuries in order to find the derivations your teacher presents to you.
What's wrong with saying something to the effect of "There's a theorem -- it's not really within the scope of this course, but if you're really interested it's called the fixed-point theorem, you can look it up on Wikipedia or somewhere"?
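For anyone curious, the generic recipe behind such "iterative formulas" is fixed-point iteration: rewrite the equation as x = g(x) and feed each output back in until it stops changing. The thread never shows the teacher's actual equation, so the sketch below (function names my own) uses the textbook example x = cos(x):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = g(x_n) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# x = cos(x) has a unique fixed point near 0.739 (the "Dottie number");
# the iteration converges because |cos'(x)| = |sin(x)| < 1 there.
root = fixed_point(math.cos, 1.0)
```

The catch, and presumably what the fixed-point theorem is needed for, is that this only converges when |g'| < 1 near the fixed point; that condition is the intuition the teacher skipped over.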
Ben Pace:
Derive the derivation? Huh? And you say that's different from 'understanding' it. No, I just didn't have the most basic of intuitive ideas as to why he suddenly made an iterated equation, and I didn't understand why it worked, at any level. It was all just abstract symbol manipulation with no content for me, and that's not learning. Furthermore, he does have the time. We have nine hours a week. With a class size of four pupils.
He may actually not know. People who teach maths are often not terribly good at it. Why don't you post the equation and the thing he turned it into? One of us will probably be able to see what is going on.

In all fairness, at university, being lectured by people whose job was maths research and who were truly world class at it, I remember similar happenings. Although they have subtler ways of telling you to shut up. Figuring out what's going on between the steps of a proof is half the fun, and it tends to make your head explode with joy when you finally get it.

I just gave a couple of terms of first year maths lectures, stuff that I thought I knew well, and the effort of going through and actually understanding everything I was talking about turned what was supposed to be two hours a week into two days a week, so I can quite see why busy people don't bother. And in the process I found a couple of mistakes in the course notes (that of course get passed down from year to year, not rewritten from scratch with every new lecturer).
In my school math education we had the standard that everything we learn gets proved. Students who are not in the habit of proving math are not well prepared for doing real math at university, which is about mathematical proofs. In general, math that's not understood but merely memorized is soon forgotten and not worth teaching in the first place.
That's a great rule, but it still has to have limits. Otherwise you couldn't teach calculus without teaching number theory and set theory and probably some algebraic structures and mathematical logic too.
We actually did learn number theory, set theory, basic logic and algebraic structures such as rings, groups and vector spaces. In Germany every student has to select two subjects called "Leistungskurse" in which he gets more classes. In my case I selected math and physics, which meant we had 5 hours' worth of lessons in those subjects per week.
When I went to high school in Israel we had a similar system, but extra math wasn't an option (at least not at my school). A big part of an undergrad math (or CS) degree is spent on these subjects. I don't believe the study-everything, prove-everything level you describe is attainable with 5 hours per week for 3 years at the high-school level, even with a very good self-selected student group.
The German school system starts by separating students into 3 different kinds of schools based on the academic skill of the student: Hauptschule, Realschule and Gymnasium. The Gymnasium is basically for those who go to university. That separation starts by school year 5 or 7, depending on where in Germany the school is located. You have more than 3 years of math classes at school. I think proving stuff started in the 8th or 9th school year. At the beginning a lot of it focused on geometry. At the time I think it was 4 hours of math per week for everyone. I think there were many cases where the students who were good at math had time to prove things while the more math-averse students took more time with the basic math problems.
What did the most advanced students (say, the top 15%) study and prove by the end of high school?
It's been a while, but before introducing calculus we did go through the axioms and theorems of limits of a function. Peano's axioms, and the fact that it's enough to prove a statement for n=0 and for n -> n+1, were the basis for induction proofs.
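The n=0 plus n -> n+1 pattern is ordinary mathematical induction; as a generic refresher (my example, not from the original curriculum), the classic sum formula:

```latex
% Claim: \sum_{k=0}^{n} k = \frac{n(n+1)}{2}
% Base case n = 0: the sum is 0, and 0(0+1)/2 = 0.
% Step n \to n+1: assuming the claim for n,
\sum_{k=0}^{n+1} k
  = \frac{n(n+1)}{2} + (n+1)
  = \frac{(n+1)(n+2)}{2},
% which is the claim with n replaced by n+1.
```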
Your previous comment might as well be a description of almost all the non-CS math content in my CS undergrad degree. (The only core subjects missing are probability and statistics.) Of course, the depth and breadth and quality of treatment may still be different. But maybe an average high school in Israel is really that much worse than a good high school in Germany.

I now recall that my father, who went to high school in Kiev in the 70s, used to tell me that the math I learned in the freshman year, they learned in high school. (And they had only 10 years of school in total, ages 7 to 17, while we had 12, ages 6 to 18.) I always thought his stories may have been biased, because he went on to get a graduate degree in applied math and taught undergrad math at a respected Russian university. So I thought maybe he also went to a top high school and/or associated with other students who were good at math and enjoyed it.

But I know there is a wide distribution of math talent and affinity among people. There are definitely enough students for math-oriented schools, or extra math classes or programmes in large enough schools, at that level of teaching. I just assumed based on my own experience that the schools themselves wouldn't be good enough to support this, or wouldn't be incentivized correctly. But there's no reason these problems should be universal.
In university, students often spend time in large lectures in math classes. There's no real reason to expect that to be a lot more effective than a 15-person course with a good teacher. In our times the incentives go against teaching like this: in Berlin, centralized math testing effectively means that all schools have to teach to the same test, and that test doesn't contain complicated proofs. Yes, the difference between the math education at a bad school with only 3 hours per week and at a good school in Germany with 5 hours per week might be the freshman year of non-CS math content of a CS undergrad degree.
What is wrong with learning logic, set theory, and number theory before (or in the context of high school, instead of) calculus? EDIT: Personally, I think going into computer science would have been easier if in high school I learned logic and set theory my last two years rather than trigonometry and calculus.
The thing that's wrong is exactly that it would indeed have to be instead of calculus. And then students would not pass the nationally mandated matriculation exams or university entry exams, which test knowledge of calculus. One part of the system can't change independently from the others.

I agree that if you're going to teach just one field of math, then calculus is not the optimal choice. I do believe that for every field that's taught in highschool, the most important theories and results should be taught: evolution, genetics, cell structure and anatomy in biology; Newtonian mechanics, electromagnetism and relativity in physics (QM probably requires too much math for any high-school program); etc.

There won't be time to prove and fully explain everything that's being shown, because time is limited, and it's better that all the people in our society know about classical mechanics and EM and relativity, than that they know about just one of them but have studied and reproduced enough experiments to demonstrate that that one theory is true compared to all alternatives of similar complexity. And similarly, I think it would be better if everyone knew about the fundamental results of all the important fields of math, than being able to prove a lot of theorems in a couple of fields on highschool exams but not getting to hear about a lot of other fields.
Really? I think it's very beautiful and it's what hooked me. And it's the bit the scientists use. What would you teach everyone instead?
As far as possible, we should allow students to learn more and help guide them to the sciences. But scientists are in the end a small minority of the population and some things are important to teach to everyone. I don't think calculus passes that test, and neither does classic geometry and analytic geometry, which received a lot of time in my school. Instead I would teach statistics, basic probability theory, programming (if you can sell it as applied math), basic set and number theory (e.g. countable and uncountable infinities, rational and real numbers), basic computer science with some important cryptography results given without proof (e.g. public-key encryption). At least one of these should demonstrate the concept of mathematical proofs and logic (set theory is a good candidate).
Interesting question. I'm a programmer who works in EDA software, including using transistor-level simulations, and I use surprisingly little math. Knowing the idea of a derivative (and how noisy numerical approximations to them can be!) is important - but it is really rare for me to actually compute one. It is reasonably common to run into a piece of code that reverses the transformation done by another piece of code, but that is about it. The core algorithms of the simulators involve sophisticated math - but that is stable and encapsulated, so it is mostly a black box. As a citizen, statistics are potentially useful, but mostly just at the level of: this article quotes an X% change in something with N patients; does it look like N was large enough that this could possibly be statistically significant? But usually the problem with such studies is the systematic errors, which are essentially impossible for a casual examination to find.
I see computer science as a branch of applied math which is important enough to be treated as a top-level 'science' of its own. Another way of putting it is that algorithms and programming are the 'engineering' counterpart to the 'science' of (the rest of) CS and math. Programming very often involves math that is unrelated to the problem domain. For instance, using static typing relies on results from type theory. Cryptography (which includes hash functions, which are ubiquitous in software) is math. Functional languages in particular often embody complex mathematical structures that serve as design paradigms. Many data structures and algorithms rely on mathematical proofs. Etc. That is also a fact that ought to be taught in school :-)
He doesn't have to give proofs. Just explaining the intuition behind each formula doesn't take that long and will help the students understand how and when to use them. Giving intuitions really isn't esoteric trivia for advanced students, it's something that will make solving problems easier for everyone relative to if they just memorized each individual case where each formula applies.

I suspect this is typical mind fallacy at work. There are many students who either can't, or don't want to, learn mathematical intuitions or explanations. They prefer to learn a few formulas and rules by rote, the same way they do in every other class.

There are many students who either can't, or don't want to, learn mathematical intuitions or explanations. They prefer to learn a few formulas and rules by rote, the same way they do in every other class.

Former teacher confirming this. Some students are willing to spend a lot of energy to avoid understanding a topic. They actively demand memorization without understanding... sometimes they even bring their parents as a support; and I have seen some of the parents complaining in the newspapers (where the complaints become very unspecific, that the education is "too difficult" and "inefficient", or something like this).

Which is completely puzzling the first time you see this as a teacher, because in every internet discussion about education, teachers are criticized for allegedly insisting on memorization without understanding, and every layman seems to propose new ideas about education with fewer facts and more "critical thinking". So, you get the impression that there is a popular demand for understanding instead of memorization... and you go to the classroom believing you will fix the system... and there is almost a revolution against you, outraged [...]

Speaking as a student: I sympathize with Benito, have myself had his sort of frustration, and far prefer understanding to memorization... yet I must speak up for the side of the students in your experience. Why?

Because the incentives in the education system encourage memorization, and discourage understanding.

Say I'm in a class, learning some difficult topic. I know there will be a test, and the test will make up a big chunk of my grade (maybe all the tests together are most of my grade). I know the test will be such that passing it is easiest if I memorize — because that's how tests are. What do I do?

True understanding in complex topics requires contemplation, experimentation, exploration; "playing around" with the material, trying things out for myself, taking time to think about it, going out and reading other things about the topic, discussing the topic with knowledgeable people. I'd love to do all of that...

... but I have three other classes, and they all expect me to read absurd amounts of material in time for next week's lecture, and work on a major project apiece, and I have no time for any of those wonderful things I listed, and I have had four hours of sleep (an [...]

Ah. I think this is why I'm finding physics and maths so difficult, even though my teachers said I'd find it easy. It's not just that the teachers have no incentive to make me understand, it's that because teachers aren't trained to teach understanding, when I keep asking for it, they don't know how to give it... This explains a lot of their behaviour.

Even when I've sat down one-on-one with a teacher and asked for the explanation of a piece of physics I totally haven't understood, the guy just spoke at me for five/ten minutes, without stopping to ask me if I followed that step, or even just to repeat what he'd said, and then considered the matter settled at the end without questions about how I'd followed it. The problem with my understanding was at the beginning as well, and when he stopped, he finished as if delivering the end of a speech, as though it were final. It would've been a little awkward for me to ask him to re-explain the first bit... I thought he was a bad teacher, but he's just never been incentivised to continually stop and check for understanding, after deriving the requisite equations.

And that's why my maths teacher can never answer questions that go under the s... (read more)

If you're really curious, have you considered a private maths tutor? I wouldn't go anywhere near the sort of people who help people cram for exams, but if there's a local university you might find a maths student (even an undergrad would be fine) who'd actually enjoy talking about this sort of thing and might be really grateful for a few pounds an hour. Hell, if you find someone who really likes the subject and can talk about it you may only have to buy them a coffee and you'll have trouble getting them to shut up!
0Ben Pace8y
Thanks for the tip, and no, I hadn't considered going out and looking for maths students. I mainly spend my time reading good textbooks (e.g. Art of Problem Solving). I had a maths tutor once, although I didn't get out of it what I wanted.
Why do you think that?
4Ben Pace8y
Oops, I didn't mean to sound quite so arrogant, and I merely meant in the top bit of the class. If you do want to know my actual reasons for thinking so, off the top of my head I'd mention teachers saying so generally, teachers saying so specifically, performance in maths competitions, a small year group such that I know everyone in the class fairly well and can see their abilities, observation of marks (grades) over the past six years, and I get paid to tutor maths to students in lower years. Still, edited.
Word of advice: don't put too much attention into your "potential". That's an unfalsifiable hypothesis that you can use to inflate your ego without actually, you know, being good. Look at your actual results, and only those.
I schlepped through a physics degree without understanding much of anything, and then turned to philosophy to solve the problem... the rest is ancient history.
3Ben Pace8y
From what I hear, philosophy is mostly ancient history.
It's mostly mental masturbation where ancient history plays the role of porn...
7Ben Pace8y
writes down in list of things people have actually said to me
Kinda like this site. :-)
This site has different preferences in pr0n :-P
I had this experience in a context of high school, with no homework and no additional study at home.
1Said Achmiz8y
None of the students' classes assigned any homework?!
Some of them probably did, but most didn't. The "no homework and no additional study at home" part was meant only for computer science, which I taught.
This is not usually true in the context of physics. I recently taught a physics course, the final was 3 questions, the time limit was 3 hours. Getting full credit on a single question was enough for an A. Memorization fails if you've never seen a question type before.
Not all tests are like that. I had plenty of tests in math that did require understanding to get a top mark. Memorization can get you enough points to pass the test but not all points.
It's more useful than that, even. There are also times where the problem isn't necessarily memorization, but just lapse of insight that makes it hard to realize that a problem as presented matches one of your pre-canned equations, even though it can be solved with one of them. Panic sets in, etc. In situations like that, particularly in those years when you have calculus and various transforms in your toolkit (even if they aren't strictly /expected/), you can solve the problem with those power tools instead, and having understood and being able to derive solutions to closely related problems from basic principles ought to be fairly predictive of you being able to generate a correct answer in those situations.

My first explanation was that understanding is the best way, but memorization can be more efficient in short term, especially if you expect to forget the stuff and never use it again after the exam. Some subjects probably are like this, but math famously is not. Which is why math is the most hated subject.

Another explanation was that the students probably never actually had an experience of understanding something, at least not in the school, so they literally don't understand what I was trying to do.

What do you think about these other possible explanations?

  1. Some of these students really can't learn to prove mathematical theorems. If exams required real understanding of math, then no matter how much these students and their teachers tried, with all the pedagogical techniques we know today, they would fail the exams.

  2. These students really have very unpleasant subjective experiences when they try to understand math, a kind of mental suffering. They are bad at math because people are generally bad at doing very unpleasant things: they only do the absolute minimum they can get away with, so they don't get enough practice to become better, and they also have trouble concentrating

... (read more)
It could be different explanations for different people. This said, options 1 and 2 seem to contradict my experience that students object even to explanations of relatively simple non-mathy things. My experience comes mostly from high school where I taught everything during the lessons, no homework, no home study; this seems to rule out option 3. Option 4 seems plausible, I just feel it is not the full explanation; it's more like collective cooperation against something that most students already dislike individually.
I'm closer to the typical mind than most people here with regard to math. I deeply loved humanities and thought of math and mathy fields as completely sterile and lifeless up until late high school, when I first realized that there was more to math than memorizing formulas. And then boom it became fun and also dramatically easier. Before that I didn't reject the idea of learning using mathematical intuitions, I just had no idea that mathematical intuitions were a thing that could exist. I suspect that most people learn school-things by rote simply because they don't realize that school-things can be learned another way. This is evidenced by how people don't choose to learn things they actually find interesting or useful by rote. There are quite a few people out there who think "book smarts" and "street smarts" are completely separate things and they just don't have book smarts because they aren't good at memorizing disjointed lists of facts.
This is hard to test. What we need here are studies that test different methods of teaching math on randomly selected people. Of course people self-selecting to participate in the study would ruin it, and most people hate math after the experience and wouldn't participate unless paid large sums. On the other hand, a study of highschool students who are forced to participate also isn't very useful because the fact of forcing students to study may well be the major reason why they find it a not fun experience and don't study well.
If they get a few formulas and rules by rote, but can't figure out when to apply them because they lack understanding, what does that actually get them? It's not a waste of time to give them a chance of getting something out of it, even if they're almost certainly doomed in this regard.
I'm not saying it's a bad thing in itself, but there's usually not enough time in class to do it; it comes at the expense of the rote learning which these students need to pass the exams.
This is very much true, as I was one of those students myself. I did care about passing exams, not learning math.
I haven't seen them mentioned in this thread, so thought I'd add them, since they're probably valid and worth thinking about:

  • The utility of a math understanding, combined with the skills required for doing things such as mathematical proofs (or having a deep understanding of physics), is low for most humans; much lower than rote memorization of some simple mathematical and algebraic rules. Consider, especially, the level of education that most will attain, and that the amount of abstract math and physics exposure in that time is very small. Teaching such things in average classrooms may on average be both inefficient and unfair to the majority of students. You're looking for knowledge and understanding in all the wrong places.
  • The vast majority of public education systems are, pragmatically speaking, tools purpose-built and designed to produce model citizens, with intelligence and knowledge gains seen as beneficial but not necessary side effects. I.e., as long as the kids are off the streets; if they're going to get good jobs as a side effect, that's a bonus. You're using the wrong tools for the job (either use better tools, or misuse the tools you have to get the job you want done, right).
I've noticed that one of the biggest things holding me back in math/physics is an aversion to thinking too hard or too long about math and physics problems. It seems to me that if I were able to overcome this aversion and math were as fun as playing video games, I'd be a lot better at it.

You have to want to be a wizard.

Plenty of us took the Wizard's Oath as kids and still have a hard time in math classes sometimes.
I think everyone has trouble in math class, eventually.
From here. Or as I just think of it, if you don't at least have a hard time sometimes, if not fail sometimes, you're not shooting high enough.
If I don't get a game over at least once, the game is too easy.
Is that an Umeshism?
Almost, but not quite. "If you never get a game over, you're playing games that are too easy" would indeed be a Umeshism, but this is a complaint about easy games rather than a suggestion that I should be playing harder ones.
Not in my experience, unless you're talking about trouble teaching them. It's very possible to run out of classes before you hit anything truly difficult (in my country there are no more classes after Masters level, a PhD student is expected to be doing research; the American notion of "all but dissertation" provokes endless amusement, here you're "all but dissertation" from day 1).
A system where a non-genius math student never faces a challenging math class would probably "provoke endless amusement" from an American grad student, since to them it means that the program is too weak to be considered serious.
If you literally never had trouble in math class, you are a rare mind of the Newton/Gauss calibre, and you should go get your Fields Medal before you are 40 :).
I had trouble in my Masters (a combination of course choice and bad luck) and so didn't do a PhD. But we're talking about the top university in at least the country, and by some accounts the hardest non-research course in the world. I'm pretty sure that going a different route I could've got to the point of starting a PhD before hitting anything difficult. I do sometimes think I should've chased the Fields medal, but I'm ultimately happier the way things turned out. I worked my ass off the whole time in school/university; nowadays I earn a good living doing fun things, but my evenings and weekends are my own, and I've got a much better social life.
Huh. Yes, I guess that in retrospect I wouldn't be the only one.
This is your secret?
You have to want to learn how to be a wizard.
You have to like to learn how to be a wizard.

if I was able to overcome this aversion and math was as fun as playing video games

Good video games are designed to be fun, that is their purpose. Math, um, not so much.

And at least some math instructors effectively teach that if you aren't already finding (their presentations of) math fascinating, that you must just not be a Math Person.

Math is a bit like lifting weights. Sitting in front of a heavy mathematical problem is challenging. The job of a good teacher isn't to remove the challenge. Math is about abstract thinking, and a teacher who tries to spare his students from doing abstract thinking isn't doing it right.

Deliberate practice is mentally taxing.

The difficult thing as a teacher is to motivate the student to face the challenge whether the challenge is lifting weights or doing complicated math.

The job of a good teacher is to find a slightly less challenging problem, and to give you that problem first. Ideally, to find a sequence of problems very smoothly increasing in difficulty. Just like a computer game doesn't start with the boss fight, although some determined players would win that, too.
No. Being good at math is about being able to keep your attention on a complicated proof even if it's very challenging and your head seems like it's going to burst. If you want to build muscles you don't slowly increase the amount of weight and keep it at a level where it's effortless. You train to exhaustion of the given muscles. Building the mental stamina to tackle very complicated abstract problems that aren't solvable in five minutes is part of a good math education.

Deliberate practice is supposed to feel hard. A computer game is supposed to feel fun. You can play a computer game for 12 hours. A few hours of deliberate practice, on the other hand, are usually enough to get someone to the brink of exhaustion. If you only face problems in your education that are smooth like a computer game, you aren't well prepared for facing hard problems in reality.

A good math education teaches you the mindset that's required to stick with a tough abstract problem and tackle it head-on even if you can't fully grasp it after looking at it for 30 minutes. You might not use calculus at your job, but if your math education teaches you the ability to stay focused on hard abstract problems, then it fulfilled its purpose. You can teach calculus by giving the student concrete real-world examples, but that defeats the point of the exercise. If we are honest, most students won't need the calculus at their job. It's not the point of math education. At least in the mindset in which I got taught math at school in Germany.
You don't put on so much weight that you couldn't possibly lift it, either (nor so much weight that you could only lift it with atrocious form and risk of injury, the analogue of which would be memorising a proof as though it were a prayer in a dead language and only having a faulty understanding of what the words mean).
Yes, memorizing a proof isn't the point. You want to derive proofs. I think it's perfectly fine to sit for an hour in front of a complicated proof and not be able to solve it. A ten-year-old might not have that mental stamina, but a good math education should teach it, so it's there by the end of school.
This kind of philosophy sounds like it's going to make a few people very good at tackling hard problems, while causing everyone else to become demotivated and hate math.
Motivation has a lot to do with knowing why you are engaging in an action. If you think things should be easy and they aren't, you get demotivated. If you expect difficulty and manage to face it, then that doesn't destroy motivation. I don't think getting the philosophy right is easy. One thing that my school teachers got very wrong was believing in talents instead of believing in a growth mindset. I identified myself as smart, so I didn't learn the value of putting in time to practice; I tried to get by with the minimum of effort. I think Cal Newport wrote a lot of interesting things about what a good philosophy of learning would look like.

There's a certain education philosophy where you have standardized tests, then you do gamified education to have children score well on those tests. Students have pens with multiple colors and are encouraged to draw mind maps. Afterwards the students go off to follow their passions and live the American dream. It ticks all the boxes of ideas that come out of California.

I'm not really opposed to someone building some gamified system to teach calculus, but at the same time it's important to understand the trade-offs. We don't want to end up with a system where the attention span of the students who come out of it is limited to playing games.
I think that the way good games teach things is basically being engaging by constantly presenting content that's in the learner's zone of proximal development, offering any guidance needed for mastering that, and then gradually increasing the level of difficulty so as to constantly keep things in the ZPD. The player is kept constantly challenged and working at the edge of their ability, but because the challenge never becomes too high, the challenge also remains motivating all the time, with the end result being continual improvement.

For example, in a game where your character may eventually have access to 50 different powers, throwing them at the player all at once would be overwhelming when the player's still learning to master the basic controls. So instead the first level just involves mastering the basic controls and you have just a single power that you need to use in order to beat the level, then when you've indicated that you've learned that (by beating the level), you get access to more powers, and so on. When they reach the final level, they're also likely to be confident about their abilities even when it becomes difficult, because they know that they've tackled these kinds of problems plenty of times before and have always eventually been successful in the past, even if it required several tries.

The "math education is all about teaching people how to stay focused on hard abstract problems" philosophy sounds to me like the equivalent of throwing people at a level where they had to combine all 50 powers in order to survive, right from the very beginning. If you intend on becoming a research mathematician who has to tackle previously unencountered problems that nobody has any clue of how to solve, it may be a good way of preparing you for it. But forcing a student to confront needlessly difficult problems, when you could instead offer a smoothly increasing difficulty, doesn't seem like a very g
Not only research mathematicians but basically anyone who's supposed to research previously unencountered problems. That's the ability that universities are traditionally supposed to teach. If that's not what you want to teach, why teach calculus in the first place? If I need an integral I can ask a computer to calculate it for me. Why teach calculus to someone who wants to be a software engineer?

There's a certain idea of egalitarianism according to which everyone should have a university education. That wasn't the point of having universities. We have universities to teach people to tackle previously unencountered problems. If you want to be a carpenter you don't go to university; you apprentice with an existing carpenter. Universities are not structured to be good at teaching trades like carpentry.
Isn't that rather "problems that can't be solved using currently existing mathematics"? If it's just a previously unencountered problem, but one that can be solved using the tools from an existing branch of math, then what you actually need is experience from working with those tools, so that you can recognize it as a problem that can be tackled with those tools. As well as having had plenty of instruction in actually breaking down big problems into smaller pieces. And even research mathematicians will primarily need a good and thorough understanding of the more basic mathematics that they're building on. The ability to tackle complex unencountered problems that you have no idea how to solve is definitely important, but I would still prioritize giving them a maximally strong understanding of the existing mathematics first.

But I wasn't thinking that much in the context of university education, more in the context of primary/secondary school. Math offers plenty of general-purpose concepts that may greatly enhance one's ability to think in precise terms: to the extent that we can make the whole general population learn and enjoy those concepts, it might help raise the sanity waterline. I agree that calculus probably isn't very useful for that purpose, though. A thorough understanding of basic statistics and probability would seem much more important.
There's an interesting paper about how doing science is basically about coping with feeling stupid. No matter whether you do research in math or in biology, you have to come to terms with tackling problems that aren't easily solved. One of the huge problems with Reddit-style New Atheists is that they don't like to feel stupid. They want their science education in easily digestible form.

I agree, that's an important skill and probably undertaught. Nobody understands all math. For practical purposes it's often more important to know which mathematical tools exist and to have the ability to learn to use those tools. I don't need to be able to solve integrals. It's enough to know that integrals exist and that Wolfram Alpha will solve them for me.

I'm not saying that one shouldn't spend any time on easy exercises. Spending a third of the time on problems that are really hard might be a ratio that's okay.

Statistics are important, but it's not clear that math statistics classes help. Students that take them often think that real-world problems follow a normal distribution.
Calculus isn't as important to software engineering as some other branches of math, but it can still be handy to know. I've mostly encountered it in the context of physical simulation: optics stuff for graphics rendering, simplified Navier-Stokes for weather simulation, and orbital mechanics, to name three. Sometimes you can look up the exact equation you need, but copying out of the back of a textbook won't equip you to handle special cases, or to optimize your code if the general solution is too computationally expensive. Even that is sort of missing the point, though. The reason a lot of math classes are in a traditional CS curriculum isn't because the exact skills they teach will come up in industry; it's because they develop abstract thinking skills in a way that classes on more technical aspects of software engineering don't. And a well-developed sense of abstraction is very important in software, at least once you get beyond the most basic codemonkey tasks.
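To make the point concrete, here's a minimal sketch (in Python, with illustrative numbers of my own choosing) of one place calculus shows up in simulation code: when a closed-form answer from the back of a textbook doesn't fit your special case, you fall back on a numerical approximation, and understanding the underlying calculus is what tells you how the error behaves.

```python
import math

def trapezoid(f, a, b, n=1000):
    """Approximate the definite integral of f over [a, b]
    using n trapezoids (error shrinks roughly as 1/n**2)."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# The integral of sin(x) over [0, pi] is exactly 2;
# the approximation should land very close to that.
approx = trapezoid(math.sin, 0.0, math.pi)
```

Knowing the error term is exactly the kind of understanding that lets you judge whether n=1000 is overkill or dangerously coarse for the problem at hand, or optimize when the general solution is too expensive.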
To that extent the CS curriculum shouldn't be evaluated by how well people do calculus but by how well it teaches abstract thinking. I do think that the kind of abstract thinking where you don't know how to tackle a problem because the problem is new is valuable to software developers.
This is a very strong set of assertions which I find deeply counterintuitive. Of course that doesn't mean it isn't true. Do you have any evidence for any of it?
Which ones do you find counterintuitive? It's a mix of referencing a few very modern ideas with more traditional ideas of education while staying away from the no-child-left-behind philosophy of education. I can make any of the points in more depth, but the post was already long, and I'm sort of afraid that people don't read my posts on LW if they get too long ;) Which ones do you find particularly interesting?
Of course bad instructors can say this as easily as good ones. But isn't it true to say that if you have reasonably wide experience with different presentations of math, and you don't find any of them fascinating, then you're probably not a Math Person? Or do Math People not exist as a natural category?
I'd be ever so interested in the answer to this question. It seems really obvious that some people are good at maths and some people aren't. But it's also really obvious that some people like sprouts. And it turns out as far as I'm aware that it's possible to like sprouts for both genetic and environmental reasons. I'd love to know the causes of mathematical ability. Especially since it seems to be possible to be both 'clever' and 'bad at maths'. Does anyone know what the latest thinking on it is? My recent experiences trying to design IQ tests tell me that that's both innate and very trainable. In fact I'd now trust the sort of test that asks you how to spell or define randomly chosen words much more than the Raven's type tests. It's really hard to fake good speling, whereas the pattern tests are probably just telling you whether you once spent half an hour looking closely at the wallpaper. Which is exactly the reverse of the belief that I started with.
Related: some people believe that programming talent is very innate and people can be sharply separated into those who can and cannot learn to write code. Previously on LW here, and I think there was an earlier more substantive post but I can't find it now. See also this. Gwern collected some further evidence and counterevidence.
It was probably mentioned in the earlier discussions, but I believe the "two humps" pattern can easily be explained by bad teaching. If it happens in the whole profession, maybe no one has yet discovered a good way to teach it, because most of the people who understand the topic were autodidacts.

As a model, imagine that programming ability is a number. You come to school with some value between 0 and 10. A teacher can give you a +20 bonus. Problem is, the teacher cannot explain the most simple stuff which you need to get to level 5; maybe because it is so obvious to the teacher that they can't understand how specifically someone else would not already understand it. So the kids with starting values between 0 and 4 can't follow the lessons and don't learn anything, while the kids with starting values 5 to 10 get the +20 bonus. At the end, you get the "two humps": one group with values 0 to 4, another group with values 25 to 30. And the worst part is that this belief creates a spiral, because when everyone has observed the "two humps" in adults, then if some student with starting value 4 does not understand the lesson, we don't feel a need to fix this; obviously they were just not meant to understand programming.

What are those starting concepts that some people get and some people don't? Probably things like "the computer is just a mechanical thing which follows some mechanical rules; it has no mind, and it doesn't really understand anything", but you need to feel it on a gut level. (Maybe aspies have a natural advantage here, because they don't expect the computer to have a mind.) It could probably help to play with some simple mechanical machines first, where the kids could observe the moving parts. In other words, maybe we don't only need specialized educational software, but also hardware. A computer in the form of a black box is already too big a piece of magic, prone to be anthropomorphized. You shoul
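That toy model can be simulated directly. A minimal Python sketch, where the 0–10 starting range, the level-5 threshold, and the +20 bonus are just the illustrative numbers from the comment above:

```python
import random

def final_ability(start, threshold=5, bonus=20):
    """Toy model: students who can follow the lessons (start >= threshold)
    gain the full bonus; the rest learn nothing."""
    return start + bonus if start >= threshold else start

random.seed(0)
starts = [random.randint(0, 10) for _ in range(1000)]
finals = [final_ability(s) for s in starts]

# Every student lands in one of two humps (0-4 or 25-30);
# the whole middle range is empty.
low_hump = sum(1 for f in finals if f <= 4)
high_hump = sum(1 for f in finals if f >= 25)
```

A roughly uniform starting distribution comes out bimodal purely because of the teaching threshold; no innate "two kinds of people" is needed to produce the pattern.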
A lot of effort has gone into trying to invent ways of teaching programming to complete newbies. If really no-one has succeeded at all, then maybe it's time to seriously consider that some people can't be taught. A claim that someone cannot be taught by any possible intervention would be a very strong claim indeed, and almost certainly false. But a claim that no-one knows how to teach this even though a lot of people have tried and failed for a long time now, makes predictions pretty similar to the theory that some people simply can't be taught. This model matches the known facts, but it doesn't tell us what we really want to know. What determines what value people start out with? Does everyone start out with 0 and some people increase their value in unknown, perhaps spontaneous ways? Or are some people just born with high values and they'll arrive at 5 or 10 no matter what they do, while others will stay at 0 no matter what? I don't know if educators have tried teaching the concepts you suggest explicitly.
4fubarobfusco8y
The researcher didn't distinguish the conjectured cause (bimodal differences in students' ability to form models of computation) from other possible causes (just to name one: some students are more confident, and computing classes reward confidence). And the researcher's advisor later described his enthusiasm for the study as "prescription-drug induced over-hyping" of the results... Clearly further research is needed. It should probably not assume that programmers are magic special people, no matter how appealing that notion is to many programmers.

Once upon a time, it would have been a radical proposition to suggest that even 25% of the population might one day be able to read and write. Reading and writing were the province of magic special people like scribes and priests. Today, we count on almost every adult being able to read traffic signs, recipes, bills, emails, and so on, even the ones who do not do "serious reading".

A problem with programming education is that it is frequently unclear what the point of it is. Is it to identify those students who can learn to get jobs as programmers in industry or research? Is it to improve students' ability to control the technology that is a greater and greater part of their world? Is it to teach the mathematical concepts of elementary computer science?

We know why we teach kids to read. The wonders of literature aside, we know full well that people cannot get on as competent adults if they are not literate. Literacy was not a necessity for most people two thousand years ago; it is a necessity for most people today. Will programming ever become that sort of necessity?
That was the thinking at the dawn of personal computing, back in the 80s. Turns out the answer is "no".
"Not yet."
You think the general population of the future will be hacking code into text editors? That isn't even ubiquitous in the industry, since you can call yourself a developer if you only know how to use graphical tools. They'll be doing something, but it will be analogous to electronic music production as opposed to playing an instrument.
Computing hasn't even existed for a century yet. Give it time. There will come a day when ordinary educated folks quicksort their playing cards when they want to put them in order. :)
I insertion sort. :P
Doesn't almost everyone? I've always heard that as the inspiration for insertion sorting.
No way, I pigeonhole sort.
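For the curious, the hand-of-cards strategy mentioned above looks roughly like this in Python (a minimal sketch, not tuned code): each new card is slid left past the larger cards until it fits.

```python
def insertion_sort(cards):
    """Sort a list in place, the way most people order a hand of cards."""
    for i in range(1, len(cards)):
        card = cards[i]
        j = i - 1
        while j >= 0 and cards[j] > card:
            cards[j + 1] = cards[j]  # shift the larger card right
            j -= 1
        cards[j + 1] = card
    return cards
```

It's quadratic in the worst case, but for the dozen-odd cards in a hand that hardly matters, which is presumably why it's the algorithm people rediscover naturally.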
My bet would be on childhood experience. For example the kinds of toys used. I would predict a positive effect of various construction sets. It's like "Reductionism for Kindergarten". :D The silent pre-programming knowledge could be things like: "this toy is interacted with by placing its pieces and observing what they do (or modelling in one's mind what they would do), instead of e.g. talking to the toy and pretending the toy understands".
An anecdatum. The only construction set I had as a boy was lego, and my little sister played with it too. As far as I know, there was no feeling that it was my toy only. We're five years apart so all my stuff got passed down or shared. My sister's very clever. We both did degrees in the same place, mine maths and hers archaeology. She's never shown the slightest interest in programming or maths, whereas I remember the thunderbolt-strike of seeing my first computer program at ten years old, long before I'd ever actually seen a computer. I nagged my parents obsessively for one until they gave in, and maths and programming have been my hobby and my profession ever since. I distinctly remember trying to show Liz how to use my computer, and she just wasn't interested. My parents are entirely non-mathematical. They're both educated people, but artsy. Mum must have some natural talent, because she showed me how to do fractions before I went to school, but I think she dropped maths at sixteen. I think it's fair to say that Dad hates and fears it. Neither of them knew the first thing about computers when I was little. They just weren't a thing that people had in the 70s, any more than hovercraft were. Every attempt my school made to teach programming was utterly pointless for me, I either already knew what they were trying to teach or got it in a few seconds. The only attempts to teach programming that have ever held my attention or shown me anything interesting are SICP, and the algorithms and automata courses on Coursera, all of which I passed with near-perfect scores, and did for fun. So from personal experience I believe in 'natural talent' in programming. And I don't believe it's got anything to do with upbringing, except that our house was quiet and educated. You'd have had to work quite hard to stop me becoming a programmer. And I don't think anything in my background was in favour of me becoming one. And anything that was should have favoured my sister too.
And another anecdote: I've got two friends who are talented maths graduates, and somehow both of them had managed to get through their first degrees without ever writing programs. Both of them asked me to teach them.

The first one I've made several attempts with. He sort-of gets it, but he doesn't see why you'd want to. A couple of times he's said 'Oh yes, I get it, sort of like experimental mathematics'. But any time he gets a problem about numbers he tries to solve it with pen and paper, even when it looks obvious to me that a computer will be a profitable attack.

The second, I spent about two hours showing him how to get to "hello world" in python and how to fetch a web page. Five days later he shows me a program he's written to screen-scrape betfair and place trades automatically when it spots arbitrage opportunities. I was literally speechless.

So I reckon that whatever-makes-you-a-mathematician and whatever-makes-you-a-programmer might be different things too. Which is actually a bit weird. They feel the same to me.
That seems like rather a strong claim. Everyone who can program now was a complete newbie at some point. Presumably they did not learn by a bolt of divine inspiration out of the blue sky.
The sources linked above claim that some can be taught, and some (probably most of the population) can't, no matter what you do. And of those who can learn, many become autodidacts in a suitable environment. Of course they don't reinvent programming themselves, they do learn it from others, but the same could be said of any skill or knowledge. And yet there are skills which clearly have very strong inborn dispositions. It's being claimed that programming is such a skill, and an extreme one at that, with a sharply bimodal distribution.
Bad teaching? There's an even simpler explanation (at least regarding programming): autodidacts with previous experience versus regular students without previous experience. The fact that the teaching is often geared towards the students with previous experience and suffers from a major tone of "Why don't you know this already?" throughout the first year or two of undergrad doesn't help a bit.
"I can teach you this only if you already know it" seems like bad teaching to me. Not sure if we are not just debating definitions here.
I don't think we're even debating. Yes, that is the definition of bad teaching. My assertion is that CS departments have gotten so damn complacent about receiving a steady stream of autodidact programmers as their undergrad entrants that they've stopped bothering with actually teaching the introductory courses. They assign work, they expect to receive finished work, they grade the finished work, but it all relies on the tacit assumption that the "good students" could already do the work when they entered the classroom.
Only a small fraction of math has practical applications, the majority of math exists for no reason other than thinking about it is fun. Even things with applications had sometimes been invented before those applications were known. So in a sense most math is designed to be fun. Of course it's not fun for everyone, just for a special class of people who are into this kind of thing. That makes it different from Angry Birds. But there are many games which are also only enjoyed by a specific audience, so maybe the difference is not that fundamental. A large part of the reason the average person doesn't enjoy math is that unlike Angry Birds math requires some effort, which is the same reason the average person doesn't enjoy League Of Evil III.
Spot on. Pure, fun math does benefit society directly in at least one way, however, in that the opportunity to engage in it can be used to lure very smart people into otherwise unpalatable teaching jobs. In fact, that seems to be the main point of "research" in most less-than-productive fields (i.e. the humanities).
Is it clear that this is in the best interests of society? It would seem to me the end result is bad teaching. Back when I was in undergrad, the best researchers were the worst teachers (for obvious reasons - they were focused on their research and didn't at all care about teaching). When I was in grad school in physics, the professor widely considered the strongest teacher was denied tenure (cited AGAINST him in the decision was that he had written a widely used textbook), etc. Also, the desire for tenure-track profs to dodge teaching is why the majority of math classes at many research institutions were taught by grad students.
Interesting. Did there seem to be any pedagogical benefit to having relatively easy access to research-level experts, though?
In graduate school, for special topics classes there were usually only 1 or 2 professors that COULD teach a certain class (and only 3 or 4 students interested in taking it) - so when you are talking cutting edge research topics, it's a necessity to have a researcher, because no one else will be familiar enough with what's going on in the field. Outside of that, not really. Good teaching takes work, so if you put someone in front of the class whose career advancement requires spending all their time on research, then the teaching is just a potentially career-destroying distraction. Also, at the intro level, subject-pedagogy experts tend to do better (i.e. the physics education group were measurably more effective at teaching physics than other physics groups. So much so that I think now they exclusively teach the large physics courses for engineers).
I mean, it's easier to get research positions with those professors, and those are learning experiences, but the students generally get very little out of it during the actual class.
Thinking for a long time is one of the classic descriptions of Newton; from John Maynard Keynes's "Newton, the Man":
Paul Graham also mentions focus in this article.
I think math is more fun than playing video games. But I guess it's subjective.
Lucky you.
He brags shamelessly about his wide variety of interests: drumming, lockpicking, PUA, biology, Tannu Tuva, etc.
The Feynman divorce:
You're right.
Indeed, terse "explanations" that handwave more than explain are a pet peeve of mine. They can be outright confusing and cause more harm than good IMO. See this question on phrasing explanations in physics [] for some examples.

Being wrong about something feels exactly the same as being right about something.

-- many different people, most recently user chipaca on HN

Hmm, what about such things as feeling that you need to defend the truth from criticism rather than find a way to explain it better? Or nagging doubts that you're ignoring, or a feeling that your opponents are acting the way they are because they're stupid or evil? Or wanting to censor someone else's speech? I take all these things as alarm signals. A communist friend of mine once said, after I'd nailed her into a corner in a political argument about appropriate rates of pay during a firemen's strike, "Well, under socialism there wouldn't be as many fires." I reckon that there must be a feeling associated with that sort of thing.
Defending the truth from criticism also feels exactly the same as defending what you wrongly think is the truth from criticism. The feelings you list correspond to very common ways people behave. So they're very weak evidence that you're wrong about something. Unless you're a trained rationalist who very rarely has these feelings / behaviors. Most people first acquire a belief - whether by epistemologically legitimate ways or not - and then proceed to defend it, ignore contrary evidence and feel opponents to be stupid, because that's just the way most people deal with beliefs that are important to them.
This is the most forceful version I've seen (assumed it had been posted before, discovered it probably hasn't, won't start a new thread since it's too similar): Kathryn Schulz, Being Wrong. But I'm not comfortable endorsing either of these quotes without a comment.

chipaca's quote (and friends) suggest to me that

* my "being wrong" and "being right" are complementary hypotheses, and
* my subjective feelings are not evidence either way.

Schulz's quote (and book) suggest to me that

* my "being wrong" is broadly and overwhelmingly true (my map is not the territory), and
* my subjective feeling of being right is in fact evidence that I am very wrong.

I'd prefer to emphasize that "You are already in trouble when you feel like you’re still on solid ground," or said another way: Becoming less wrong feels different from the experience of going about my business in a state that I will later decide was delusional.
Schulz hasn't been quoted here before, but you might've seen my use of that quote, to which I will add a quote of Wittgenstein making the same point, but much more compressed and concise:
It occurs to me that "being wrong" can be divided into two subcategories -- before and after you start seeing evidence or arguments which undermine your position. With practice, the feeling of being right and seeing confirming information can be distinguished from the feeling of being wrong and seeing undermining information. Unfortunately, the latter feeling is very uncomfortable and it is always tempting look for ways to lessen it.

He said:

When you play bridge with beginners—when you try to help them out—you give them some general rules to go by. Then they follow the rule and something goes wrong. But if you'd had their hand you wouldn't have played the thing you told them to play, because you'd have seen all the reasons the rule did not apply.

from The Last Samurai by Helen DeWitt

“I propose we simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is. Whatever the right answer to that question is—if it has a right answer—this will not jeopardize the plain fact that the intentional stance works remarkably well as a prediction method in these other areas, almost as well as it works in our daily lives as folk psychologists dealing with other people. This move of mine annoys and frustrates some philosophers, who want to blow the whistle and insist on properly settling the issue of what a mind, a belief, a desire is before taking another step. Define your terms, sir! No, I won’t. That would be premature. I want to explore first the power and the extent of application of this good trick, the intentional stance. Once we see what it is good for, and why, we can come back and ask ourselves if we still feel the need for formal, watertight definitions. My move is an instance of nibbling on a tough problem instead of trying to eat (and digest) the whole thing from the outset. “Many of the thinking tools I will be demonstrating are good at nibbling, at roughly locating a few “fixed” points that will help

... (read more)
As far as I understand, he actually does define his terms. Dennett defines a mind as a rational agent/decision algorithm (subject to evolutionary baggage and bugs in the algorithm). Please correct me if I'm wrong.
Ben Pace, 8 years ago:
At this point in the book, he certainly hasn't reached that conclusion. He's merely given parameters under which taking the Intentional Stance is a good idea; when it's useful to treat something as having a mind, beliefs, desires, etc. This, he says, will be a useful stepping stone to figuring out what minds and beliefs and desires really are, and how to know where they exist in this world.

Today is already the tomorrow which the bad economist yesterday urged us to ignore.

-- Henry Hazlitt, Economics in One Lesson

And it seems to be going pretty well!

Ah, but you have not seen the counterfactual.

“Even if it's not your fault, it's your responsibility.”

This is a great tagline for the doctrine of Original Sin.
"Even if it's not your fault, it's your punishment."

“If only there were irrational people somewhere, insidiously believing stupid things, and it were necessary only to separate them from the rest of us and mock them. But the line dividing rationality and irrationality cuts through the mind of every human being. And who is willing to mock a piece of his own mind?”

(With apologies to Solzhenitsyn).

– Said Achmiz, in a comment on Slate Star Codex’s post “The Cowpox of Doubt”

The original quotation on LW.

“Anything outside yourself, this you can see and apply your logic to it,” she said. “But it’s a human trait that when we encounter personal problems, those things most deeply personal are the most difficult to bring out for our logic to scan. We tend to flounder around, blaming everything but the actual, deep-seated thing that’s really chewing on us.”

Jessica speaking to Thufir Hawat in Frank Herbert's Dune

There is an important difference between “We don’t know all the answers yet” and “Do what feels right, man.” These questions have answers, because humans have biochemistry, and we should do our best to find them and live by the results.

~J. Stanton, "The Paleo Identity Crisis: What Is The Paleo Diet, Anyway?"

But the answers might be specific to each individual because the biochemistry of humans is not exactly the same.
In that case, the questions have complicated answers. The best dieting advice might be "first sequence your personal microbiome then consult this lookup table..."
The important thing is not that the answers are complicated, but that the answers are different for different people. "Consult a lookup table" is not an answer, it's advice how to get to one.
Individuals being different from each other shouldn't necessarily diminish the significance of biochemistry. Biochemistry should explain not just our similarities but overarching principles that organize and explain the differences.
My point wasn't that biochemistry is not important. My point was that the answers you get from biochemistry might be complicated and limited in application.
It's not at all clear that someone who knows all the biochemistry will outperform someone who's good at feeling what goes on in his body. In the absence of good measurement instruments, feelings allow you to respond to specific situations much better than theoretical understanding does.

I am told that the natural feeling for gravity and balance is worse than useless to a pilot.

I am told this as well.
See also []
Depending on the outcome specified and the type of feelings attended to, of course.
Yes, being able to tell apart the feeling that makes you crave sugar from the feeling that tells you that you should eat some flesh to fix your B12 deficiency isn't easy. Getting clear about the outcome that you want to achieve with your eating choices is also not straightforward. Both are skills for which understanding biochemistry is secondary.
As far as I can tell, distinguishing between those sorts of feeling is a matter of accumulated experience. There aren't classes of feelings, some of which are desires for things which are bad for you and others which are desires for what you need.
I'm not 100% sure, because I'm not that good at making eating choices, but there are people who make intuitive eating choices, who wouldn't eat sugared food, who eat mostly raw vegan, and who have their raw steak once a month to stock up on B12 when their body calls for it (or whatever the body actually calls for when it brings up the desire to eat flesh). With cognitive thinking, there's far- and near-thinking. I think that exists also for feelings. Fun would be a word that generally describes a near-feeling, while life satisfaction refers to a more far-feeling. A meditation is finished when you feel it's finished. If you don't have that feeling, which can take years to develop, you need a clock to tell you when 15 minutes are over, because otherwise you might use it as an excuse to quit the meditation when things become really hard.

Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.

Hans Moravec, Wikipedia/Moravec's Paradox

The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

Stephen Pinker, Wikipedia/Moravec's Paradox

What was the ratio of phone time spent talking to human vs computer receptionists when Pinker published this quote in 2007? For that matter, how much non-phone time was being spent using a website to perform a transaction that would have previously required interaction with a human receptionist? Pinker understood AI correctly (it's still way too hard to handle arbitrary interactions with customers), yet he failed to predict the present, much less the future, because he misunderstood the economics. Most interactions with customers are very non-arbitrary. If 10% need human intervention, then you put a human in the loop after the other 90% have been taken care of by much-cheaper software. If you were to say "a machine can't do everything a horse can do", you'd be right, even today, but that isn't a refutation of the effect of automation on the economic prospects of equine labor.
Except that in exponentially-increasing computation-technology-driven timelines, decades are compressed into minutes after the knee of the exponential. The extra time a good cook has, isn't long. Let's hope that we're not still paying rent then, or we might find ourselves homeless.

I have long ceased to argue with people who prefer Thursday to Wednesday because it is Thursday.

G. K. Chesterton, attributed.

Upvoted. I would've preferred the following version:
Ben Pace, 8 years ago:
Might someone offer an explanation of this to me?

On its own, these words might be uttered to express any of several things. A little search turns up a more extended form, with a claimed source:

My attitude toward progress has passed from antagonism to boredom. I have long ceased to argue with people who prefer Thursday to Wednesday because it is Thursday.

Said to be by G.K. Chesterton in the New York Times Magazine of February 11, 1923, which appears to be a real thing, but one which is not online. According to this version, he is jibing at progressivism, the adulation of the latest thing because it is newer than yesterday's latest thing.

ETA: Chesterton uses the same analogy, in rather more words, here.

If I advance the thesis that the weather on Monday was better than the weather on Tuesday (and there has not been much to choose between most Mondays and Tuesdays of late), it is no answer to tell me that the time at which I happen to say so is Tuesday evening, or possibly Wednesday morning.

It is vain for the most sanguine meteorologist to wave his arms about and cry: “Monday is past; Mondays will return no more; Tuesday and Wednesday are ours; you cannot put back the clock.” I am perfectly entitled to answer that the changing face of the clock does not alter the recorded facts of the barometer.

Note that this accentuates the relevance of a detail that might be skipped over in the original quote- that Thursday comes after Wednesday. That is, this may be intended as a dismissal of the 'all change is progress' position or the 'traditions are bad because they are traditions' position.
Not to mention the people who think accusing their opponents of being "on the wrong side of history" constitutes an argument.
So you are not going to argue that history has shown that socialism has failed?
That's using history as evidence. What I was complaining about is closer to the people who declare that all opponents of a change that they plan to implement (or at best have only implemented at most several decades ago) are "on the wrong side of history".
I think you may not be interpreting the phrase "the wrong side of history" as people who say it mean it. There's a classic saying: "A new scientific truth does not triumph by convincing its opponents and making them see light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." -- Max Planck

Effectively there's a position that's obviously correct, but there are also people who are just too hidebound and change averse to recognize it, and progress can't be made until they die off. But progress will be made, because the position is correct. When you tell someone they are on the wrong side of history you are reminding them that they are behaving like one of the old men Planck mentions. Put another way, what it's saying is "if you look at people who don't come from the past and don't have a large status quo bias, you will notice a trend".
In physics, yes. In history / political science, no.
"Slavery is wrong" isn't obviously correct?
I find this comment particularly ironic given your chosen username.
"War is wrong" isn't obviously correct?
I think the majority of the population believes that there are valid reasons to start a war. R2P etc.
I was talking about war, not wars. Everybody would wish away war if they could. Many people think THIS war needs to be fought.
I wouldn't wish away war unless I also wished away the things we need to go to war for, in which case you could as easily say that I would wish away cancer treatments or firefighters.
People go to war because of war, because they have been attacked. That would get wished away as part of the deal. Or they go to war for less honorable reasons, like grabbing resources, or making forcible converts to a religion. Can't see anything I'd want to keep.
Or because they are being mistreated by others in ways that don't qualify as war.
Inter-state war is by far the least common type of warfare in the modern era, although the proxy wars growing out of the Cold War muddy the waters some. Civil and ethnic warfare is much more common, and I don't think we can say that civil conflicts, at least, can always be described in terms of straightforward aggression and defense against aggression. (Truthfully I wouldn't say that for inter-state wars either, not all of them, but they're a lot easier to spin that way.)
I was using wish away to mean magically get rid of. Unmagically getting rid of it requires unmagically fixing a lot of other things, which is why it hasn't happened.
Magically getting rid of it strikes me as one of those wishes that will backfire horribly in one of several ways, depending on exactly how the wisher defines "war".
Depends. For starters are you counting revolutions and civil wars as "wars"?
The point being that you can't infer that everyone believes in X in a society where X exists. They may dislike it but be unable to do anything about it.
I'm not making that argument. There's polling out there that tells you what people like or dislike. I think that responsibility to protect (R2P) is accepted by a lot of people as a valid reason for military intervention.
Considering it was the norm for several thousand years of history and many philosophers either came out in favor of it or were silent ... no, it's not obviously correct.
There is obviously no one here who will disagree with it. But it is still a moral judgment, not a matter of fact.
Mencius Moldbug does argue that all moral changes after a certain point in time should be rolled back. That timeframe does include the abolition of slavery. I don't know whether there is at the moment someone on LW willing to make the argument for slavery explicitly, but you might find people who hold Moldbug's position. The last census shows a bunch of neoreactionaries.
A former poster here (known elsewhere on the net as "James A. Donald") does disagree with it. He believes that slavery is the rightful state for many people. And for what it's worth, he also believes that moral judgements are matters of fact, in the strong sense of ethical naturalism.
Where can I find evidence linking the sam0345 account to the identity James A. "Jim" Donald?
Somewhat laboriously, by searching LessWrong for his very first postings and working forwards from there, looking for my replies to him and he to me. I recognised him as James A. Donald as soon as he started posting here, from his distinctive writing style and views, which were very familiar to me from his long history of participating in rec.arts.sf.* on USENET. As evidence, I linked to other places on the net where he had posted views identical to what he had just posted here, expressed in very similar terms. He never took notice of my identification, even when replying directly to comments of mine identifying him, but I think it definite.

BTW, while "sam0345" is obviously not a real-world name, I have never seen reason to think that "James A. Donald" is. Searches on that name turn up nothing but his online activity (and a mugshot of an unprepossessing individual of the same name who served 35 years for forgery, and who I have no reason to think has any connection with him). I have almost never, here or anywhere else, seen him post anything personal about himself. He is American, and an Internet engineer, and that's about it. And 10 inches taller than his wife, for what that's worth. I have never seen anyone mention having met him.

His ownership of is unusual, in that it goes back well before the advent of public Internet access and easy private ownership of domain names. Try getting a domain name that short and simple nowadays! They're all taken.
Interesting! Before the great-grandparent I would have assigned a pretty low prior to sam being Jim; I never even considered the possibility explicitly. Now that I'm looking at it closely, sam does use a similar writing style. I'm updating substantially, and now believe there is a roughly 50-75% chance they're the same person. Thanks for answering!
In which meaning do you use the word "correct"?
In which meaning do you use the word meaning?
Is this falsifiable?
Sure, just step back in time. A bit less than two millenia ago one could have said "Effectively there's a position -- that Jesus gifted eternal life to humanity -- that's obviously correct but there are also people who are just too hidebound and change averse to recognize it and progress can't be made until they die off. But progress will be made because the position is correct."
I was actually thinking of eugenics, which was once a progressivist "obviously correct thing where we just need to wait until these luddites die off and everything will be great" thing, until it wasn't. Incidentally a counterexample to "Cthulhu always swims left" too. It's a case where "correct", "right side of history" and "progress" dissociate from each other.

I think you could make a case for totalitarianism, too. During the interwar years, not only old-school aristocracy but also market democracy were in some sense seen as being doomed by history; fascism got a lot of its punch from being thought of as a viable alternative to state communism when the dominant ideologies of the pre-WWI scene were temporarily discredited. Now, of course, we tend to see fascism as right-wing, but I get the sense that that mostly has to do with the mainstream left's adoption of civil rights causes in the postwar era; at the time, it would have been seen (at least by its adherents) as a more syncretic position.

I don't think you can call WWII an unambiguous win for market democracy, but I do think that it ended up looking a lot more viable in 1946 than it did in, say, 1933.

Note terms like the third position [] or third way [].
Seen by some as doomed by history, perhaps. The whole point of US liberalism as I understand the FDR version was to provide a democratic alternative; you may recall this enjoyed some success.
Indeed, many of the most prominent supporters of fascism came from the traditional left. Mussolini was originally a socialist, Mosley defected from the Labour party, and they didn't call it "national socialism" for nothing. In fact, part of the reason why communists and fascists had such mutual loathing (aside from actual ideology) was that they were competing for the same set of recruits. Then again, Quisling and Franco especially were firmly in the right-wing camp. With such concordance from all sides of the political spectrum, it's easy to see how one could conclude that totalitarianism was the next natural stage in history.
Interestingly, if you press the people making that claim for what they mean by "left", their answer boils down to "whatever is in Cthulhu's forward cone".
For a more modern example, wouldn't that have been said for marijuana a few decades ago? Everyone expected that once the older people who opposed marijuana died off and the hippies grew into positions of power, everyone would want it to be legal. That didn't work out. (The support for legalization has gone up recently, but not because of this.)
Guilty as charged.
The point is that decades ago, illegal substance use was popular among people of college age. Yet as those people grew up, they stopped using the substances and did not, once they were in power, try to make them legal. I'm not comparing young people today versus older people today, I'm pointing out that all those marijuana smokers from the 1960's and 1970's didn't grow up and legalize pot. I'm sure back then if you went onto a college campus you'd have heard plenty of sentiment of "when the old fogies die off and we're running the country, we'll legalize weed". The old fogies died off; the people from the 60s and 70s grew up to rule the country, and... it didn't happen.
The peak year for the popularity of marijuana use among young adults (18-25 years old) was 1979, and it was still less than half.
According to your link, a poll in 1973 shows 43% of students having tried it, with 51% in 1971. That 1979 figure is for people who were currently using it. I suspect the percentage that have tried it, rather than the percentage of regular users, is a closer fit to the percentage who would have supported legalization back then. Furthermore, even if the percentage was under 50%, it's clear that once they grew older they didn't exert the massive influence over marijuana policy that would have been expected. If 30% or 40% of 25-40 year olds actively support something, even if they are not a majority, that's going to be very prominent in politics, and heavily drive the discourse, and that just hasn't happened. (And even 30% or 40% might be enough to pass legalization, considering that a lot of the remainder are probably just neutral on the issue.)
Not really. US politics is a lot about what the kind of people who donate to political campaigns think about issues. The Koch brothers are, for example, old people supporting marijuana legalization.
It's not unheard of for people who've recently tried various substances to nonetheless support stricter restrictions on them. The usual narrative goes something like "I can handle this, but there are lots of people that can't, and we have to keep it out of their hands", though the people in question vary -- drawing class, demographic, or cognitive lines is common. There can be other ulterior motives, too. In the early 2000s, a few marijuana growers in Northern California were among the opponents of a ballot proposition that would have legalized it in the state -- because legalization was expected to harm their profit margins, doing more damage than removing the chance of arrest would have made up for.
Or, alternately, "It was a mistake for me to do it, and I was lucky to get away without punishment, but legalizing would encourage other people to make the same mistake." I seem to recall a few U.S. politicians on both sides of the aisle saying things of this nature.
I would believe that people who used drugs back then would say this now. I find it hard, however, to believe that people who used drugs back then would have said it back then, and the point is that people back then thought they would legalize weed once the old fogies died off.
How do you know that this wasn't the cause?
Because as army1987 points out, legalization is supported by the young, not by people who were young in the 1960's and 1970's.
Was the forceful kind ever an obviously correct/leftist position? To my mind non-violent eugenics is still obviously the correct thing, where we just need to wait until the luddites die off - it's just that the association with the Nazis has given ludditery a big (but ultimately temporary) boost.
Do you actually mean non-coercive? There are a great many ways to apply pressure on people without actually getting violent....
I suspect it is falsifiable. I might unpack it as the following sub-claims:

1. Degree of status quo bias is positively correlated with time spent in a particular status quo (my gut tells me there should be a causal link, but I bet correlation is all you could find in studies).
2. On issue X, belief that X[past] is the correct way to do X is correlated with time spent living in an X[past] regime.
2.5 Possibly a corollary to the above, but maybe a separate claim: among people who you would expect to have the least status quo bias, position X[other] is favored at much higher rates than among the general population.

For most issues, claims 2 and 2.5 can probably be checked with good polling data. Claim 1 is the kind of thing it's possible to do studies on, so I think it's in principle falsifiable, though I don't know if such studies have actually been done.
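The polling checks described above are straightforward to run once you have data. As a purely illustrative sketch (the survey numbers below are invented, not from any real study), sub-claim 1 amounts to computing a correlation between time spent under a status quo and some status-quo-bias score:

```python
# Minimal sketch of testing sub-claim 1: is status quo bias positively
# correlated with time spent in a particular status quo?
# All data here is synthetic and purely illustrative.
from math import sqrt
from statistics import mean


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Hypothetical survey: years lived under regime X[past], and a 0-10
# status-quo-bias score from a questionnaire (made-up numbers).
years_in_status_quo = [2, 5, 10, 20, 30, 40]
bias_score = [1, 3, 4, 6, 7, 9]

r = pearson_r(years_in_status_quo, bias_score)
print(round(r, 2))  # prints 0.98 for this made-up sample
```

In practice you would also want a significance test and a control for age (time spent in a status quo is confounded with simply being older), e.g. a partial correlation or a regression with age as a covariate.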
2) is also what you would expect to see if X[past] was indeed better than X[other]. 2.5) Not having status quo bias isn't equivalent to being unbiased. A large number of the people that are least likely to have status quo bias are going to be at the other end of the spectrum - chronic contrarians.
Note that which X is better may depend on circumstances (e.g. technological level).
In politics, no position is obviously correct. Claiming that one's own position is obviously correct or that history is on our side is just a way of browbeating others instead of actually making a case. Claiming that the opponents of some newly viral idea are "on the wrong side of history" is like claiming that Klingon is the language of the future based on the growth rate when the number of speakers has actually gone from zero to a few hundred. No -- you are telling them. To remind someone of a thing is to tell them what they already know. To talk of "reminding" in this context is to presume that they already know that they are wrong but won't admit it, and is just another way of speaking in bad faith to avoid actually making a case.
One person's status quo bias is another person's Chesterton fence. The quote from which this comment tree branches is from Chesterton.
I strongly agree. It's possible that history has a side, but we can hardly know what it is in advance.
I don't think you agree. I think Eugine has a problem with the idea that just because an idea wins in history, that doesn't mean it's a good idea. Marx replaced what Hegel called God with history. Marx's idea was that you don't need a God to tell you what's morally right; history will tell you. Neoreactionaries don't like that sentiment that history decides what's morally right.

Neoreactionaries don't like that sentiment that history decides what's morally right.

I am not a neoreactionary and I think the sentiment that history decides what's morally right is a remarkably silly idea.

You have to compare it to the alternatives. Do you think it's more or less silly than the idea that there's a God in the sky judging what's right or wrong? Marx basically had the idea that you don't need God for an absolute moral system when you can pin it all on history, which supposedly moves in a certain direction. You observe how history moves. Then you extrapolate. You look at the limit of that function, and that limit is the perfect morality. It's what someone who has a rough idea of calculus does, but who doesn't fully understand the assumptions that go into the process. In the US, where Marx didn't have as much influence as in Europe, there are still a bunch of people who believe in young earth creationism. On a scale of silliness that's much worse. Today the postmodernists rule liberal thought, but there are still relics of Marxist ideas. Part of what being modern was about is having an absolute moral system. Whether or not those people are silly is also open for debate.
Sure. Let's compare it to the alternative that morality is partially biologically hardwired and partially culturally determined. By comparison, the idea that "history decides what's morally right" is silly. Yep, he had this idea. That doesn't make it a right idea. Marx had lots of ideas which didn't turn out well. Oh, so -- keeping in mind we're on LW -- the universe tiled with paperclips might turn out to be the perfect morality? X-D And remind me, how well does extrapolation of history work? Do you, by any chance, believe there is a causal connection between these two observations that you jammed into a single sentence?
Since culture evolves with history, there is a lot of overlap between culture determining morality and history determining morality.
What's the overlap between two empty sets?
There's no culture and no history?
Oh yes, there is culture, and there is history, and there is an overlap. Now work out what two sets I am implying are empty.
We didn't talk about right or wrong but silly. "Let's do what's partially biologically hardwired and partially culturally determined" is not exactly the battle cry under which you can unite people and get them to adopt a new moral framework. It also has the problem of not telling people who want to know what they should do, what they should do. Yes, I do think that Marxism and Socialism have a lot to do with spreading atheism in Europe. Socialist governments made a greater effort to push back religion and make people atheists than democratic governments did. If I hear Dawkins talk about how it's important that atheists self-identify as atheists to show the rest of America that one can be an atheist and still be a morally good person, then that does indicate to me a problem of American culture that's largely solved in Europe. Socialist activism has a lot to do with why that's the case.
* Dawkings -> Dawkins
Thanks. The fact that I made that error is pretty interesting to me. Someone else used the Dawkings spelling a few days ago on LW. I felt that it was wrong and looked up the correct spelling to try to be sure. Somehow my brain still updated in the background from Dawkins to Dawkings.
Promoting a century-and-a-half-old wrong idea looks pretty silly to me. You want to revive phlogiston, too, maybe? That's a good thing. I am highly suspicious of ideologies which want people to adopt new moral frameworks, especially if it involves battle cries. That's a feature, not a bug. Oh yes, they certainly did. I take it, you approve of these efforts?
That question indicates being mindkilled. I happen to be able to discuss issues like that without treating arguments as soldiers. Discussing cause and effects is hard enough as it is without involving notions of approval or disapproval. The implication that somehow socialism isn't responsible for spreading atheism in Europe because socialist used some immoral technique is a conflation of moral beliefs with beliefs about reality.
It seems to me that you two are talking past each other. Here's what I hear: ChristianKl: "Socialist movements and governments did successfully promote atheism and materialism in the populations of Europe. This is why Europeans do not tend to believe, as Americans do, that atheists are incapable of being moral." (This is a descriptive claim about history and public opinion.) Lumifer: "We should not advocate socialism as a way of promoting atheism and materialism, because socialism is awful and Marxist ideas of historical progress are silly." (This is a normative claim about advocacy.)
You're using "socialism" vaguely. Iron curtain socialism was awful. North-western European social democracy is not.
What do we get if we Taboo socialism?
I haven't said anything about morals. In particular, I haven't labeled any actions as immoral. I just inquired whether you approved of the efforts that the socialist governments have made in reality in the XX century to spread atheism. Moreover, we are already past the question of whether the socialist governments made "a greater effort to push back religion and make people atheists" -- we know they did -- the issue now is the cost-benefit analysis of these efforts. You clearly like the outcome, so do you think the price was worth it? This is what I mean by the question about whether you approve.
I do approve of democratic socialism. I'm heavily opposed to what currently happens in France when it comes to fighting religion. But I guess both claims won't tell the average person here very much, because the political background of European politics isn't that clear in an English-speaking forum.
The question wasn't which political system you approve. The question was whether you think the outcome of more atheists in Europe was worth the cost incurred during the efforts of the socialist governments to suppress religion and promote atheism.
I'm living in a country in which the people who want socialism and who have the most political power favor democratic socialism over communism. In Germany you had a split in the left. One half thought that you need a revolution to achieve the goal of socialism, and the other half thought that you can work within the democratic institutions to achieve the goal of socialism. I haven't met any young earth creationists in Berlin, or for that matter people who doubt the theory of evolution, so I'm completely happy with the state of affairs where I live. No Catholics bombing Protestants either. On the other hand I don't approve of the kind of policies that exist in France or Soviet Russia. I'm not familiar enough with Swedish policies to tell you whether I approve of them.
This is a bit of a sideline, but if you're talking about the Troubles in Northern Ireland, I think modeling it as a religious conflict is the wrong way to go. The impression I get is more of religion as a shibboleth for cultural and political ties than the other way around.
Lucky you X-D Right. Instead you had the Baader-Meinhof gang. They wanted socialism, too, didn't they?
Their advocated way of getting there wasn't the "way through the institutions" but "revolution". There are Marxist arguments that revolution is the only way and that it's not possible to change the system from the inside. According to our university constitution, students are supposed to vote in an election for a 5-person group to represent the body of students of a university department. At our university the students of the political science department don't like this. The elected 5-person body doesn't constitute itself, and decisions are instead supposed to be made by a self-governed open body in which everyone who wants can speak and that makes decisions via "consensus". I don't see myself in that tradition or have any loyalty to that faction. As far as current affairs go, I would want liquid democracy for those student institutions, with some elected persons taking representative roles, and not "consensus"-style democracy.
I think they're both quite silly. Also, the fact that many people believe in God as a source of morality, is itself a reason why history (i.e. the actions of those people) is a bad moral guide. Surely most pre-modern philosophers also had absolute moral systems?
Beforehand there was the idea that God is simply beyond human comprehension. One day he tells the Israelites to love their neighbors and the next he orders the Israelites to commit genocide. You were supposed to follow a bunch of principles because those came from authoritative sources, not because you could derive them yourself. If you read Machiavelli, he's using God as a word at times when we might simply use luck today. Machiavelli very much criticizes that approach of simply thinking that God works in mysterious ways. Greeks and Romans had many different gods and not one single source of morality. Of course, absolute morality is not all that modernism is about.
I was thinking about classic and medieval Christian philosophy, which tied morality to an unchanging (and so absolute) God. As an aside, when the Israelites were ordered to love their neighbors, the reference was to the neighboring Israelites and peaceful co-inhabitants of other tribes. Jews were never told by God to love everyone or not to have enemies; that is a later, Christian or Christian-era idea.
But still a mysterious God who's so complicated that humans can't fully understand him, so they should simply follow what the priest, who has more direct contact with God, says. Furthermore, you should follow the authority of your local king because of the divine right of kings that your local king inherited. The idea that you can use reason to find out what God wants and then do that is a more modern idea. Things switched from saying that if the telescope doesn't show the planets moving the way the ancestors said they are supposed to move, then the telescope is wrong, to the idea that maybe the ancestors are wrong about the way the planets move. The dark ages ended and you have modernity.
I don't have much to say about the actual point you're making, but you've been setting off alarm bells with stuff like this: What's your background on the history of this period? And on the philosophy of Marx and Hegel? The things you are saying seem to me to be false, and I want to check if the problem isn't on my end.
What do you mean by this period? I don't think that modernity started with Hegel but with people like Machiavelli, which is around 1500. Hegel and Marx, on the other hand, did their work in the 19th century. I did read Machiavelli's The Prince cover to cover. In the case of Marx and Hegel, I'm a German, and we are speaking here about German philosophers. That means I have been educated in school with a German notion of what history happens to be. I don't see political history in the Anglo-Saxon frame of Whig vs Tory. I did spend a bunch of time in the JUSOS, which is the youth organisation of the German SPD; the abbreviation roughly translates into Young Socialists in the SPD. I therefore did follow debates about whether socialism as an end goal should be kicked out of the party program of the SPD or be left in. Lastly, I did a lot of reading in political philosophy, both primary and secondary sources. Most of it a while ago. But one-sentence summaries of complex political thoughts are by their nature vague. Of course Hegel already had the notion of history, and me saying "replaced" might give the impression that he didn't. But Hegel did have God and Marx did not.
Just out of curiosity, what was the result of those debates?
It's still in there, but more for symbolic reasons. Party leadership didn't really want it but the party base did. The relevant phrase also happens to be democratic socialism, meaning that the goal is economic equality but with representative democracy, not a bunch of soviets and "consensus" decision making. In practice the party policies under Schroeder were more "third way", and as a result they wanted to "update" the party program to reflect that policy change.
What do you mean by "economic equality"? Do you mean that everyone should have the same amount of money/resources? (This is not a stable state of affairs if people then proceed to engage in commerce).
If you have a government which constantly redistributes money, you could hold it constant if you wanted to do so. But the people with whom I spoke usually don't go that far. Concerns are rather that everyone has access to a "living wage". Defining what exactly the end state will look like isn't that much of a concern if you can decide whether or not you move in the right direction. There is the feeling that third-way policies of cutting government pensions don't go in that direction.
Yes, but that's not exactly compatible with anything resembling freedom. The problem is that what's considered a "living wage" changes with changes in society. It is a concern if you want to evaluate whether you should even be trying to move in that direction.
What is a living wage changes with changes in society, and that isn't obviously a problem. If society becomes richer, people expect higher wages, and if society becomes richer, it can afford them. Depending on the quantities.
Amazingly enough, freedom-supporting policies can negatively impact equality. To put it another way, if there were no conflicts between values, there would be no politics. To put it a third way, you keep writing as though you are the Tablet, and have the One True Set of Values inscribed in your brain.
Christian mentioned having the government constantly redistributing money as a possibly desirable end state. I was pointing out one of the implications of said end state. Also, I'm getting increasingly frustrated at people, yourself included, who keep trying to pass off their false beliefs about the nature of the world as different preferences. In particular, to use the economic equality example, if you constantly redistributed money to keep everyone equal, as I mentioned, it would destroy anything resembling freedom. But suppose you claim to have a utility function that puts no value on freedom. Well, another consequence is that it would destroy the motivation for people to engage in productive work (if the benefits would just get redistributed), so you'd wind up with a bunch of equally starving people. Assuming, that is, that this redistribution was somehow magically enforced; more realistically you'd wind up with everything in the hands of the redistributors.
Rousseau's "The Social Contract" begins with the words "Man is born free, and everywhere he is in chains." I don't think that any modern person on the left is as direct as that when it comes to freedom, but in European political thought the idea of the Social Contract is quite central. The idea is that in the end state people would be motivated to work as a way of self-actualization and don't need financial incentives to do work. Star Trek has characters who work without getting paid. The observation that today many people need money to be motivated to work doesn't mean that will always be true in the future, or that we shouldn't work on moving society in that direction. An "end state" doesn't mean something that can be reached in 10 years; it's a state that can take quite a while to reach.
Could you taboo what Rousseau means by "master" and "slave" in that quote. As is, to me it sounds like deep wisdom attempting to use said words in some metaphorical way that's not at all well-defined. Also I don't see what this has to do with the subject. The problem is that the work that's self-actualizing is not necessarily the same as the work that's needed to keep society running. In other words, attempting to run society like this you'd wind up with a bunch of (mediocre) artists starving and suffering from dysentery, because not enough people derive self-actualization from farming or maintaining the sewer system. Historically, many attempts by intellectuals to create planned communities fell into this problem. Fictional evidence.
Rousseau writes his central work to justify that man is everywhere in chains. Rousseau attempts to legitimize the Social Contract that takes away man's natural freedom. Rousseau later argues that man gets new freedoms in the process, but he's not shy in admitting that man loses his natural freedoms by being bound in the Social Contract.
The full text is readily available online. A "master" is someone with the power to tell others what to do and be obeyed; yet these masters themselves obey something above themselves (laws written and unwritten). Rousseau's answer (SPOILER WARNING!!) is the title of his work. (To which the standard counter-argument is "show me my signature on this supposed contract".) A few more Rousseau quotes: ... ... He is arguing here against theories whereby sovereignty must consist of absolute power held by a single individual beyond any legitimate challenge, his subjects having no rights against him. For Rousseau, sovereignty is the coherent extrapolated volition of humanity -- or in Rousseau's words, "the exercise of the general will". Rousseau's sovereignty is still absolute and indivisible, but is not located in any individual. One can cherry-pick Rousseau to multiple ends. Here's something for HBDers: Libertarians may find something to agree with in this: But to know what Rousseau thought, it is better to read his work.
Here is a decent debunking of the notion that modern society is based on a social contract. The basic argument is that if one attempts to explicitly write down the kind of contract these theories require, one winds up with a contract that no court would enforce between private parties. More generally, Nick Szabo argues that the concept of sovereignty is itself totalitarian.
I agree with that. It certainly is. Where does that leave FAI? A superintelligent FAI, as envisaged by those who think it a desirable goal, will be a totalitarian absolute ruler imposing the CEVoH and will, to borrow Rousseau's words, be "so strong as to render it impossible to suspend [its] operation." Rather like the Super Happies' plan for humanity. The only alternative to a superintelligent FAI is supposed to be a superintelligent UFAI.
The open source movement is a better example of voluntary work than Star Trek.
In this case I don't think so. I didn't want to give an example of work done by volunteers but an example of a futuristic society where people don't work for money. The Open Source movement is also a bunch of different people doing things for various reasons and incentives.
People in Star Trek work sometimes for patriotism, sometimes for gold-pressed latinum, but mostly toward whatever the plot says they need to be doing. I foresee problems with using narrative tension as a medium of exchange.
I actually agree that running for 100% equality would likely result in 0% freedom. For my money that is an extreme illustration of "you can't satisfy all values simultaneously", not of "left bad". Christian's absolute egalitarianism is a view I have never heard articulated before. It seems to be the mirror image of anarcho-capitalism, the philosophy that guns for 100% freedom. To me, it's symmetric. To you there is apparently a "side" that is in contact with reality, and a side that isn't. Yes, there are a lot of things that would go wrong, to the average utility function, with absolute egalitarianism. Ditto for absolute libertarianism. But you never mention that. It's an open question whether a given extremist, of any stripe, is someone who has (1) a one-sided utility function, or (2) who wrongly thinks that an average, mixed UF can be satisfied by extreme policies. As such, you don't get to assume that (2) is true of anyone in this discussion.
It wouldn't result in much equality either. (Unless you mean equality in the sense that everyone is equally dead, which is a possible if extreme outcome.) I also never called absolute anarcho-capitalism (I assume that's what you mean by "absolute libertarianism") a desirable end-state. The problem is that, as I pointed out, the way these people pursue their one-sided goal won't even maximize the one-sided utility function. Edit: Speaking of freedom and equality, don't you also want a term for prosperity in there somewhere?
Or wellbeing, since dollars aren't utilons.
I don't define prosperity in terms of dollars.
If you want to have it articulated in a bit more detail, Zeitgeist Addendum can give you an impression. With 5 million YouTube hits, there are quite a few people on the internet who profess to follow that ideology. According to it we need a central computer that tells everyone what work to do. People will do what the computer tells them because their education teaches them the value of following what the computer tells them, so perfectly that everybody just does what's in the "public interest" and follows the directions of the central scientific computer program. Because there won't be money anymore, nothing will stop the digging of intercontinental tunnels for transportation needs, so that you don't need airplanes. I have met multiple people who believe that framework. Fortunately they are outside of the political process, where they won't do much harm. Unfortunately a bunch of them are smart, so intelligence doesn't seem to protect against it. One of them ranks quite well in debating tournaments.
Wow, there are so many things wrong with this proposal that I'll just mention the one that disgusts me on a visceral level. One effect of this scheme (if it could somehow be made to work) is that a certain organ that consumes nearly one quarter of the body's energy is now completely vestigial.
I can describe ideas without them being mine. In this case we are speaking about ideas in the party program of the SPD.
Is this line of conversation still "just curiosity" about the results of SPD debates, or are you trying to bait an argument?
I'm trying to figure out what Christian, and more generally the typical German, means by "socialism" these days. Does it have a more moderate end goal than the older socialists', or do they have the same end goal and have simply decided to approach it more slowly?
Thanks, that's helpful.
For Hegel and Marx history is the process of change. Both the amount of Gods per person and the percentage of people who believe went down over time. Thus history favors atheism.
I don't see why the 'amount of Gods per person' is a valid metric for anything. Progression from poly- to monotheism doesn't imply a future progression to atheism. The actual percentage of atheists in society has indeed increased over time, but it's never been significantly above 10% worldwide and it's not clear that it's rising right now (Wikipedia source). It's hardly strong enough evidence to conclude that a majority of humanity will be atheistic one day. Other religions surely exhibit or previously exhibited rising trends at least as strong.
In general, neoreactionaries seem to have cribbed this position from Herbert Butterfield's critique of what he called the "Whig Interpretation of History". Butterfield was not himself a neoreactionary, and in fact warned against the trap that many neoreactionaries fall into: that of thinking that just because Whig histories are invalid, Tory histories are somehow valid.
There are (at least) two things wrong with "the right side of history". One is that we can't know that history has a side, or what side it might be, because a tremendous amount of history hasn't happened yet; the other error is that history might prefer worse outcomes in some sense. I find the first sort of error so annoying that I normally don't even see the second. My impression is that Eugine is annoyed by both sorts of error, but I hope he'll say where he stands on this.
There's a third thing wrong with it: generally, people use the phrase in order to praise one side of some historical dispute (and implicitly condemn the other) by attributing to them (in part or in whole) some historical change that is deemed beneficial by the person doing the praising. The problem with this is that when you go back and look at the actual goals of the groups being praised, they usually end up bearing very little relation to the changes that the praiser is trying to associate them with, if not being completely antithetical. Herbert Butterfield (who I posted about above) initially noticed this in the tendency of people to try to attribute modern notions of religious toleration to the Protestant Reformation, when in fact Martin Luther wrote songs about murdering Jews, and lobbied the local princes to violently suppress rival Protestant sects.
What's the precise sense of "attribute" in that claim? It's not obviously implausible to claim that the more groups are competing with each other, the less likely it is that any one can become totally dominant, and so the more likely it is that most of them will eventually see mutual toleration as preferable to unwinnable conflict. This doesn't have to be an intended effect of the new sects to end up being an actual effect.
I hadn't even thought of the first objection, possibly because I stopped considering "what side history is on" a useful concept after noticing the second one.
Speaking of which, let's see what history has to say about Marx. It would appear that the Marxist nations lost to a semi-religious nation. Thus apparently history has judged that the idea that history will tell you what is right to be wrong.
I'm very far from being a reactionary or neoreactionary, but I also don't put much moral weight on history - that is, on what most other people come to believe. For one thing, believing that would mean every moral reformer who predicts for themselves only a small chance of reforming society, should conclude that they are wrong about morals.
If you're on the winning or ascending side, you have more arguments in your favour at this point in history, where democracy and its twin, rational argument, reign. That doesn't add up to being right, because epistemology, i.e. styles of persuasion, has varied. To know the right epistemology, you need... epistemology. That's why philosophy is difficult.
Meaning: you can't spot a trajectory while you're halfway along it? Meaning: you can, but it doesn't mean anything epistemologically?
Considering how many centuries it took humanity to get from its first curiosity about how things work to predicting the trajectory of a falling rock (the irony of your handle piles higher and higher), predicting trajectories in history is a fool's task. How many predicted the Internet? How many predicted the end of the Soviet Union? How many can predict developments in Ukraine? "History is on our side" is not an argument, but a cudgel.
Yep. It's nothing but a minor variation on "God is on our side!" X-D
Don't use it then. :-)
Arguing about preferences (=opinions, =values) is pretty pointless.

But understanding human limitations does not mean we can overcome them. It only means we can’t pretend they don’t exist. It should point us toward humility, not hubris.

Yuval Levin in the National Review

To the extent that we can overcome our current limits, we have to understand them first. We should beware false humility and rationalization of existing limits (e.g. deathism).

...the utility of a thought experiment is inversely proportional to the size of its departure from reality.

-- Daniel Dennett, Intuition Pumps and Other Tools for Thinking

Are we sure about this? Einstein's idea of riding along with a light beam was super-useful and physically impossible in principle. Whereas the experiment I just thought of where I pour my cup of tea on my trousers I can almost not be bothered to do.
Ceteris paribus, then. On average, a thought experiment along the lines of "what if I poured this stuff on my trousers" is of much more practical use and tells you much more about reality than a thought experiment along the lines of "what if I could ride around on [intangible thing]". The most realistic thought experiments are the ones we do all the time, often without thinking, and which help us decide, for example, not to balance that cup of tea right on the edge of the table. Meanwhile, only very clever scientists and philosophers with lots of training can wring anything useful out of really far-out "what if I rode on a beam of light"-type thought experiments, and even they screw it up all the time and are generally well-advised not to base a conclusion solely on such a thought experiment. As I understand it, Einstein's successful use of gedankenexperiments to come up with good new ideas is generally considered evidence of his exceptional cleverness. (note: I know very little about this topic and may be playing very fast and loose. I think the main idea is sensible, though)
This is funny. Until I read your comment, I was misreading the original quote; I didn't notice the "inversely" part. I was implicitly thinking that the quote was claiming that the farther the thought experiment is from reality, the more useful it is. I guess my physicist biases are showing.
I think that's my point! It sounds just as profound without the 'inversely'.

A BS detection heuristic.

You can tell if a discipline is BS if the degree depends severely on the prestige of the school granting it. I remember when I applied to MBA programs being told that anything outside the top 10 or 20 would be a waste of time. On the other hand a degree in mathematics is much less dependent on the school (conditional on being above a certain level, so the heuristic would apply to the difference between top 10 and top 2000 schools).

The same applies to research papers. In math and physics, a result posted on arXiv (with a minimum hurdle) is fine. In low quality fields like academic finance (where almost all academics are charlatans and all papers some form of complicated storytelling), the "prestige" of the journal is the sole criterion.

Nassim Taleb

This seems false in physics. Prestige of your institution matters. Prestige of the journal matters, too. arXiv is fine, Physical Review is better, PRL is better yet. Nature/Science is so high, if you publish something that is not perceived as top-quality, you may get resented by others for status jumping. And there are plenty of journals which only get to publish second- and third-rate results. Of course, the usual countersignaling caveat applies: once you have enough status, posting on arXiv is enough, you will get read. Not submitting to journals can be seen as a sign of status, though I don't think the field is there (yet).

My understanding is that this effect is a lot smaller in physics than in the humanities.

By that standard, all academic disciplines are BS disciplines.
I believe that is the intended meaning, yes.
Can't be. You can't draw a distinction within a category by separating it into two subcategories one of which is empty.
The category being separated is "disciplines", which divides into "BS" and "non-BS". "Academic" disciplines are thus a further subcategory of "BS" disciplines. Actually, "academic" disciplines would probably be a subcategory of "disciplines" which is largely but not entirely subsumed by "BS" disciplines, but I don't usually demand that level of precision from witticisms. [For the record, separating a category into two subcategories and proving one of them empty is just another way of proving the original category is identical with the non-empty subcategory. It is, indeed, valid from a technical perspective.]
You can, though it's usually useless; but it also depends on whether that subcategory is always necessarily empty or it happens to be empty now but in principle it could be non-empty. (But it's still a fallacy of grey: even if all academic disciplines were, in fact, BS disciplines, some disciplines may still be less BS than others.)
I think, by this standard, law is a BS discipline. But I'm not sure what to make of that.

Well - law is, in a strict sense, entirely about convincing other humans that your interpretation is correct.

Whether or not it actually is correct in a formal sense is entirely screened off by that prime requirement, and so you probably shouldn't be surprised that all methods used by humans to convince other humans, in the absence of absolute truth, are applied. :)

Would that include drafting a fire code for buildings? Would it include negotiating a purchase and sale agreement for a business? Would it include filing a lawsuit for unpaid wages? Would it include advising a client about the possible consequences of taking a particular tax deduction? It's hard to see how it would, and yet all of these things are regularly done by lawyers in the course of their work.
The first two are, indeed, examples of persuading human beings. The other two are excellent points.
"persuading human beings" is not exactly the same thing as "convincing other humans that your interpretation is correct." Besides, in negotiating an agreement much of the attorney's job consists of (1) advising his client of issues which are likely to arise; (2) helping the client to understand which issues are more important and which are less important; and (3) drafting language to address those issues. Yes, persuasion comes into it sometimes, but it's usually not primary. Filing a lawsuit for unpaid wages can be seen as persuasion in a general sense. If Baughn wants to claim that in a strict sense, litigation is about getting other people to do stuff, then I would agree. Thank you.
In a narrow, rather than a strict sense. In that same narrow sense:

* science is about convincing other humans that your experiments are correct
* art is about convincing other humans that what you have made is art
* parenting is about convincing other humans that you are a good parent
* working for a living is about convincing other humans to pay you a living
* competitions are about convincing other humans that you have won
* teaching is about convincing other humans that you are teaching
* being intelligent is about convincing other humans that you are intelligent
* living is about convincing other humans that you are not yet dead
... and the best way to do that is, in theory, to do good experiments. Hence replication and so forth. That's the basic idea of Science. (Alarmingly, some modern "scientists" have indeed been found cutting out the middle-man, as it were.) Yes, this seems like a reasonable assessment. Some people have other goals in influencing the minds of their viewers, although this tends to edge into advertising etc. No, this is only the lower bound that allows you to retain a child. Most of the rewards etc. of parenting are unrelated. Definitely. This seems quite analogous to law, if the lawyer is themself the defendant. In practice, sadly, it is. The incentive structure for teachers is pathetic, because no-one actually cares about it. Not sure what to make of these last two; they don't seem remotely analogous to the OP. More seriously, law is also about predicting what those humans will be most easily convinced of.
What if you convince everyone that you're a good parent while poisoning your child? And everyone else can believe you're dead and you can still be alive. In fact sometimes in order to live you have to convince everyone you're dead.
Quite. Those were all intended to be bad arguments. Bad at a caricature level of badness. But Poe's law, I guess.
Oops, my fail. I thought you were saying 'there's nothing special about the law'. But surely art really is about convincing other human beings that you've made art. What on earth else is going on? And I reckon that there are aspects of law that aren't covered, but a barrister who can't convince is completely useless in court. Off to update my estimate of my own written-irony perception skills.
This is the sort of question that you have to already know the answer to, to be able to ask. I won't attempt a definition, but as we all know, it involves such things as "the creation of things of beauty", "the expression of a truth that nothing else can express", and so on. That is what art is. We all know that that is what art is. But for purposes of contradiction, suppose otherwise. Suppose art was entirely about convincing people that you have made art. Then the statement is a definition of art as being the fixed point of the formula "X is about convincing people you have made X". What in this formula picks out the class of works that, when we look at the real world, we see everyone calling art? There is nothing. If this is truly a statement of everything that art is, we should be able to insert a made-up name in the definition and convey the same information: "pightlewarble is about convincing people you have made pightlewarble". The fact that the revised sentence conveys nothing, yet "art is about convincing people you have made art" conveys something, demonstrates that the latter only communicates something because we already know something about what art is. When such a sentiment is expressed, what it is intended to communicate is a criticism of art as practiced in the speaker's time and place. The claim is that what is being produced is not art, and that it fails to be art precisely because its creators have concerned themselves with nothing more than getting an artistic reputation among a similarly corrupt audience, and have failed to aim at making art at all. The "art" that the sentence is about is being asserted to be not art. The real meaning is the opposite of the literal meaning. A similar analysis applies to every one of the examples I gave. All of them, when seriously uttered, mean the opposite of their literal reading. Of course, the artist wants an audience, the lawyer must persuade the court, and so on. But these are not the terminal goals of
Is it, really? Have you thought about art as pure self-expression? Art as a way to attempt to magically manipulate reality? Even art as a way to make something look pretty?
Then you are a successful Christian Scientist, say. Those are all descriptive claims, not normative ones.
Interesting. There are famous cases of self-taught lawyers from previous centuries. I wonder if this says something bad about the modern legal system. Maybe the modern legal system is less about making arguments based on how the law works (or should work) than about the lawyer signaling high status to the judge so that he rules in your favor.
There are famous cases of self-taught specialists in scientific fields, too. There aren't so many of them nowadays. That's because both the law and science are in a state where a practitioner must know a lot of details that didn't exist as part of the field in earlier days.
I don't think I have good reason to think this is the case. At any rate, it's clear enough that the prestige bit seems to come in heavily in hiring decisions, so let's just talk about that. How, in the ideal case, do you think lawyers would be evaluated for jobs? Off hand, I can't think of anything a lawyer could produce to show that she's a good hire.
I'm not a lawyer, and English law is different from American, but I reckon that I can tell the difference between good and bad lawyers by talking to them for a while about various cases in their speciality and listening to them explain the various arguments and counter-arguments. I've heard people who make a good living from the law make incoherent wishful-thinking type arguments about which way a case should have gone, when I can see perfectly well how the judge was compelled to the conclusion that he came to. I wouldn't want such a person defending me. Presumably if you are yourself a good lawyer, it shouldn't be too difficult to do this. The law is fairly logical and rigorous.
Well, if his "reality distortion field" was powerful enough to also affect judges.
I think Spooner got it right: —Lysander Spooner, from "An Essay on the Trial by Jury". There is legitimate law, but not once law is licensed, and the system has been recursively destroyed by sociopaths, as our current system of law has been. At such a point in time, perverse incentives and the punishment of virtue attract sociopaths to the study and practice of law, and drive out all moral and decent empaths from its practice. If not driven out, it renders them ineffective defenders of the good, while enabling the prosecutors who hold the power of "voir dire" jury-stacking to be effective promoters of the bad. The empathy-favoring nature of unanimous, proper (randomly-selected) juries trends toward punishment only in cases where 99.9% of society nearly-unanimously agree on the punishment, making punishment rare. ...As it should be in enlightened civilizations. — Johann Wolfgang von Goethe
They can vote against people who write or enforce unjust laws. There's not much they can do about the judicial branch, but they only need to stop one branch. That's the US anyway. I don't know the details about other countries. If there's that much corruption, as opposed to people simply not voting for what they claim to care about, I don't think juries are going to be much help.
What exactly is meant by the phrase "BS discipline"? Is the claim that most scholarship in law is meaningless nonsense? Or is the claim that there is no societal value at all in law? Or is it something else?
I suppose a discipline is BS if in the case of a science, it fails to systematically track the realities of an object of study. In the case of a trade, like business management or welding, then it's a BS discipline if it fails to make its practitioners more successful than those outside the discipline. I'm not sure what kind of a discipline law is. Taleb's thought, I suppose, is that a discipline is likely to be BS if, instead of directly measuring the capabilities of its practitioners, we tend to measure only indirectly. This only implies that direct measurement is costly enough to outweigh its benefits, however. One reason for its being so may be that there's nothing to measure directly (i.e. the discipline is BS), but another might be that the discipline is so specialized that very few people are competent to judge any given applicant. Yet a third might be that its subject matter is subject to a lot of mind-killing, so that one cannot confidently judge an applicant without bias.
I agree that it's difficult to tell how good a lawyer is, which leads to a lot of nonsense like firms spending a lot of money on impressive offices and spending hours and hours of time chasing down every last grammatical error before filing court papers.
This is true for a lot of professions. Most of them don't have the problem you're describing.
Would you mind giving me three examples? This would help me think about what you are saying. TIA.
Plumbers, auto-mechanics, doctors.
Thank you for answering. I would have to say that with plumbers and auto-mechanics, it is a lot easier to assess how good they are compared to lawyers since if they do their job properly, the problem they are working on will normally be solved and if they do not do their job properly, the problem will normally not be solved. Do you agree with this? I agree that with doctors, there is a similar problem of difficulty in assessing quality as with lawyers. On the other hand, there are also problems with doctors spending energy on signalling, although perhaps not as bad as with lawyers. For example, caring about where a doctor went to medical school; prestigious internships; and spending money on impressive facilities. Do you agree with this?
And with a lawyer you can tell what the outcome of the trial was. Now obviously, the lawyers might overcharge you, but you also have the same problems with car mechanics and plumbers. Also for some cases what kind of outcome one can reasonably expect can depend on details of the case that may not be obvious to a non-lawyer, but you have the same problem with car mechanics. Also, with all four of plumbers, auto-mechanics, doctors, and lawyers (especially contract lawyers) it's possible for them to screw up in ways that aren't immediately obvious but will cause problems down the line. (With lawyers it will at least be more obvious that the lawyer screwed up when the problem finally surfaces.)
If someone is found guilty in a trial, is that a sign of a poor lawyer, or is that a sign that he was, in actual fact, guilty as charged, independent of the ability of his legal team?
I mean it's not like a good legal team has ever allowed a guilty man to get away with it. Also, presumably the person knows whether he is guilty.
A highly competent legal team may allow a guilty man to get away with a crime, yes. And an incompetent legal team may allow an innocent man to get convicted. But a very competent legal team which normally takes cases where the defendant is guilty will do very badly by this metric; while an incompetent legal team that gets a lot of innocent clients might do very well by the same metric. If I wish to select a lawyer to defend me in a trial, then I know whether or not I am guilty of whatever I am being charged with. I do not know how many of the lawyer's previous clients were guilty; nor how many were wrongfully convicted, or wrongfully released. Thus, a mere count of previous victories in court is potentially a poor measure of the lawyer's effectiveness.
Yes, and the same problem can exist for plumbers, car mechanics, and doctors.
Academics also.
You have a point - a man who takes on only easy problems, in any field, will have a higher success rate than a man who takes on only hard problems, irrespective of actual skill level. I think that what makes evaluating a lawyer in particular difficult is that it is very hard for a non-lawyer to easily distinguish easy from hard problems. For car mechanics, I know that replacing the oil is a much simpler job than replacing the engine; but when looking over a lawyer's history, I can't easily evaluate the relative difficulty of his previous successes.
On the other hand, if I come in complaining that the car is making funny noises, it's a lot harder to see whether this is an easy or hard problem. Another example, I come in for a routine inspection and he tells me that some part I've never heard of needs replacing and it's going to be expensive. I have no way to check short of going to a different mechanic and then some figuring out who to trust.
Even putting aside the fact that the vast majority of litigation is resolved before trial, there is also the fact that excellent lawyers lose cases all the time due to a lot of extraneous factors. By analogy, if the auto mechanic charged you $500 to change your brakes, and after he was done with the car the brakes still didn't work, you could be pretty confident that you have a lousy auto mechanic. Do you agree that in litigation there is much more of a problem of extraneous factors making it difficult to assess the lawyer than extraneous factors in auto repair making it difficult to assess the mechanic? Do you agree that with plumbers and auto mechanics it is a lot easier to assess how good they are compared to lawyers since if they do their job properly, the problem they are working on will normally be solved and if they do not do their job properly, the problem will normally not be solved? Do you agree there are also problems with doctors spending energy on signalling (although perhaps not as bad as with lawyers), for example, caring about where a doctor went to medical school; prestigious internships; and spending money on impressive facilities? These are real questions, not rhetorical questions; they are aimed to get a better grip on where we agree. Please actually answer them as opposed to just answering the argument you imagine is behind them.
What if the brakes now work, but not necessarily quite as well as they did before? If an auto mechanic tells you your car is totaled, how do you know he's correct? That depends on the details of the problem. In a sense the same is true for lawyers. I agree that there are quantitative differences about exactly how likely you are to get a good estimate with what amount of certainty between these examples but I don't think it's large enough to make a qualitative difference in the analysis.
Those are interesting questions, but unfortunately you have basically ignored two of the three questions I asked you. As mentioned above, these were real questions aimed at getting a better grip on where we agree. It's difficult enough to discuss these kinds of things without having the other person dance around the issues. I don't engage with people who do this... goodbye.
I figured the answers to those were easy to extrapolate from what I wrote, in any case here they are. I agree that this is more of a problem for lawyers, although I'm not sure how much more. It is, but I've never heard anyone say that there is no point going to anything besides the top tier medical schools.
How about manufacturers of multivitamins? (BTW, the term for this sort of thing is credence goods.)

There is nothing that can be said by mathematical symbols and relations which cannot also be said by words. The converse, however, is false. Much that can be and is said by words cannot be put into equations — because it is nonsense.

Clifford Truesdell

This is beautiful: I can't turn it into equations. Does that refute it or support it?
Did you try? Each sentence in the quote could easily be expressed in some formal system like predicate calculus or something.
There are symbol-juxtapositions which are syntactically or semantically disconnected from any model set in ZFC. There are no sets in ZFC which are similarly separated from statements in a suitable language.
This looks like the sort of thing that I usually find enlightening, but I don't understand it. Could you repeat it in baby-speak?
You can write nonsense formulas on paper which don't correspond to theorems about anything. You can't construct nonsense universes which aren't described by theorems anywhere. Words only mean anything because we interpret them to correspond to the real world. In the absence of words, the real world continues existing.
I don't see why an equation can't be nonsensical. Perhaps the nonsense is easier to spot when expressed in symbols, or then again perhaps not.
Equations can be nonsensical, but it's harder to write a nonsense equation than a nonsense sentence (like the old joke: it's easy to lie with statistics, but it's easier to lie without them). In a way this was the unpleasant surprise of Godel's incompleteness theorem; before that we'd hoped that every well-formed proposition was true or false and could be proven to be so.
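The asymmetry behind this point — almost any string of words parses as an English sentence, while most strings of logical symbols fail even to be a formula — can be made concrete with a toy well-formedness checker (a sketch of my own; the tiny grammar and the function name are invented purely for illustration):

```python
# Toy recursive-descent checker for a tiny propositional language.
# Grammar: formula := atom | "~" formula | "(" formula op formula ")"
#          atom    := "p" | "q" | "r"
#          op      := "&" | "|" | "->"

def well_formed(tokens):
    """Return True iff the token list is a formula of the toy grammar."""
    pos = 0

    def formula():
        nonlocal pos
        if pos >= len(tokens):
            return False
        tok = tokens[pos]
        if tok in ("p", "q", "r"):          # an atom is a formula
            pos += 1
            return True
        if tok == "~":                      # negation of a formula
            pos += 1
            return formula()
        if tok == "(":                      # parenthesized binary connective
            pos += 1
            if not formula():
                return False
            if pos >= len(tokens) or tokens[pos] not in ("&", "|", "->"):
                return False
            pos += 1
            if not formula():
                return False
            if pos >= len(tokens) or tokens[pos] != ")":
                return False
            pos += 1
            return True
        return False

    return formula() and pos == len(tokens)

print(well_formed("( p & q )".split()))   # True: a sentence of the language
print(well_formed("p & & q".split()))     # False: mere juxtaposition of symbols
```

Random edits to a valid formula almost always break well-formedness, whereas random edits to an English sentence usually leave something that still "reads"; formal notation rejects a lot of nonsense at the syntax stage, before truth even comes into it.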

'There is of course the question of public safety,' said Vetinari. 'Did I hear you say earlier you have blown up... "one or three" I think was the phrase?'

'I made those explode a-purpose, to see exactly how it 'appened. That's the way to get knowledge, you see, sir.'

Raising Steam, Terry Pratchett

Regarding the first steam engine in Pratchett's fictional world.

Relevant is the Amtal Rule on this same page.
This quote made me read the book, and I wasn't disappointed. The overall arc of the Discworld is stunning; in retrospect, it recapitulates the rise of civilization well. Raising Steam is perhaps not the last book, but it wouldn't be a bad place to stop if it were.

If the best minds were in charge of designing a bridge, I would expect the bridge to hold up well even in a storm. If the best minds were in charge of designing an airplane, I would expect it to fly reliably. But if the best minds were in charge of something no one really knows how to do, I would be ready for a failure, albeit a failure with superb academic credentials.

Terry Coxon

All I'm getting out of this is that the quoted author fails to understand the ability of great minds. Is there a context I'm missing?
Being ready for failure is not quite the same thing as considering success impossible. The context is that economics is in shall we say an earlier stage of development than engineering, so we should be more conscious of the risk of economic tinkering failing than we need be of whether our bridge or plane falls apart underneath us.

I assume that the reader is familiar with the idea of extrasensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.

Alan Turing (from "Computing Machinery and Intelligence")

A particularly relevant quote, given Yvain's recent article.
That is an exceedingly interesting article. Thanks for the link.
Can you provide some context? I don't understand: the claim that the evidence for telepathy is very strong is surely wrong, so is this sarcasm? A wordplay?
Turing's 1950 paper asks, "Can machines think?" After introducing the Turing Test as a possible way to answer the question (in, he expects, the positive), he presents nine possible objections, and explains why he thinks each either doesn't apply or can be worked around. These objections deal with such topics as souls, Gödel's theorem, consciousness, and so on. Psychic powers are the last of these possible objections: if an interrogator can read the mind of a human, they can identify a human; if they can psychokinetically control the output of a computer, they can manipulate it. From the context, it does seem that Turing gives some credence to the existence of psychic powers. This doesn't seem all that surprising for a British government mathematician in 1950. This was the era after the Rhines' apparently positive telepathy research — and well before major organized debunking of parapsychology as a pseudoscience (which started in the '70s with Randi and CSICOP). Governments including the US, UK, and USSR were putting actual money into ESP research.
Yes, but also remember that Turing's English, shy, and from King's College, home of a certain archness and dry wit. I think he's taking the piss, but the very ambiguity of it was why it appealed as a rationality quote. He's facing the evidence squarely, declaring his biases, taking the objection seriously, and yet there's still a profound feeling that he's defying the data. Or maybe not. Maybe I just read it that way because I don't buy telepathy.

Hodges claims that Turing at least had some interest in telepathy and prophecies:

These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one’s ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies. The idea that our bodies move simply according to the known laws of physics, together with some others not yet discovered but somewhat similar, would be the first to go.

Readers might well have wondered whether he [Turing] really believed the evidence to be ‘overwhelming’, or whether this was a rather arch joke. In fact he was certainly impressed at the time by J. B. Rhine’s claims to have experimental proof of extra-sensory perception. It might have reflected his interest in dreams and prophecies and coincidences, but certainly was a case where for him, open-mindedness had to come before anything else; what was so had to come before what it was convenient to think. On the other hand, he could not make light, as less well-informed people could, of the inconsistency of these ideas with the principles of causality embodied in the existing ‘laws of physics’, and so well attested by experiment.

Alan Turing: The Enigma (Chapter 7)

I think Turing's willingness to take all comers seriously is something to emulate.

The representatives of the scientific world-conception resolutely stand on the ground of simple human experience. They confidently approach the task of removing the metaphysical and theological debris of millennia. Or, as some have it: returning, after a metaphysical interlude, to a unified picture of this world which had, in a sense, been at the basis of magical beliefs, free from theology, in the earliest times.

The increase of metaphysical and theologizing leanings which shows itself today in many associations and sects, in books and journals, in talks and university lectures, seems to be based on the fierce social and economic struggles of the present: one group of combatants, holding fast to traditional social forms, cultivates traditional attitudes of metaphysics and theology whose content has long since been superseded; while the other group, especially in central Europe, faces modern times, rejects these views and takes its stand on the ground of empirical science. This development is connected with that of the modern process of production, which is becoming ever more rigorously mechanised and leaves ever less room for metaphysical ideas. It is also connected with the disap

... (read more)
Cool! I've looked for that manifesto on line before, and failed to find it; thanks for the link! Too many people seem to get all of their knowledge of the Vienna Circle and Logical Positivism from its critics. It's good to look at the primary sources. The translation is a little clunky (perhaps too literal), but so much better than not having it available at all.
I agree. The Logical Positivists were, to my mind, the greatest philosophers ever, and it's a shame they have been the target of so much unfair criticism. Of course they were wrong on many issues, but their attitude towards philosophy, knowledge and political action is unsurpassed. If we can revive their spirit again, philosophy will have a bright future.
What was the logical positivist position on political action? Are you talking about things like getting evolution out of science classes, or are you talking about something else?
I'm talking primarily of their resistance to nazism, and how they saw intellectual and political struggles as inextricably intertwined. In this they were very similar to the French revolutionaries. See for instance this article where Carnap criticizes the nazi philosopher Heidegger in his usual meticulous and over-dry manner. Amazing that he managed to keep so cool in the face of such evil stupidity. After the war, the US and Britain became the heart of analytic philosophy, and much of the seriousness of the Vienna Circle (and also Popper) disappeared. What replaced it was a rather frivolous, smart aleck kind of philosophy personified especially by people like Lewis and Kripke, but to some degree also Quine, Davidson, Austin and others. In his excellent The Decline of the German Mandarins, Fritz Ringer shows that the German academia grew increasingly dominated by mad romantic reactionaries from 1890 to 1933 (where the book ends). It seems to me (and I think, but am not sure, that Ringer touches upon this at some point) that this, however, spurred real thinkers, in the enlightenment tradition, to greater heights than they otherwise would have reached. They were forced to focus on the big questions, to come up with fundamental reasons for why you should adopt the rationalist perspective, because, unlike in the Anglo-Saxon world, this perspective had a terrifying opponent in the form of romantic reaction. Ringer mostly focuses on the great sociologist Max Weber and others like him, but I think that a similar story can be told about the Vienna Circle (I don't recall whether he comments on them).
The Logical Positivists were mostly pretty far left, but they mostly didn't engage in much political advocacy; though this was controversial among members of the movement (Neurath thought they should be more overtly political), most of them seemed to think that helping people think more clearly and make better use of science was a better way to encourage superior outcomes than advocating specific policies. They were also involved in various causes, though; many members of the Vienna Circle were involved in adult education efforts in Vienna, for example. The more I think about it, the more I think it's pretty accurate to say they had a lot in common with the Less Wrong crowd in their approach to politics (though they were almost certainly further left, even taking into account that the surveys suggest Less Wrong itself is further left than many people seem to realize).
This quote by Anthony de Jasay echoes the Logical Empiricist stance on political action.

Instead of journalism progressing into becoming more scientific, it is science that is becoming more and more journalistic.

Nassim Taleb

Is this being quoted as an example of a rhetorical figure? I can't even see what it's supposed to mean, let alone work out whether it's true.

It wasn’t easier, the ghost explains, you just knew how to do it. Sometimes the easiest method you know is the hardest method there is.

It’s like… to someone who only knows how to dig with a spoon, the notion of digging something as large as a trench will terrify them. All they know are spoons, so as far as they’re concerned, digging is simply difficult. The only way they can imagine it getting any easier is if they change – digging with a spoon until they get stronger, faster, and tougher. And the dangerous people, they’ll actually try this.

Everyone who w

... (read more)
thank you for posting this - now I have something new to read!

Our recent research into team behavior [...] reveals an interesting paradox: Although teams that are large, virtual, diverse, and composed of highly educated specialists are increasingly crucial with challenging projects, those same four characteristics make it hard for teams to get anything done. To put it another way, the qualities required for success are the same qualities that undermine success. Members of complex teams are less likely—absent other influences—to share knowledge freely, to learn from one another, to shift workloads flexibly to break up

... (read more)
Is the paper worth reading in that it offers solutions to this problem?
These are the key points from page 7: I have seen failure at this lead to a decline in participation, esp. by key contributors who didn't see their effort honored or supported. For LW this might mean key contributors supporting the creation or operation of benefits like the new business networking and user page initiative, or in general the operation of the site. On LW the active members already act as role models. I can only guess that that is what CFAR does. Building real-life relationships is done by meetups. I see the meetup resources as an effort to support this. But maybe someone could actively contact the meetup organizers and look whether there is potential for improvement. I felt this at the Berlin event. I can't quickly evaluate this. Ideas? This follows from LW being a community and not a business. There was a post and discussion on roles but I can't find it. Maybe this needs more structure.

Edited OP to make it clear that you can provide a link to the place you found the quote, rather than needing to track down an authoritative original source.

I reject the concept of "me" as some sort of static thing. Instead I see my moist robot container as something I can manipulate to engineer my mood to the situation. There are times when having more ego is useful. There are times when it is better to be humble. I jack my body chemistry as needed.

Scott Adams on consciously controlling your own moods and feelings

It has come to be accepted practice in introducing new physical quantities that they shall be regarded as defined by the series of measuring operations and calculations of which they are the result. Those who associate with the result a mental picture of some entity disporting itself in a metaphysical realm of existence do so at their own risk; physics can accept no responsibility for this embellishment.

Sir Arthur Eddington, 1939, The Philosophy of Physical Science

General Principle: the solutions (on balance) need to be simpler than the problems.

(Otherwise the system collapses under its complexity).

Nassim Taleb

flying vs aeroplanes?
Scholastic theology comes right to mind.
Scholastic theology was complex, but so are many theories more highly regarded than it nowadays. Like physics. Or consider the modern economy - when I contemplate a wireless mouse, it seems vastly more complex than the problem it solves (a mouse which requires a wire), yet, the wireless mouse still works & is nice to have. In fact, you could probably consider every single technology and economic system a counterexample to this Taleb quote - it's all epicycles upon epicycles, techniques to solve problems which you only have because of earlier techniques you use to solve other problems, in a dizzying spiral of accumulating complexity (think of Kelly's What Technology Wants here) - yet are we really worse off than hunter-gatherers?
In strict hedonic terms, I should say yes.
Um, the entire point of the Occam's Razor/Kolmogorov complexity approach is that physics isn't complicated.
And do you think Taleb is speaking in a specific, precise, highly-technical, highly unusual definition of 'complex' rarely found outside of computer science circles and extremely niche interest groups like LessWrong? Or do you think he is speaking in the usual colloquial sense of 'complex' which everyone understands and under which physics is indeed extremely complex and difficult?
Chaos theory is a part of physics that deals with complexity that Taleb would probably call complex. On the other hand I don't think that Taleb would call classical Newtonian physics complex. I don't think that Taleb would be a fan of wireless mice. A wireless mouse has the failure mode of the battery dying. A wired mouse doesn't have that problem because it's less complex. Look at his website []. Despite various people offering to do the necessary work for him, he doesn't use a content management system like Wordpress. When it comes to software design Taleb probably favors the 37signals philosophy. It tries to solve problems in a way that's as simple as possible instead of being complex. Take tax law as another example. Politicians want to encourage certain behavior, so they write an exception into the tax law so that people who engage in that behavior have to pay less tax. It adds complexity to the system even if it shifts some people's behavior in the right direction. The result is that developed countries have very complex tax laws that nobody really understands. Taleb would recommend making the tax law simpler, not trying to shape it so that it handles every single exception and encourages people to engage in specific actions. Whenever Congress passes a tax law the tax code shouldn't grow in size but shrink. Changes in tax law should not increase its Kolmogorov complexity but reduce it. Washington politicians don't understand this. Even economics professors like our Hanson don't []. Low-Kolmogorov-complexity tax law would also be easier to understand in the more colloquial sense of 'complex'.
These two statements are contradictory. Lots of Newtonian physics problems are chaotic; in fact, Poincaré developed the early ideas that became chaos theory to deal with the three-body problem.
Would he? So much the worse for him. The markets have spoken and they like their wireless mice. Wirelessness, for all the added complexity - complexity which is much more complex than the problem it solves - seems to be carrying its weight, contra the Taleb quote. I doubt that. Here is where using a complex and not widely-understood theory can bite you: the shorter the code, the more computation you expect it to use to yield the specified results. The shortest possible code for anything reasonably complex will take huge amounts of computing power. (How much computation would it take to simulate our universe up to the present moment using the minimal encoding of the Standard Model + initial conditions?) Any shorter version of the current tax code which has the same meaning will be much harder to understand than a version which redundantly spells out details and common cases for you. And if you change the actual meaning of the tax code, well, then you run into the public choice issues which made the meaning what it is now... (It would be better to talk about logical depth and sophistication than simple uncomputable Kolmogorov complexity, but I believe even less that that is what Taleb meant.) Why should they? Do we expect physicists to understand each and every area of physics? Specialization is one of the defining principles of modern societies.

Whilst arguing that uncertainty is best measured using numbers and probabilities:

We want to measure uncertainties in order to combine them. A politician said that he preferred adverbs to numbers. Unfortunately it is difficult to combine adverbs.

  • Dennis V. Lindley, Understanding Uncertainty
[missing the point] On the contrary, combining adverbs is easy. If X is very uncertain, and Y is very uncertain, then X - Y is very, very uncertain. [/missing the point] ^_^
Why isn't it "very, very uncertain, uncertain"? Anyway, 'very' is an adjective. 'Verily' is the adverb.
But without the math to prove it, you may wrongly conclude that the uncertainties cancel out and X - Y is quite certain indeed.
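Lindley's point about numbers being combinable where adverbs are not can be made concrete. The sketch below (with made-up numbers) shows why "very uncertain minus very uncertain" is not "quite certain": for independent quantities the variances add even under subtraction, so Var(X - Y) = Var(X) + Var(Y).

```python
import math
import random

# Hedged sketch with illustrative numbers: X and Y are independent
# uncertain quantities. Even though we compute X - Y, the uncertainties
# do not cancel; the variances add.

random.seed(0)
N = 100_000
sigma_x, sigma_y = 3.0, 4.0  # standard deviations of X and Y

# Monte Carlo samples of the difference X - Y.
samples = [random.gauss(10, sigma_x) - random.gauss(2, sigma_y) for _ in range(N)]

mean = sum(samples) / N
var = sum((s - mean) ** 2 for s in samples) / N

# Analytic standard deviation of the difference: sqrt(3^2 + 4^2) = 5.0,
# larger than either input uncertainty, not smaller.
analytic_sd = math.sqrt(sigma_x ** 2 + sigma_y ** 2)
print(analytic_sd, math.sqrt(var))  # the empirical value comes out close to the analytic one
```

Try replacing the subtraction with addition: the spread is the same either way, which is exactly the kind of fact that "very, very uncertain" cannot express.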

it's like arguing that fairies are coming out of my toaster in the middle of the night. You can't prove to me that there aren't fairies in my toaster, but that doesn't mean you should take me seriously. What I have a problem with is not so much religion or god, but faith. When you say you believe something in your heart and therefore you can act on it, you have completely justified the 9/11 bombers. You have justified Charlie Manson. If it's true for you, why isn't it true for them? Why are you different? If you say "I believe there's an all-powerful

... (read more)

This quote seems like it's lumping every process for arriving at beliefs besides reason into one. "If you don't follow the process I understand and is guaranteed not to produce beliefs like that, then I can't guarantee you won't produce beliefs like that!" But there are many such processes besides reason, that could be going on in their "hearts" to produce their beliefs. Because they are all opaque and non-negotiable and not this particular one you trust not to make people murder Sharon Tate, does not mean that they all have the same probability of producing plane-flying-into-building beliefs.

Consider the following made-up quote: "when you say you believe something is acceptable for some reason other than the Bible said so, you have completely justified Stalin's planned famines. You have justified Pol Pot. If it's acceptable for you, why isn't it acceptable for them? Why are you different? If you say 'I believe that gays should not be stoned to death and the Bible doesn't support me but I believe it in my heart', then it's perfectly okay to believe in your heart that dissidents should be sent to be worked to death in Siberia. It's perfectly okay to beli... (read more)

I don't think it's lumping everything together. It's criticizing the rule "Act on what you feel in your heart." That applies to a lot of people's beliefs, but it certainly isn't the epistemology of everyone who doesn't agree with Penn Jillette. The problem with "Act on what you feel in your heart" is that it's too generalizable. It proves too much, because of course someone else might feel something different and some of those things might be horrible. But if my epistemology is an appeal to an external source (which I guess in this context would be a religious book but I'm going to use "believe whatever Rameses II believed" because I think that's funnier), then that doesn't necessarily have the same problem. You can criticize my choice of Rameses II, and you probably should. But now my epistemology is based on an external source and not just my feelings. Unless you reduce me to saying I trust Rameses because I Just Feel that he's trustworthy, this epistemology does not have the same problem as the one criticized in the quote. All this to say, Jillette is not unfairly lumping things together and there exist types of morality/epistemology that can be wrong without having this argument apply.
'Act on an external standard' is just as generalizable - because you can choose just about anything as your standard. You might choose to consistently act like Gandhi, or like Hitler, or like Zeus, or like a certain book suggests, or like my cat Peter who enjoys killing things and scratching cardboard boxes. If the only thing I know about you is that you consistently behave like someone else, but I don't know like whom, then I can't actually predict your behavior at all. The more important question is: if you act on what you feel in your heart, what determines or changes what is in your heart? And if you act on an external standard, what makes you choose or change your standard?
It looks like there's all this undefined behavior, and demons coming out the nose [] from the outside because you aren't looking at the exact details of what's going on with their feelings that are choosing the beliefs. Though a C compiler given an undefined construct may cause your program to crash, it will never literally cause demons to come out of your nose, and you could figure this out if you looked at the implementation of the compiler. It's still deterministic. As an atheistic meta-ethical anti-realist, my utility function is basically whatever I want it to be. It's entirely internal. From the outside, from someone who has a system where they follow something external and clearly specified, they could shout "Nasal demons!", but demons will never come out my nose, and my internal, ever so frighteningly non-negotiable desires are never going to include planned famines. It has reliable internal structure. The mistake is looking at a particular kind of specification that defines all the behavior, and then looking at a system not covered by that specification, but which is controlled by another specification you haven't bothered to understand, and saying "Who can possibly say what that system will do?" Some processors (even x86) have instructions (such as bit rotate) which are useful for significant performance boosts in stuff like cryptography, and yet aren't accessible from C or C++, and to use one you have to perform hacks like writing the machine code out as bytes, casting its address to a function pointer and calling it. That's undefined behavior with respect to the C/C++ standard. But it's perfectly predictable if you know what platform you're on. Other people who aren't meta-ethical anti-realists have utility functions that are not really negotiable either. You can't really give them a valid argument that will convince them not to do something evil if they happen to be psychopaths. They just have internal desi
I very much doubt that. At least with present technology you cannot self-modify to prefer dead babies over live ones; and there's presumably no technological advance that can make you want to.
If utility functions are those constructed by the VNM theorem, your utility function is your wants; it is not something you can have wants about. There is nothing in the machinery of the theorem that allows for a utility function to talk about itself, to have wants about wants. Utility functions and the lotteries that they evaluate belong to different worlds. Are there theorems about the existence and construction of self-inspecting utility functions?
That means you can actually make people less harmful if you tell them to listen to their hearts instead of listening to ancient texts. The person who's completely in their head and analyses the ancient text for absolute guidance of action is dangerous. A lot of religions also have tricks where the believer has to go through painful exercises. Just look at a Christian sect like Opus Dei with cilices. The kind of religious believer who wears a cilice loses touch with his heart. Getting someone who's in the habit of causing his own body pain with a cilice to harm other people is easier.
I'd have to disagree here; I think that "faith" is a useful reference class that pretty effectively cleaves reality at the joints, which does in fact lump together the epistemologies Penn Jillette is objecting to. The fact that some communities of people who have norms which promote taking beliefs on faith do not tend to engage in acts of violence, while some such communities do, does not mean that their epistemologies are particularly distinct. Their specific beliefs might be different, but one group will not have much basis to criticize the grounds of others' beliefs. The flaw he's arguing here is not "faith-based reasoning sometimes drives people to commit acts of violence," but "faith-based reasoning is unreliable enough that it can justify anything, in practice as well as principle, including acts of extreme violence."
People who follow the moral code of the Bible versus people who don't is also a pretty clear criterion that separates some epistemologies from others. People who use a pendulum to make decisions have a very different epistemology than someone who thinks about what the authorities in his particular church want him to do and acts accordingly. The kind of people who win the world debating championship also have no problem justifying policies like genocide with rational arguments that win competitive intellectual debates. Justifying actions is something different than decision criteria.
Yes, but then you can go a step down from there, and ask "why do you believe in the contents of the bible?" For some individuals, this will actually be a question of evidence; they are prepared to reason about the evidence for and against the truth of the biblical narrative, and reject it given an adequate balance of evidence. They're generally more biased on the question than they realize, but they are at least convinced that they must have adequate evidence to justify their belief in the biblical narrative. I have argued people out of their religious belief before (and not just Christianity,) but never someone who thought that it was correct to take factual beliefs that feel right "on faith" without first convincing them that this is incorrect as a general rule, not simply in the specific case of religion. This is an epistemic underpinning which unites people from different religions, whatever tenets or holy books they might ascribe to. I've also argued the same point with people who were not religious; it's not simply a quality of any particular religion, it's one of the most common memetic defenses in the human arsenal.
-- Rational!Quirrel, HPMoR chapter 20 In other words: how else can you justify a moral belief and consequent actions, except by saying that you really truly believe in your heart that you're Right? We should not confuse between the fact that almost all people other than Manson think he was morally wrong, and the fact that his justification for his action seems to me to be of the same kind as the justifications anyone else ever gives for their moral beliefs and actions.

Unlike Quirrell, Penn Jillette is not referring to "knowing in your heart" that your moral values are correct, but to "knowing in your heart" some matters of fact (which may then serve as a justification for having some moral values, or directly for some action).

In what way is "deserve" a matter of fact?
"Deserving" is a moral theorem, not a moral axiom. You can most definitely test and check whether someone deserves something, by asking about the rules of the game and their position within the game. If there is no game at hand, I would say "deserving" becomes nonsense, but that's just me.
If you're a moral realist, and you think moral opinions are statements of fact (which may be right or wrong), then you think it's possible to "know in your heart" moral "facts". If you're a moral anti-realist (like me), and you think moral opinions are statements of preferences (in other words, statements of fact about your own preferences and your own brain-wiring), then all moral opinions are such. And then surely Manson's statement of his preferences has the same status as anyone else's, and the only difference is that most people disagree with Manson. What else is there? However, it's true that Jillette talks about factual amoral beliefs like fairies and gods. So my comment was somewhat misdirected. I still think it's partly relevant, because people who believe in gods (i.e. most people) usually tie them closely to their moral opinions. It's impossible to discuss morals (of most humans) without discussing religious beliefs.
That leaves the question of how Penn actually knows that Charlie Manson was acting based on what his heart was telling him. Psychopaths are frequently bad at empathy or "listening to their hearts". It might even be the defining characteristic of what makes someone a psychopath.
You missed the point entirely. 'Listening to their (own) hearts' is not empathy, it's just giving credibility to your instinctive beliefs, regardless of whether they have a basis or not. How is believing that everyone is connected by a network of magical energy tethers and acting according to that any different from believing that my soul will be saved if I massacre 40 people and acting on that? The only difference is the actual acts that you take due to the beliefs. Mind you, it's a very important difference, but the quote is not talking about that, it's talking about beliefs themselves and using them as a sufficient justification for acts.

I'm tired of coherent nonsense!

-- Tom Stoppard, The Real Thing

"There are people out there that call themselves Cooks. But that doesn't make them Cooks. They say they're Cooks because they've heard about Cooks and they want to be Cooks. But they're not. Do you understand that?"

"Scip, where you think org'nizations come from? If they say they're Cooks and they do people like Cooks, that makes 'em goddam Cooks."

"Cooks are a specific thing, Reagan, not an idea."

"You don't even think they're real!"

"They don't have to be real to have a definition."

-- Reagan and Scipi... (read more)

Imagine, then, that a man should need to get fire from a neighbour, and, upon finding a big bright fire there, should stay there continually warming himself; just so it is if a man comes to another to share the benefit of a discourse, and does not think it necessary to kindle from it some illumination for himself and some thinking of his own, but, delighting in the discourse, sits enchanted; he gets, as it were, a bright and ruddy glow in the form of opinion imparted to him by what is said, but the mouldiness and darkness of his inner mind he has not diss

... (read more)

Beware of bugs in the above code; I have only proved it correct, not tried it.

Donald Knuth on the difference between theory and practice.

Duplicate. []

The ultimate result of shielding men from the results of folly is to fill the world with fools.

    — Herbert Spencer (1820-1903), "State Tampering with Money and Banks" (1891)
Or with smart people who profit at the state's expense when it rescues fools from their mistakes. If it's known that folly has no adverse results, people will take more risks.
While this is true, it may also be the case that humans in the default state don't take enough risks. Indeed, an inventor or entrepreneur bears all the costs of bankruptcy but captures only some of the benefits of a new business. By classical economic logic, then, risk-taking is a public good, and undersupplied. Which said, admittedly, not all risk-taking is created equal.

Indeed, an inventor or entrepreneur bears all the costs of bankruptcy

That's exactly wrong. Bankruptcy releases the entrepreneur from his obligations and transfers the costs to his creditors.

Not to say that the bankruptcy is painless, but its purpose is precisely to lessen the consequences of failure.

The inventor is still bearing the costs of the bankruptcy. The creditors are bearing (some of) the costs of the failure, which is not the same thing.
This premise doesn't seem true (for all that the conclusion is accurate). Our entire notion of bankruptcy serves the purpose of putting limits on the cost of those risks, transferring the burden onto creditors. An example of an alternate cultural construct that comes closer to making the entrepreneur bear all the costs of the risk is debt slavery []. Others include various forms of formal or informal corporal or capital punishments applied to those who cannot pay their debts.
That seems right, and it also seems as though the opposite is sometimes right. If a company knows it can reap the benefits of operations (e.g., of product sales) without bearing the cost of those risks associated with its operations (e.g., of pollution), is this a case of risk-taking being oversupplied?
Pollution does not seem particularly well described by risk or risk-taking; it is basically a certainty with industrial operations.
In the same way that "product sales" was intended to refer to the result (income), "pollution" was intended to refer to the result (health problems, etc.). While one might think that some result is basically a certainty, the scope and degree of real problems is frequently uncertain. An entrepreneur who weighs potential public health risks does not seem any more difficult to imagine than one who weighs potential bankruptcy risks. At any rate, pollution is merely an example; you can take any other example you find more suitable.

On thrust work, drag work, and why creative work is perpetually frustrating --

"Each individual creative episode is unsustainable by its very nature. As a given episode accelerates, surpassing the sustainable long term trajectory, the thrust engine overwhelms the available supporting capabilities. ... Just as momentum build to truly exciting levels…some new limitation appears squelching that momentum. ...The problem is that you outran your supporting capabilities and that deficit became a source of drag. Perhaps you didn’t have systems in place to ca... (read more)

-- Meta --

Shouldn't this be in Main rather than Discussion? I PM'ed the author, but didn't get a response.

EDIT: Thanks.

[This comment is no longer endorsed by its author]Reply

The most significant moment in the course of intellectual development, which gives birth to the purely human forms of practical and abstract intelligence, occurs when speech and practical activity, two previously completely independent lines of development, converge.

1930 Lev Vygotsky in Mind and Society (transcribed by Andy Blunden and Nate Schmolze)


There have been many theories of child development. What singles this one out as noteworthy?
Because it is a key insight (stated in 1930) into the development of practical intelligence, i.e. intelligence applicable to general, real-life problems, which the AI community arrived at only in the late 1980s []

Education's purpose is to replace an empty mind with an open one.

Attributed to Malcolm Forbes.

If it weren't for the ban on Robin Hanson quotes, the appropriate response would be too obvious. That said, I really wish I lived in a world where that quotation was true.

"Did many people die?"

"Three thousand four hundred and ninety-two."

"A small proportion."

"It is always one hundred percent for the individual concerned."


"No, no still."

-Iain M. Banks, Look to Windward

Does this quote have any rationalist content beyond the usual anti-deathism applause light?

Eliezer Yudkowsky
And here I looked at that and saw a negative example of how not to do "shut up and multiply", though I suppose it could also be a warning about scope insensitivity / psychophysical numbing if the risk at hand required an absolute payment to stave off, rather than a per-capita payment, since in the former case only absolute numbers matter, and in the latter case per capita risks matter.

Maybe I need to include more context. This conversation occurs after the multiplication was done. This was discussing the aftermath, which had been minimized as much as the minds in question could manage. I took it to mean that, once you have made the best decision you can, there is no guarantee that you will be happy with the outcome, just that it would likely have been worse had you made any other decision.

I think the inability to include that context and make your interpretation clear means that it's a bad rationality quote because it's far too easily taken a 'consequentialism boo!' quote.

Dogs know how to swim, but it’s unlikely they know any truths describing their activities.

-- Richard Fumerton, Epistemology

Really? So, say, if I put a bone on the other side of the river, the dog doesn't know that it can swim across?
How would one tell?
First, you offer them a sequence of bets such that...oh wait.
Eliezer Yudkowsky
"Go work in AI for a while, then come back and write a book on epistemology," he thought.
Upon reading this, he wanted to map out the argumentative space in his head and decided to try to draw a line at one end, saying "Let's not get nuts. Mercury thermometers can react differentially to temperature, but they don't know how hot it is."
[citation needed]
Do dogs not know that bones are nice?

Students have no shortcomings, they have only peculiarities. The job of a teacher is to turn these peculiarities into advantages.

--Israel Gelfand, found here

Far be it from me to argue with Gelfand, but, having done some extensive tutoring, I think that sometimes the best way to "turn these peculiarities into advantages" is to direct the student to a more suitable career path. Face it, some people just naturally suck at math. Sure, they can be drilled to do well on high-school math exams, with many times the effort an average student spends on it (that's what Kumon [] is great at, drills upon more drills with a gradual progress toward System 1-level mastery). But this is a waste of time and effort for everyone involved. Their time and effort is more productively spent on creative writing, dancing, debating or whatever else these "peculiarities" hint at. Math is no exception, of course; it gets all the attention as a hard course because of the unreasonably high requirements relative to other subjects.
I think you're right about the very general form of the quote. However, it still might be worth at least some teachers' time to look at how peculiarities might be advantages.
I'm never sure what to do with these kind of rationality quotes. On the one hand, they are obviously literally false, but on the other hand, they may be pushing against our biases in the right direction.
I'd say the obvious thing to do is comment to that effect. So far as karma is concerned, I have no strong opinion.

And this is going to be tricky, because it is definitely still Bella's right to not take Billy's advice. "You're dating a murderer" is not something that she has to let influence her decision, because hey, it's still her life. But all this is happening in the larger social context where women and girls are sometimes told that their new boyfriend is, say, a rapist. And we're told from the day we're born that this is how we're supposed to react: defensively. Defending him.

We (women especially) are regularly told that men are "innocent until p

... (read more)
* Kurtz' English girlfriend, in Heart of Darkness [] by Joseph Conrad, failing to notice confusion
Should be its own quote :)
This is the fallacy of the excluded middle. And that's what it is to say that there's nothing between "standards of proof strict enough for a court of law" and "not believing anything we didn't see". Also, I suspect she would change her tune about innocent until proven guilty if she were being accused of a serious crime, whether in a court of law or not.

(Edited to add context)

Context: The speakers work for a railroad. An important customer has just fired them in favor of a competitor, the Phoenix-Durango Railroad.

Jim Taggart [Company president, antagonist]: "What does he expect? That we drop all our other shippers, sacrifice the interests of the whole country and give him all our trains?"

Eddie Willers [Junior exec, sympathetic character]: "Why, no. He doesn't expect anything. He just deals with the Phoenix-Durango."

  • Atlas Shrugged

It gets at the idea talked about here sometimes th... (read more)

Without context, it's a bit difficult to see how this is a rationality quote. Not everyone here has read Atlas Shrugged...

I've read AS a while ago, and I still don't remember enough of the context to interpret this quote...

Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.

Samuel Beckett

Duplicate, although a good sentiment.
-- Einstein, supposedly

If only it were that easy in real life...

Gratuitous image + obscure reference + anti-deathism not firewalled from rationalism = downvote, sorry.
The term "rationalism" has a previously-established meaning quite different from LW-style rationality.
IMO, "freethinker" captures the intended reference better.
To me "freethinker" conjures up associations with smug atheist and "skeptic" communities, so I'm not sure if I would consider it better.
You see some difference between Lesswrongians and smug skeptics???
Smug skeptics don't say things like "The fact that there are myths about Zeus is evidence that Zeus exists".
In common parlance, "no evidence for" means "no good evidence for". Saying that myths are not evidence for Zeus is not being smug; it's being able to comprehend English. I could just as well complain about people saying "I constantly hear fallacies" by asking them if they hear fallacies when they are asleep, and if not, why they are being so smug about an obviously false statement.
I'm not saying that it's necessary to say things like that to not be a smug skeptic. On the other hand, it's sufficient. For a Bayesian there are no such things as good or bad evidence. "Good" and "bad" indicate approval and disapproval. There's weak and strong evidence, but even weak evidence means that your belief in a statement should be higher than it would be without that evidence.

It looks to me to be rather clear that what is being said ("myths are not evidence for Zeus") translates roughly to "myths are very weak evidence for Zeus, and so my beliefs are changed very little by them". Is there still a real misunderstanding here?

You are making a mistake in reasoning if you don't change your belief through that evidence. Your belief should change by orders of magnitude. A change from 10^{-18} to 10^{-15} is a strong change. The central reason to believe that Zeus doesn't exist is weak priors. Skeptics have the idea that someone has to prove something to them before they will believe it. In the Bayesian worldview you always have probabilities for your beliefs. Social obligations aren't part of it. "Good" evidence means that someone fulfilled a social obligation of providing a certain amount of proof. It doesn't refer to how strongly a Bayesian should update after being exposed to a piece of evidence. There are very strong instincts for humans to either believe X is true or to believe X is false. It takes effort to think in terms of probabilities.
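The kind of update being described works in odds form; here is a minimal sketch in Python, using the commenter's illustrative numbers (the 10^{-18} prior and the factor-of-1000 shift are examples from the thread, not established values):

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
# For tiny probabilities, odds and probabilities are nearly equal,
# so a likelihood ratio of 1000 moves a 1e-18 prior to roughly 1e-15.

def update(prior, likelihood_ratio):
    """Update a probability via the odds form of Bayes' theorem."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

posterior = update(1e-18, 1000)
print(posterior)  # ~1e-15: a large multiplicative shift, a negligible additive one
```

The same function shows why the belief stays tiny in absolute terms even after a "strong" update: the odds ratio multiplies, but the starting point dominates.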
Where do those numbers come from?
In this case they come from me. Feel free to post your own numbers. The point of choosing Zeus as an example is that it's a claim that's probably not going to mind-kill anyone. That makes it easier to talk about the principles than using an example where the updating actually matters.
In other words, you made them up. Fictional evidence. You did say (my emphasis): Why should a myth about Zeus change anyone's belief by "orders of magnitude"?

Why should a myth about Zeus change anyone's belief by "orders of magnitude"?

I'd buy it. Consider all the possible gods about whom no myths exist: I wouldn't exactly call this line of argument rigorous, but it seems reasonable to say that there's much stronger evidence for the existence of Baduhenna, a Germanic battle-goddess known only from Tacitus' Annals, than for the existence of Gleep, a god of lint balls that collect under furniture whom I just made up.

Of course, there's some pretty steep diminishing returns here. A second known myth might be good for a doubling of probability or so -- there are surprisingly many mythological figures that are very poorly known -- but a dozen known myths not much more than that.

Is this a case where orders of magnitude aren't so important and absolute numbers are? I'm not sure how to even assign probabilities here, but let's say we assign Baduhenna 0.0001% chance of existing, and Gleep 0.00000000000001%. That makes Baduhenna several orders of magnitude more likely than Gleep, but she's still down in the noise below which we can reliably reason. For all practical purposes, Baduhenna and Gleep have the same likelihood of existing. I.e. the possibility of Baduhenna makes no more or less impact on my choices or anything else I believe in than does the possibility of Gleep.
The US military budget is billions. Nobody makes sacrifices to Baduhenna. You might spend a hundred dollars to get a huge military advantage by making sacrifices to Baduhenna. If you shut up and calculate, a 0.0001% chance of Baduhenna existing might be enough to change actions. A lot of people vote in presidential elections when the chance of their vote turning the election is worse than 0.0001%. If the chance of turning an election through voting was 0.00000000000001%, nobody would go to vote. There are probably various Xrisks with 0.0001% chance of happening. Separating them from Xrisks with 0.00000000000001% chance of happening is important.
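The "shut up and calculate" move here amounts to an expected-value threshold; a toy sketch, in which the billion-dollar payoff is an invented figure and only the two probabilities come from the comment:

```python
# Toy expected-value comparison: a small probability can still justify a
# cheap action if the payoff is large. The payoff figure is invented
# purely for illustration.

def expected_value(probability, payoff):
    return probability * payoff

payoff = 1_000_000_000                   # a "huge military advantage", in dollars
ev_high = expected_value(1e-6, payoff)   # 0.0001% chance -> $1000 expected
ev_low = expected_value(1e-16, payoff)   # 0.00000000000001% chance -> ~$1e-7

print(ev_high, ev_low)
```

On these invented stakes, a hundred-dollar sacrifice is worth it at the first probability and pointless at the second, which is the distinction the comment is drawing.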
My point is that we can't shut up and calculate with probabilities of 0.0001% because we can't reliably measure or reason with probabilities that small in day-to-day life (absent certain very carefully measured scientific and engineering problems with extremely high precision; e.g. certain cross-sections in particle physics). I know I assign very low probability to Baduhenna, but what probability do I assign? 0.0001%? 0.000001%? Less? I can't tell you. There is a point at which we just say the probability is so close to zero as to be indistinguishable. When you're dealing with probabilities of specific events, be they XRisks or individual accidents, that have such low probability, the sensible course of action is to take general measures that improve your fitness against multiple risks, likely and unlikely. Otherwise the amount you invest in the highly salient 0.0001% chance events will take too much time away from the 10% events, and you'll have decreased your fitness. For example, you can imagine a very unlikely 0.0001% event in which a particular microbe mutates in a specific way and causes a pandemic. You could invest a lot of money in preventing that one microbe from becoming problematic. Or you could invest the same money in improving the science of medicine, the emergency response system, and general healthcare available to the population. The latter will help against all microbes and a lot more risks.
Do you vote in presidential elections? Do you wear a seat belt every time you drive a car, and would you also do so on a vacation in a country without laws that force you to? How do you know that will reduce and not increase the risk of a deadly bioengineered pandemic? Yes, reasoning about low-probability events is hard. You might not have the mental skills to reason in a decent manner about low-probability events. On the other hand, that doesn't mean that reasoning about low-probability events is inherently impossible.
Do you? You were unable or unwilling to say how you came up with 10^-18 and 10^-15 in the matter of Zeus. (And no, I am not inclined to take your coming up with numbers as evidence that you employed any reasonable method to do so.)
Intuition can be a reasonable method when you have enough relevant information in your head. I'm good enough that I wouldn't make the mistake of calling Baduhenna's existence or Zeus's existence a 10^{-6} event. It is possible that I might have said 10^{-12} instead of 10^{-15} if I had been in a different mood the day I wrote the post. When we did Fermi estimates at the European Community Event in Berlin there was a moment where we had to estimate the force that light from the sun exerts on earth. We had no good idea about how to do a Fermi estimate. We settled for Jonas, who thought he had read the number in the past but couldn't remember it, writing down an intuitive guess. He wrote 10^9 and the correct answer was 5.5 * 10^8. As a practical matter, telling the difference between 10^{-15} and 10^{-12} isn't that important. On the other hand, reasoning about whether the chance that the Large Hadron Collider creates a black hole that destroys earth is 10^{-6} or 10^{-12} is important. I think a 10^{-6} chance of creating a black hole that destroys the earth should be enough to avoid doing experiments like that. In that case I think the probability wasn't 10^{-6} and it was okay to run the experiment, but with the increased power of technology we might have more experiments that actually do have a 10^{-6} xrisk chance, and we should avoid running them.
I don't know what this means. On the basis of what would you decide what's "reasonable" and what's not? There is a time-honored and quite popular technique called pulling numbers out of your ass. Calling it "intuition" doesn't make the numbers smell any better.
See “If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics” on Slate Star Codex, though I agree that a human's intuition for probabilities well below 1e-9 is likely to be very unreliable (except for propositions in a reference class containing billions of very similar propositions, such as “John Doe will win the lottery this week and Jane Roe will win the lottery next week”).
The only thing that matters is making successful predictions. How they smell doesn't. To know whether a method makes successful predictions, you calibrate the method against other data. That then gives you an idea about how accurate your predictions happen to be. Depending on the purpose for which you need the numbers, different amounts of accuracy are good enough. I'm not making some Pascal's mugging argument that people are supposed to care more about Zeus, where I would need to know the difference between 10^{-15} and 10^{-16}. I made an argument about how many orders of magnitude my beliefs should be swayed.
My current belief in the probability of Zeus is uncertain enough that I have no idea if it changed by orders of magnitude, and I am very surprised that you seem to think the probability is in a narrow enough range that claiming to have increased it by order of magnitude becomes meaningful.
You can compute the likelihood ratio without knowing the absolute probability.
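This point can be made concrete: the likelihood ratio is built only from how well each hypothesis predicts the evidence, so the prior never enters. A minimal sketch, with both conditional probabilities invented for illustration:

```python
# The likelihood ratio P(E|H) / P(E|not-H) never touches the prior P(H),
# so it is well-defined even when the absolute probability is deeply
# uncertain. The numbers below are invented for illustration.

def likelihood_ratio(p_evidence_given_h, p_evidence_given_not_h):
    return p_evidence_given_h / p_evidence_given_not_h

# Suppose myths about a god are near-certain if it exists and merely
# common if it does not:
lr = likelihood_ratio(0.99, 0.10)
print(lr)  # ~9.9, whatever prior you started from
```

Whichever prior you hold, multiplying your prior odds by this ratio gives your posterior odds; the ratio itself is the same for everyone.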
Being surprised is generally a sign that it's useful to update a belief. I would add that given my model of you it doesn't surprise me that this surprises you.
You can call it heuristics, if you want to...
No, I can't. Heuristics are a kind of algorithm that provides not optimal but adequate results. "Adequate" here means "sufficient for a particular real-life purpose". I don't see how proclaiming that the probability of Zeus existing is 10^-12 is a heuristic.
Intuition (or educated guesses like the ones referred to here), fall under the umbrella of heuristics.
In what way are you arguing that the number I gave for the existence of Zeus is insufficient for a particular real-life purpose?
Because the probability of there being a myth about Zeus, given that Zeus exists, is orders of magnitude higher than the probability of there being a myth about Zeus, given that he does not exist?
This seems obviously empirically false. Pick something that everyone agrees is made up: there are way more stories about Cthulhu than there are stories about any random person who happens to exist. One of my kids more readily knows Paul Bunyan and John Henry than any US president. The fiction section of the library is substantially larger than the non-fiction. The probability that A exists, given that A is in a story, seems very, very small.
Given that the myths about Zeus attribute vast supernatural properties to him, and we now know better than to believe in any such stuff (we don't need Zeus to explain thunder and lightning), the myths are evidence against his existence. For the ancient Greeks, of course, it was not so, but the question is being posed here and now. Also, myths are generally told more of imaginary entities than real ones, not less. Myths are all that imaginary creatures have going for them. How many myths are there about Pope Francis? I expect there are some unfounded stories going around among the devout, but nothing on the scale of Greek mythology. So no, P(myths about Zeus|Zeus is real) is not larger, but smaller than P(myths about Zeus|Zeus is imaginary). On the other hand, it is larger than P(myths about Zeus|no such entity has even been imagined). The latter is indistinguishable from zero -- to have a myth about an entity implies that that entity has been imagined. So we can conclude from the existence of myths that Zeus has been imagined. I'm fine with that.
I see your problem here, you're restricting attention to things that either exist or have had myths told about them. Thus it's not surprising that you find that they are negatively correlated. If you condition on at least one of A or B being true, then A and B will always negatively correlate.
(BTW, this effect is known as Berkson's paradox.)
Thanks, I knew it had a Wikipedia entry and spent nearly 10 minutes looking for it before giving up.
What definition of "myth" are you using that doesn't turn the above into a circular argument?
The original context was a slogan about myths of Zeus, but there are myths about real people. Joan of Arc, for example. So this is not true by definition, but an empirical fact. I had no particular definition in mind, any more than I do of "Zeus" or any of the other words I have just used, but if you want one, this from Google seems to describe what we are all talking about here: Great heroes with historical existence accrete myths of supernatural events around them, while natural forces get explained by supernatural beings. Heracles might have been a better example than Zeus. I don't know if ancient Greek scholarship has anything to say on the matter, but it seems quite possible that the myths of Heracles could originate from a historical figure. Likewise Romulus, Jason, and all the other mortals of Graeco-Roman mythology. These have some reasonable chance of existing. Zeus does not. But by that very fact, the claim that "myths about Heracles are evidence for Heracles' existence" is not as surprising as the one about Zeus, and so does not function as a shibboleth for members of the Cult of Bayes to identify one another.
Notice that in the above argument you're implicitly conditioning on gods not existing. We're trying to determine how the existence of myths about Zeus affects our estimate that Zeus exists. You're basically saying "I assign probability 0 to Zeus existing, so the myths don't alter it".
I'm decently calibrated on the credence game and have made plenty of PredictionBook predictions. The idea of Bayesianism is that it's good to boil down your beliefs to probability numbers. If you think my argument is wrong, provide your own numbers: P(Zeus exists | myths exist) and P(Zeus exists | myths don't exist). There's really no point discussing Zeus further if you aren't willing to put numbers on your own beliefs. Apart from that, I linked to a discussion about Bayesianism, and you might want to read that discussion if you want a deeper understanding of the claim.
You cannot use the credence game to validate your estimation of probabilities of one-off situations down at the 10^-18 level. You will never see Zeus or any similar entity. I am familiar with the concept. The idea is also that it's no good pulling numbers out of thin air. Bayesian reasoning is about (1) doing certain calculations with probabilities and evidence -- by which I mean numerical calculations with numbers that are not made up -- and (2) where numerical calculation is not possible, using the ideas as a heuristic background and toolbox. Assigning 10^-bignum to Zeus existing confuses the two. Look! My office walls are white! I must increase my estimated probability of crows being bright pink from 10^-18 to 10^-15! No, I don't think I shall. Earlier you wrote: The central reason to believe that Zeus doesn't exist is the general arguments against the existence of gods and similar entities. We don't see them acting in the world. We know what thunder and lightning are and have no reason to attribute them to Zeus. Our disbelief arose after we already knew about the myths, so the thought experiment is ill-posed. "The fact that there are myths about Zeus is evidence that Zeus exists" is a pretty slogan but does not actually make any sense. Sense nowadays, that is. Of course the ancient Greeks were brought up on such tales and I assume believed in their pantheon as much as the believers of any other religion do in theirs. But the thought experiment is being posed today, addressed to people today, and you claim to have updated -- from what prior state? -- from 10^-18 to 10^-15. There's really no point discussing Zeus, period.
The point of having the discussion about Zeus is Politics is the Mind-Killer. The insignificance of Zeus's existence is a feature, not a bug. If I made an argument that the average person's estimate of the chance that a single unprotected act of sex with a stranger infects them with AIDS is off by two orders of magnitude, then that topic would mind-kill. The same is true for other interesting claims.
I agree with this comment, but I want to point out that there may be a problem with equating the natural language concept "strength of evidence" with the likelihood ratio. You can compare two probabilities on either an additive or a multiplicative scale. When applying a likelihood ratio of 1000, your prior changes by a multiplicative factor of 1000 (this actually applies to odds rather than probabilities, but for low probability events, the two approximate each other). However, on an additive scale, a change from 10^{-18} to 10^{-15} is really just a change of less than 10^{-15}, which is negligible. The multiplicative scale is great for several reasons: the likelihood ratio is suggested by Bayes' theorem, it is easy to reason with, it does not depend on the priors, several likelihood ratios can easily be applied sequentially, and it is suitable for comparing the strength of different pieces of evidence for the same hypothesis. The additive scale does not have those nice properties, but it may still correspond more closely to the natural language concept of "strength of evidence".
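The contrast between the two scales can be sketched directly, using the thread's illustrative 10^{-18} to 10^{-15} update:

```python
# One update, two scales: multiplicatively it is a factor of ~1000
# ("strong evidence"); additively it is ~1e-15 ("negligible").
prior, posterior = 1e-18, 1e-15

multiplicative_change = posterior / prior   # ~1000
additive_change = posterior - prior         # ~1e-15

print(multiplicative_change, additive_change)
```

Both numbers describe the same update; which one matches the everyday phrase "strength of evidence" is exactly the question at issue.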
I have not said that it's strong evidence. I said it's evidence.
Yes, that is probably clear to most of us here. But, in reality, I and most likely also you discount probabilities that are very small, instead of calculating them out and changing our actions (we'll profess 'this is very unlikely' instead of 'this is not true', but what actually happens is the same thing). There are a huge number of probability-10^{-18} deities out there; we just shrug and assume they don't exist unless strong enough (or 'good' enough, I still don't see the difference there) evidence comes up to alter that probability enough so that it is in the realm of probabilities worth actually spending time and effort thinking about. This hypothetical skeptic, if pressed, would most likely concede that sure, it is possible that Zeus exists. He'd even probably concede that it is more likely that Zeus exists than that a completely random other god with no myths about them exists. But he'd say that is fruitless nitpicking, because both of them are overwhelmingly unlikely to exist and the fact that they still might exist does not change our actions in any way. If you wish to argue this point, then that is fine, but if we agree here then there's no argument, just a conflict of language. I'm trying to say that where you would say "Probability for X is very low", most people who have not learned the terminology here would normally say "X is false", even if they would concede that "X is possible but very unlikely" if pressed on it.
Given that someone like Richard Kennaway, who's smart and exposed to LW thinking (>10000 karma), doesn't immediately find the point I'm making obvious, you are very optimistic. People usually don't change central beliefs about ontology in an hour after reading a convincing post on a forum. An hour might be enough to change the language you use, but it's not enough to give you a new way to relate to reality. The probability that an asteroid destroys humanity in the next decade is relatively small. On the other hand, it's still useful for our society to invest more resources into telescopes to have all near-earth objects covered. The same goes for Yellowstone destroying our civilisation. Our society is quite poor at dealing with low-probability, high-impact events. When it comes to things like Yellowstone, the instinctual response of some people is to say: "Extraordinary claims require extraordinary evidence." That kind of thinking is very dangerous given that human technology gets more and more powerful as time goes on.
I would say the probabilities of the Yellowstone and meteor-impact situations are both vastly higher than something like the existence of a specific deity. They're in the realm of possibilities that are worth thinking about. But there are tons of other possible civilization-ending disasters that we don't, and shouldn't, consider, because they have much less evidence for them and thus are so improbable that they are not worth considering. I do not believe we as humans can function without discounting very small probabilities. But yeah, I'm generally rather optimistic about things. Reading LW has helped me, at that - before, I did not know why various things seemed to be so wrong; now I have an idea, and I know there are people out there who also recognize these things and can work to fix them. As for the note about changing their central beliefs, I agree on that. What I meant to say was that the central beliefs of this hypothetical skeptic are not actually different from yours in this particular regard, he just uses different terminology. That is, his thinking goes 'This has little evidence for it and is a very strong claim that contradicts a lot of the evidence we have' -> 'This is very unlikely to be true' -> 'This is not true' and what happens in his brain is he figures it's untrue and does not consider it any further. I would assume that your thinking goes something along the lines of 'This has little evidence for it and is a very strong claim that contradicts a lot of the evidence we have' -> 'This is very unlikely to be true', and then you skip that last step, but what still happens in your brain is that you figure it is probably untrue and don't consider it any further. And both of you are most likely willing to reconsider should additional evidence present itself.
Careful there. Our intuition of what's in the "realm of possibilities that are worth thinking about" doesn't correspond to any particular probability, rather it is based on whether the thing is possible based on our current model of the world and doesn't take into account how likely that model is to be wrong.
If I understand you correctly, then I agree. However, to me it seems clear that human beings discount probabilities that seem to them to be very small, and it also seems to me that we must do that, because calculating them out and having them weigh our actions by tiny amounts is impossible. The question of where we should try to set the cut-off point is a more difficult one. It is usually too high, I think. But if, after actual consideration, it seems that something is actually extremely unlikely (as in, somewhere along the lines of 10^{-18} or whatever), then we treat it as if it is outright false, regardless of whether we say it is false or say that it is simply very unlikely. And to me, this does not seem to be a problem so long as, when new evidence comes up, we still update, and then start considering the possibilities that now seem sufficiently probable. Of course, there is a danger in that it is difficult for a successive series of small new pieces of evidence pointing towards a certain, previously very unlikely conclusion to overcome our resistance to considering very unlikely conclusions. This is precisely because I don't believe we can actually use numbers to update all the possibilities, which are basically infinite in number. It is hard for me to imagine a slow, successive series of tiny nuggets of evidence that would slowly convince me that Zeus actually exists. I could read several thousand different myths about Zeus, and it still wouldn't convince me. Something large enough for a single major push to the probability to force me to consider it more thoroughly, privilege that hypothesis in the hypothesis-space, seems to be the much more likely way - say, Zeus speaking to me and showing off some of his powers. This is admittedly a weakness, but at least it is an admitted weakness, and I haven't found a way to circumvent it yet but I can at least try to mitigate it by consciously paying more attention than I intuitively would to small but not infinit
Disagree. Most people use "unlikely" for something that fits their model but is unlikely, e.g., winning the lottery, having black come up ten times in a row in a game of roulette, two bullets colliding in mid air. "Untrue" is used for something that one's model says is impossible, e.g, Zeus or ghosts existing.
I am confused now. Did you properly read my post? What you say here is 'I disagree, what you said is correct.' To try and restate myself, most people use 'unlikely' like you said, but some, many of whom frequent this site, use it for 'so unlikely it is as good as impossible', and this difference can cause communication issues.
My point is that in common usage (in other words from the inside) they distinction between "unlikely" and "impossible" doesn't correspond to any probability. In fact there are "unlikely" events that have a lower probability than some "impossible" events.
Assuming you mean that things you believe are merely 'unlikely' can actually, more objectively, be less likely than things you believe are outright 'impossible', then I agree.
What I mean is that the conjunction of possible events will be perceived as unlikely, even if enough events are conjoined together to put the probability below what the threshold for "impossible" should be.
True. However, there is no such thing as 'impossible', or probability 0. And while in common language people do use 'impossible' for what is merely 'very improbable', there's no accepted, specific threshold there. Your earlier point about people seeing a fake distinction between things that seem possible but unlikely in their model and things that seem impossible in their model contributes to that. I prefer to use 'very improbable' for things that are very improbable, and 'unlikely' for things that are merely unlikely, but it is important to keep in mind that most people do not use the same words I do and to communicate accurately I need to remember that. Okay, I just typed that and then I went back and looked and it seems that we've talked a circle, which is a good indication that there is no disagreement in this conversation. I think that I'll leave it here, unless you believe otherwise.
Said Achmiz:
I second theruf's "what". The card reads like anti-deathism, not deathism. (Also, what the heck does "not firewalled from rationalism" mean?)
Sorry, I was being jargony. See Firewalling the Optimal from the Rational. Quite right. Whoops. I'll go fix that now...
Said Achmiz:
Ah. This is why I'm a big fan of the Yudkowskian practice of turning all instances of jargon in the text (or at least, the first appearance of any jargony term in a post) into a link to the relevant post/etc.
Er ... what?

"The most amazing thing about philosophy is that even though no nobody knows to do it, and even though it has never achieved anything, it is still possible to do it really badly"

--Oolon Kaloophid

Is there missing context, or did a cat philosopher walk across your keyboard? Or is it meant to evoke "writing but really badly"? Also: strongly disagree that "it has never achieved anything". See also, "successful philosophy stops being philosophy and becomes another science" (not an exact quote).

"Many who are self-taught far excel the doctors, masters and bachelors of the most renowned universities" Ludwig Von Mises

Oft discussed here and is shown to be empirically wrong in math and physics (if you define "excel" as "make notable discoveries"). Probably also wrong in comp. sci., chem and to a lesser degree in engineering. It might still be true in some nascent areas where one does not need 10 years of intense studying to get to the leading edge.
There is one good example of an unschooled mathematician: Ramanujan. The lack of need for special equipment in maths probably has something to do with it.
Yes, he is definitely an exception. Unfortunately, I cannot think of anyone else in the last 100 years. Possibly because these days anyone brilliant like that ends up in the system. Which is a good thing, if true.
That sounds like a list of non-diseased disciplines. Is this by chance? Alternatively, it's the STEM subjects. Same thing? On the other hand, if "excel" is "do well in life" then, I don't know. But that is the reading that the original context of the quote [] suggests to me: Also an interesting view of education. One of the ancients said that the mind is not a pot to be filled but a fire to be ignited(1), and nobler teachers see the aim of their profession as the igniting of that fire in their students. However, Mises appears to take the view that this is impossible (he does not limit his criticism of education to any time and place), that teaching cannot be anything but the filling of a pot, and the igniting of the fire can come only from the inner qualities of the individual, incapable of being influenced from outside. (1) As usually quoted. I've just added the original source [] of this to the quotes thread.
One of the more popular ideals of education is summarized in this quote from Malcolm Forbes: Hmm, probably deserves a top-level comment. Anyway, the reality is that some people are happy with imitations, while others strive for creativity: So good education is beneficial to creative types, as well, since to defy something or to add to something, you have to learn that something first. A bit harsh, given that many people are at least a little bit creative. Not sure if this is Mises' opinion or what he argues against, but, again, seems a bit harsh. There are always the outliers, but for the majority of people this "igniting" is a combination of nature and nurture.
Some numbers would be useful there.
Numbers would be kind of a nit-pick I would think. The point of the statement is not the word "many", but rather the rest of the statement. It's sort of an attempt to break the spell that a large amount of money and a fancy college is required for real learning.
Many as an absolute number, or many as a fraction of all self-taught people? I'd agree with the former but not with the latter. IME most self-taught people end up with gross misconceptions because of this.
Absolute number. The point of the statement is not the word "many", but rather the rest of the statement. It's sort of an attempt to break the spell that a large amount of money and a fancy college is required for real learning. But yeah, the reference to the double illusion is spot on and is definitely a kink that has to be ironed out with effort and testing.
Correlation/causation? Selection effects?

Neither. Obviously, the average excellence of "doctors, masters and bachelors" of the most renowned universities is higher than the average excellence of people who are self-taught. Nobody suggests that being self-taught correlates positively with excellence.

The quotation is still undoubtedly true, because there are many more individuals who are self-taught than individuals who have these credentials. It is also plausible that the variance in excellence among the self-taught is much higher. Therefore, it is trivial to identify self-taught individuals who are more knowledgeable than most highly credentialed university graduates.

In fact, as a doctoral student in applied causal inference at a fairly renowned university, I can identify several self-taught Less Wrong community members who understand causality theory better than I do.

Ayn Rand noticed this too, and was a big proponent of the idea that colleges indoctrinate as much as they teach. While I believe this is true, and that the indoctrination has a large, mostly negative effect on people who mindlessly accept self-contradicting ideas into their philosophy and moral self-identity, I believe it's still good to get a college education in STEM. STEM majors will benefit from the useful things they learn more than they will be hurt or held back by the evil, self-contradictory things they "learn" (are indoctrinated with). I'm strongly in agreement with libertarian investment researcher Doug Casey's comments on education []. I also agree that the average indoctrinated idiot or "pseudo-intellectual []" is more likely to have a college degree than not. Unfortunately, these conformity-reinforcing system nodes then drag down entire networks populated by conformists to "lowest-common-denominator" pseudo-philosophical thinking: uncritically accepted and regurgitated memes reproduced by political sophistry.

Of course, I think that people who totally "self-start" have little need for most courses in most universities, but a big need for specific courses in specific narrow subject areas. Khan Academy [] and other MOOCs are now eliminating even that necessity. Generally, this argument is that "it's a young man's world." This will get truer and truer, until the point where the initial learning curve once again becomes a barrier to achievement beyond what well-educated "ultra-intelligences" know, and the experience and wisdom (advanced survival and optimization skills) they have. I believe that even long past the singularity, there will be a need for direct learning from biology, ecosystems, and other incredibly complex phenomena. Ideally,
[anonymous] 8y 12

You certainly wrote quite a lot of ideological mish-mash to dodge the simplest possible explanation: a, if not the, primary function of elite education (as compared to non-elite education) is to filter out an arbitrary caste of individuals capable of optimizing their way through arbitrarily difficult trials and imbue that caste with elite status. The precise content of the trials doesn't really matter (hence the existence of both Yale and MIT), as long as they're sufficiently difficult to ensure that few pass.

I'm writing from an elite engineering university, and as far as I can tell, this is more-or-less our tacitly admitted pedagogical method: some students will survive the teaching process, and they will retroactively be declared superior. The question of whether we even should optimize our pedagogy to maximize the conveyance of information from professor to student plays no part whatsoever in our curriculum.

If you're right (and you may well be), then I view that as a sad commentary on the state of human education, and I view tech-assisted self-education as a way of optimizing that inherently wasteful "hazing" system you describe. I think it's likely that what you say is true for some high percentage of classes, but untrue for a very small minority of highly-valuable classes. Also, the university atmosphere is good for social networking, which is one of the primary values of going to MIT or Yale.
