I don't ask my friends about their childhoods—I lack social curiosity—and so I don't know how much of a trend this really is:
Of the people I know who are reaching upward as rationalists, who volunteer information about their childhoods, there is a surprising tendency to hear things like: "My family joined a cult and I had to break out," or "One of my parents was clinically insane and I had to learn to filter out reality from their madness."
My own experience with growing up in an Orthodox Jewish family seems tame by comparison... but it accomplished the same outcome: It broke my core emotional trust in the sanity of the people around me.
Until this core emotional trust is broken, you don't start growing as a rationalist. I have trouble putting into words why this is so. Maybe any unusual skill you acquire—anything that makes you unusually rational—requires you to zig when other people zag. Maybe that's just too scary, if the world still seems like a sane place unto you.
Or maybe you don't bother putting in the hard work to be extra bonus sane, if normality doesn't scare the hell out of you.
I know that many aspiring rationalists seem to run into roadblocks around things like cryonics or many-worlds. Not that they don't see the logic; they see the logic and wonder, "Can this really be true, when it seems so obvious now, and yet none of the people around me believe it?"
Yes. Welcome to the Earth where ethanol is made from corn and environmentalists oppose nuclear power. I'm sorry.
(See also: Cultish Countercultishness. If you end up in the frame of mind of nervously seeking reassurance, this is never a good thing—even if it's because you're about to believe something that sounds logical but could cause other people to look at you funny.)
People who've had their trust broken in the sanity of the people around them seem to be able to evaluate strange ideas on their merits, without feeling nervous about their strangeness. The glue that binds them to their current place has dissolved, and they can walk in some direction, hopefully forward.
Lonely dissent, I called it. True dissent doesn't feel like going to school wearing black; it feels like going to school wearing a clown suit.
That's what it takes to be the lone voice who says, "If you really think you know who's going to win the election, why aren't you picking up the free money on the Intrade prediction market?" while all the people around you are thinking, "It is good to be an individual and form your own opinions, the shoe commercials told me so."
Maybe in some other world, some alternate Everett branch with a saner human population, things would be different... but in this world, I've never seen anyone begin to grow as a rationalist until they make a deep emotional break with the wisdom of their pack.
Maybe in another world, things would be different. And maybe not. I'm not sure that human beings realistically can trust and think at the same time.
Once upon a time, there was something I trusted.
Eliezer18 trusted Science.
Eliezer18 dutifully acknowledged that the social process of science was flawed. Eliezer18 dutifully acknowledged that academia was slow, and misallocated resources, and played favorites, and mistreated its precious heretics.
That's the convenient thing about acknowledging flaws in people who failed to live up to your ideal; you don't have to question the ideal itself.
But who could possibly be foolish enough to question, "The experimental method shall decide which hypothesis wins"?
Part of what fooled Eliezer18 was a general problem he had, with an aversion to ideas that resembled things idiots had said. Eliezer18 had seen plenty of people questioning the ideals of Science Itself, and without exception they were all on the Dark Side. People who questioned the ideal of Science were invariably trying to sell you snake oil, or trying to safeguard their favorite form of stupidity from criticism, or trying to disguise their personal resignation as a Deeply Wise acceptance of futility.
If there'd been any other ideal that was a few centuries old, the young Eliezer would have looked at it and said, "I wonder if this is really right, and whether there's a way to do better." But not the ideal of Science. Science was the master idea, the idea that let you change ideas. You could question it, but you were meant to question it and then accept it, not actually say, "Wait! This is wrong!"
Thus, when once upon a time I came up with a stupid idea, I thought I was behaving virtuously if I made sure there was a Novel Prediction, and professed that I wished to test my idea experimentally. I thought I had done everything I was obliged to do.
So I thought I was safe—not safe from any particular external threat, but safe on some deeper level, like a child who trusts their parent and has obeyed all the parent's rules.
I'd long since been broken of trust in the sanity of my family or my teachers at school. And the other children weren't intelligent enough to compete with the conversations I could have with books. But I trusted the books, you see. I trusted that if I did what Richard Feynman told me to do, I would be safe. I never thought those words aloud, but it was how I felt.
When Eliezer23 realized exactly how stupid the stupid theory had been—and that Traditional Rationality had not saved him from it—and that Science would have been perfectly okay with his wasting ten years testing the stupid idea, so long as afterward he admitted it was wrong...
...well, I'm not going to say it was a huge emotional convulsion. I don't really go in for that kind of drama. It simply became obvious that I'd been stupid.
That's the trust I'm trying to break in you. You are not safe. Ever.
Not even Science can save you. The ideals of Science were born centuries ago, in a time when no one knew anything about probability theory or cognitive biases. Science demands too little of you, it blesses your good intentions too easily, it is not strict enough, it only makes those injunctions that an average scientist can follow, it accepts slowness as a fact of life.
So don't think that if you only follow the rules of Science, that makes your reasoning defensible.
There is no known procedure you can follow that makes your reasoning defensible.
There is no known set of injunctions which you can satisfy, and know that you will not have been a fool.
There is no known morality-of-reasoning that you can do your best to obey, and know that you are thereby shielded from criticism.
No, not even if you turn to Bayescraft. It's much harder to use and you'll never be sure that you're doing it right.
The discipline of Bayescraft is younger by far than the discipline of Science. You will find no textbooks, no elderly mentors, no histories written of success and failure, no hard-and-fast rules laid down. You will have to study cognitive biases, and probability theory, and evolutionary psychology, and social psychology, and other cognitive sciences, and Artificial Intelligence—and think through for yourself how to apply all this knowledge to the case of correcting yourself, since that isn't yet in the textbooks.
You don't know what your own mind is really doing. They find a new cognitive bias every week and you're never sure if you've corrected for it, or overcorrected.
The formal math is impossible to apply. It doesn't break down as easily as John Q. Unbeliever thinks, but you're never really sure where the foundations come from. You don't know why the universe is simple enough to understand, or why any prior works for it. You don't know what your own priors are, let alone if they're any good.
One of the problems with Science is that it's too vague to really scare you. "Ideas should be tested by experiment." How can you go wrong with that?
On the other hand, if you have some math of probability theory laid out in front of you, and worse, you know you can't actually use it, then it becomes clear that you are trying to do something difficult, and that you might well be doing it wrong.
So you cannot trust.
And all this that I have said, will not be sufficient to break your trust. That won't happen until you get into your first real disaster from following The Rules, not from breaking them.
Eliezer18 already had the notion that you were allowed to question Science. Why, of course the scientific method was not itself immune to questioning! For are we not all good rationalists? Are we not allowed to question everything?
It was the notion that you could actually in real life follow Science and fail miserably, that Eliezer18 didn't really, emotionally believe was possible.
Oh, of course he said it was possible. Eliezer18 dutifully acknowledged the possibility of error, saying, "I could be wrong, but..."
But he didn't think failure could happen in, you know, real life. You were supposed to look for flaws, not actually find them.
And this emotional difference is a terribly difficult thing to convey in words, and I fear there's no way I can really warn you.
Your trust will not break, until you apply all that you have learned here and from other books, and take it as far as you can go, and find that this too fails you—that you have still been a fool, and no one warned you against it—that all the most important parts were left out of the guidance you received—that some of the most precious ideals you followed, steered you in the wrong direction—
—and if you still have something to protect, so that you must keep going, and cannot resign and wisely acknowledge the limitations of rationality—
—then you will be ready to start your journey as a rationalist. To take sole responsibility, to live without any trustworthy defenses, and to forge a higher Art than the one you were once taught.
No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand.
Post Scriptum: On reviewing a draft of this essay, I discovered a fairly inexcusable flaw in reasoning, which actually affects one of the conclusions drawn. I am leaving it in. Just in case you thought that taking my advice made you safe; or that you were supposed to look for flaws, but not find any.
And of course, if you look too hard for a flaw, and find a flaw that is not a real flaw, and cling to it to reassure yourself of how critical you are, you will only be worse off than before...
It is living with uncertainty—knowing on a gut level that there are flaws, they are serious and you have not found them—that is the difficult thing.
I can't help but remember you talking about a teacher who always had an error in his lectures, and over the course of the semester made the errors harder and harder to find. The last lecture, which was the most complex, didn't have a flaw.
I was in the same class, but you're mistaken: The last three lectures didn't have a flaw. :)
(For posterity: http://lesswrong.com/lw/qf/no_safe_defense_not_even_science/k5c)
I was more or less surrounded by people of average sanity when I grew up, but they still seemed pretty nuts to me. (Completely off-topic, but I really wonder why people tell children known fantasies such as Santa Claus and the Easter Bunny.)
I don't think it's really accurate to say most people are insane. Clearly they need to be sane for the world to keep on running. IMO, they are insane when they can afford to be - which is pretty common in politics, religion and untestable hypotheses, but a LOT less common in the workplace. Most people just aren't interested in truth because truth doesn't pay out in a lot of circumstances. I wonder if science might change in your direction (and how quickly?) if betting markets were more commonly accepted?
You're thinking of Kai Chang's My Favorite Liar. It's linked in the Post Scriptum.
When they taught me about the scientific method in high school, the last step was "go back to the beginning and repeat." There was also a lot about theories replacing other theories and then being replaced later, new technologies leading to new measurements, and new ideas leading to big debates.
I don't remember if they explicitly said, "You can do science right and still get the wrong answer," but it was very strongly (and logically) implied.
I don't know what you were taught, but I expect it was something similar.
All this "emotional understanding" stuff sounds like your personal problem. I don't mean that it isn't important or that I don't have sympathy for any pain you suffered. I just think it's an emotion issue, not a science issue.
I understand the point you're raising, because it caught me for a while, but I think I also see the remaining downfall of science. It's not that science leads you to the wrong thing, but that it cannot lead you to the right one. You never know if your experiments actually brought you to the right conclusion - it is entirely possible to be utterly wrong, and completely scientific, for generations and centuries.
Not only this, but you can be obviously wrong. We look at people trusting in spontaneous generation, or a spirit theory of disease, and mock them - rightfully. They took "reasonable" explanations of ideas, tested them as best they could, and ended up with unreasonable confidence in utterly illogical ideas. Science has no step in which you say "and is this idea logically reasonable", and that step is unattainable even if you add it. Science offers two things - gradual improvement, and safety from being wrong with certainty. The first is a weak reward - there is no schedule to science, and by practicing it there's a reasonable chance that you'll go your entire life with major problems with your worldview. The second is hollow - you are defended from taking a wrong idea and saying "this is true" only inasmuch as science deprives you of any certainty. You are offered a qualifier to say, not a change in your ideas.
I don't believe most of the old "obviously wrong" beliefs, like a spirit theory of disease, were ever actually systematically tested. Experimentation doesn't prevent you from coming to silly conclusions, but it can throw out a lot of them.
(A nitpick: Either these things are only obviously wrong in retrospect, or they did not start with reasonable explanations. That is, either we cannot rightfully mock them, or the ideas were ridiculous from the beginning.)
As for the rest, I don't disagree with your assertions - only the (implied) view we should take of them. It is certainly true that science can be slow, and true that you can't ever really know if your explanation is the right one. But I think that emphasis on knowing "the real truth", the really right explanation, is missing the point a little; or, in fact, the idea of the One True Explanation itself is unproductive at best and incoherent at worst. After all, even if we eventually have such an understanding of the universe that we can predict the future in its entirety to the finest level of detail theoretically possible, our understanding could still be totally wrong as to what is "actually" happening. Think of Descartes' Evil Genius, for example. We could be very, very confident we had it right... but not totally sure.
But - once you are at this point, does it matter? The power of science and rationality lies in their predictive ability. Whether our understanding is the real deal or simply an "[apparently] perfect model" becomes immaterial. So I think yes, science can lead you to the right conclusion, if by "right" we mean "applicable to the observed world" and not The Undoubtable Truth. No such thing exists, after all.
The slowness is a disappointment, though. But it's accelerating!
I don't see how what you have said necessitates the "downfall" of science. It seems to me that it only suggests scientists should look at their theories as "the best possible explanation at the current time, which will likely be altered or proven incorrect in the future," rather than the usual "this is right, everything else is wrong." But we already know that this is an improvement everyone should be making to their thought-processes; here scientists are being singled out.
It would be appreciated if someone pointed out flaws in what I have said.
That is why I am a rationalist and a libertarian. Everyone is totally and completely responsible for every choice they make and everything they do. No one, from toddler to parent, can protect you from your responsibility. That is the difference between a child and a real adult: the adult knows and accepts their responsibility; the less-than-mature tries to deny or hide from it.
@Brian: Emotion is the driver of everything, even rationality.
Why do you lack social curiosity? Do you think it's a neuro-quirk, or just a normal quirk?
I can't speak for him, but I developed below-average social curiosity after I realized that people usually talk about things that aren't really interesting.
this article is GOLD!
People who've had their trust broken in the sanity of the people around them seem to be able to evaluate strange ideas on their merits.
I'd say instead that this prod produces a high-variance response. Some rise to the challenge and become more rational, while others fall even deeper into delusion. Yes, the most rational people tend to have experienced this, but so have the most irrational people.
Science is capitalised, suggesting an abnormal definition of the term. Can this definition be found somewhere? What is "Science" - if it is different from science - and if it is not different, then why capitalise it?
I started to seriously think about rationality only when I started to think about AI, trying to understand grounding. When I saw that meaning, communication, correctness and understanding are just particular ways to characterize probabilistic relations between "representation" and "represented", it all started to come together, and later was transferred to human reasoning and beyond. So, it was the enigma of AI that acted as a catalyst in my case, not a particular delusion (or misplaced trust). Most of the things I read on the subject were outright confused or in the state of paralyzed curiosity, not deluded in a particular technical way. But so is "Science". The problem is in settling for status quo, walking along the trodden track where it's possible to do better.
Thus, I see this post as a demonstration by example of how important it is to break the trust in all of your cherished curiosity stoppers.
I tried, but didn't find a flaw, anyone else?
No, and I would be surprised if a lot of others have but are mysteriously refusing to show off and tell.
On the whole, this is easily one of my favourite posts on LW, together with "Something to Protect". But that trollish postscriptum doesn't work.
The idea that flaws need to be added - and that the final lecture will be flawless - is both silly and presumptuous. There will almost certainly be flaws, whether they are added or not, and our judgment is not adequate to determine whether our own work has them or not.
Eliezer, all of your problems with "Science" seem to stem not from any problems with the method itself, but from your personal tendency to treat the method as a revelation that people have an emotional investment in: in other words, a religion.
There are a variety of ways people can fail to put science into practice. One of the most pernicious is failing to apply it in situations where it is clearly called for, because we have an emotional investment in holding positions that we don't want to disturb. One especially dangerous subtype of this error is when the important subject is our 'scientific' reasoning and the conclusions we derived from it. It is even more dangerous than the general case because it doesn't just involve a corruption of our ability to deal with one specific set of problems, but a corruption of the general method we must use to rationally investigate the world. Instead of merely having a blind spot, we lose our sight completely, while at the same time losing our ability to detect that we're blind.
You are guilty of this error. That doesn't mean that you've gained a unique insight that must be shared with the world at all costs. This is a very old and trivial insight that most people worth listening to have already produced independently.
Without the second and last paragraphs, this would be a wonderful comment.
I agree with your general view, but I came to the same view by a more conventional route: I got a PhD in philosophy of science. If you study philosophy of science, you soon find that nobody really knows what science is. The "Science" you describe is essentially Popper's view of science, which has been extensively criticized and revised by later philosophers. For example, how can you falsify a theory? You need a fact (an "observation") that conflicts with the theory. But what is a fact, if not a true mini-theory? And how can you know that it is true, if theories can be falsified, but not proven? I studied philosophy because I was looking for a rational foundation for understanding the world; something like what Descartes promised with "cogito ergo sum". I soon learned that there is no such foundation. Making a rational model of the world is not like making a home, where the first step is to build a solid foundation. It is more like trying to patch a hole in a sinking ship, where you don't have the luxury of starting from scratch. I view science as an evolutionary process. Changes must be made in small increments: "Natura non facit saltus" (nature does not make jumps).
One flaw I see in your post is that the rule "You cannot trust any rule" applies recursively to itself. (Anything you can do, I can do meta.) I would say "Doubt everything, but one at a time, not all at once."
@Caledonian: If it is an old and trivial insight, why do most scientists and near all non-scientists ignore it?
As Eli said in his post, there is a difference between saying the words and knowing, on a gut level, what it means - only then have you truly incorporated the knowledge and it will aid you in your quest to understand the world.
Also, you say: "but from your personal tendency to treat the method as a revelation that people have an emotional investment in."
Of course people have an emotional investment in this stuff!! Do not make the old mistake of confusing rationality with not being emotional. (I guess Star Trek with Mr. Spock is guilty of that, at least for our generation.)
And what could be more emotional than dumping the legends of your tribe/parents/priests/elders?
For rationality and emotion in science, read for instance here: The Passionate Scientist: Emotion in Scientific Cognition Paul Thagard http://cogsci.uwaterloo.ca/Articles/Pages/passionate.html
You will have to study [...] and social psychology [...]
Please could you recommend some social psychology material?
As you explain so clearly here, the point is to think for ourselves instead of trusting in any person or system. This valuable insight can be reached by many idiosyncratic paths through life. Your personal path to it, trusting too much in Science itself, is an ironically interesting one, unlikely to be trod by most. That's why your line "Science Isn't Strict Enough" fails to resonate with some readers.
Jared, why should you trust yourself more than someone else? And if there is someone more worthy of trust than you, wouldn't it be a more rational strategy to let him think for you instead of thinking for yourself?
If my own judgment is so faulty that I choose to let somebody else do all my thinking for me, then how can I even trust the thinking behind my choice?
If you think that Science rewards coming up with stupid theories and disproving them just as much as more productive results, I can hardly even understand what you mean by Science beyond the "observe, hypothesize, test, repeat" overview given to small children as an introduction to the scientific method. Was Eliezer-18 blind to anything beyond such simple rote formulas?
Negative results are forgiven but hardly ever rewarded (unless the theory disproven is widely believed).
If you'd put aside the rather bizarre bitterness and just say: "Bayesian rationality is a good way to pick which theories to test. Here's some non-toy examples worked through to demonstrate how" that would be much more useful than these weird parables and goofy "I am an outcast" rants.
"@Caledonian: If it is an old and trivial insight, why do most scientists and near all non-scientists ignore it?"
They don't. The mismatch between you and them is that they're busy thinking about something else at the moment. I like the rule Turney gave above: "Doubt everything, but one at a time, not all at once." Of course, a single person can't follow that rule completely (there's not enough time in a lifespan to doubt EVERYTHING), and most people pick the wrong things to doubt or are lazy in applying the rule.
Of course, that rule's going to get in the way of reaching truth in some cases (some falsehoods come in self-reinforcing pairs both of which must be doubted in order to falsify either, and some things can't profitably be denied even for the sake of argument), but that's the case with any process, and this is something we've known since Goedel.
This kind of confuses me about this series... If all he was telling us was that Science is a powerful set of rules, and that therefore it can't eliminate all contradictions nor state all facts, I'd simply agree with him. But he seems to be saying that Bayesianism is different from Science, that somehow applying it instead of Science will have better results. It seems to me that both are processes, and both have blind spots.
I find it difficult to be sympathetic towards someone who complains he wasn't warned that the rule "do not take things on faith" wasn't supposed to be taken on faith.
We could provide a warning, of course. But how would we then ensure that people understood and applied the warning? Warn them about the warning, perhaps? And then give them a warning about the warning warning?
We could talk until we're blue in the face, but the simple truth is that you cannot force people to apply a method consistently, rigorously, or intelligently. No amount of adding onto the lesson will make people apply it properly, it merely offers them more things to misunderstand, ignore, and apply inconsistently.
We could provide a warning, of course. But how would we then ensure that people understood and applied the warning? Warn them about the warning, perhaps? And then give them a warning about the warning warning?
That's the problem with discrete reasoning. When you have probabilities, this problem disappears. See http://www.ditext.com/carroll/tortoise.html
@billswift: Emotion might drive every human action (or not). That's beside the point. If an emotion drives you into a dead end, there's something wrong with that emotion.
My point was that if someone tells you the truth and you don't believe them, it's not fair to say they've led you astray. Eliezer said he didn't "emotionally believe" a truth he was told, even though he knew it was true. I'm not sure what that means, but it sounds like a problem with Eliezer, involving his emotions, not a problem with what he was told.
Jared, it is possible to see that someone is more intelligent and trustworthy than you, without therefore being yourself more intelligent and trustworthy than him.
Eliezer didn't trust science too much. He didn't trust it enough. Instead of taking the duties and requirements of skepticism seriously, he treated the scientific method as another faith system.
I'm sure that was a very comforting and familiar approach to take, but it was still wrong. Completely, fundamentally wrong. It's utterly incompatible with the skepticism, open-mindedness, and radical doubt that is essential to the scientific method. And it seems to have had long-lasting implications for the sorts of positions Eliezer takes.
One suggestion for the flaw:
Conclusions from this article: a) you are never safe; b) you must understand a) on an emotional basis; c) the only way to achieve b) is through an experience of failure after following the rules you trusted
The flaw is that the article actually does the opposite of what it wants to accomplish: by giving the warning(a) it makes people feel safer. In order to convey the necessary emotion of "not feeling safe"(b) Eliezer had to make the PS regarding the flaw.
In a certain sense this also negates c). I think Eliezer doesn't really want us to fail(c) in order to recognize a), the whole point of overcomingbias.com is to prevent humans from failing. So if Eliezer did a good job in conveying the necessary insecurity through his PS then hopefully c) won't happen to you.
That second paragraph was hard for me. Seeing "a)" and "b)" repeated made me parse it as a jigsaw puzzle where the second "a)" was a subpoint of the first "b)", but then "c)" got back to the main sequence only to jump back to the "b)", the second subpoint of the first "b)". That didn't make any sense, so I tried to read each clause separately, and came up with "1. You are never safe. 2. You must understand. 3. On an emotional basis..." before becoming utterly lost. Only after coming back to it later did I get that repeated letters were references to previous letters.
Does anyone disagree that science does not have nearly as strict quantitative constraints as Bayescraft on what you may believe?
Why do you say that the problem disappears when you have probabilities?
I guess you still have the same basic problem, which is: what are your priors? You cannot bootstrap from nothing, and I think that is what the tortoise was hinting at: that there are hidden assumptions in our reasoning that we are not aware of, and that you can't think without using those hidden assumptions.
Probabilities allow grades of belief, and just as Achilles's pursuit of the tortoise can be considered as consisting of an infinite number of steps, if you note that the steps get infinitely short, you can sum them up to a finite quantity. Likewise, you can join infinitely many infinitely unlikely events into a compound event of finite probability. It is a way to avoid the regress Caledonian was talking about. Evidence can shift probabilities on all metalevels, even if in some hapless formalism there are infinitely many of them, and still lead to reasonable finite conclusions (decisions).
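The convergence claim above can be made concrete with a toy calculation. (The geometric decay rate and the specific numbers are my assumption for illustration, not anything the commenter specified.)

```python
# Toy illustration: infinitely many "meta-level" doubts can still sum to a
# finite total, just as Achilles's infinitely many steps sum to a finite
# distance. Assume the chance of an error at meta-level k shrinks
# geometrically: p_k = eps * r**k. The union bound on "some level is wrong"
# is then eps / (1 - r), which stays below 1 for suitable eps and r.

def total_doubt(eps: float, r: float, levels: int = 1000) -> float:
    """Upper-bound the probability that at least one meta-level failed."""
    return sum(eps * r**k for k in range(levels))

def closed_form(eps: float, r: float) -> float:
    """Limit of the geometric series: eps / (1 - r)."""
    return eps / (1 - r)

# With eps = 0.01 and r = 0.5, total doubt across ALL meta-levels is
# bounded by 0.02 - finite, and far below certainty of error.
approx = total_doubt(0.01, 0.5)
assert abs(approx - closed_form(0.01, 0.5)) < 1e-9
```

The point of the sketch is only that an infinite regress of doubts need not dissolve every belief: if each meta-level contributes a rapidly shrinking amount of doubt, the compound belief retains a finite, nontrivial probability.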
No, Mr. Nesov, it is not. You and I are talking at cross purposes.
Caledonian, you are not helping by disagreeing without clarification. You don't need to be certain about anything, including estimation of how much you are uncertain about something and estimation of how much you are uncertain about the estimation, etc.
"The experimental method shall decide which hypothesis wins"?
When there are experiments that can reasonably be done, or have already been done, then this works, right?
"Do you think it's a neuro-quirk, or just a normal quirk?"
Wait, there's another kind?
"Doubt everything, but one at a time, not all at once."
Interestingly, Robin Hanson has an existing post on this subject:
I don't know about you guys, but being wrong scares the crap out of me. Or to say it another way, I'll do whatever it takes to get it right. It's a recursive sort of doubt.
Once again, an interesting post, but it doesn't apply to my personal case. I have access to no statistics on the issue, so I can't claim how exceptional I am, but my own parents are "more rational than average" (they are atheist maths teachers), and I don't think they are insane. Or not more than anyone else, at least.
I did realize that my parents were not perfect, and that if I could trust them in loving me and caring for me and wanting the best for me, I couldn't blindly trust anything they would say. But that didn't require, for me, such a massive emotional break.
Maybe you'll tell I'm not "unusually rational"... but at least, I'm trying to better myself, or I wouldn't spend hours reading LW ;)
Before reading LW, I was not really doubting Science, but neither was I considering it an utter sacrilege to claim it was flawed (or that it could be superseded by something better). I knew Bayes' Theorem, without realizing it was much more important than just understanding a mammography result (which was the way I was using it). That's a shortcoming on my part, sure.
I can say that even before reading LW, I had a gut feeling that the Copenhagen interpretation was just... not right. I was in doubt, torn between that gut feeling and my own humility saying "well, you're not a physicist, how can you be more right than they are?" MWI seems much cleaner to me (even if there are a few things that still bother me), but that gut feeling drove me into reading more about QM and rationality. That, and having something to protect.
Well, that's just my own personal experience, hoping it can help understand things better, but not claiming any generalization from it.
Now that we have once again established that 1 and 0 are not probabilities, we have to remember that probabilities are still a strictly ordered set. How do we make it less dangerous?
For me, the discovery that science is too slow was bound up with the realization that science is not safe. My private discovery of the slowness of science didn't come from looking at the process of scientific discovery and reflecting on the time it took - rather, it arose from realizing that the things I learned or discovered via science came slower and more painfully than those I learned from other methods. "Other methods" encompasses everything from pure mathematics to That Magical Click, the first inescapable and the second, initially, unsupported. Realizing that science was a fairly low-quality set of tools carried with it the realization that its inefficiency was a function of its precautions. Not trusting science as the ideal method for discovery, I ceased to trust it as ideal for reliability.
New to this site, Bayescraft, and rationalism as a whole, I still have a mentor left to distrust. Consciously, I know that these techniques are imperfect, but I have yet to understand them well enough to be failed by them.
I find this to be a very attention-grabbing comparison, so much so that I had to re-read this post 5+ times before I could see the forest for the trees (or tree, as the case may be).
The reason these two examples strike me so is that I once held both of the underlying beliefs (i.e., that corn ethanol is bad and so is nuclear power). While I reversed both of these beliefs many years ago (prior to discovering HPMOR and LessWrong), I now see them as "belief as attire" (tree huggers think nuclear is bad, I'm a tree hugger, therefore I think nuclear is bad) and "password guessing" (why is corn ethanol a bad idea?... thermodynamics... Gold star!)
After gathering more information about these two "controversies" than can be gathered from Mother Jones or Popular Mechanics, I firmly support nuclear power expansion and think it is quite insane that we don't make more ethanol from corn. I would be happy to support my positions; the former would be rather concise, the latter considerably longer, so I'll save it until asked.
Perhaps this would have been less distracting:
Consider yourself asked!
I obviously haven't logged into LessWrong in a long time. Do you still want the answer?
Dunno about MarkusRamikin, but I'd sure be interested in hearing why you "think it is quite insane that we don't make more ethanol from corn".
Yep, me too.
Disclaimer: This defense of corn ethanol is by no means "publish-ready"; it is simply a gathering of data and concepts obtained during my work that has been sufficient to change my mind on the merits of a seemingly insane practice. It could use more work, but I don't really care enough either way to put much more effort into this particular topic.
The primary data-driven argument against corn ethanol is that it takes more energy to make than the fuel contains. That statement is generally true, and I don't really care. The whole point of getting away from fossil fuels is to reduce the emissions of greenhouse gases (GHG) and slow/stop/reverse climate change. My grizzled, old, super-conservative thermo professor in undergrad often complained about hippies wanting to conserve energy. "Energy is always conserved," he would say; "what we need to conserve is exergy." Likewise, I (and I believe the collective "we" should feel the same) don't care about energy balance; I care about carbon balance.
To find the "best" data on the carbon balance of fuels, I turn to the California Air Resources Board, which limits the carbon intensity (CI) of fuel sold in California. They list every producer of fuel sold in the state along with the CI of each fuel, in units of gCO2e/MJ (grams of CO2 equivalent per megajoule); the list can be found here. They have also published CI pathways, documents describing how they arrived at the CI numbers; the one for corn ethanol is here. Reading through the pathway for corn ethanol, the biggest takeaway is that there is wide variation in production practices with major impact on the CI of the ethanol. For example, the highest CI for corn ethanol listed as of 5/20/15 is 120 (1) gCO2e/MJ, while the lowest is 63 (1) gCO2e/MJ. That's nearly a factor of 2. For comparison, the CI of standard CA gasoline is 95 (1). The difference between the high ethanol CI and the low is primarily that the production energy (i.e., heat for boilers) for the former is coal, while for the latter it is natural gas with some landfill gas and waste wood.
If you look at the breakdown for "average" corn ethanol, there are three major sources of carbon emissions: ag chemical production, ethanol production, and land use, each contributing approximately 30 g/MJ. The total number listed for the "average" dry mill is 97 (1) gCO2/MJ. I should note that there is a -11 g/MJ credit for "co-products", the leftover solids used as animal feed, called dried distillers grains.
So here is my general belief: making corn ethanol is not inherently bad (insane); however, the way we do it is slightly insane. We get a marginally lower-CI fuel, which gets blended into gasoline and reduces non-GHG emissions (at least that's why it's mandated in CA). However, by shifting the process (which I might outline sometime if folks are interested, but which would turn this comment into more of a TL;DR) to one that is more sustainable and more cost-effective, corn ethanol production becomes perfectly sane.
So why does this mean we should have more corn ethanol? Well, more corn ethanol means more corn ethanol plants (building out the infrastructure is costly and time-consuming, and a large barrier to expansion). Eventually, there will be a revenue incentive for ethanol plants to care about the CI of their fuel (since in the US as a whole, ethanol is only mandated to be non-fossil, with little regard for GHGs). California is a good example of this. Gasoline blenders have to buy ethanol because gas in CA has to be 10% ethanol. There is also a limit on the CI of the gas/ethanol blend, which right now is higher than the CI of most ethanol. However, this "low carbon fuel standard" CI drops every year until 2020, where it stays at 89 (1) gCO2e/MJ. This means that if the ethanol a company is trying to sell in CA has a CI above 89, the customer would have to purchase carbon credits as well. So companies would then have an incentive to change their production practices to lower their CI, because they could sell their ethanol at higher prices. If/when a US carbon tax (or something akin to the CA Low Carbon Fuel Standard) is adopted, having an existing ethanol infrastructure will allow the scale-up and spread of low-carbon liquid fuels to happen much faster.
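To make the arithmetic concrete, here is a minimal sketch using the CI figures quoted above (120 and 63 gCO2e/MJ for the worst and best corn ethanol, 95 for CA gasoline, 89 for the 2020 standard). The simple energy-weighted blend formula and the helper names are my own illustrative assumptions, not CARB's actual methodology:

```python
# CI figures (gCO2e/MJ) taken from the comment above; the blend model below
# is a simplifying assumption (energy-weighted average), not CARB's method.
GASOLINE_CI = 95.0       # standard CA gasoline
ETHANOL_CI_HIGH = 120.0  # coal-fired dry mill (highest listed as of 5/20/15)
ETHANOL_CI_LOW = 63.0    # natural gas + landfill gas + waste wood (lowest)
LCFS_2020_CAP = 89.0     # low carbon fuel standard level reached in 2020

def blend_ci(gasoline_ci, ethanol_ci, ethanol_fraction=0.10):
    """Energy-weighted CI of a gasoline/ethanol blend (E10 by default)."""
    return (1 - ethanol_fraction) * gasoline_ci + ethanol_fraction * ethanol_ci

def needs_credits(ethanol_ci, cap=LCFS_2020_CAP):
    """Would a buyer of this ethanol need to purchase carbon credits?"""
    return ethanol_ci > cap

print(blend_ci(GASOLINE_CI, ETHANOL_CI_HIGH))  # 97.5 - worse than gasoline alone
print(blend_ci(GASOLINE_CI, ETHANOL_CI_LOW))   # 91.8 - modest improvement
print(needs_credits(ETHANOL_CI_HIGH))          # True
print(needs_credits(ETHANOL_CI_LOW))           # False
```

Note how, under this toy model, coal-fired ethanol actually raises the CI of the E10 blend, while the low-CI pathway lowers it and also clears the 89 threshold without credits, which is the incentive structure described above.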
There are a few other sides to the corn ethanol argument; growing crops for fuel instead of food, for example. That argument I find full of holes, since the increase in the corn crop has not been on the same scale as the increase in ethanol production (2). This is due to the aforementioned animal-feed co-product and the fact that before widespread ethanol production, most of the corn grown in the US was used as animal feed (2). So making ethanol doesn't displace another crop, only the starch portion of cattle feed. If you have a moral problem with growing a fuel while people in third-world countries starve, you should have the same moral problem with growing a crop to raise meat while people starve. Also, if you have a moral problem with displacing food from American mouths: we have an obesity problem, which means we already produce and consume too many calories per capita; we don't need more corn in our diet. There is also the notion that ethanol is bad for engines, while I believe that the higher anti-knock characteristics of ethanol, combined with its higher heat of vaporization, mean that ethanol-only vehicles could have diesel-like compression ratios with Otto-cycle performance, resulting in a higher-efficiency, lower non-GHG-emission vehicle. There are a few other minor facets, but I think they are immaterial, though I did not want to give the impression that I had not considered them.
1) I’ve truncated these numbers as they are reported to the hundredths place.
2) I really need to dig up some good references for these, as they are based on my looking at old ag reports, which is less than ideal.
As for nuclear power, we know with near certainty that burning fossil fuels is bad for the planet (and us too) and that this can't really be mitigated with existing technology. A catastrophe from nuclear power, however, has a probability well below 1, and there is technology that can decrease that probability further.
edited to fix hyperlinks and to fix unintended text formatting.
Who the heck downvotes comments for answering a question?
I wasn't the one who downvoted, but the comment is not a very good defense of the claim that "it is quite insane that we don't make more ethanol from corn." At most it defends the claim that it would not be unreasonable to make more.
That post scriptum... It's just so amusing to have someone write out your exact thoughts and worries like that. It is very rare that I get tears in my eyes from giggling, and I can't stop smirking about it. It is quite bothersome indeed that I am so unskilled in this art of reasoning that the best I can do is follow your words and hope they lead me to somewhere where I can eventually realize said flaws. I feel my journey will be riddled with such flaws no matter how hard I try to avoid them. It's the nagging feeling I have almost constantly, and which I have trouble explaining.
Forget it. It's an honest question but it'll just appear as attention grabbing.
This is probably my favorite entry ever.
What's wrong with ethanol made from corn, anyway?
It's another subsidy to agribusiness conglomerates, which leech huge sums of money from taxpayers already.
And it uses up the corn so it can't be sold to hungry poor people, which is bad because starvation is bad.
Those sound like fixable problems.
The only flaw I can see is that this reasoning seems to put a lot of weight on the probability of meeting disaster by sticking with a well-thought-out, well-tested but imperfect set of rules, and not enough weight on the probability of meeting disaster by trying to be clever and do better than the rules, whether out of genuine fear of their limitations or in order to follow a course of action that you favor for reasons that aren't really part of your goals, but that you are then free to endorse by invoking the fear of the rules' limits.
I get that the main point of the post is that you can't just relax, get comfortable, and think you've finally got the perfect way of thinking if you want to actually get the right answers often enough, and I agree with it. But I still feel that, when considering whether to depart from the rules, even those of lowly common sense, the possibility that you are about to shoot yourself in the foot and meet some interesting new disaster, for reasons you'll see only afterward and that could have been avoided by sticking to the rules, has a significant probability compared to the possibility that you are actually in a situation where the rules aren't enough and you'll have to improve on them to succeed. That probability should be carefully weighed before choosing how to act, and that warning should have been included in the reasoning above, as it usually has been in other posts.
A practical example would be making this comment as a newcomer, ignoring the common-sense consideration that, as a flaw, it feels kind of obvious, and that someone else would have pointed it out by now if I weren't simply missing the point of the post. But I still think that "don't trust the rules" is a rule that requires an awful lot of caution before being mistrusted too much.
It's easy to list flaws; for example the first paragraph admits a major flaw; and technically, if trust itself is a big part of what you value, then it could be crucially important to learn to "trust and think at the same time".
Are either of those the flaw he found?
What we have to go on are "fairly inexcusable" and "affects one of the conclusions". I'm not sure how to filter the claims into a set of more than one conclusion, since they circle around an idea which is supposed to be hard to put into words. Here's an attempt.
More proposed flaws I thought of while spelling out the above:
I suppose that's given me plenty to think about, and I won't try to guess the "real" flaw for now. I agree with, and have violated, the addendum: I had a scattered cloud of critical thoughts in order to feel more critical. (Also: I didn't read all the existing comments first.)
Does your experience support the claim that rationalists had their trust in their milieu (e.g. parents, cult, etc.) broken?
P.S. That was my personal experience, but I'd really like to hear from others.