Great article! I hadn't realized how much I blank on some of those things.
When I tutored math, new students acted as though the laws of exponents (or whatever we were learning) had fallen from the sky on stone tablets. They clung rigidly to the handed-down procedures. It didn’t occur to them to try to understand, or to improvise.
I'd like to self-centeredly bring up a similar anecdote, which is part of my frustration with how people give unnecessarily complex explanations, typically based on their own poor understanding.
In chemistry class, when we were learning about radioactive decay and how it's measured in half-lives, we were given a (relatively) opaque formula, "as if from the sky on stone tablets". I think it was
mass_final = mass_initial * exp(-0.693 * t / t_halflife)
And students worked hard to memorize it, not seeing where it came from. So I pointed out, "You know, that equation's just saying you multiply by one-half, raised to the number of half-lives that passed."
"Ohhhhhhhhhhhh! It's so much simpler that way!" And yet a test question was, "What is the constant in the exponent for the radioactive decay formula?" Who cares?
I was going to ask where the constant for the exponent came from, but with a calculator and the Wikipedia page on exponentiation, I figured it out myself. This site is good for me.
I have to say, if I saw anyone write the equation that way I'd question how much they understood the concept themselves!
EDIT: Let me also add, if I saw anyone asking that "what's the constant" question, I'd conclude they didn't understand it unless I saw good evidence otherwise...
It looks like that formula is a lot like cutting the ends off the roast.
The answer to "who cares?" is most likely "some 1930s era engineer/scientist who has a great set of log tables available but no computer or calculator".
I am just young enough that by the time I understood what logarithms were, one could buy a basic scientific calculator for what a middle-class family would trivially spend on their geeky kid. I remember finding an old engineer's handbook of my dad's, with tables and tables of logarithms and values of various probability distributions; it was like a great musty treasure trove of magical numbers, and I set out to figure out what they meant.
I don't know where that ended up, but I still have his slide rule.
Of course, even back in the day, it would have made more sense to share both formulas, or simply to teach all students enough math to do what Gray does above and work out for themselves how to compute the model-enlightening version with log tables, since you'd need that skill to do a million other things in that environment.
I'm reminded of a (probably untrue) story about officer training school in the British army: as part of a test, the officer candidates are asked what the correct way to dig a trench is. The correct answer is:
I say "Sergeant, dig me a trench!"
In other words, you saw a broken dishwasher, and you know that the way to fix a broken dishwasher is to find Steve Rayhawk and tell him that there's a dishwasher that needs fixing. Which you did, and it worked. ;)
That's different, though -- it's probably a rather good way to teach people that their first impulse as an officer should be delegating that which can be delegated. I'd imagine that promoted engineers today need to train themselves not to start micromanaging the sort of project they'd previously have done themselves, but rather to give it to someone capable and leave them to do it.
Which is why evenness is one of the virtues. Some candidates need to be taught to delegate. Others need to be taught to think for ten seconds before throwing their hands up. Most probably need to be taught both.
Very observant post. I've noticed this 'learned blankness' in a lot of people when it comes to 'nerdy' areas like math and science, probably because I'm not as blank in these areas. (There are plenty of things I don't know very much about, like for example the North American legal system, but my usual thought is "wow, I would really like to get a book on that or do a Wikipedia search!") Unfortunately it's not as easy to pinpoint the areas where I am blank.
But before giving it even ten seconds’ thought, I’d classified the problem as a “mechanical thing”.
As part of my 'mission to become a real grownup', I've started trying to solve small household problems like this on my own. Sometimes it leads to a lot of time-wasting, like the time I spent half an hour trying to fix the toilet when it turned out my roommate had just turned the water off because the sound kept her awake. I would have saved myself an hour if I'd made the problem not my responsibility, but now I have a pattern-recognition schema in my head for toilet problems... the first thing I'll check for next time, after "is it plugged?", will be "is the water on?" I'm assuming that this is how most people become good in these areas...
Not that I disagree with you in general, but I can think of a few cases in which you may actually want to cultivate blankness toward a given subject. In particular, deep and difficult questions have been known to occasionally drive people mad - it's an occupational hazard for mathematicians in particular, and perhaps also for people in other fields. One might reasonably object that correlation does not imply causation in this case, but I have had a couple of experiences in which intense study of math and physics led me to some pretty dark psychological places, and I had to back off for a while and think about more mundane matters while my mind reset. It's possible that, for some people, some areas of thought really are inaccessible, inasmuch as they could irrevocably damage themselves in trying to get there.
I have had a couple of experiences in which intense study of math and physics led me to some pretty dark psychological places
Why do I feel the irrational urge to beg you to do a post on this? What could possibly go wrong? :-)
I react to cookery in the same way many people react to computing.
Sometimes I try to use this to understand the reactions of people who have trouble with computers.
The trouble with explaining this analogy is that people's instant reaction is to go, "Cooking isn't scary at all! Look at all these reasons why kitchens are fun and non-scary."
Cooking is a lot like computing in reverse. Instead of being the programmer, you're the CPU. Follow the program, and you'll end up with the result the recipe provides.
The part of cooking where people look like they're just tossing things together is much more advanced. Cuddle your recipe book while you cook, it's your best friend.
I really recommend 'The Joy of Cooking' as a good book to start with, especially older editions. My 'acid test' of a general-purpose cookbook is if it has a real recipe for cream of mushroom soup or if it just says 'add 1 can'. The older editions have the real recipe, as well as massive amounts of information not only about food but also about how to serve it.
Edit - please disregard this post
It seems that people often cling to the "old way" of doing things even if the new way is faster and better because of some emotional attachment to the way they have always done things.
With cooking, the trouble is that it doesn't scale, or rather, the economies of scale come at the inevitable expense of quality. A home-made meal prepared by a skilled cook and with well chosen ingredients is guaranteed to be superior even to the output of restaurants, let alone to something produced on an industrial scale. (Especially when you consider that the home-made meal can be subtly customized to your taste.)
I don't think that it is "old way" versus "new way"; but it seems clear to me that someone has to know the recipe. If you buy a pre-made can of mushroom soup, obviously the manufacturer must have used the recipe. And then there's the issue of what to do if none of the brands of mushroom soup are of adequate quality for your purposes.
It's like the difference between a programmer writing his own routines or using a pre-packaged library. I think, in order to be considered a competent programmer, you should be able to write your own routines, even if you don't have to in the majority of cases. A cookbook is open source for food. "Buy 3 cans of Kraft spaghetti sauce" is cheating.
To me, the hard part in this procedure looks to be this step:
try to notice areas you care about, that you’ve been treating as blank defaults.
It seems likely to me that such areas are going to be ones that I habitually don't turn my real attention to, and that if they come briefly to mind it won't necessarily be obvious to me that I am treating them as blanks.
While it's not good for building deep skills, reading (the right sort of) random blogs is a great way to defeat learned blankness. After reading a post or two about something, even if I don't retain much of it, I do retain enough of an outline to treat it as something that's available to reason about and research if it becomes relevant.
Can I ask how you find "random blogs"? Is it truly random, or do you have a method for finding new stuff?
"People who can't or won't think for themselves" is how a friend of mine characterised his customers as a freelance Windows NT admin (a very good one - and good NT admins aren't cheap). "There's a lot of money in sewage."
Outsourcing thinking to anyone who can be convinced or coerced into doing it seems quite common to me. People will so often do things just because someone else demands it of them. I have commented before on how my ridiculously charming daughter [1] is remarkably creative in intellectual laziness, and how I have to be sure not to let her get away with it. She will damn well learn not to be lazy just because she can!
I blank on programming, which is not so good for a sysadmin to a development team. I don't write anything more than shell scripts and I have the algorithmic insight of someone who doesn't. I suppose I should learn more.
Too many people consider computers malevolent boxes of evil completely unamenable to any rational consideration, even in theory. Your "Sandra" example is many programmers I've worked with.
[1] and it works on people other than me, e.g. the man in the coffee shop at 5pm yesterday she asked to get her a babycino (...
I'm a researcher in programming languages, and I've dabbled a little in discrete math and algorithms research. Though my advice may be a little slanted, "algorithmic insight" is what I'm most expert in. Perhaps, then, the following is right.
If you "blank" on programming, but already know system administration and shell scripts, then the "lack" you're describing is probably pretty small.
I strongly believe that what might look like "algorithmic insight" is mostly the product of obsessively picking apart designs and implementations - not just computer programs, but any engineered mechanism. It's a great habit to inculcate, and (I think) leads naturally to gradually understanding how everything works.
I bet, though, that you could massively boost your own algorithmic insight by the following program of reading and practice:
It had just closed. But they know her, so yes. And she'd just burst into tears.
Having a daughter is a serious live-fire exercise in how to think rationally despite your cognitive biases.
Having a daughter is a serious live-fire exercise in how to think rationally despite your cognitive biases.
I would be deeply interested in a post on that subject.
Of course, the other side of the coin is the Dunning-Kruger effect, which causes us to overestimate our knowledge of things we're ignorant about.
The illusion of explanatory depth (Rozenblit & Keil, 2002) seems like a particularly relevant example of that other side of the coin. If you ask people if they understand how something works, like a bicycle, a flush toilet, or a zipper, they'll generally say that, yes, they understand it and could explain it. But if you ask them to draw a diagram and actually explain it, they'll often get it wrong, and realize in the process that they don't understand it as well as they thought they did. The main problem seems to be that people have higher-level understanding of the object, and experience using it correctly, which they confuse with a more in-depth knowledge of the mechanisms that make it work.
That doesn't necessarily contradict AnnaSalamon's point about stopping because of learned blankness. Seeing something stop working, and not immediately knowing why it messed up or how to fix it, might be enough to trigger that same lack of confidence that shows up after people try and fail to explain how something works. And in order to fix it you often don't need so much depth of knowledge. Even if you don't have enough knowledge to fully explain the mechanism that makes something w...
Wow. That article is pure gold: the kinds of mistaken explanations they talk about are exactly what I hear from people who give unhelpful explanations -- they don't see the limits of their own understanding of the phenomenon, and obviously can't convey what they lack. And so any explanation they give is thus extremely brittle, as they can't do much more than swap in other terms for the mysterious concepts they invoke.
(This is not to say they're completely unhelpful -- a partial explanation is better than none at all. But in that case, it's preferable to make clear that your understanding is indeed limited and that you can't connect it to a broader understanding of the world.)
I think a lot of learned blankness comes about because of fear of being wrong, or more correctly, fear of someone else blaming them for being wrong. In certain social strata, you aren't supposed to think about a problem, or let others know you're thinking about a problem, unless it is your job to think about it. If you think about a problem, and get it wrong, then you are irresponsible for not going to an expert with the problem.
So that's where learned blankness gets its traction, in my opinion, and this is the reason why you'll find people spending an ...
I have observed similar behavior in others. Only I called it 'blackboxing', for lack of a better word. I think this might actually be a slightly better term than 'learned blankness', so I hereby submit it for consideration. It's borrowed from the software engineering idea of a black box abstraction.
People tend to create conceptual black boxes around certain processes, which they are remarkably reluctant to look within and explore, even when something does go wrong. This is what seems to have happened with the dishwasher incident. The dishwasher was treate...
Good post - upvoted!
[4] Thanks to Zack Davis for noting that the “good with computers” trait seems to be substantially about the willingness to play around and figure things out.
Quoting Homestuck:
grimAuxiliatrix [GA] began trolling twinArmageddons [TA]
...
TA: 2ee the menu up top?
TA: fiiddle around wiith that tiil you open the viiewport.
GA: I Did Fiddle With It
GA: To No Avail
TA: iif you cant fiigure 2hiit out by fuckiing around you dont belong near computer2.
(twinArmageddons has a "typing quirk" related to the number 2; if you didn't get it...
This is one reason why I worry about overemphasis on "learning styles" in teaching. Yes, we shouldn't overgeneralize from our own brains to those of others, and different people learn differently. But it's too easy to say that because I am Not a Visual Person, Having Been Born Blind and Treated By Surgery, I therefore can't learn to excel at visual tasks.
This internal sense that I am "not a visual learner" caused me serious difficulty in training to do many tasks, until I learned to just compensate by practicing for a longer period of t...
Something I realized is that in some cases, learned blankness is due to overly generalized beliefs that were created in us when we were kids. For example - "I am not a doctor or physical therapist so I need to go to them to fix my body."
The best way that I guard against this is to write down a list of all that is supremely important in your life and make sure you have studied everything about those things. If you do this often enough, the first thing you do when faced with a problem is to read up about it and send off questions to leading people who work in that
Another part of learned blankness is fear of making a catastrophic mistake-- for example, I've heard that it's possible to wipe out what you were trying to save when you make a backup. I need to find out whether this is still true (if it ever was), and how hard it is to avoid, rather than turning the whole thing into a matter of panic.
Sometimes my critical contribution to helping another programmer solve a problem basically consists of reading the fascinating error message. (Well, the fact that I also programmed the library they are using to show the error message is arguably a critical contribution as well.)
I've always been interested in how stuff works and I've taken apart or built from scratch a lot of the stuff I've owned. I've built stuff as small as a molecule or as big as a hangglider without even considering asking for expert help - it's just so easy and enjoyable, I can think things through, do research and come to understand something new...
But I've never been interested in how people work. It seems to me it's impossible to understand things that are outside my experience, and there's a lot I can never experience for myself. I've never ...
A lot of this learned blankness for me is deliberate. There are domains that I've found through trial and error (mostly error) that I really have no aptitude for. In those cases, it's much more effective for me to find someone that does have the aptitude or skill.
I would like to think that most of the time this came from a conscious decision, but I'm probably just not remembering all the times it didn't.
Edit: I think there's a difference between learned fear and learning that you lack the aptitude. I'm pretty sure I missed the day in elementary school where they taught people that technology is scary, and that just breathing on it wrong will kill it forever.
Edit - please disregard this post
I've often seen this with hooking up computers, TVs and/or audio equipment. Many people seem to treat it as incomprehensible, even though with computers (particularly) it's just cable to connector, no real thinking needed. For a/v equipment it's just "flows" out-to-in.
Specialization is fantastic, but there is real value to cross-training in other disciplines. It's hard to predict what insights in other fields might assist with your primary. Also, even if you use a specialist, it's impossible to evaluate them if you blank-out in the area. For...
I really like this post; I think it highlights an important problem. I just want to add one step to this: quite often it is very difficult to notice that you aren't thinking about something. I've started trying to overcome this by noticing problems that I am not DOING anything about, and asking myself why I am not doing anything about them. If the answer is "I'm lazy" (to the question of "why aren't I doing the laundry?", e.g.) I don't worry about it, but if the answer is "because I don't know how to solve it" I start payin...
Speaking of nano risks, I would love to see a LessWrong-grade analysis of what we can conclude from the debates about the feasibility of molecular nanotech between Foresight/CRNano people on the one side and people like Richard Jones and Philip Moriarty on the other side, if anybody here is up to it.
I am fascinated by the "bad with computers" kind of learned helplessness.
It gives me a strong feeling there's some very deep cultural thing going on, but so far I've failed to work out what it is.
One theory I have is that it's some sort of arts/sciences split, but we also observe scientists who are bad with their computers.
It's like falling and missing the ground. Happens all the time. For some reason people don't let me borrow their computers anymore.
Upvoted just for footnote 5, which I think is an essential and easily explained trick that people in general should be told about.
Ah, footnote [4]. How you have framed my life!
It's simply ASTOUNDING how people will pay you to do something as simple as Google a problem and then follow the steps.
Interesting and useful post, but I'm not sure I agree with the analogy to learned helplessness or using the word "learned" at all. The state you are describing seems to vary greatly between individuals (for contrast, I know many people who believe they can do or know almost anything correctly) and probably correlates to such things as intelligence, openness, risk-tolerance, etc. What makes you think this "blankness" is learned?
Thinking and learning new things is hard. Asking someone to do it for you is easy.
I suspect that even if people were aware that they could, e.g. google their computer problem and solve it, many (most?) would just have an "I can't be bothered to figure it out" attitude. And I'm not sure how many people are already in this position.
- I find it hard to fully try to write fiction -- though a drink of alcohol helps. The trouble is that since I’m unskilled at fiction-writing, and since I find it painful to notice my un-skill, most of my mind prefers to either not write at all, or to write half-heartedly, picking at the page without really trying. Similarly, many pure math specialists avoid seriously trying their hand at philosophy, social science, or other “messy” areas.
I find this only happens with things I care about and want to be able to do. For me, an example is poetry. Trying ...
In yet another attempt to show how this is not irrational, here goes:
Every year, about 100,000 new math theorems are produced. To learn each of these would require learning them at a rate of roughly one every few waking minutes, once sleeping is taken into account. And this excludes all of the theorems needed to understand those theorems. Further, this is just math; it doesn't include every other field of human endeavor, or on-the-job knowledge.
It should be clear from the above that it is physically impossible to have all the knowledge in the world, let...
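(A rough back-of-the-envelope check of that rate, using my own assumption of roughly 16 waking hours a day; the numbers below are not from the original comment.)

```python
# Rough arithmetic: at 100,000 new theorems a year, how often would you
# have to learn one? Assumes ~16 waking hours/day, 365 days/year.
theorems_per_year = 100_000
waking_minutes_per_year = 365 * 16 * 60              # 350,400 waking minutes
print(waking_minutes_per_year / theorems_per_year)   # ~3.5 minutes per theorem
```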
Do you think that this behavior (learned helplessness, learned blankness) might have non-obvious benefits? For example, could too much independence be aggressive - or conversely, could dependence be a way to bring about beneficial social relations?
- Fred finds he has an intuition about how plausible nano risks are. It’s a blank for him; something he can act on or ignore, but not examine. He e.g. doesn’t So, he acts on it (or ignores it, if he has an alternative data source). It doesn’t occur to him that he could examine the causes of his intuition[x], or coul
You're missing the end of a sentence, there. And some other stuff in the following few paragraphs. Was this supposed to be posted yet?
I wonder how I did that.
If you click the "Create new article" button from the main page, you get a "post to" drop down that lets you choose to save to your drafts, to Less Wrong, or to Less Wrong Discussion, with your drafts being the default. This is probably what you expected.
If you click the same "Create new article" button from the discussion section, there is no drop down, and you always save directly to discussion. This is probably what happened.
(I think this difference in behaviors is unnecessarily confusing and should be removed, by making discussion act like the main page.)
Once we know how the software behaves, and how we want it to behave, and that these are different, what do we even mean by asking if the current behavior is a bug?
A lot of the 'learned blankness' or black box problem (I prefer that) seems to me to be directly related to how afraid someone is of feeling (or worse, looking) stupid.
There are exceptions of course, but by and large the people that seem to hit that wall (or, at least have a higher than average number of those walls to hit) are people that were told over and over that they're dumb, or that pursuing 'X' is dumb.
And - they become that, or at least an unreasonable facsimile thereof. Within the realm of their expertise it's very obvious they're highly intell...
Related to: Semantic stopsigns, Truly part of you.
One day, the dishwasher broke. I asked Steve Rayhawk to look at it because he’s “good with mechanical things”.
“The drain is clogged,” he said.
“How do you know?” I asked.
He pointed at a pool of backed up water. “Because the water is backed up.”
We cleared the clog and the dishwasher started working.
I felt silly, because I, too, could have reasoned that out. The water wasn’t draining -- therefore, perhaps the drain was clogged. Basic rationality in action.[1]
But before giving it even ten seconds’ thought, I’d classified the problem as a “mechanical thing”. And I’d remembered I “didn’t know how mechanical things worked” (a cached thought). And then -- prompted by my cached belief that there was a magical “way mechanical things work” that some knew and I didn’t -- I stopped trying to think at all.
“Mechanical things” was for me a mental stopsign -- a blank domain that stayed blank, because I never asked the obvious next questions (questions like “does the dishwasher look unusual in any way? Why is there water at the bottom?”).
When I tutored math, new students acted as though the laws of exponents (or whatever we were learning) had fallen from the sky on stone tablets. They clung rigidly to the handed-down procedures. It didn’t occur to them to try to understand, or to improvise. The students treated math the way I treated broken dishwashers.
Martin Seligman coined the term "learned helplessness" to describe a condition in which someone has learned to behave as though they were helpless. I think we need a term for learned helplessness about thinking (in a particular domain). I’ll call this “learned blankness”[2]. Folks who fall prey to learned blankness may still take actions -- sometimes my students practiced the procedures again and again, hired a tutor, etc. But they do so as though carrying out rituals to an unknown god -- parts of them may be trying, but their “understand X” center has given up.
To avoid misunderstanding: calling a plumber, and realizing he knows more than you do, can be good. The thing to avoid is mentally walling off your own impressions; keeping parts of your map blank, because you imagine either that the domain itself is chaotic, or that one needs some special skillset to reason about *that*.
Notice your learned blankness
Learned blankness is common. My guess is that most of us treat most of our environment as blank givens inaccessible to reason[3]. To spot it in yourself, try comparing yourself to the following examples:
1. Sandra runs helpless to her roommate when her computer breaks -- she isn’t “good with computers”. Her roommate, by contrast, clicks on one thing and then another, doing Google searches and puzzling it out.[4]
2. Most scientists know the scientific method is good (and that e.g. p-values of 0.05 are good). But many not only don’t understand why the scientific method (or these p-values) are good -- they don’t understand that it’s the sort of thing one could understand.
3. Many respond to questions about consciousness, morality, or God by expecting that some other, special kind of reasoning is needed, and, thus, walling off and distrusting their own impressions.
4. Fred finds he has an intuition about how serious nano risks are. His intuition is a blank for him; something he can act on or ignore, but not examine. It doesn’t occur to him that he could examine the causes of his intuition[5], or could examine the accuracy rate of similar intuitions.
5. I find it hard to fully try to write fiction -- though a drink of alcohol helps. The trouble is that since I’m unskilled at fiction-writing, and since I find it painful to notice my un-skill, most of my mind prefers to either not write at all, or to write half-heartedly, picking at the page without *really* trying. Similarly, many pure math specialists avoid seriously trying their hand at philosophy, social science, or other “messy” areas.
6. Bob feels a vague desire to "win" at life, and a vague dissatisfaction with his current trajectory. But he's never tried to write down what he means by "win", or what he needs to change to achieve it. He doesn't even realize that he could.
7. Sandra just doesn’t think about much of anything. She drives to work in a car that works by magic, sits down in her cubicle at a company that makes profits by magic, and thinks through her actual coding work. Then she orders some lunch that she magically likes, chats with coworkers via magically habitual chatting-patterns, does another four hours’ work, and drives home to a relationship that is magically succeeding or failing.
I’m not saying we should constantly re-examine everything. Directed attention, and a focus on your day’s work, is useful. But the “learned blankness” I’m discussing is not goal-oriented. Learned blankness means not just choosing to ignore a domain, but viewing that domain as inaccessible; it means being alienated from the parts of your mind that could otherwise understand the thing.
Analogously, there are often good reasons not to e.g. seek a new job, skillset, or romantic partner... but one usually shouldn’t be in the depression-like state of learned helplessness about doing so.
Reduce learned blankness
There are many reasons folks feel helpless about understanding a given topic, including:
So, if you’d like to reduce your learned blankness, try to notice areas you care about, that you’ve been treating as blank defaults. Then, seed some thoughts in that area: set a ten minute timer, and write as many questions as you can about that topic before it beeps. Better yet: hang out with some people for whom the area isn't blank. Do some mundane tasks that are new to you, so that more of your world is filled in. Ask what subskills can give you stepping-stones.
If fears such as (B) and (C) pop up, try asking “I wonder what it would take to [hit my goals]?”. Like: “I wonder what it would take to feel comfortable dancing?” or “I wonder what it would take to write fiction without fear?”.
You don’t even have to try answering the question; if it’s a topic you’ve feared, just asking it will open up space in your mind. Then, look up the answers on Google or Wikipedia or How.com and experience the pleasure of gaining competence.
[1] Richard Feynman, as a kid, surprised people because he could “fix radios by thinking”; apparently it's common to not-notice that reasoning works on machines.
[2] Thanks to Steve Rayhawk for suggesting this term. Also, thanks to Lukeprog for helping me write this post.
[3] Eliezer’s Harry Potter suggests that *not* having learned blankness be pervasive -- not having your world be tiny tunnels of thought, surrounded by large swaths of blankness that you leave alone -- is what it takes to be a “hero”. To quote:
[4] Thanks to Zack Davis for noting that the “good with computers” trait seems to be substantially about the willingness to play around and figure things out. If you’d like to reduce the amount of cached blankness in your life, and you’re not already good with computers, acquiring the “good with computers” trait in Zack’s sense is an easy place to start.
[5] One way to get at the causes of an intuition is to imagine alternate scenarios and see how your intuition changes. Fred might ask himself: "Suppose nanotech was developed via a Manhattan project. How much doom would I expect then?" or "Suppose John (who I learned all this from) changed his mind about doom probabilities. Would that shift my views?".