Related to: Semantic stopsigns, Truly part of you.

One day, the dishwasher broke. I asked Steve Rayhawk to look at it because he’s “good with mechanical things”.

“The drain is clogged,” he said.

“How do you know?” I asked.

He pointed at a pool of backed up water. “Because the water is backed up.”

We cleared the clog and the dishwasher started working.

I felt silly, because I, too, could have reasoned that out.  The water wasn’t draining -- therefore, perhaps the drain was clogged.  Basic rationality in action.[1]

But before giving it even ten seconds’ thought, I’d classified the problem as a “mechanical thing”.  And I’d remembered I “didn’t know how mechanical things worked” (a cached thought).  And then -- prompted by my cached belief that there was a magical “way mechanical things work” that some knew and I didn’t -- I stopped trying to think at all.  

“Mechanical things” was for me a mental stopsign -- a blank domain that stayed blank, because I never asked the obvious next questions (questions like “does the dishwasher look unusual in any way?  Why is there water at the bottom?”).

When I tutored math, new students acted as though the laws of exponents (or whatever we were learning) had fallen from the sky on stone tablets.  They clung rigidly to the handed-down procedures.  It didn’t occur to them to try to understand, or to improvise.  The students treated math the way I treated broken dishwashers.

Martin Seligman coined the term "learned helplessness" to describe a condition in which someone has learned to behave as though they were helpless. I think we need a term for learned helplessness about thinking (in a particular domain).  I’ll call this “learned blankness”[2].  Folks who fall prey to learned blankness may still take actions -- sometimes my students practiced the procedures again and again, hired a tutor, etc.  But they do so as though carrying out rituals to an unknown god -- parts of them may be trying, but their “understand X” center has given up.

To avoid misunderstanding: calling a plumber, and realizing he knows more than you do, can be good.  The thing to avoid is mentally walling off your own impressions; keeping parts of your map blank, because you imagine either that the domain itself is chaotic, or that one needs some special skillset to reason about *that*.

Notice your learned blankness

Learned blankness is common.  My guess is that most of us treat most of our environment as blank givens inaccessible to reason[3]. To spot it in yourself, try comparing yourself to the following examples:

1.  Sandra runs helpless to her roommate when her computer breaks -- she isn’t “good with computers”.  Her roommate, by contrast, clicks on one thing and then another, doing Google searches and puzzling it out.[4]

2.  Most scientists know the scientific method is good (and that e.g. p-values of 0.05 are good).  But many not only don’t understand why the scientific method (or these p-values) are good -- they don’t understand that it’s the sort of thing one could understand.  

3.  Many respond to questions about consciousness, morality, or God by expecting that some other, special kind of reasoning is needed, and, thus, walling off and distrusting their own impressions.  

4.  Fred finds he has an intuition about how serious nano risks are.  His intuition is a blank for him; something he can act on or ignore, but not examine.  It doesn’t occur to him that he could examine the causes of his intuition[5], or could examine the accuracy rate of similar intuitions.

5.  I find it hard to fully try to write fiction -- though a drink of alcohol helps.  The trouble is that since I’m unskilled at fiction-writing, and since I find it painful to notice my un-skill, most of my mind prefers to either not write at all, or to write half-heartedly, picking at the page without *really* trying.  Similarly, many pure math specialists avoid seriously trying their hand at philosophy, social science, or other “messy” areas.

6.  Bob feels a vague desire to "win" at life, and a vague dissatisfaction with his current trajectory.  But he's never tried to write down what he means by "win", or what he needs to change to achieve it.  He doesn't even realize that he could.

7.  Sandra just doesn’t think about much of anything.  She drives to work in a car that works by magic, sits down in her cubicle at a company that makes profits by magic, and thinks through her actual coding work.  Then she orders some lunch that she magically likes, chats with coworkers via magically habitual chatting-patterns, does another four hours’ work, and drives home to a relationship that is magically succeeding or failing.

I’m not saying we should constantly re-examine everything. Directed attention, and a focus on your day’s work, is useful. But the “learned blankness” I’m discussing is not goal-oriented.  Learned blankness means not just choosing to ignore a domain, but viewing that domain as inaccessible; it means being alienated from the parts of your mind that could otherwise understand the thing.

Analogously, there are often good reasons not to e.g. seek a new job, skillset, or romantic partner... but one usually shouldn’t be in the depression-like state of learned helplessness about doing so.

Reduce learned blankness

There are many reasons folks feel helpless about understanding a given topic, including:

  • A.  Simple habit: you aren’t used to thinking about it, and so you just automatically don’t.
  • B.  Desire to avoid initial blunders that will force you to emotionally confront potential incompetence (as with my fear of writing fiction).
  • C.  Avoidance of social conflict, or of status-claims: if your boss/spouse/whoever will be upset by your disagreement, it may be more comfortable to “not understand” the domain.

So, if you’d like to reduce your learned blankness, try to notice areas you care about, that you’ve been treating as blank defaults.  Then, seed some thoughts in that area: set a ten minute timer, and write as many questions as you can about that topic before it beeps.  Better yet: hang out with some people for whom the area isn't blank.  Do some mundane tasks that are new to you, so that more of your world is filled in.  Ask what subskills can give you stepping-stones.

If fears such as (B) and (C) pop up, try asking “I wonder what it would take to [hit my goals]?”.  Like: “I wonder what it would take to feel comfortable dancing?” or “I wonder what it would take to write fiction without fear?”.  

You don’t even have to try answering the question; if it’s a topic you’ve feared, just asking it will open up space in your mind. Then, look up the answers on Google or Wikipedia, and experience the pleasure of gaining competence.


[1] Richard Feynman, as a kid, surprised people because he could “fix radios by thinking”; apparently it's common to not-notice that reasoning works on machines.

[2] Thanks to Steve Rayhawk for suggesting this term.  Also, thanks to Lukeprog for helping me write this post.

[3] Eliezer’s Harry Potter suggests that *not* having learned blankness be pervasive -- not having your world be tiny tunnels of thought, surrounded by large swaths of blankness that you leave alone -- is what it takes to be a “hero”.  To quote:

"Ah..." Harry said. His fork and knife nervously sawed at a piece of steak, cutting it into tinier and tinier pieces. "I think a lot of people can do things when the world channels them into it... like people are expecting you to do it, or it only uses skills you already know, or there's an authority watching to catch your mistakes and make sure you do your part. But problems like that are probably already being solved, you know, and then there's no need for heroes. So I think the people we call 'heroes' are rare because they've got to make everything up as they go along, and most people aren't comfortable with that.”

[4] Thanks to Zack Davis for noting that the “good with computers” trait seems to be substantially about the willingness to play around and figure things out.  If you’d like to reduce the amount of cached blankness in your life, and you’re not already good with computers, acquiring the “good with computers” trait in Zack’s sense is an easy place to start.

[5] One way to get at the causes of an intuition is to imagine alternate scenarios and see how your intuition changes.  Fred might ask himself: "Suppose nanotech was developed via a Manhattan project.  How much doom would I expect then?" or "Suppose John (who I learned all this from) changed his mind about doom probabilities.  Would that shift my views?".

186 comments

Great article! I didn't realize how much I blank on some of those.

When I tutored math, new students acted as though the laws of exponents (or whatever we were learning) had fallen from the sky on stone tablets. They clung rigidly to the handed-down procedures. It didn’t occur to them to try to understand, or to improvise.

I'd like to self-centeredly bring up a similar anecdote, which forms part of my frustration with how people give unnecessarily complex explanations, typically based on their own poor understanding.

In chemistry class, when we were learning about radioactive decay and how it's measured in half-lives, we were given a (relatively) opaque formula, "as if from the sky on stone tablets". I think it was

mass_final = mass_initial * exp(-0.693 * t / t_halflife)

And students worked hard to memorize it, not seeing where it came from. So I pointed out, "You know, that equation's just saying you multiply by one-half, raised to the number of half-lives that passed."

"Ohhhhhhhhhhhh! It's so much simpler that way!" And yet a test question was, "What is the constant in the exponent for the radioactive decay formula?" Who cares?

Sandra runs helpless to her roommate when her computer breaks -- she isn’t “good with computers”. Her roommate, by contrast, clicks on one thing and then another, doing Google searches and puzzling it out.[4]

Wow, a footnote on this one and not even a link to the xkcd about it? ;-)

I was going to ask where the constant for the exponent came from, but with a calculator and the Wikipedia page on exponentiation, I figured it out myself. This site is good for me.

It looks like that formula is a lot like cutting the ends off the roast.

The answer to "who cares?" is most likely "some 1930s era engineer/scientist who has a great set of log tables available but no computer or calculator".

I am just young enough that by the time I understood what logarithms were, one could buy a basic scientific calculator for what a middle-class family would trivially spend on their geeky kid. I remember finding an old engineer's handbook of my dad's with tables and tables of logarithms and various probabilistic distribution numbers; it was like a great musty treasure trove of magical numbers to figure out what they meant.

I don't know where that ended up, but I still have his slide rule.

Of course, even in the day, it would make more sense to share both formulas, or simply teach all students enough math to do what Gray does above and figure out for themselves how to calculate the model-enlightening formula with log tables, since you'd need that skill to do a million other things in that environment.

I have to say, if I saw anyone write the equation that way I'd question how much they understood the concept themselves!

EDIT: Let me also add, if I saw anyone asking that "what's the constant" question, I'd conclude they didn't understand it unless I saw good evidence otherwise...

Just to brighten your day, that would be most teachers and probably most textbook editors.

When I teach College Algebra at the community college where I work, one of the standard applications in the chapter on exponents and logarithms is half-life. The required text doesn't give the half-life formula above, but instead gives

mass_final = mass_initial * exp(k * t)

and shows how to calculate k by using t_halflife for t (and 1/2 mass_initial for mass_final).

This is a useful general method, but in the course of explaining why radioactive decay is exponential and what half-life means, I naturally derive

mass_final = mass_initial * (1/2) ^ (t / t_halflife),

so I just tell them to use that.

Maybe I'm cheating them because I'm making them do less work, but I like to think that some of them leave the class understanding what the heck a half-life is.
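(For anyone following along at home, here is a sketch of both routes in Python; the carbon-14 half-life of 5730 years is just an illustrative choice. Solving the half-life condition m0/2 = m0 * exp(k * t_halflife) gives k = ln(1/2) / t_halflife, after which the general exp(k t) method reproduces the halving form exactly:)

    import math

    t_half = 5730.0  # illustrative: carbon-14's half-life, in years
    m0 = 1.0

    # Textbook route: solve  m0/2 = m0 * exp(k * t_half)  for k.
    k = math.log(0.5) / t_half

    for t in (0.0, 1000.0, 5730.0, 20000.0):
        via_k = m0 * math.exp(k * t)
        via_halving = m0 * 0.5 ** (t / t_half)
        assert math.isclose(via_k, via_halving)
        print(f"t = {t:7.0f} years: {via_halving:.4f} of the sample remains")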

I really think that strip should be in the Related To list at the top...

I'm reminded of a (probably untrue) story about officer training school in the British army: as part of a test, the officer candidates are asked what the correct way to dig a trench is. The correct answer is:

I say "Sergeant, dig me a trench!"

In other words, you saw a broken dishwasher, and you know that the way to fix a broken dishwasher is to find Steve Rayhawk and tell him that there's a dishwasher that needs fixing. Which you did, and it worked. ;)

That's different, though; it's probably a rather good way to teach people that their first impulse as an officer should be delegating that which can be delegated. I'd imagine that promoted engineers today need to train themselves not to start micromanaging the sort of project they'd previously have done themselves, but rather to give it to someone capable and leave them to do it.

Which is why evenness is one of the virtues. Some candidates need to be taught to delegate. Others need to be taught to think for ten seconds before throwing their hands up. Most probably need to be taught both.

Even if you're definitely going to delegate a task, it's a good idea to know a few things about how it's done. You might need to interrupt the Sergeant if he starts digging the trench wrong!

Even if you're definitely going to delegate a task, it's a good idea to know a few things about how it's done.

Yes... except when it isn't ;)

In a vacuum, yes, but there is opportunity cost.

Not just that, you need to know something about what people need to fulfill the orders you give.

Very observant post. I've noticed this 'learned blankness' in a lot of people when it comes to 'nerdy' areas like math and science, probably because I'm not as blank in these areas. (There are plenty of things I don't know very much about, like for example the North American legal system, but my usual thought is "wow, I would really like to get a book or do a Wikipedia search on that!") Unfortunately it's not as easy to pinpoint the areas where I am blank.

But before giving it even ten seconds’ thought, I’d classified the problem as a “mechanical thing”.

As part of my 'mission to become a real grownup', I've started trying to solve small household problems like this on my own. Sometimes it leads to a lot of time-wasting, like the time I spent half an hour trying to fix the toilet when it turned out my roommate had just turned the water off because the sound kept her awake. I would have saved myself an hour if I'd made the problem not my responsibility, but now I have a pattern-recognition schema in my head for toilet problems... the first thing I'll check for next time, after "is it plugged?", will be "is the water on?" I'm assuming that this is how most people become good in these areas...

For entertaining examples of mechanical reasoning, Car Talk is pretty good. I imagine that many of their listeners think of the hosts as "magically knowledgeable" about cars, rather than as having experienced tens of thousands of car-related stories in the vein of your toilet example.

Not that I disagree with you in general, but I can think of a few cases in which you may actually want to cultivate blankness toward a given subject. In particular, deep and difficult questions have been known to occasionally drive people mad - it's an occupational hazard for mathematicians in particular, and perhaps also for people in other fields. One might reasonably object that correlation does not imply causation in this case, but I have had a couple of experiences in which intense study of math and physics led me to some pretty dark psychological places, and I had to back off for a while and think about more mundane matters while my mind reset. It's possible that, for some people, some areas of thought really are inaccessible, insomuch as they could irrevocably damage themselves in trying to get there.

I have had a couple of experiences in which intense study of math and physics led me to some pretty dark psychological places

Why do I feel the irrational urge to beg you to do a post on this? What could possibly go wrong? :-)

Yeah, I could write about this. Look for it tomorrow (Wednesday) or Thursday evening.

I agree, it does sound fascinating! Skatche, please consider expanding on this, supposing you can do so and remain healthy.

I react to cookery in the same way many people react to computing.

Sometimes I try to use this to understand the reactions of people who have trouble with computers.

The trouble with explaining this analogy to people is people's instant reaction is to go "cooking isn't scary at all! Look at all these reasons why kitchens are fun and non-scary".

Cooking is a lot like computing in reverse. Instead of being the programmer, you're the CPU. Follow the program, and you'll end up with the result the recipe provides.

The part of cooking where people look like they're just tossing things together is much more advanced. Cuddle your recipe book while you cook, it's your best friend.

I really recommend 'The Joy of Cooking' as a good book to start with, especially older editions. My 'acid test' of a general-purpose cookbook is if it has a real recipe for cream of mushroom soup or if it just says 'add 1 can'. The older editions have the real recipe, as well as massive amounts of information not only about food but also about how to serve it.


My 'acid test' of a general-purpose cookbook is if it has a real recipe for cream of mushroom soup or if it just says 'add 1 can'.

Why is this? It seems that people often cling to the "old way" of doing things even if the new way is faster and better because of some emotional attachment to the way they have always done things. No idea if this applies to you, but as someone who never cooks I'm wondering if this makes some real difference.

I don't think that it is "old way" versus "new way"; but it seems clear to me that someone has to know the recipe. If you buy a pre-made can of mushroom soup, obviously the manufacturer must have used the recipe. And then there's the issue if none of the brands of mushroom soup are of adequate quality for your purposes.

It's like the difference between a programmer writing his own routines or using a pre-packaged library. I think, in order to be considered a competent programmer, you should be able to write your own routines, even if you don't have to in the majority of cases. A cookbook is open source for food. "Buy 3 cans of Kraft spaghetti sauce" is cheating.

So is canned soup with excess sodium the culinary equivalent of a pre-packaged routine library with bad built-in assumptions?

If "add one can" is the new way of cooking then the new new way of cooking is to call up Sichuan Gourmet and order double cooked pork, mapo tofu, and a large white rice. Serves two.

The new way of cooking seems to be never actually touching your food before you eat it. Microwave dinner, slice the plastic and nuke. Frozen pizza into the oven and bake. Nuke the burrito. Ramen into boiling water if you make it the advanced way, or in a cup of cold water and into the microwave if you don't.

Compensate for the particle-board taste with strong enough flavors and people won't care. The most important things are ease of heating and not needing to wait.


Why do you say that frozen pizza and microwave dinners taste like particle-board? There is no good reason why they should be inherently inferior to home-cooked meals. Why couldn't you put the 'perfect' dinner in a box and sell it? (I realize that there is no dinner that is perfect for everyone, but you could offer a wide enough array of choices to cover most tastes)

Of course, cooking yourself allows you to fine tune the seasoning, perhaps use fresher ingredients (although frozen ingredients can arguably be more fresh in some cases), and have more variation. There is a lot of crap out there, but I find that the quality of these dinners has improved drastically over the last couple of years.

Having said all this, I do enjoy cooking as well. It just seemed to me that your post showed some biases in need of correcting.

Frozen food is not inherently inferior to home-cooked food at all, given that you can freeze things you make at home without the universe imploding! I made a pizza the other day. Some of it is in my freezer now. It's not as good as it was hot out of the oven, but it's still a fine pizza considering I'd never made one before (future pizzas will be better). I used frozen spinach in the pizza because frozen vegetables are no less healthful or tasty (although there are some applications for which they are unsuitable, like roasting) and easier to keep around.

However, as a contingent, non-inherent fact about commercially available prepared frozen meals, they are often made with inferior ingredients (the details of the process are largely concealed from the consumer so this is likely to be financially worthwhile), designed for bland flavor profiles (to appeal to the broadest customer base), and loaded up with cheap tricks to make them desirable in spite of this blandness (inexpensive fat and starch and salt and sugar). The texture often leaves much to be desired as well.

There are lots of reasons for it to taste worse than real food. The companies that make and sell these things have to make them able to withstand conditions that normal food can't. They have to add preservatives, freeze and possibly even refreeze the food, swap out really delicate ingredients for alternatives that lack flavor but have shelf-stability, and endure breakdown of the compounds that make real food good.

We will be able to overcome all of this with effective nanotech, of course. Right now instant foods are inferior because the companies aren't selecting for taste, they're selecting for cheapness of production and handling. Taste suffers, and they put enough effort into it to be 'good enough' and no more.

I probably do have biases regarding the issue, but I have more objective reasons as well.


I might be starting to see why you picked the name Cayenne.

It's my real name, but since I chose it when I got my name changed you're still not wrong.

I do mostly cook the food I eat from scratch, as long as you can accept 'bought the meat and cheese from a grocery store instead of killing or milking the animal personally' as from scratch. Mostly this isn't because I'm that incredibly picky, but instead because for me time is abundant and money is scarce. (I am picky, but I'm not really anti-preservative.)


If you wish to bake an apple pie from scratch, you must first invent the universe.

Your legal name is Cayenne? That is super-cool. Or, you know, hot like burning capsaicin.

Yeah, I changed my name a while ago, and decided that as long as I was changing it anyway I may as well choose something fun. I'm hoping that my future will be as spicy as my name.


  • LILY: Chantarelle was part of my exotic phase.
  • BUFFY: It's nice. It's a mushroom.
  • LILY: It is? That's really embarrassing.
  • BUFFY: It's an exotic mushroom, if that's any comfort.

Most normal food can actually take freezing pretty well, and freezing should obviate the need for preservatives... what frozen foods are you thinking of that have preservatives in them?

Most frozen pizza does, I believe. I seem to remember ice cream having preservatives too. I think that preservatives are more likely to be in frozen food as the number of processing steps that it's been through increase.

I'll check later today on the pizza and ice cream, it's been long enough that I don't have a clear memory.


You're right! -- The dough contains TBHQ. That's the only one, so it's relatively reasonable as far as preservatives go.

I looked at several varieties of ice cream, and none that I found had preservatives. Lots and lots of emulsifiers, but no preservatives.


It seems that people often cling to the "old way" of doing things even if the new way is faster and better because of some emotional attachment to the way they have always done things.

With cooking, the trouble is that it doesn't scale, or rather, the economies of scale come at the inevitable expense of quality. A home-made meal prepared by a skilled cook and with well chosen ingredients is guaranteed to be superior even to the output of restaurants, let alone to something produced on an industrial scale. (Especially when you consider that the home-made meal can be subtly customized to your taste.)

I think quality is to some degree subjective when it comes to judging a meal.

I know several people who are widely praised as great cooks, but there are meals at multiple restaurants that I prefer to anything I've had home-cooked. I'm not talking about high-dollar places either. Just places your typical middle-class American has access to.

An overlooked factor in how nice something tastes at a given time is whether it "hits the spot" - if it's exactly what you wanted. Since restaurants are usually consistent about what all goes into their food, you can become familiar with what spots those meals will hit, and get them at the best times.

Or you can learn to cook and hit the spot all the time ;) But it's hard to reliably do it for someone else, so if you're eating others' cooking it may not accomplish this.

Interesting point!

I suspect I'll have a problem though.

When I go to a restaurant, I almost always get the same thing I got last time with the thinking: "I may not like what I get if I get something new, and I already know I love X."

My initial reaction to the idea of learning to cook is similar. Why go through the trouble, when I already love what I'm getting!

I suppose food just isn't that important to me.

For certain sufficiently generic, low-value-on-variety preferences, learning to cook could be the last thing on your list for what you need to make your life better. (I dislike certain very common foods and food combinations, and I love variety, so while I can eat out I can't do it that often and be pleased about it.)

I just want to point out that I have low-value-on-variety only when it comes to food preferences. :D

Other areas of my life are full of variety and I'm always seeking out more.

Also, just to expand on what's happening here...

Whenever I have new dishes for whatever reason, I don't automatically dislike them because they're something new. For example, I recently found out how much I like red onions on a cold cut sandwich. I think what goes on in my specific case is that there are lots of things that I don't eat now that I would probably like, but eating food I like consistently (by sticking to the things I know) is more important to me than finding the foods I haven't tried but may like.

Of course, these aren't absolutes. I will from time to time become tired of something and try something new.

If you're willing to pay enough, you can get insane numbers of cooks working on a single dish at a restaurant.

As compared to a really good restaurant, a home-made meal is only better because you're not paying the chef or the rent.

A home-made meal prepared by a skilled cook and with well chosen ingredients is guaranteed to be superior even to the output of restaurants, let alone to something produced on an industrial scale.

Not even that skilled. Commercial cooking, including restaurant cooking, is the industry of turning mediocre (at best) ingredients into something people will pay a premium for. Have you ever seen a commercial cook's eyes light up at the prospect of having actually good ingredients to cook with? I'm thinking of an old girlfriend: "I will make you the best meal ever. Buy this list of fairly basic ingredients."

It's a measure of depth of information, I guess. If a cookbook has directions on preparing cream of mushroom soup, then it's really likely to have other very obscure recipes. Also shortcuts like dumping in a can of soup mean that the end result won't taste as good... not important most of the time, but nice when you want a treat.

It's not so much that it's an old way that makes it good, it's more that the long way just gives a much better result that has a really short shelf life. I want at least the option to make the better version.

For what it's worth, I am a supertaster, and I'm picky too.



FWIW, knowing how I react to other foods, I predict with a great deal of confidence that I would not care about the difference, or would even prefer the recipe with soup from a can.

You can adjust recipes. It is hard to adjust cans. For instance, I think I would find that many commercially available mushroom soups use chicken stock. I can use vegetable or mushroom stock if I make it myself. (Or I did before I detected my mushroom allergy, anyway.)

It wouldn't surprise me to find out that there's a way to make 'partially hydrogenated vegetable and/or soy bean oil' stock.


Seconded on "The Joy of Cooking"; it covers topics from the very basic to the very advanced. I found the left-hand side of that spectrum extremely useful when I was just starting out cooking, when I had "silly" questions like:

  • What does "broiling" mean?
  • What should a decent cutting board be made of? (There are a surprising number of cutting boards out there that are made of totally useless materials like glass).
  • How do I tell a good tomato from a bad one?

And so on, all those things that it seemed like I ought to already know, but didn't.

I have a preference for the Fannie Farmer cookbook, personally. I regularly flip between my 1918 edition and my 1986 edition to see how cooking styles, preferences, and procedures have changed. The 1986 edition also has some excellent sections on the process of (for example) baking in general, rather than just a list of recipes.

The trouble with explaining this analogy to people is people's instant reaction is to go "cooking isn't scary at all! Look at all these reasons why kitchens are fun and non-scary".

If you get this from someone who has trouble with computers, just turn it around and point it at them.

I had this problem, too. This stuff helped:

-Asking my good cook friends what they kept around as "staples." This, in my mind, is a collection of stuff that can be combined in any way without producing bad flavor.

-Giving myself way, way more than enough time from start (inspiration or recipe-following) to finish (cleanup).

-Consciously forgiving myself for not being instantly good at something. Also -- and more importantly -- I forgave myself for not caring that much about variety.

This is an excellent example.

Different things will work for different people here, but less than a year ago, at the age of very nearly 40, I cooked my first meal on my own that started with chopping an onion. One thing that unexpectedly made a big difference turned out to be learning how to wash my hands properly, by watching video instructions. Knowing that, I was more confident e.g. handling raw meat and other ingredients, which made the whole thing much easier.

"cooking isn't scary at all! Look at all these reasons why kitchens are fun and non-scary".

For what it's worth, I tend to say the same thing about both cooking and computers, so I'd suspect it's less a flaw with the analogy and more that this is a common reaction to saying "X is scary" to someone who is good with X. I even get this when I mention my phobias to people.

Consider a book on the science of cooking (I liked this one); I found knowing (roughly) how various processes transform food to be satisfying and helpful.

To me, the hard part in this procedure looks to be this step:

try to notice areas you care about, that you’ve been treating as blank defaults.

It seems likely to me that such areas are going to be ones that I habitually don't turn my real attention to, and that if they come briefly to mind it won't necessarily be obvious to me that I am treating them as blanks.

It seems likely to me that such areas are going to be ones that I habitually don't turn my real attention to, and that if they come briefly to mind it won't necessarily be obvious to me that I am treating them as blanks.

Yep, this is the key point: how to (better) notice what it is you're not noticing. So how do we drill down on this one?

This is the same key problem in working out what you really want.

While it's not good for building deep skills, reading (the right sort of) random blogs is a great way to defeat learned blankness. After reading a post or two about something, even if I don't retain much of it, I do retain enough of an outline to treat it as something that's available to reason about and research if it becomes relevant.

Can I ask how you find "random blogs"? Is it truly random, or do you have a method for finding new stuff?

"People who can't or won't think for themselves" is how a friend of mine characterised his customers as a freelance Windows NT admin (a very good one - and good NT admins aren't cheap). "There's a lot of money in sewage."

Outsourcing thinking to anyone who can be convinced or coerced into doing it seems quite common to me. People will so often do things just because someone else demands it of them. I have commented before on how my ridiculously charming daughter [1] is remarkably creative in intellectual laziness, and how I have to be sure not to let her get away with it. She will damn well learn not to be lazy just because she can!

I blank on programming, which is not so good for a sysadmin to a development team. I don't write anything more than shell scripts and I have the algorithmic insight of someone who doesn't. I suppose I should learn more.

Too many people consider computers malevolent boxes of evil completely unamenable to any rational consideration, even in theory. Your "Sandra" example is many programmers I've worked with.

[1] and it works on people other than me, e.g. the man in the coffee shop at 5pm yesterday she asked to get her a babycino (frothy milk with chocolate on top). He switched the machine back on after he'd switched it off and cleaned it just because the cute little girl asked for a 50p drink.

I'm a researcher in programming languages, and I've dabbled a little in discrete math and algorithms research. Though my advice may be a little slanted, "algorithmic insight" is what I'm most expert in. Perhaps, then, the following is right.

If you "blank" on programming, but already know system administration and shell scripts, then the "lack" you're describing is probably pretty small.

I strongly believe that what might look like "algorithmic insight" is mostly the product of obsessively picking apart designs and implementations - not just computer programs, but any engineered mechanism. It's a great habit to inculcate, and (I think) leads naturally to gradually understanding how everything works.

I bet, though, that you could massively boost your own algorithmic insight by the following program of reading and practice:

  • First learn (if you haven't already) a worthwhile programming language. C has a certain simple charm, but most industry-standard languages are pretty horrible. Java is mediocre, but limiting. I suggest starting with Python, and learning C, Racket, and either Haskell or OCaml. (Again, though - I'm a PL researcher, so this is possibly biased.)
  • Actively, carefully read CLRS. It's detailed, doesn't assume much prior knowledge, and covers 98% of the algorithms any good programmer ever uses outside specialties like graphics or scientific coding. By "actively read", I mean to actually do some of its exercises, and actually implement some of its algorithms. Rephrase its ideas in your own words; stop and review whenever any idea is unclear.
  • Work some of the exercises on Project Euler or SPOJ. These are excellent sources of small, algorithmically-rich problems (a worked example follows after this list). You can do them in essentially any language you like, and they give good feedback.
  • As a sysadmin, you've probably already learned some of the pragmatics of managing complex systems. Other than algorithms, as above, and the most core-basic ideas about computers, good programming is about managing system complexity. Thus, implement at least a full program or two that you'd like to see exist. Games and toys are nice, as they're rich with creative opportunities, and so the work you're duplicating isn't so irritating. Demand of yourself the freedom to fiddle with and re-implement that program until the code feels clean - until it feels like solid mathematics, where every piece connects to every other piece for only sound, solid, logical reasons with fairly short descriptions.

I'd mix these activities all together. Learn algorithms and languages by implementing with them; learn the techniques of good implementation by implementing interesting algorithms and programs that you want to exist, and (eventually) solving problems with code.
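To give a flavor of the problems mentioned above, here is Project Euler's first (and easiest) one -- "find the sum of all the multiples of 3 or 5 below 1000" -- worked two ways in Python, since part of the point of these exercises is noticing that the brute-force loop is not the only route:

    # Project Euler, problem 1: sum of all multiples of 3 or 5 below 1000.

    # The transparent way: check every number.
    brute = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)

    # The closed-form way: the multiples of d below N sum to d*m*(m+1)/2,
    # where m = (N - 1) // d; inclusion-exclusion corrects for the
    # multiples of 15, which would otherwise be counted twice.
    def sum_of_multiples(d, limit):
        m = (limit - 1) // d
        return d * m * (m + 1) // 2

    closed = (sum_of_multiples(3, 1000) + sum_of_multiples(5, 1000)
              - sum_of_multiples(15, 1000))

    assert brute == closed == 233168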

fiddlemath originally sent this as a private message, and I suggested they post it publicly because it is an excellent comment! I might even do some of the stuff in it ...

[1] and it works on people other than me, e.g. the man in the coffee shop at 5pm yesterday she asked to get her a babycino (frothy milk with chocolate on top). He switched the machine back on after he'd switched it off and cleaned it just because the cute little girl asked for a 50p drink.

Good customer service? Regardless of the 'cuteness' of the customer, I think most employees wouldn't say 'no' unless the shop had already closed.

It had just closed. But they know her, so yes. And she'd just burst into tears.

Having a daughter is a serious live-fire exercise in how to think rationally despite your cognitive biases.

Having a daughter is a serious live-fire exercise in how to think rationally despite your cognitive biases.

I would be deeply interested in a post on that subject.

I wouldn't purport to be able to write a full post of sufficient quality!

But I can say the obvious is true: I become aware of just what a soft touch I am, even when I realise it's a bad idea; I have to keep in mind what I'm supposed to be doing, what's a good idea, and why I'm not doing the thing that's a good idea; I occasionally come to awareness carrying a Hello Kitty balloon and a fairy princess sticker book and a drink and an ice cream, and then do a stack trace to work out precisely how I got there, while the small child is demanding more things.

Keep the sensible thing firmly in mind as much as possible, and don't put up with tantrums. The child wants candy all the time, but your job is actually raising her properly. Children are highly evolved manipulators, for really obvious reasons. Mine appears particularly charming, based on how others appear similarly susceptible. It helps if I channel her mother, who is not a soft touch at all because this is her third rather than her first. Stuff like that.

your job is actually raising her properly

It's not at all obvious what this means. Have you read Bryan Caplan's book? Or, at least, a selection of his blog posts?

I thought that was the definition of a parent's job, and the arguments come in the details. Perhaps that's dodging the question. I'd think it reasonably uncontroversial to say that the answer wouldn't involve giving in to the child's every demand for physical or mental candy, though.

I haven't read the Caplan book, but I can say that having a child is way cool. Watching a small intelligence grow.

Seems any such post would be hampered by the factor that makes poker both a good test of rationality and a dubious way to develop rationality: large variance in outcomes despite identical efforts, and (partly because of that) delayed and noisy feedback on the quality of your efforts.

Of course, the other side of the coin is the Dunning-Kruger effect, which causes us to overestimate our knowledge of things we're ignorant about.

The illusion of explanatory depth (Rozenblit & Keil, 2002) seems like a particularly relevant example of that other side of the coin. If you ask people if they understand how something works, like a bicycle, a flush toilet, or a zipper, they'll generally say that, yes, they understand it and could explain it. But if you ask them to draw a diagram and actually explain it, they'll often get it wrong, and realize in the process that they don't understand it as well as they thought they did. The main problem seems to be that people have higher-level understanding of the object, and experience using it correctly, which they confuse with a more in-depth knowledge of the mechanisms that make it work.

That doesn't necessarily contradict AnnaSalamon's point about stopping because of learned blankness. Seeing something stop working, and not immediately knowing why it messed up or how to fix it, might be enough to trigger that same lack of confidence that shows up after people try and fail to explain how something works. And in order to fix it you often don't need so much depth of knowledge. Even if you don't have enough knowledge to fully explain the mechanism that makes something work, you still might know enough to identify and fix this particular problem, especially since you have the thing right there to look at, think about, and play around with.

Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 521-562. pdf

Wow. That article is pure gold: the kinds of mistaken explanations they talk about are exactly what I hear from people who give unhelpful explanations -- they don't see the limits of their own understanding of the phenomenon, and obviously can't convey what they lack. And so any explanation they give is thus extremely brittle, as they can't do much more than swap in other terms for the mysterious concepts they invoke.

(This is not to say they're completely unhelpful -- a partial explanation is better than none at all. But in that case, it's preferable that you clarify that your understanding is indeed limited, and can't connect it to a broader understanding of the world.)

This is why study groups work (if you use them properly). Explaining something to someone else makes you think about it much more clearly. Finding out you don't know about something when they ask shows holes in your knowledge.

I think that being able to clearly explain something is the mark of someone truly understanding it.


And here's an OB post on evidence limiting the scope and magnitude of that effect.

Kruger and Dunning’s main data is better explained by positing simply that we all have noisy estimates of our ability and of task difficulty

I would add that it seems common for the task difficulty distribution to be skewed in various idiosyncratic ways -- sufficiently common and sufficiently skewed that any uninformed generic intuition about the "noise" distribution is likely to be seriously wrong. E.g., in some fields there's important low-hanging fruit: the first few hours of training and practice might get you 10-30% of the practical benefit of the hundreds of hours of training and practice that would be required to have a comprehensive understanding. In other fields there are large clusters of skills that become easy to learn once you learn some skill that is a shared prerequisite for the entire cluster.

Anna's proposal for reducing blankness seems to be useful only if the noise is systematically biased toward underestimating our ability in unfamiliar tasks.
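(The noisy-estimates story is easy to see in a toy simulation with entirely made-up numbers: give everyone a true skill, derive both a measured score and a self-estimate from it with independent noise, and the bottom quartile will "overestimate" while the top quartile "underestimates", with no bias mechanism anywhere in the model:)

    import random

    random.seed(0)

    # Each person: a true skill, plus a noisy measured score and a noisy
    # self-estimate, both centered on that true skill.
    people = []
    for _ in range(100_000):
        skill = random.gauss(0, 1)
        score = skill + random.gauss(0, 1)     # what the experimenter measures
        estimate = skill + random.gauss(0, 1)  # what the subject reports
        people.append((score, estimate))

    # Group by measured score, as Kruger and Dunning grouped their subjects.
    people.sort()
    n = len(people) // 4
    for q in range(4):
        group = people[q * n:(q + 1) * n]
        mean_score = sum(s for s, _ in group) / len(group)
        mean_estimate = sum(e for _, e in group) / len(group)
        print(f"quartile {q + 1}: measured {mean_score:+.2f}, "
              f"self-estimate {mean_estimate:+.2f}")

The bottom quartile's self-estimates land well above its measured scores, and the top quartile's land below -- pure regression to the mean, no self-serving bias required.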

I think how likely someone is to underestimate their ability at an unfamiliar task (like, say, plumbing or handling a computer) depends primarily on:

  • the competence of specialists
  • the difficulty of the task
  • the intelligence of the individual

It's optimal for all of us to wall off some parts of our lives as magic. What we would gain by expending energy to explore and optimize would not outweigh the cost. The trick is realizing that most of our lives are walled off by our non-rational subsystems or just happenstance, and systematically checking these habits to see what can be improved.

At this point I'm not sure what the best way to approach this is.

I think a lot of learned blankness comes about because of fear of being wrong, or more correctly, fear of someone else blaming them for being wrong. In certain social strata, you aren't supposed to think about a problem, or let others know you're thinking about a problem, unless it is your job to think about it. If you think about a problem, and get it wrong, then you are irresponsible for not going to an expert with the problem.

So that's where learned blankness gets its traction, in my opinion, and this is the reason why you'll find people spending an incredible amount of money, for example, going to Best Buy and having them install an operating system for you. (I'm sure this site could come up with numerous similar examples of this.)

The alternative to this is inevitably being wrong time and again, but this needs to be understood as a part of learning, and as a part of the process of inquiry. We need to learn how to make mistakes, how to know when we're making a mistake, and how to learn from it. But you'll inevitably hear "Why didn't you call the X-man!"

Most people I've known who have a "learned blankness" about computers are genuinely scared that they'll cause significantly more damage than the expert charges - usually they're worried they'll basically destroy their computer beyond salvaging, which is probably a good $1,000 - $2,000.

For myself, I had a "learned blankness" about languages, because my only source of education was school, and each failed language class seriously hurt my GPA. Now that I have a friend teaching me a bit of Chinese, and am home-studying on sign language, I'm finding it much easier.

I'd expect a lot of these quite possibly start as a fear of genuinely reasonable consequences. Your example strikes me as a definite subset of this, of course :)

I tend to assume that I'm going to make a mistake, especially with new things. It doesn't help fix them, but at least I'm not surprised when it blows up in my face. Once I'm comfortable with it I assume less failure until it fails, usually in the perfect way to make me look totally foolish.

Somehow the really bad failures seem to happen after I brag about them. I brag a lot less now, but that hasn't stopped them either. Meh.


I tend to assume that I'm going to make a mistake, especially with new things. It doesn't help fix them, but at least I'm not surprised when it blows up in my face.

That's exactly how I learned to ride a bicycle at age 30.

I have observed similar behavior in others. Only I called it 'blackboxing', for lack of a better word. I think this might actually be a slightly better term than 'learned blankness', so I hereby submit it for consideration. It's borrowed from the software engineering idea of a black box abstraction.

People tend to create conceptual black boxes around certain processes, which they are remarkably reluctant to look within and explore, even when something does go wrong. This is what seems to have happened with the dishwasher incident. The dishwasher was treated as a black box. Its input was dirty dishes, its output was clean ones. When it malfunctioned, it was hard to see it as anything else. The black box was broken.

Of course, engineers and programmers often go out of their way to design highly opaque black boxes, so it's not surprising that we fall victim to this behavior. This is often said to be done in the name of simplicity (the 'user' is treated as an inept, lazy moron), but I think an additional, more surreptitious reason, is to keep profit margins high. Throwing out a broken dishwasher and buying a new one is far more profitable to a manufacturer than making it easy for the users to pick it apart and fix it themselves.

The open source movement is one of the few prominent exceptions to this that I know of.
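(To make the software analogy concrete, here is a minimal Python sketch with invented names. The sealed version fails opaquely, so the user's only move is to declare the box broken; the inspectable version exposes the same stages, so a curious user can locate the clog themselves, exactly as in the dishwasher story:)

    class SealedDishwasher:
        # Black box: one button, internals hidden behind the interface.
        def __init__(self):
            self._drain_clogged = True  # hidden state the user never sees

        def run(self, dishes):
            if self._drain_clogged:
                raise RuntimeError("dishwasher broken")  # opaque failure
            return [d + " (clean)" for d in dishes]

    class InspectableDishwasher(SealedDishwasher):
        # The same machine, but with the stages exposed for poking at.
        def drain_is_clear(self):
            return not self._drain_clogged

        def clear_drain(self):
            self._drain_clogged = False

    dw = InspectableDishwasher()
    if not dw.drain_is_clear():  # "the water is backed up"
        dw.clear_drain()         # clear the clog...
    print(dw.run(["plate"]))     # ...and the dishwasher starts working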

This is often said to be done in the name of simplicity (the 'user' is treated as an inept, lazy moron), but I think an additional, more surreptitious reason, is to keep profit margins high.

There's also one much more important reason. To quote A. Whitehead,

Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.

Humans (right now) just don't have enough cognitive power to understand every technology in detail. If not for the black boxes, one couldn't get anything done today.

The real issue is, whether we're willing to peek inside the box when it misbehaves.

Good post - upvoted!

[4] Thanks to Zack Davis for noting that the “good with computers” trait seems to be substantially about the willingness to play around and figure things out.

Quoting Homestuck:

grimAuxiliatrix [GA] began trolling twinArmageddons [TA]
TA: 2ee the menu up top?
TA: fiiddle around wiith that tiil you open the viiewport.
GA: I Did Fiddle With It
GA: To No Avail
TA: iif you cant fiigure 2hiit out by fuckiing around you dont belong near computer2.

(twinArmageddons has a "typing quirk" related to the number 2; if you didn't get it, '2' looks like a reversed 's'.)

As a professional programmer, I should note that there's a good kind of blankness - the state of no assumptions, which you should reset your mind to when attempting to figure out a novel problem. Many more things can go wrong in programming than in plumbing, and assuming that you know something about the root cause without sufficient evidence can lead you into a blind alley. Just today, I diagnosed a bug where make_pair<X, X>(y, z) in user code began failing to compile. This was poorly written to begin with (it should always have been written as make_pair(y, z)), but the original 1998 and 2003 C++ Standards said that it should work anyways - and it did for years. Then the new 2011 C++ Standard changed the definition of make_pair() in such a way that, in general, make_pair<X, X>(y, z) will fail to compile. (This is intentional - that code is "bad", and the make_pair() change has other consequences which are very good. The needs of the many outweigh the needs of the few, or the one.) I verified that the Standard Library implementation had been changed in accordance with the 2011 Standard, and almost said "case closed" and sent my E-mail. I had seen this before, and I knew that this was an identical manifestation.

Right before hitting Send, I had second thoughts and took another look. All of my previous analysis was correct, but this case was special (and unlike the previous case I had seen). In this code, make_pair<X, X>(y, z), y's type Y was different from X. (X was unsigned int and Y was int - very simple types, I'm just abstracting it further. Despite appearances they are unrelated as far as we're concerned here, so X and Y are probably better to think about.) The previous case I'd seen had identical types. With different types, successive drafts of the Standard have said different things:

  • 1998/2003: This should compile and work.
  • 2011 v1.0: This should compile and work, but other cases will eat your brains.
  • 2011 v2.0: No brain eating! This shouldn't compile, and neither should those other cases.
  • 2011 v2.1: The best of both worlds: this should compile and work, but those other cases shouldn't eat anyone's brains.

The final piece of the puzzle was that the compiler in question had implemented 2011 v2.0, but not yet v2.1. So the correct thing to do was to change the user's code, and open a compiler bug.

If I had been slightly more distracted/tired, less caffeinated, or (most perniciously of all) more supremely confident in my cached past analysis, I would have arrived at an incorrect conclusion. Instead of making a last-second save, if I had started from a blank slate, I would have been much more likely to notice X != Y and its interaction with the changing Standardese and compiler implementation.

It should be lightness, not blank slate. Some hypotheses are clearly better than others, but one shouldn't typically be confident in things that were not observed with sufficient clarity, which means constant search for experimental tests and alternative explanations.

Programming is probably the most intensive mode of application for basic scientific method, by the sheer volume of hypotheses and experiments required to get anything done.

Sometimes my critical contribution to helping another programmer solve a problem basically consists of reading the fascinating error message. (Well, the fact that I also programmed the library they are using to show the error message is arguably a critical contribution as well.)