[Dilbert cartoon]

How many of the things you "know" do you have memorized?

Do you remember how to spell all of those words you let the spellcheck catch?  Do you remember what fraction of a teaspoon of salt goes into that one recipe, or would you look at the list of ingredients to be sure?  Do you remember what kinds of plastic they recycle in your neighborhood, or do you delegate that task to a list attached with a magnet to the fridge?

If I asked you what day of the month it is today, would you know, or would you look at your watch/computer clock/the posting date of this post?

Before I lost my Palm Pilot, I called it my "external brain".  It didn't really fit the description; with no Internet access, it mostly held my contact list, class schedule, and grocery list.  And a knockoff of Minesweeper.  Still, in a real enough sense, it remembered things for me.

The vast arena of knowledge at our fingertips in the era of constant computing has, ironically, brought it farther away.  It seems nearer: after all, now, if you are curious about Zanzibar, Wikipedia is a few keystrokes away.  Before the Internet, you'd probably have been looking at a trip to the library and a while wrestling with the card catalog; and that would be if you lived in an affluent, literate society.  If you didn't, good luck knowing Zanzibar exists in the first place!

But if you were an illiterate random peasant farmer in some historical venue, and you needed to know the growing season of taro or barley or insert-your-favorite-staple-crop-here, Wikipedia would have been superfluous: you would already know it.  It would be unlikely that you would find a song lyrics website of any use, because all of the songs you'd care about would be ones you really knew, in the sense of having heard them sung by real people who could clarify the words on request, as opposed to the "I think I heard half of this on the radio at the dentist's office last month" sense.

Everything you would need to know would be important enough to warrant - and keep - a spot in your memory.

So in a sense, propositional knowledge is being gradually supplanted by the procedural.  You need only know how to find information, to be able to use it after a trivial delay.  This requires some snippet of propositional data - to find a song lyric, you need a long enough string that you won't turn up 99% noise when you try to Google it! - but mostly, it's a skill, not a fact, that you need to act like you knew the fact.

It's not clear to me whether this means that we should be alarmed and seek to hone our factual memories... or whether we should devote our attention to honing our Google-fu, as our minds gradually become server-side operations.


If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.

-- Socrates, in Plato's Phaedrus, circa 370 BCE, bemoaning the deleterious effects of the new technology of "writing".

And he was right, too.

I knew that granting the human race the power of psychometric tracery would lead to no good!

I dunno. What good exactly did memorizing texts and dialogues do? The practice seems to survive these days in generally bad contexts - madrassas where the main curriculum is memorizing the Koran and the hadiths, for example. (And in general, memorization seems to go hand in hand with fanaticism.)

My guess is that Socrates was more advocating that his students internalize the concepts he was teaching (and philosophy more generally). Then again, I think he's identifying a problem with all wisdom received and not discovered.

One historical view of what was going on there is that much of the interesting stuff going on during Plato's time was due to the intersection of 'oral culture' and 'print culture'. Eric Havelock and Marshall McLuhan, in particular, wrote about that - I don't have a good online reference though.

I have a short blog post mentioning it here.

EDIT: Fixed link. HT CronoDAS

For a view of memory and literacy in mediaeval Europe, see Mary Carruthers' works. A fundamental skill at that time, for the scholarly, was the use of books to furnish one's mind with ideas to be called forth at will, and the living work held in one's head was, despite Socrates' gloomy prediction, still considered superior to the mere written text.

From a later time, here is Francis Bacon: Reading maketh a full man; conference a ready man; and writing an exact man.

Circumstances have changed, but the thread running through all of these still applies. Knowing only where to look things up is like trying to speak a language from a dictionary instead of learning the vocabulary.

Or the Chinese civil service exam that consisted of reciting (IIRC) the Book of Songs, the Confucian Analects, and a few other things, from memory.

Yeah, basically the "5 classics" and whatever else they felt was important at the time. Not necessarily reciting, though. Often it was being locked in a box for an extended period, expected to transcribe whole passages perfectly from memory.

ETA: Wikipedia link

That we "outsource" our brains to the environment is the main idea behind the distributed cognition paradigm of research.

For instance, one good primer is Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research, from which the following excerpt is taken:

In several environments we found subjects using space to simplify choice by creating arrangements that served as heuristic cues. For instance, we saw them covering things, such as garbage disposal units or hot handles, thereby hiding certain affordances or signaling a warning and so constraining what would be seen as feasible. At other times they would highlight affordances by putting items needing immediate attention near to them, or creating piles that had to be dealt with. We saw them lay down items for assembly in a way that was unambiguously encoding the order in which they were to be put together or handed off. That is, they were using space to encode ordering information and so were off-loading memory. These are just a few of the techniques we saw them use to make their decision problems combinatorially less complex.

We also found subjects reorganizing their workspace to facilitate perception: to make it possible to notice properties or categories that were not noticed before, to make it easier to find relevant items, to make it easier for the visual system to track items. One subject explained how his father taught him to place the various pieces of his dismantled bicycle, many of which were small, on a sheet of newspaper. This made the small pieces easier to locate and less likely to be kicked about. In videos of cooking we found chefs distinguishing otherwise identical spoons by placing them beside key ingredients or on the lids of their respective saucepans, thereby using their positions to differentiate or mark them. We found jigsaw puzzlers grouping similar pieces together, thereby exploiting the capacity of the visual system to note finer differences between pieces when surrounded by similar pieces than when surrounded by different pieces.

Finally, we found a host of ways that embodied agents enlist the world to perform computation for them. Familiar examples of such off-loading show up in analog computations. When the tallest spaghetti noodle is singled out from its neighbors by striking the bundle on a table, a sort computation is performed by using the material and spatial properties of the world. But more prosaically we have found in laboratory studies of the computer game Tetris that players physically manipulate forms to save themselves computational effort [Kirsh 2001; Kirsh and Maglio 1995]. They modify the environment to cue recall, to speed up identification, and to generate mental images faster than they could if unaided. In short, they make changes to the world to save themselves costly and potentially error-prone computations.

Interpreting the idea a bit more radically, from what I wrote elsewhere:

Information processing doesn't only happen inside brains and computers. The paradigm of distributed cognition studies human societies as information-processing systems, with individuals being parts of the larger system. For instance, the operation of an airliner cockpit's crew has been studied from this perspective [1]. For a flight to proceed without trouble, the different crew members need to be aware of information relating to their areas of responsibility at any given moment. If the crew is experienced and well trained, they'll constantly stay up to date by e.g. simply listening to other crew members converse with flight control. As flight control informs the captain of a new flight altitude, the rest of the pilots begin to adjust the altitude even while the captain is still finishing up the communication. The cockpit functions as a unified system, and relevant information is propagated to wherever needed. Several crew members hearing the same information also allows for error correction. If the message is unclear and the captain can't make out flight control's words, he can ask the others for clarification. The co-pilot answers the captain's query: even though one part of the system has failed to absorb the information received from outside the system, the same information has been stored in another part, which may then attempt to re-send it where needed.

Several other fields have been studied in the same manner, ranging from a child's language learning [2] to creativity [3]. A child doesn't learn language by itself and in a vacuum, but via interaction with adults and older children. Creativity, on the other hand, requires common, shared "idea resources" which individuals may use to come up with their own inventions and then give them back for others to refine further. Another theory of innovation considers inventions to be responses to problems encountered by the community. Things such as bad laws or ineffective ways of doing things show up in the community and are considered problems by its members. This leads the community - the system - into a need state, mobilizing its members to seek solutions until they're found.

One central idea is that social communities are cognitive architectures the same way that individual minds are [4]. The argument is as follows. Cognitive processes involve trajectories of information (transmission and transformation), so the patterns of these information trajectories, if stable, reflect some underlying cognitive architecture. Since social organization - plus the structure added by the context of activity - largely determines the way information flows through a group, social organization may itself be viewed as a form of cognitive architecture.

[1] Hutchins, E. & Klausen, T. (1995) Distributed Cognition in an Airline Cockpit.

[2] Spurrett, D. & Cowley, S.J. (2004) How to do things without words: infants, utterance-activity and distributed cognition. Language Sciences, 6, 443-466.

[3] Miettinen, R. (2006) The Sources of Novelty: A Cultural and Systemic View of Distributed Creativity. Creativity and Innovation Management. Vol. 15, no. 2.

[4] Hollan, J. & Hutchins, E. & Kirsh, D. (2000) Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research. ACM Transactions on Computer-Human Interaction. Vol 7, no. 2.

So, no, probably nothing to be worried about. Just normal human use of the environment.

When I visited Beijing a few years back, I could not access Wikipedia due to censorship. This made me aware of how often I unconsciously checked things on the site - the annoyance of not getting the page made me note a previously unseen offloading habit.

I expect that many offloading methods work like this. We do not notice that we use them, and that adds to their usefulness. They do not waste our attention or cognition. But it also means that we are less likely to critically examine the activity. Is the information reliable? Are we paying an acceptable price for it? Would a break in the access be problematic for our functioning? Does the offloading bias us in some way?

The last point might be particularly relevant here. Some resources provide information easily, so they tend to be used in favour of more cumbersome sources. Online papers are easy, trips to the library take time and effort - so we cite online papers more, even when original old sources are more appropriate. If it is hard to check that a system is calibrated right, yet important to use it (that deadline is tonight! the customers are waiting!), we might become extensions of a biased collective cognition.

Maybe we need to develop check-lists for checking our outsourced cognition?

I am quite happy to offload some of my memory and computation to technology, and I expect myself to be effectively smarter than if I did not have access to that technology.


"Self-sufficiency is the road to poverty." -- Russell Roberts

I'm with you on this one. Before online mapping, I would have to have someone tell me repeatedly what to look for when navigating unfamiliar roads, and would get lost a lot.

I know more song lyrics than a random historical peasant.

EDIT: Actually, let me expand on that.

You say propositional knowledge is being gradually supplanted by the procedural. We can see examples of this all over the place. For instance, we used to have to remember a bunch of phone numbers, now we just store them in our phones.

But wait a minute, the random historical peasant doesn't even have a phone number to remember. I at least remember my OWN phone number; that's one more number than he remembers. What's his address? Zip code? Social security number? Bank account number? What are the PINs to his ATM card, his debit card, his library card? What are the account names and passwords he uses at home, at work, at school, on the various websites that he visits? To how many places does he know the digits of pi?

If you point to a random thing in his environment (Not that there's a lot of things to point at), in how much detail can he explain how it works? Does he know why ice is lighter than water, why plants are green, how his own eyes work?

How many words does he know? Does he know how to spell them? Does the idea of correct spelling even exist yet? I use a spellchecker, but 99% of the words I type are correctly spelled on the first pass, and I know an awful lot of words.

How many people does he at least know the names and faces of? Orders of magnitude less than I do?

I may have more procedural knowledge, but I have a hell of a lot more propositional knowledge too. If you somehow dumped out all the factual knowledge stored in my brain, or the brain of any random modern human, it would be tremendously greater in quantity than that of a random historical peasant. There are people who know more about Pokemon than he knows about his entire life.

IAWYC but,

  1. I suspect you're caricaturing the (lack of) knowledge of a random historical peasant somewhat. It's easy to come up with a list of things that we know that they wouldn't, because we usually know what we know. There are probably many things that an historical peasant knows that we don't, but it's harder to come up with examples, because often we don't know what we don't know. All things considered, I'd still say we know more, but not necessarily by as much as might be suggested by a naive reading of your comment.

  2. FWIW, a random historical peasant is about 50% likely to be female.

FWIW, a random historical peasant is about 50% likely to be female.

I don't know how reliable this source is but it suggests that the sex ratio in medieval Europe, at least, was skewed toward men, and offers some compelling reasons for it.

It's easy to come up with a list of things that we know that they wouldn't, because we usually know what we know.

Then again, do we? We live in a sea of things we know, there's so many layers we can't even see them.

The simple activity of vacuuming a carpet is actually as complex as tilling a field, even if the consequences of doing it wrong are far less severe. You get the vacuum out of the cupboard, you swivel the little prong that keeps the cord wound up, you find an outlet that you can reach a lot of the room from, you plug the cord into the outlet, type A plugs fit in type B outlets but not vice versa, if there's a large prong it has to be put in a certain way up, some outlets only supply electricity when the associated switch is turned on, the switch may not be located near the outlet, etc. etc.

FWIW, a random historical peasant is about 50% likely to be female.

For your convenience, here's a gender neutral copy of my post:

I know more song lyrics than a random historical peasant.

EDIT: Actually, let me expand on that.

You say propositional knowledge is being gradually supplanted by the procedural. We can see examples of this all over the place. For instance, we used to have to remember a bunch of phone numbers, now we just store them in our phones.

But wait a minute, the random historical peasant doesn't even have a phone number to remember. I at least remember my OWN phone number; that's one more number than he or she remembers. What's his or her address? Zip code? Social security number? Bank account number? What are the PINs to his or her ATM card, his or her debit card, his or her library card? What are the account names and passwords he or she uses at home, at work, at school, on the various websites that he or she visits? To how many places does he or she know the digits of pi?

If you point to a random thing in his or her environment (Not that there's a lot of things to point at), in how much detail can he or she explain how it works? Does he or she know why ice is lighter than water, why plants are green, how his or her own eyes work?

How many words does he or she know? Does he or she know how to spell them? Does the idea of correct spelling even exist yet? I use a spellchecker, but 99% of the words I type are correctly spelled on the first pass, and I know an awful lot of words.

How many people does he or she at least know the names and faces of? Orders of magnitude less than I do?

I may have more procedural knowledge, but I have a hell of a lot more propositional knowledge too. If you somehow dumped out all the factual knowledge stored in my brain, or the brain of any random modern human, it would be tremendously greater in quantity than that of a random historical peasant. There are people who know more about Pokemon than he or she knows about his or her entire life.

Then again, do we? We live in a sea of things we know, there's so many layers we can't even see them.

Fair enough. I think the comparative point stands though: knowledge of what we know > knowledge of what we don't know.

For your convenience, here's a gender neutral copy of my post:

;)

There are three kinds of information:

1) survival information - things you may need without time or access to look them up: finding or making shelter, food, and water if your plane goes down in the wilderness; self-defense; and first aid.

2) things you use frequently - it's a hassle to look them up repeatedly, but you will eventually start remembering them even without any special effort to memorize them.

3) things you find interesting and might even use someday, which would be a waste of time and effort to memorize.

Many things have moved out of category 1 with the advance of civilization, which I don't consider a great loss. Other things, like your mention of songs, have moved from 2 to 3; again, I think that's an improvement.

Useless and harmful information don't seem to fit into this scheme.

I was writing from the point of view of the user; from that POV "useless information" is just noise.

I am not sure I see the point of your proposed information classification scheme.

Information varies in a number of ways: how useful it is; how frequently you need to access it; how often it changes; your level of confidence in its accuracy; its size - and so on. I am not sure if there is much point in trying to collapse any of these dimensions down into a few discrete categories.

I think I see the point. Bill's scheme isn't really for the information itself, but rather for the human latency requirements for information. Any given bit of information in its situational context might require near-zero latency, relatively low latency, or not have a strong requirement at all.

I was just considering how important or useful memorizing the information could be. That is, when and whether the "external brain" is adequate.

It's not clear to me whether this means that we should be alarmed and seek to hone our factual memories... or whether we should devote our attention to honing our Google-fu, as our minds gradually become server-side operations.

I suspect the real answer is that there is no we. For myself, I have no problem offloading my navigation needs onto Google Maps, but others want to internalize those skills. Perhaps we'll see individuals having yet more diverse skillsets, as different people make different trade-offs on what they want to internalize and what they want to keep?

Unlike Janet D. Stemwedel (author of the blog you cite), I find that using Google Maps increases my knowledge of local geography. When I map directions, I tend to pay attention to the route as displayed on the map, make note of cross streets near where I am going to have to turn, and sometimes play with alternate routes.

Sure, it's almost a cliché: it's not the tools, it's how you use them. Unlike you, I mostly just copy down the directions by rote (which leaves me in a world of trouble when I miss a turn). I would guess that there's a similar effect of technology on public understanding of mathematics: calculators and computers open up whole new worlds to people who like math (algorithms!---the Mandelbrot set!---&c.), but just serve as a crutch to others.

"Self-sufficiency is the road to poverty." -- Russell Roberts

I'm with you on this one. Before online mapping, I would have to have someone tell me repeatedly what to look for when navigating unfamiliar roads, and would get lost a lot. Now I can look at the map, come up with alternate routes. With StreetView I can even see landmarks.


but others want to internalize those skills

Assuming that these skills don't generalize to areas where we can't depend on Google, I suspect that people desire to internalize these skills merely because they can (possibly for the show-off potential, or in order to validate their intelligence). Sort of like wanting to be unnecessarily good at mental arithmetic when a simple calculator can do the job much faster.

I suspect that people desire to internalize these skills merely because they can

Yes, sort of like how people desire to breathe merely because they can. To say that a skill is useful only prompts the question: useful for what? Your goal system has to ground out somewhere; at some point you need some conception of a life worth living and a work worth doing, that don't need to be justified in terms of anything else, or else why should we care about being useful? Cf. Eliezer's "High Challenge" and Nick Bostrom's "The Future of Human Evolution."

The wetware is also highly convenient. Sans Palm Pilot, I don't have a calculator with me everywhere I go. It would be useful to me once or twice a day if I could do arithmetic in my head without messing up nine times out of ten.

I learned chisanbop in the third grade, presented a report on it to my math class, and promptly forgot about it.

After my stroke, a lot of my propositional knowledge became procedural in the sense I think you mean here... I needed to employ various explicit formal methods to focus my attention on retrieving data in my brain, which used to be implicit and unconscious. Asking myself questions out loud was a useful one, for example.

Eventually, I regained the ability to do that implicitly and without formal methods. But I still notice myself asking myself questions out loud sometimes. I just don't think of it as a procedural task anymore.

In other words, for a while my brain was external to "me" when it came to certain functions, and now it isn't anymore.

I suspect something similar will happen with the Internet.

Do you remember how to spell all of those words you let the spellcheck catch? Do you remember what fraction of a teaspoon of salt goes into that one recipe, or would you look at the list of ingredients to be sure? Do you remember what kinds of plastic they recycle in your neighborhood, or do you delegate that task to a list attached with a magnet to the fridge?

I don't know if this says anything important or relevant, but I remember nearly all of these things. Spelling, in my first language of English, anyway, is effortless for me. I always end up memorizing things like recipes because, well, I use them, and hate having to take time to look them up, so I deliberately try to learn "shortcuts", for example "all cookie recipes have 1 teaspoon baking soda". I have occasionally kept calendars, and it was a big stress reliever, but it was also time consuming and mostly unnecessary–I don't forget that I got called in for a random weeknight shift at the hospital, or that I have an appointment on X day.

I think this is secondary to two factors. A) I have a pretty encyclopedic memory. I read literally thousands of books as a child, and to this day I can dredge up a significant fraction of the major plot of almost any of them, and actual quotes if I read them more than once. To me this is normal, but my mom is surprised when I remember things like that, so I guess it's uncommon. B) I don't like wasting time, and having to look stuff up that's easy to remember, for me, and that I use frequently feels like wasting time.

I do tend to store the minimum. For example, once I find out the due date for a school project, I'm pretty unlikely to forget the date and have to look it up, but that doesn't mean I'll bother to learn anything about the content of the project until I plan on starting it, and finishing it shortly thereafter.

Throughout nursing school, I've often "known" things in the sense of "I would have a 100% grasp of it quite easily with 2 minutes of looking stuff up." This is useful in areas like pharmacology, where the underlying structure (drug classes and mechanisms of action) is fairly logical and interesting, and thus easy to remember, but the surface structure (generic and commercial names and the side effects that aren't an example of the drug being 'too effective') is ridiculously difficult to memorize. That being said, my non-looking-up grasp is about 90%–enough to get really good grades in theory classes without being able to look stuff up on the spot, but not enough to feel comfortable actually treating patients without re-looking it up.

I've also noticed that I "purge" my autobiographical memory–as if I'm making more space for data that I find interesting. I'm very good at staying on top of day-to-day life and remembering due dates and work shifts and stuff, but I'm hard pressed to think of more than ten specific episodes that I remember from elementary school. I know the general 'story' of my childhood but I don't bother to retain the details–which means that I often don't understand my younger self's thoughts and decisions very well.

A good follow-up to this would be, given all the externalization potential, what aspects of cognition should we aim to internalize?

Couple of offhand thoughts for this:

I do not mind the externalization, as long as it's sufficiently reliable and I can function without my iPhone in the worst case. (There is a great description of this in Accelerando when Manfred's augmentation gear gets stolen; in short, it takes him a while to figure out who he is.) This is the practical/operational aspect.

The other part of externalization that concerns me is that it makes me prone to the equivalent of looking up the answer key without working through the problem mentally and improving my thinking algorithms.

There are some things you can't look up, simply because you don't know that you even need to look them up. Those things are still very useful to know.

I sometimes refer to Google as ELTM, or Extended Long-Term Memory.

For me, in order for the Memory metaphor to work, Google searches would have to do a better job of meshing with my existing memory. There are two big components that are missing, without which I have trouble thinking of a search engine as a memory replacement or augmentation:

1) Personalized context. When I remember a fact that I read in a book, I don't just remember that single fact - I also remember a wide range of associated context (what the book looked like, some of the more interesting concepts I learned at the same time, the cute girl that was in the library as I was reading the book).

2) Permanence (or a time dimension to results). Sure, Google can bring up the most relevant Wikipedia article to my search. That may not be what my brain remembers, though. Most likely, it should bring up the most relevant article to my search, given the state of the internet back when I was first learning about the topic. Data on the internet, and search results, update in real time, while my memory lags behind, and until that is bridged somehow, it'll make for a tough match between the two.

Relying on wetware to remember information in detail can help deal with these problems.

But if you were an illiterate random peasant farmer in some historical venue, and you needed to know the growing season of taro or barley or insert-your-favorite-staple-crop-here, Wikipedia would have been superfluous: you would already know it. It would be unlikely that you would find a song lyrics website of any use, because all of the songs you'd care about would be ones you really knew, in the sense of having heard them sung by real people who could clarify the words on request, as opposed to the "I think I heard half of this on the radio at the dentist's office last month" sense.

Everything you would need to know would be important enough to warrant - and keep - a spot in your memory.

I am not sure how this example is any different from Googling.

The first time a farmer wants to know something, such as the season to grow a crop in, he taps into the local pool of knowledge available to him. This pool could consist of his father, or the village elders, or the experienced farmer two miles away in the neighbouring farm. And he only bothers remembering this factoid about seasons, and whatever else, because it would be a real pain to run two miles each time he wants to do something on his farm, every day. By the same argument, things he doesn't have to know on a daily basis or can live without, such as that village song, or the latest gossip in town, he will not bother committing to memory. He will wait for the next time they gather around a fire or something to catch up on the latest. Again, consider somebody looking at the man page for the arguments to a system call that she uses on a daily basis, and committing this to memory because it is more optimal to just remember it (because the two monitors she has are only big enough to keep her code windows open) and it would improve her productivity to do so. She will not bother to commit to memory those calls and their arguments that she doesn't use as often, and will go hunting in the man pages.

I guess what I am trying to say is that our great brains have always been external. There is a bunch of stuff that is local, and a bunch that is non-local. The non-local stuff is the collective that exists in and amongst the people we live with and talk to, our books, and now of course the internet. And the vast majority of what is local has always been a subset of what is non-local. What has changed, by orders of magnitude, is how much of that non-local set is easily (relatively speaking) accessible.

As a continuation of my previous comment, I'd like to suggest that what Google and the internet in general seem to be doing is aggressively providing candidates for inclusion into the local set. And so, by repetition and easy access, they may help enlarge the local set. If technology gets better, we can imagine a day when the local set more or less overlaps with the superset and there really is no difference between the two: a fetch from the local set and a fetch from the superset take about the same time, and so qualitatively 'feel' the same. Our intuition would then have a data set to draw upon that is immeasurably vaster than the small set of experiences a single person can hope to acquire. This is a nice fairy tale.

There is an earlier comment by Kaj Sotala, which I just read, that states better and more succinctly what I was trying to say with 'our great brains have always been external'. Let me quote:

One central idea is that social communities are cognitive architectures the same way that individual minds are [4]. The argument is as follows. Cognitive processes involve trajectories of information (transmission and transformation), so the patterns of these information trajectories, if stable, reflect some underlying cognitive architecture. Since social organization - plus the structure added by the context of activity - largely determines the way information flows through a group, social organization may itself be viewed as a form of cognitive architecture.

I wrote a paper on this phenomenon a while back - I suggest that it should lead to a new understanding of epistemology (leaning heavily on Keith DeRose's 'contextualism'). Never developed it enough to be actually published, since I don't really work in epistemology and I'm working on a dissertation. It's currently available online in the form of a blog post.

link: Knowledge is Power

ETA: Of course, mine doesn't have comics.

The characters in Poul Anderson's novel The Boat of a Million Years find themselves very disturbed by this trend taken to its logical conclusion; they are upset that school consists entirely of being taught how to look things up - referred to as "Wristpad 101" - instead of actually having to "learn" anything.

Myself, I say, bring it on! I like being able to point people to Wikipedia, or the TV Tropes Wiki, or wherever, when I need to cross an inferential distance.

The problem is that people lose the ability to create connections that aren't spoon-fed to them. It would be fun to see the effect of A/B testing on Wikipedia, where article A has links to articles with positive connotations and article B negative ones.

TLDR

I find that I have to bridge the inferential distance myself, in order to get people to read things. Maybe I'm just linking to the wrong things.

In the case of TV Tropes, at least, the bigger difficulty is often in getting people to stop reading it.

This is the blessing and curse of all really interesting wikis. They make it easy and fun to build up a large collection of miscellaneous knowledge about some subjects. There has got to be some way to apply this more generally to education.

Added as a reference to the General knowledge article.

Having a good and accurate framework of facts already built up in your brain helps you evaluate new facts, though. The internet is great, but to a certain extent, how useful it is at finding you new facts is directly based on how good you are at evaluating the reliability of those new facts, and that is largely based on having facts already in your head.

And beyond that, you have to have enough facts to know what to look for in the first place. You have to be able to recognize interesting things when you hear them in context. If I say to you "Zanzibar has the most rapidly grown industrial sector of any nation with a similar GDP in the region", you have to have a lot of facts already stored in your brain to make any sense out of that; otherwise it just sounds like I said "Blah blah has the blah of any blah with a blah blah blah in the blah". Wikipedia and Google won't help you at that point.

In 'Proust and the Squid', Maryanne Wolf talks about just that, how external reading and writing skills behave as a kind of storage area for brain contents. I can't remember the exact passage (I guess because I have it written down in a book at home) but she talks about how we don't write things down to remember them, but so that it's okay for us to forget them. She goes into an analysis of a few cultures and their strengths and weaknesses when it comes to writing, reading, and memory. Very related and a good read. It follows along a bit with Plato's Phaedrus, the story of Socrates' objection to the written word.

I think it's interesting the way what you have memorized, exactly, seems to change based on where you are or what you are doing. I'm sure most of us without eidetic memories have experienced the sudden loss of some memorized bit of information, only to remember it with ease a few hours later.

Memories of certain friends seem completely solid and close when you are around them but utterly inaccessible otherwise, never entering into your day-to-day thought processes. I often wonder if this is a brain-wide effect, with different tools in the toolbox besides memory being triggered or not based on your environment. It would be strange if some environments triggered tools in your brain that increased your skill at a task, tools that would go forgotten at another time. I think I ran into an example of that the other Friday. I got my arm stuck past the elbow in a narrow metal slat, reaching for something in a warehouse after hours. Legs off the ground, lacking leverage and totally unable to free myself, I struggled for a while, then just sat around thinking, trying to figure out what to do. After half an hour, I realized I could spit on my arm to lubricate it and slip it out of the slat; some gross struggling and a couple minutes later I was free, if really bruised up. I feel like I would have come across that solution faster if my worries hadn't been tending toward being stuck all weekend in the warehouse.

My father told me long ago that intelligence isn't knowing the answer - it is knowing where to find the answer. After all of my education and learning I find this to be a very true statement.

It's not clear to me whether this means that we should be alarmed and seek to hone our factual memories... or whether we should devote our attention to honing our Google-fu, as our minds gradually become server-side operations.

I do analysis for a living and am constantly asked detailed, on-the-spot questions about very specific things, so I have to hone those memories. However, when I was learning how to do this better (through associations and number games), I realized that I was simply being a human Google search.

In my profession, one of a number of professional courses addressed the exact problem the powers that be were seeing from their briefers: good information, not enough analysis. So they re-focused the school and made it analytically oriented. The analysis field has changed dramatically in the past few years and as a result is taking more of a pattern-recognition and pre-emptive-strategies approach.

So I don't think we should be worried at all, or need to hone our memories in the abstract for bits of data. In fact, given the error-prone nature of our mental hardware, I would suggest that trying to do so does a disservice to factual recall. Instead, we can "outsource" the storage to a more accurate replication medium (like a reliable set of computerized bookmarks) and apply reasoning to this complex data in its unchanged form, re-programming our internal brain storage to hold, rather than the data itself, the database that what we want to know is under (the "folder" it's in rather than the contents of the "file").
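A minimal sketch (my own, not from the comment, with made-up topics and locations) of this "remember the folder, not the file" strategy: the head holds only an index of where each fact lives, and the contents are fetched on demand.

```python
# Sketch: an in-head "card catalog" mapping topics to storage locations.
# All topics and locations here are illustrative only.
index = {
    "recycling rules": "list on the fridge",
    "syscall arguments": "man pages, section 2",
    "growing seasons": "village elders / Wikipedia",
}

def where_to_look(topic):
    """Recall the storage location, not the fact itself."""
    return index.get(topic, "search the wider pool (Google)")

print(where_to_look("syscall arguments"))  # man pages, section 2
print(where_to_look("Zanzibar trivia"))    # search the wider pool (Google)
```

The design point matches the comment: the index is small and stable, so it is cheap to keep accurate in fallible wetware, while the bulky, change-prone contents stay in the more reliable external medium.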

The question becomes how do we make the card-catalog of our mind faster and more efficient?