There is an interesting debate happening on the Net at the moment, centered on Nicholas Carr and his recent book The Shallows: What the Internet Is Doing to Our Brains. It was first brought to my attention by a friend, and the conversation has since bloomed into a debate over the merits of learning by means of the Internet.
Carr’s analysis is compelling. In a recent Wired article, he suggests the following:
The problem is that skimming is becoming our dominant mode of thought. Once a means to an end, a way to identify information for further study, it’s becoming an end in itself—our preferred method of both learning and analysis. Dazzled by the Net’s treasures, we are blind to the damage we may be doing to our intellectual lives and even our culture.
Carr’s research leads him to conclude that, while the Internet does allow us to connect pieces of information more quickly, it is ultimately rewiring our brains so that we are now incapable of focusing on a subject for more than a few minutes at a time. In essence, we have learned to multitask better, but that skill now dominates what once characterized human thought: the ability to process information singularly and deeply, which fosters understanding and advances the imagination. Again, an excerpt from Carr’s Wired article:
In a Science article published in early 2009, prominent developmental psychologist Patricia Greenfield reviewed more than 40 studies of the effects of various types of media on intelligence and learning ability. She concluded that “every medium develops some cognitive skills at the expense of others.” Our growing use of the Net and other screen-based technologies, she wrote, has led to the “widespread and sophisticated development of visual-spatial skills.” But those gains go hand in hand with a weakening of our capacity for the kind of “deep processing” that underpins “mindful knowledge acquisition, inductive analysis, critical thinking, imagination, and reflection.”
Carr suggests that while the Internet does have benefits for human cognition (e.g. improved visual-spatial skills), these gains come at the expense of other, more valuable skills.
The crux of Carr’s argument rests on what we know about the capacity of our working memory, the demands on which are known in psychological circles as cognitive load, and the time and energy we expend each time our attention is diverted and refocused, termed switching costs. The former refers to how much information we can hold in working memory at any given time, while the latter refers to the neural activity required to reorient ourselves onto a new focus. Switching costs have been found to increase cognitive load, implying that the more we divert our attention, the more our working memory is taxed, and as a result the less information we retain.
Carr is not without his critics, however. Steven Pinker, writing in a New York Times op-ed, counters:
Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.
Pinker cautions us to recognize that distraction has been with humanity since time immemorial; diverting attention is nothing new. Neuroscientists affirm that every new experience causes the brain to form new neural pathways. Take walking down a street in a new city, for example: the mind is forced to take in countless novel sensory experiences, smells, sounds, sights, and so forth. Those inputs set the brain’s activity alight as it learns what to filter, what to habituate to, and what to store in long-term memory. Pinker also cites research on the transfer of experience: does training in one domain carry over to another? The research seems to suggest that it does not:
Moreover, as the psychologists Christopher Chabris and Daniel Simons show in their new book “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us,” the effects of experience are highly specific to the experiences themselves. If you train people to do one thing (recognize shapes, solve math puzzles, find hidden words), they get better at doing that thing, but almost nothing else. Music doesn’t make you better at math, conjugating Latin doesn’t make you more logical, brain-training games don’t make you smarter. Accomplished people don’t bulk up their brains with intellectual calisthenics; they immerse themselves in their fields. Novelists read lots of novels, scientists read lots of science.
While Pinker points to research to bolster his opinion, Eric Schonfeld over at TechCrunch is even less forgiving, resorting to more cynical, ad hominem attacks (although Schonfeld does cite Pinker’s article in his piece):
…his arguments also strike me as incredibly self-serving. After all, he is an author who makes money writing books. Of course he is going to argue that they make you smarter than the Web, with all of its neurological distractions. Carr is the master of technological alarmism. It sells his books and provokes debate, and this time is no exception…
He goes on to conclude:
Maybe Carr’s neural pathways are set already and this kind of experience is too jarring for him. But I kind of doubt that—he is quite adept at the ways of the Web. I have another theory. Maybe what he really finds objectionable is a world where readers are no longer content to let the full waterfall of an author’s words wash over them, and then sit and contemplate the genius of those words in isolation from any other words, and how fortunate they are to have gotten a glimpse into the author’s mind for only the $18 price of a hardcover from Amazon.
What to make of the debate? Does Carr have a valid point? Does the research prove him wrong, as Pinker suggests? Or is Carr simply being “alarmist,” bashing the Internet in order to sell books? I tend to agree with Carr’s analysis, but I can’t in good conscience dismiss Pinker’s, since he relies on research as well. I wouldn’t go as far as the author of the TechCrunch article, whose argument rests on too many assumptions.
Regardless, the question warrants further exploration…