Overview of Major Writing Project 1: A 1½-Page Summary

– In the first paragraph (introduction), introduce the writer, state the title of the essay, briefly give basic information about the author (not more than a sentence — see bio of Kelly on first page of his essay in the PDF), and give a brief summary statement of Kelly’s essay (one sentence). At this point you will not include a statement of your own thesis (or “I say”). Note: Kevin Kelly identifies as a man (he, his, him).
– In the second paragraph, give a more developed summary of Kelly’s essay, making sure that you inhabit the worldview of the author (play the “believing game”). Use at least 3 verbs from the list on pages 40-41 and highlight them (bold) in your text. Make sure you are using those verbs correctly; when in doubt, always use a dictionary. Make sure you quote correctly and appropriately. Make sure you review “Overview of Major Writing Project 1” to make sure you’re including everything you need in your summary.
– Please double space your document.
Better than Human: Why Robots Will – and Must – Take Our Jobs
KEVIN KELLY

KEVIN KELLY was a founding member of Wired and served as its executive editor for six years. He is now “senior maverick” at Wired and the editor of the Cool Tools website. His books include Cool Tools: A Catalog of Possibilities (2013), What Technology Wants (2010), and New Rules for the New Economy (1998). This essay first appeared on the Wired website on December 24, 2012.

Imagine that 7 out of 10 working Americans got fired tomorrow. What would they all do?
IT’S HARD TO BELIEVE you’d have an economy at all if you gave pink slips to more than half the labor force. But that – in slow motion – is what the industrial revolution did to the workforce of the early 19th century. Two hundred years ago, 70 percent of American workers lived on the farm. Today automation has eliminated all but 1 percent of their jobs, replacing them (and their work animals) with machines. But the displaced workers did not sit idle. Instead, automation created hundreds of millions of jobs in entirely new fields. Those who once farmed were now manning the legions of factories that churned out farm equipment, cars, and other industrial products. Since then, wave upon wave of new occupations have arrived – appliance repairman, offset printer, food chemist, photographer, web designer – each building on previous automation. Today, the vast majority of us are doing jobs that no farmer from the 1800s could have imagined.
It may be hard to believe, but before the end of this century 70 percent of today’s occupations will likewise be replaced by automation. Yes, dear reader, even you will have your job taken away by machines. In other words, robot replacement is just a matter of time. This upheaval is being led by a second wave of automation, one that is centered on artificial cognition, cheap sensors, machine learning, and distributed smarts. This deep automation will touch all jobs, from manual labor to knowledge work.

[For more on ways to address a skeptical reader, see Chapter 6.]

First, machines will consolidate their gains in already-automated industries. After robots finish replacing assembly line workers, they will replace the workers in warehouses. Speedy bots able to lift 150 pounds all day long will retrieve boxes, sort them, and load them onto trucks. Fruit and vegetable picking will continue to be robotized until no humans pick outside of specialty farms. Pharmacies will feature a single pill-dispensing robot in the back while the pharmacists focus on patient consulting. Next, the more dexterous chores of cleaning in offices and schools will be taken over by late-night robots, starting with easy-to-do floors and windows and eventually getting to toilets. The highway legs of long-haul trucking routes will be driven by robots embedded in truck cabs.
All the while, robots will continue their migration into white-collar work. We already have artificial intelligence in many of our machines; we just don’t call it that. Witness one piece of software by Narrative Science . . . that can write newspaper stories about sports games directly from the games’ stats or generate a synopsis of a company’s stock performance each day from bits of text around the web. Any job dealing with reams of paperwork will be taken over by bots, including much of medicine. Even those areas of medicine not defined by paperwork, such as surgery, are becoming increasingly robotic. The rote tasks of any information-intensive job can be automated. It doesn’t matter if you are a doctor, lawyer, architect, reporter, or even programmer: The robot takeover will be epic.
And it has already begun.
Here’s why we’re at the inflection point: Machines are acquiring smarts.
We have preconceptions about how an intelligent robot should look and act, and these can blind us to what is already happening around us. To demand that artificial intelligence be humanlike is the same flawed logic as demanding that artificial flying be birdlike, with flapping wings. Robots will think different. To see how far artificial intelligence has penetrated our lives, we need to shed the idea that they will be humanlike.
Consider Baxter, a revolutionary new workbot from Rethink Robotics. Designed by Rodney Brooks, the former MIT professor who invented the best-selling Roomba vacuum cleaner and its descendants, Baxter is an early example of a new class of industrial robots created to work alongside humans. Baxter does not look impressive. It’s got big strong arms and a flatscreen display like many industrial bots. And Baxter’s hands perform repetitive manual tasks, just as factory robots do. But it’s different in three significant ways.
First, it can look around and indicate where it is looking by shifting the cartoon eyes on its head. It can perceive humans working near it and avoid injuring them. And workers can see whether it sees them. Previous industrial robots couldn’t do this, which means that working robots have to be physically segregated from humans. The typical factory robot is imprisoned within a chain-link fence or caged in a glass case. They are simply too dangerous to be around, because they are oblivious to others. This isolation prevents such robots from working in a small shop, where isolation is not practical. Optimally, workers should be able to get materials to and from the robot or to tweak its controls by hand throughout the workday; isolation makes that difficult. Baxter, however, is aware. Using force-feedback technology to feel if it is colliding with a person or another bot, it is courteous. You can plug it into a wall socket in your garage and easily work right next to it.

[Figure: Baxter, a workbot created to work alongside humans.]
Second, anyone can train Baxter. It is not as fast, strong, or precise as other industrial robots, but it is smarter. To train the bot you simply grab its arms and guide them in the correct motions and sequence. It’s a kind of “watch me do this” routine. Baxter learns the procedure and then repeats it. Any worker is capable of this show-and-tell; you don’t even have to be literate. Previous workbots required highly educated engineers and crack programmers to write thousands of lines of code (and then debug them) in order to instruct the robot in the simplest change of task. The code has to be loaded in batch mode, i.e., in large, infrequent batches, because the robot cannot be reprogrammed while it is being used. Turns out the real cost of the typical industrial robot is not its hardware but its operation. Industrial robots cost $100,000-plus to purchase but can require four times that amount over a lifespan to program, train, and maintain. The costs pile up until the average lifetime bill for an industrial robot is half a million dollars or more.
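The “watch me do this” routine Kelly describes is what roboticists call programming by demonstration: record the arm’s poses while a person guides it, then replay the trajectory. Here is a minimal Python sketch of that idea; the names (TeachableArm, read_joint_angles, command_joint_angles) are hypothetical stand-ins, not Rethink Robotics’ actual API.

```python
import time

class TeachableArm:
    """Toy model of 'grab its arms and guide them' training."""

    def __init__(self):
        self.recorded = []  # list of (seconds_from_start, joint_angles)

    def record_demonstration(self, read_joint_angles, seconds, hz=20):
        """Sample joint angles while a person physically guides the arm."""
        self.recorded = []
        start = time.time()
        while time.time() - start < seconds:
            self.recorded.append((time.time() - start, read_joint_angles()))
            time.sleep(1.0 / hz)

    def replay(self, command_joint_angles):
        """Repeat the demonstrated motion with the original timing."""
        start = time.time()
        for t, angles in self.recorded:
            while time.time() - start < t:
                time.sleep(0.001)
            command_joint_angles(angles)

# Example with stubbed hardware callbacks:
if __name__ == "__main__":
    arm = TeachableArm()
    arm.record_demonstration(lambda: [0.0, 0.5, 1.0], seconds=1, hz=10)
    arm.replay(lambda angles: print("move to", angles))
```

The point of the sketch is the contrast Kelly draws: teaching by guiding replaces thousands of lines of batch-loaded code with a recording that any worker can make.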
The third difference, then, is that Baxter is cheap. Priced at $22,000, it’s in a different league compared with the $500,000 total bill of its predecessors. It is as if those established robots, with their batch-mode programming, are the mainframe computers of the robot world, and Baxter is the first PC robot. It is likely to be dismissed as a hobbyist toy, missing key features like sub-millimeter precision, and not serious enough. But as with the PC, and unlike the mainframe, the user can interact with it directly, immediately, without waiting for experts to mediate – and use it for nonserious, even frivolous things. It’s cheap enough that small-time manufacturers can afford one to package up their wares or custom paint their product or run their 3-D printing machine. Or you could staff up a factory that makes iPhones.
Baxter was invented in a century-old brick building near the Charles River in Boston. In 1895 the building was a manufacturing marvel in the very center of the new manufacturing world. It even generated its own electricity. For a hundred years the factories inside its walls changed the world around us. Now the capabilities of Baxter and the approaching cascade of superior robot workers spur Brooks to speculate on how these robots will shift manufacturing in a disruption greater than the last revolution. Looking out his office window at the former industrial neighborhood, he says, “Right now we think of manufacturing as happening in China. But as manufacturing costs sink because of robots, the costs of transportation become a far greater factor than the cost of production. Nearby will be cheap. So we’ll get this network of locally franchised factories, where most things will be made within 5 miles of where they are needed.”
That may be true of making stuff, but a lot of jobs left in the world for humans are service jobs. I ask Brooks to walk with me through a local McDonald’s and point out the jobs that his kind of robots can replace. He demurs and suggests it might be 30 years before robots will cook for us. “In a fast food place you’re not doing the same task very long. You’re always changing things on the fly, so you need special solutions. We are not trying to sell a specific solution. We are building a general-purpose machine that other workers can set up themselves and work alongside.” And once we can cowork with robots right next to us, it’s inevitable that our tasks will bleed together, and our old work will become theirs – and our new work will become something we can hardly imagine.
To understand how robot replacement will happen, it’s useful to break down our relationship with robots into four categories, as summed up in this chart:
                 HUMAN                                  MACHINE

EXISTING JOBS    A. Jobs today that humans do –         B. Current jobs that humans
                 but machines will eventually           can’t do but machines can.
                 do better.

NEW JOBS         D. Jobs that only humans will          C. Robot jobs that we can’t
                 be able to do – at first.              even imagine yet.
The rows indicate whether robots will take over existing jobs or make new ones, and the columns indicate whether these jobs seem (at first) like jobs for humans or for machines.
Let’s begin with quadrant A: jobs humans can do but robots can do even better. Humans can weave cotton cloth with great effort, but automated looms make perfect cloth, by the mile, for a few cents. The only reason to buy handmade cloth today is because you want the imperfections humans introduce. We no longer value irregularities while traveling 70 miles per hour, though – so the fewer humans who touch our car as it is being made, the better.

And yet for more complicated chores, we still tend to believe computers and robots can’t be trusted. That’s why we’ve been slow to acknowledge how they’ve mastered some conceptual routines, in some cases even surpassing their mastery of physical routines. A computerized brain known as the autopilot can fly a 787 jet unaided, but irrationally we place human pilots in the cockpit to babysit the autopilot “just in case.” In the 1990s, computerized mortgage appraisals replaced human appraisers wholesale. Much tax preparation has gone to computers, as well as routine x-ray analysis and pretrial evidence-gathering – all once done by highly paid smart people. We’ve accepted utter reliability in robot manufacturing; soon we’ll accept it in robotic intelligence and service.

Next is quadrant B: jobs that humans can’t do but robots can. A trivial example: Humans have trouble making a single brass screw unassisted, but automation can produce a thousand exact ones per hour. Without automation, we could not make a single computer chip – a job that requires degrees of precision, control, and unwavering attention that our animal bodies don’t possess. Likewise no human, indeed no group of humans, no matter their education, can quickly search through all the web pages in the world to uncover the one page revealing the price of eggs in Katmandu yesterday. Every time you click on the search button you are employing a robot to do something we as a species are unable to do alone.
While the displacement of formerly human jobs gets all the headlines, the greatest benefits bestowed by robots and automation come from their occupation of jobs we are unable to do. We don’t have the attention span to inspect every square millimeter of every CAT scan looking for cancer cells. We don’t have the millisecond reflexes needed to inflate molten glass into the shape of a bottle. We don’t have an infallible memory to keep track of every pitch in Major League Baseball and calculate the probability of the next pitch in real time.

We aren’t giving “good jobs” to robots. Most of the time we are giving them jobs we could never do. Without them these jobs would remain undone.
Now let’s consider quadrant C, the new jobs created by automation – including the jobs that we did not know we wanted done. This is the greatest genius of the robot takeover: With the assistance of robots and computerized intelligence, we already can do things we never imagined doing 150 years ago. We can remove a tumor in our gut through our navel, make a talking-picture video of our wedding, drive a cart on Mars, print a pattern on fabric that a friend mailed to us through the air. We are doing, and are sometimes paid for doing, a million new activities that would have dazzled and shocked the farmers of 1850. These new accomplishments are not merely chores that were difficult before. Rather they are dreams that are created chiefly by the capabilities of the machines that can do them. They are jobs the machines make up.
Before we invented automobiles, air-conditioning, flatscreen video displays, and animated cartoons, no one living in ancient Rome wished they could watch cartoons while riding to Athens in climate-controlled comfort. Two hundred years ago not a single citizen of Shanghai would have told you that they would buy a tiny slab that allowed them to talk to faraway friends before they would buy indoor plumbing. Crafty AIs embedded in first-person-shooter games have given millions of teenage boys the urge, the need, to become professional game designers – a dream that no boy in Victorian times ever had. In a very real way our inventions assign us our jobs. Each successful bit of automation generates new occupations – occupations we would not have fantasized about without the prompting of the automation.

To reiterate, the bulk of new tasks created by automation are tasks only other automation can handle. Now that we have search engines like Google, we set the servant upon a thousand new errands. Google, can you tell me where my phone is? Google, can you match the people suffering depression with the doctors selling pills? Google, can you predict when the next viral epidemic will erupt? Technology is indiscriminate this way, piling up possibilities and options for both humans and machines.
It is a safe bet that the highest-earning professions in the year 2050 will depend on automations and machines that have not been invented yet. That is, we can’t see these jobs from here, because we can’t yet see the machines and technologies that will make them possible. Robots create jobs that we did not even know we wanted done.
Finally, that leaves us with quadrant D, the jobs that only humans can do – at first. The one thing humans can do that robots can’t (at least for a long while) is to decide what it is that humans want to do. This is not a trivial trick; our desires are inspired by our previous inventions, making this a circular question.
When robots and automation do our most basic work, making it relatively easy for us to be fed, clothed, and sheltered, then we are free to ask, “What are humans for?” Industrialization did more than just extend the average human lifespan. It led a greater percentage of the population to decide that humans were meant to be ballerinas, full-time musicians, mathematicians, athletes, fashion designers, yoga masters, fan-fiction authors, and folks with one-of-a-kind titles on their business cards. With the help of our machines, we could take up these roles; but of course, over time, the machines will do these as well. We’ll then be empowered to dream up yet more answers to the question “What should we do?” It will be many generations before a robot can answer that.
This postindustrial economy will keep expanding, even though most of the work is done by bots, because part of your task tomorrow will be to find, make, and complete new things to do, new things that will later become repetitive jobs for the bots. In the coming years robot-driven cars and trucks will become ubiquitous; this automation will spawn the new human occupation of trip optimizer, a person who tweaks the traffic system for optimal energy and time usage. Routine robo-surgery will necessitate the new skills of keeping machines sterile. When automatic self-tracking of all your activities becomes the normal thing to do, a new breed of professional analysts will arise to help you make sense of the data. And of course we will need a whole army of robot nannies, dedicated to keeping your personal bots up and running. Each of these new vocations will in turn be taken over by robots later.
The real revolution erupts when everyone has personal workbots, the descendants of Baxter, at their beck and call.
Imagine you run a small organic farm. Your fleet of worker bots do all the weeding, pest control, and harvesting of produce, as directed by an overseer bot, embodied by a mesh of probes in the soil. One day your task might be to research which variety of heirloom tomato to plant; the next day it might be to update your custom labels. The bots perform everything else that can be measured.

Right now it seems unthinkable: We can’t imagine a bot that can assemble a stack of ingredients into a gift or manufacture spare parts for our lawn mower or fabricate materials for our new kitchen. We can’t imagine our nephews and nieces running a dozen workbots in their garage, churning out inverters for their friend’s electric-vehicle startup. We can’t imagine our children becoming appliance designers, making custom batches of liquid-nitrogen dessert machines to sell to the millionaires in China. But that’s what personal robot automation will enable.
Everyone will have access to a personal robot, but simply owning one will not guarantee success. Rather, success will go to those who innovate in the organization, optimization, and customization of the process of getting work done with bots and machines. Geographical clusters of production will matter, not for any differential in labor costs but because of the differential in human expertise. In human-robot symbiosis, our human assignment will be to keep making jobs for robots – and that is a task that will never be finished. So we will always have at least that one “job.”
In the coming years our relationships with robots will become ever more complex. But already a recurring pattern is emerging. No matter what your current job or your salary, you will progress through these Seven Stages of Robot Replacement, again and again:
1. A robot/computer cannot possibly do the tasks I do.
[Later.]
2. OK, it can do a lot of them, but it can’t do everything I do.
[Later.]
3. OK, it can do everything I do, except it needs me when it breaks down, which is often.
[Later.]
4. OK, it operates flawlessly on routine stuff, but I need to train it for new tasks.
[Later.]
5. OK, it can have my old boring job, because it’s obvious that was not a job that humans were meant to do.
[Later.]
6. Wow, now that robots are doing my old job, my new job is much more fun and pays more!
[Later.]
7. I’m so glad a robot/computer cannot possibly do what I do now.
This is not a race against the machines. If we race against them, we lose. This is a race with the machines. You’ll be paid in the future based on how well you work with robots. Ninety percent of your coworkers will be unseen machines. Most of what you do will not be possible without them. And there will be a blurry line between what you do and what they do. You might no longer think of it as a job, at least at first, because anything that seems like drudgery will be done by robots.

We need to let robots take over. They will do jobs we have been doing, and do them much better than we can. They will do jobs we can’t do at all. They will do jobs we never imagined
even needed to be done. And they will help us discover new jobs for ourselves, new tasks that expand who we are. They will let us focus on becoming more human than we were.
Let the robots take the jobs, and let them help us dream up new work that matters.
Joining the Conversation
1. Kevin Kelly argues that machines will eventually take over many of the jobs that we now perform. This scenario may seem dire, yet he doesn’t appear at all worried. To the contrary, in fact. Why not? Find statements in the article that explain his attitude.
2. This article appeared in Wired, a magazine for people who know and care about digital technology. How is the article geared toward a pro-technology audience? How might Kelly have presented his argument for a readership that was less enthusiastic about technology?
3. Though he acknowledges that some of his ideas are “hard to believe,” Kelly does not begin by saying explicitly what other ideas or assumptions he’s responding to. How does he begin, and how does that beginning set the stage for his argument?
4. Nicholas Carr (pp. 313-29) is less optimistic than Kelly about the future impact of technology. Who do you find more persuasive, and why?
5. Kelly concludes by saying that robots will help us find “new work that matters.” Does that outcome seem likely? Write an essay responding to that assertion, perhaps focusing on one profession that interests you.
Is Google Making Us Stupid?
NICHOLAS CARR
“DAVE, STOP. STOP, WILL YOU? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial “brain.” “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”
I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going – so far as I can tell – but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

[Figure: Dave (Keir Dullea) removes HAL’s “brain” in 2001: A Space Odyssey.]

NICHOLAS CARR writes frequently on issues of technology and culture. His books include The Big Switch: Rewiring the World, from Edison to Google (2008), The Shallows: What the Internet Is Doing to Our Brains (2010), The Glass Cage: How Our Computers Are Changing Us (2014), and Utopia Is Creepy (2016). Carr also has written for periodicals including the Guardian, the New York Times, the Wall Street Journal, and Wired, and he blogs at roughtype.com. This essay appeared originally as the cover article in the July/August 2008 issue of the Atlantic.
I think I know what’s going on. For more than a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I’ve got the telltale fact or pithy quote I was after. Even when I’m not working, I’m as likely as not to be foraging in the Web’s info-thickets, reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they’re sometimes likened, hyperlinks don’t merely point to related works; they propel you toward them.)

For me, as for others, the Net is becoming a universal
medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.
I’m not the only one. When I mention my troubles with reading to friends and acquaintances – literary types, most of them – many say they’re having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. “I was a lit major in college, and used to be [a] voracious book reader,” he wrote. “What happened?” He speculates on the answer: “What if I do all my reading on the web not so much because the way I read has changed, i.e. I’m just seeking convenience, but because the way I think has changed?”
Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how the Internet has altered his mental habits. “I now have almost totally lost the ability to read and absorb a longish article on the web or in print,” he wrote earlier this year. A pathologist who has long been on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a “staccato” quality, reflecting the way he quickly scans short passages of text from many sources online. “I can’t read War and Peace anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”
Anecdotes alone don’t prove much. And we still await the long-term neurological and psychological experiments that will provide a definitive picture of how Internet use affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a U.K. educational consortium, that provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited “a form of skimming activity,” hopping from one source to another and rarely returning to any source they’d already visited. They typically read no more than one or two pages of an article or book before they would “bounce” out to another site. Sometimes they’d save a long article, but there’s no evidence that they ever went back and actually read it. The authors of the study report:
It is clear that users are not reading online in the traditional sense;
indeed there are signs that new forms of “reading” are emerging
as users “power browse” horizontally through titles, contents pages
and abstracts going for quick wins. It almost seems that they go
online to avoid reading in the traditional sense.
Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it’s a different kind of reading, and behind it lies a different kind of thinking – perhaps even a new sense of the self. “We are not only what we read,” says Maryanne Wolf, a developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain. “We are how we read.” Wolf worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.
Reading, explains Wolf, is not an instinctive skill for human beings. It’s not etched into our genes the way speech is. We have to teach our minds how to translate the symbolic characters we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains. Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.
Sometime in 1882, Friedrich Nietzsche bought a typewriter – a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.

[Figure: Friedrich Nietzsche and his Malling-Hansen Writing Ball.]
But the machine had a subtler effect on his work. One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.”

“You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts.” Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.”
The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that’s not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to reprogram itself on the fly, altering the way it functions.”

As we use what the sociologist Daniel Bell has called our “intellectual technologies” – the tools that extend our mental rather than our physical capacities – we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences.” The “abstract framework of divided time” became “the point of reference for both action and thought.”
The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.
The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.
The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.

When the Net absorbs a medium, that medium is re-created
in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration.

The Net’s influence doesn’t end at the edges of a computer screen, either. As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets. When, in March of this year, the New York Times decided to devote the second and third pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day’s news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.

Never has a communications system played so many roles in our lives – or exerted such broad influence over our thoughts – as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. The Net’s intellectual ethic remains obscure.
About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant’s machinists. With the approval of Midvale’s owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions – an “algorithm,” we might say today – for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.
More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography – his “system,” as he liked to call it – was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”

[Figure: A testing engineer (possibly Taylor) observes a Midvale Steel worker c. 1885.]
Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method” – the perfect algorithm – to carry out every mental movement of what we’ve come to describe as “knowledge work.”
Google’s headquarters, in Mountain View, California – the Googleplex – is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.

[Figure: The Googleplex.]
The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people – or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?

Still, their easy assumption that we’d all “be better off” if
our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web – the more links we click and pages we view – the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link – the more crumbs the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.
Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong – the new technology did often have the effects he feared – but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).
The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.

[See pp. 31-33 for tips on putting yourself in their shoes.]

So, yes, you should be skeptical of my skepticism. Perhaps
those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.
If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what’s at stake:
I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality – a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self – evolving under the pressure of information overload and the technology of the “instantly available.”
As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’ – spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”
I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut – “I can feel it. I can feel it. I’m afraid” – and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
Joining the Conversation
1. “Is Google making us stupid?” How does Nicholas Carr answer this question, and what evidence does he provide to support his answer?
2. What possible objections to his own position does Carr introduce – and why do you think he does so? How effectively does he counter these objections?
3. Carr begins this essay by quoting an exchange between HAL and Dave, a supercomputer and an astronaut in the film 2001: A Space Odyssey – and he concludes by reflecting on that scene. What happens to HAL and Dave, and how does this outcome support his argument?
4. How does Carr use transitions to connect the parts of his text and to help readers follow his train of thought? (See Chapter 8 to help you think about how transitions help develop an argument.)

5. In his essay on pages 441-61, Clive Thompson reaches a different conclusion than Carr does, saying that “At their best, today’s digital tools help us see more, retain more, communicate more. At their worst, they leave us prey to the manipulation of the toolmakers. But on balance . . . what is happening is deeply positive.” Write a paragraph or two discussing how Carr might respond. What would he agree with, and what would he disagree with?
6. This article sparked widespread debate and conversation when it first appeared in 2008, and the discussion continues today. Go to theysayiblog.com and click on “Are We in a Race against the Machine?” to read some of what’s been written on the topic recently.
Smarter Than You Think: How Technology Is Changing Our Minds for the Better
CLIVE THOMPSON

CLIVE THOMPSON is a journalist and blogger who writes for the New York Times Magazine and Wired. He was awarded a 2002 Knight Science Journalism Fellowship at MIT. He blogs at clivethompson.net. This essay is adapted from his book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better (2013).
WHO’S BETTER AT CHESS – computers or humans? The question has long fascinated observers, perhaps because chess seems like the ultimate display of human thought: the players sit like Rodin’s Thinker, silent, brows furrowed, making lightning-fast calculations. It’s the quintessential cognitive activity, logic as an extreme sport.

[Figure: The Thinker, by French sculptor Auguste Rodin (1840-1917).]

So the idea of a machine outplaying a human has always provoked both excitement and dread. In the eighteenth century, Wolfgang von Kempelen caused a stir with his clockwork Mechanical Turk – an automaton that played an eerily good game of chess, even beating Napoleon Bonaparte. The spectacle was so unsettling that onlookers cried out in astonishment
when the Turk’s gears first clicked into motion. But the gears, and the machine, were fake; in reality, the automaton was controlled by a chess savant cunningly tucked inside the wooden cabinet. In 1915, a Spanish inventor unveiled a genuine, honest-to-goodness robot that could actually play chess – a simple endgame involving only three pieces, anyway. A writer for Scientific American fretted that the inventor “Would Substitute Machinery for the Human Mind.”
Eighty years later, in 1997, this intellectual standoff clanked to a dismal conclusion when world champion Garry Kasparov was defeated by IBM’s Deep Blue supercomputer in a tournament of six games. Faced with a machine that could calculate two hundred million positions a second, even Kasparov’s notoriously aggressive and nimble style broke down. In its final game, Deep Blue used such a clever ploy – tricking Kasparov into letting the computer sacrifice a knight – that it trounced him in nineteen moves. “I lost my fighting spirit,” Kasparov said afterward, pronouncing himself “emptied completely.” Riveted, the journalists announced a winner. The cover of Newsweek proclaimed the event “The Brain’s Last Stand.” Doomsayers predicted that chess itself was over. If machines could out-think even Kasparov, why would the game remain interesting? Why would anyone bother playing? What’s the challenge?
Then Kasparov did something unexpected.
The truth is, Kasparov wasn’t completely surprised by Deep Blue’s victory. Chess grand masters had predicted for years that computers would eventually beat humans, because they understood the different ways humans and computers play. Human chess players learn by spending years studying the world’s best opening moves and endgames; they play thousands of games, slowly amassing a capacious, in-brain library of which strategies triumphed and which flopped. They analyze their opponents’ strengths and weaknesses, as well as their moods. When they look at the board, that knowledge manifests as intuition – a eureka moment when they suddenly spy the best possible move.
In contrast, a chess-playing computer has no intuition at all. It analyzes the game using brute force; it inspects the pieces currently on the board, then calculates all options. It prunes away moves that lead to losing positions, then takes the promising ones and runs the calculations again. After doing this a few times-and looking five or seven moves out-it arrives at a few powerful plays. The machine’s way of “thinking” is fundamentally unhuman. Humans don’t sit around crunching every possible move, because our brains can’t hold that much information at once. If you go eight moves out in a game of chess, there are more possible games than there are stars in our galaxy. If you total up every game possible? It outnumbers the atoms in the known universe. Ask chess grand masters, “How many moves can you see out?” and they’ll likely deliver the answer attributed to the Cuban grand master Jose Raul Capablanca: “One, the best one.”
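[For readers who want to see the brute-force idea in miniature, here is a rough sketch in Python. It is not Deep Blue’s actual program-just the textbook look-ahead search the paragraph describes, and the Game interface (legal_moves, play, evaluate, is_over) is a hypothetical stand-in for a real engine’s internals.]

    # A minimal minimax sketch of the brute-force search described above.
    # Hypothetical Game interface: legal_moves() -> iterable of moves,
    # play(move) -> new Game, evaluate() -> score for the maximizing side,
    # is_over() -> bool.

    def minimax(game, depth, maximizing):
        """Score a position by looking `depth` moves ahead."""
        if depth == 0 or game.is_over():
            return game.evaluate()  # static score of the position
        scores = [minimax(game.play(m), depth - 1, not maximizing)
                  for m in game.legal_moves()]
        # "Pruning" in the simplest sense: losing branches are never chosen.
        return max(scores) if maximizing else min(scores)

    def best_move(game, depth=6):
        """Capablanca's answer, computed the machine's way: the one best move."""
        return max(game.legal_moves(),
                   key=lambda m: minimax(game.play(m), depth - 1, False))

[Even at a modest depth of six, the number of positions examined explodes combinatorially-which is why, as the next paragraph notes, the contest ultimately came down to raw speed.]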
The fight between computers and humans in chess was, as Kasparov knew, ultimately about speed. Once computers could see all games roughly seven moves out, they would wear humans down. A person might make a mistake; the computer wouldn’t. Brute force wins. As he pondered Deep Blue, Kasparov mused on these different cognitive approaches.
It gave him an audacious idea. What would happen if, instead of competing against one another, humans and computers collaborated? What if they played on teams together-one computer and a human facing off against another human and a computer? That way, he theorized, each might benefit
from the other’s peculiar powers. The computer would bring the lightning-fast-if uncreative-ability to analyze zillions of moves, while the human would bring intuition and insight, the ability to read opponents and psych them out. Together, they would form what chess players later called a centaur: a hybrid beast endowed with the strengths of each.
[10] In June 1998, Kasparov played the first public game of human-computer collaborative chess, which he dubbed “advanced chess,” against Veselin Topalov, a top-rated grand master. Each used a regular computer with off-the-shelf chess software and databases of hundreds of thousands of chess games, including some of the best ever played. They considered what moves the computer recommended, and they examined historical databases to see if anyone had ever been in a situation like theirs before. Then they used that information to help plan. Each game was limited to sixty minutes, so they didn’t have infinite time to consult the machines; they had to work swiftly.
Kasparov found the experience “as disturbing as it was exciting.” Freed from the need to rely exclusively on his memory, he was able to focus more on the creative texture of his play. It was, he realized, like learning to be a race-car driver: He had to learn how to drive the computer, as it were-developing a split-second sense of which strategy to enter into the computer for assessment, when to stop an unpromising line of inquiry, and when to accept or ignore the computer’s advice. “Just as a good Formula One driver really knows his own car, so did we have to learn the way the computer program worked,” he later wrote. Topalov, as it turns out, appeared to be an even better Formula One “thinker” than Kasparov. On purely human terms, Kasparov was a stronger player; a month before, he’d trounced Topalov 4-0. But the centaur play evened the odds. This time, Topalov fought Kasparov to a 3-3 draw.
Garry Kasparov (right) plays Veselin Topalov (left) in Sofia, Bulgaria, on May 3, 1998.
In 2005, there was a “freestyle” chess tournament in which a team could consist of any number of humans or computers, in any combination. Many teams consisted of chess grand masters who’d won plenty of regular, human-only tournaments, achieving chess scores of 2,500 (out of 3,000). But the winning team didn’t include any grand masters at all. It consisted of two young New England men, Steven Cramton and Zackary Stephen (who were comparative amateurs, with chess rankings down around 1,400 to 1,700), and their computers.
Why could these relative amateurs beat chess players with far more experience and raw talent? Because Cramton and Stephen were expert at collaborating with computers. They knew when to rely on human smarts and when to rely on the machine’s advice. Working at rapid speed-these games, too, were limited
to sixty minutes-they would brainstorm moves, then check to see what the computer thought, while also scouring databases to see if the strategy had occurred in previous games. They used three different computers simultaneously, running five different pieces of software; that way they could cross-check whether different programs agreed on the same move. But they wouldn’t simply accept what the machine recommended, nor would they merely mimic old games. They selected moves that were low-rated by the computer if they thought they would rattle their opponents psychologically.
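[Their cross-checking habit is easy to picture in code. The sketch below uses the real python-chess library to poll several engines and report whether they agree. The engine names are assumptions-the essay does not say which software the pair ran, and any locally installed UCI engines would do.]

    # A sketch of polling several chess engines and checking agreement,
    # using the python-chess package. The binaries listed are hypothetical
    # examples; substitute whatever UCI engines are installed locally.
    import chess
    import chess.engine

    ENGINE_PATHS = ["stockfish", "lc0", "komodo"]  # assumed local binaries

    def consult_engines(board, seconds=1.0):
        """Ask each engine for a move; return the votes and whether they agree."""
        votes = {}
        for path in ENGINE_PATHS:
            engine = chess.engine.SimpleEngine.popen_uci(path)
            try:
                result = engine.play(board, chess.engine.Limit(time=seconds))
                votes[path] = result.move
            finally:
                engine.quit()
        return votes, len(set(votes.values())) == 1

    # The human still decides: agreement is a signal, not an order, and a
    # centaur may deliberately play a move the engines rate poorly.
    # votes, agreed = consult_engines(chess.Board())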
In essence, a new form of chess intelligence was emerging. You could rank the teams like this: (1) a chess grand master was good; (2) a chess grand master playing with a laptop was better. But even that laptop-equipped grand master could be beaten by (3) relative newbies, if the amateurs were extremely skilled at integrating machine assistance. “Human strategic guidance combined with the tactical acuity of a computer,” Kasparov concluded, “was overwhelming.”
[15] Better yet, it turned out these smart amateurs could even outplay a supercomputer on the level of Deep Blue. One of the entrants that Cramton and Stephen trounced in the freestyle chess tournament was a version of Hydra, the most powerful chess computer in existence at the time; indeed, it was probably faster and stronger than Deep Blue itself. Hydra’s owners let it play entirely by itself, using raw logic and speed to fight its opponents. A few days after the advanced chess event, Hydra destroyed the world’s seventh-ranked grand master in a man-versus-machine chess tournament.
But Cramton and Stephen beat Hydra. They did it using their own talents and regular Dell and Hewlett-Packard computers, of the type you probably had sitting on your desk in 2005, with software you could buy for sixty dollars. All of which
brings us back to our original question here: Which is smarter at chess-humans or computers?
Neither. It’s the two together, working side by side.
We’re all playing advanced chess these days. We just haven’t learned to appreciate it.
[20] Our tools are everywhere, linked with our minds, working in tandem. Search engines answer our most obscure questions; status updates give us an ESP-like awareness of those around us; online collaborations let far-flung collaborators tackle problems too tangled for any individual. We’re becoming less like Rodin’s Thinker and more like Kasparov’s centaurs. This transformation is rippling through every part of our cognition-how we learn, how we remember, and how we act upon that knowledge emotionally, intellectually, and politically. As with Cramton and Stephen, these tools can make even the amateurs among us radically smarter than we’d be on our own, assuming (and this is a big assumption) we understand how they work. At their best, today’s digital tools help us see more, retain more, communicate more. At their worst, they leave us prey to the manipulation of the toolmakers. But on balance, I’d argue, what is happening is deeply positive.
In a sense, this is an ancient story. The “extended mind” theory of cognition argues that the reason humans are so intellectually dominant is that we’ve always outsourced bits of cognition, using tools to scaffold our thinking into ever-more-rarefied realms. Printed books amplified our memory. Inexpensive paper and reliable pens made it possible to externalize our thoughts quickly. Studies show that our eyes zip around the page while performing long division on paper, using the handwritten digits as a form of prosthetic short-term memory. “These resources
enable us to pursue manipulations and juxtapositions of ideas and data that would quickly baffle the unaugmented brain,” as Andy Clark, a philosopher of the extended mind, writes.
Granted, it can be unsettling to realize how much thinking already happens outside our skulls. Culturally, we revere the Rodin ideal-the belief that genius breakthroughs come from our gray matter alone. The physicist Richard Feynman once got into an argument about this with the historian Charles Weiner. Feynman understood the extended mind; he knew that writing his equations and ideas on paper was crucial to his thought. But when Weiner looked over a pile of Feynman’s notebooks, he called them a wonderful “record of his day-to-day work.” No, no, Feynman replied testily. They weren’t a record of his thinking process. They were his thinking process:
“I actually did the work on the paper,” he said.
“Well,” Weiner said, “the work was done in your head, but the record of it is still here.”
“No, it’s not a record, not really. It’s working. You have to work on paper and this is the paper. Okay?”
Every new tool shapes the way we think, as well as what we think about. The printed word helped make our cognition linear and abstract, along with vastly enlarging our stores of knowledge. Newspapers shrank the world; then the telegraph shrank it even more dramatically. With every innovation, cultural prophets bickered over whether we were facing a technological apocalypse or a utopia. Depending on which Victorian-age pundit you asked, the telegraph was either going to usher in an era of world peace (“It is impossible that old prejudices and hostilities should longer exist,” as Charles F. Briggs and Augustus Maverick intoned) or drown us in a Sargasso of idiotic trivia (“We are eager to tunnel
under the Atlantic … but perchance the first news that will leak
through into the broad, flapping American ear will be that the Princess Adelaide has the whooping cough,” as Thoreau opined). Neither prediction was quite right, of course, yet neither was quite wrong. The one thing that both apocalyptics and utopians understand and agree upon is that every new technology pushes us toward new forms of behavior while nudging us away from older, familiar ones. Harold Innis-the lesser-known but arguably more interesting intellectual midwife of Marshall McLuhan-called this the bias of a new tool. Living with new technologies means understanding how they bias everyday life.
What are the central biases of today’s digital tools? There are many, but I see three big ones that have a huge impact on our cognition. First, they allow for prodigious external memory: smartphones, hard drives, cameras, and sensors routinely record more information than any tool before them. We’re shifting from a stance of rarely recording our ideas and the events of our lives to doing it habitually. Second, today’s tools make it easier for us to find connections-between ideas, pictures, people, bits of news-that were previously invisible. Third, they encourage a superfluity of communication and publishing. This last feature has many surprising effects that are often ill understood. Any economist can tell you that when you suddenly increase the availability of a resource, people do more things with it, which also means they do increasingly unpredictable things. As electricity became cheap and ubiquitous in the West, its role expanded from things you’d expect-like night-time lighting-to the unexpected and seemingly trivial: battery-driven toy trains, electric blenders, vibrators. The superfluity of communication today has produced everything from a rise in crowd-organized projects like Wikipedia to curious new forms of expression: television-show recaps, map-based storytelling, discussion threads that spin out of a photo posted to a
smartphone app, Amazon product-review threads wittily hijacked for political satire. Now, none of these three digital biases is immutable, because they’re the product of software and hardware, and can easily be altered or ended if the architects of today’s tools (often corporate and governmental) decide to regulate the tools or find they’re not profitable enough. But right now, these big effects dominate our current and near-term landscape.
[25] In one sense, these three shifts-infinite memory, dot connecting, explosive publishing-are screamingly obvious to anyone who’s ever used a computer. Yet they also somehow constantly surprise us by producing ever-new “tools for thought” (to use the writer Howard Rheingold’s lovely phrase) that upend our mental habits in ways we never expected and often don’t apprehend even as they take hold. Indeed, these phenomena have already woven themselves so deeply into the lives of people around the globe that it’s difficult to stand back and take account of how much things have changed and why. While [here I map] out what I call the future of thought, it’s also frankly rooted in the present, because many parts of our future have already arrived, even if they are only dimly understood. As the sci-fi author William Gibson famously quipped: “The future is already here-it’s just not very evenly distributed.” This is an attempt to understand what’s happening to us right now, the better to see where our augmented thought is headed. Rather than dwell in abstractions, like so many marketers and pundits-not to mention the creators of technology, who are often remarkably poor at predicting how people will use their tools-I focus more on the actual experiences of real people.
To provide a concrete example of what I’m talking about, let’s take a look at something simple and immediate: my activities while writing the pages you’ve just read.
As I was working, I often realized I couldn’t quite remember a detail and discovered that my notes were incomplete. So I’d zip over to a search engine. (Which chess piece did Deep Blue sacrifice when it beat Kasparov? The knight!) I also pushed some of my thinking out into the open: I blogged admiringly about the Spanish chess-playing robot from 1915, and within minutes commenters offered smart critiques. (One pointed out that the chess robot wasn’t that impressive because it was playing an endgame that was almost impossible to lose: the robot started with a rook and a king, while the human opponent had only a mere king.) While reading Kasparov’s book How Life Imitates Chess on my Kindle, I idly clicked on “popular highlights” to see what passages other readers had found interesting-and wound up becoming fascinated by a section on chess strategy I’d only lightly skimmed myself. To understand centaur play better, I read long, nuanced threads on chess-player discussion groups, effectively eavesdropping on conversations of people who know chess far better than I ever will. (Chess players who follow the new form of play seem divided-some think advanced chess is a grim sign of machines’ taking over the game, and others think it shows that the human mind is much more valuable than computer software.) I got into a long instant-messaging session with my wife, during which I realized that I’d explained the gist of advanced chess better than I had in my original draft, so I cut and pasted that explanation into my notes. As for the act of writing itself? Like most writers, I constantly have to fight the procrastinator’s urge to meander online, idly checking Twitter links and Wikipedia entries in a dreamy but pointless haze-until I look up in horror and realize I’ve lost two hours of work, a missing-time experience redolent of a UFO abduction. So I’d switch my word processor into full-screen mode, fading my computer desktop to black so
I could see nothing but the page, giving me temporary mental peace.
[Let’s] explore each of these trends. First off, there’s the emergence of omnipresent computer storage, which is upending the way we remember, both as individuals and as a culture. Then there’s the advent of “public thinking”: the ability to broadcast our ideas and the catalytic effect that has both inside and outside our minds. We’re becoming more conversational thinkers-a shift that has been rocky, not least because everyday public thought uncorks the incivility and prejudices that are commonly repressed in face-to-face life. But at its best (which, I’d argue, is surprisingly often), it’s a thrilling development, reigniting ancient traditions of dialogue and debate. At the same time, there’s been an explosion of new forms of expression that were previously too expensive for everyday thought-like video, mapping, or data crunching. Our social awareness is shifting, too, as we develop ESP-like “ambient awareness,” a persistent sense of what others are doing and thinking. On a social level, this expands our ability to understand the people we care about. On a civic level, it helps dispel traditional political problems like “pluralistic ignorance,” catalyzing political action, as in the Arab Spring.
Are these changes good or bad for us? If you asked me twenty years ago, when I first started writing about technology, I’d have said “bad.” In the early 1990s, I believed that as people migrated online, society’s worst urges might be uncorked: pseudonymity would poison online conversation, gossip and trivia would dominate, and cultural standards would collapse. Certainly some of those predictions have come true, as anyone who’s wandered into an angry political forum knows. [See p. ___ for ways to make the “I’m of two minds” move.] But the truth is, while I predicted the bad stuff, I didn’t foresee the good stuff. And what a torrent we have: Wikipedia, a
global forest of eloquent bloggers, citizen journalism, political fact-checking-or even the way status-update tools like Twitter have produced a renaissance in witty, aphoristic, haikuesque expression. If [I accentuate] the positive, that’s in part because we’ve been so flooded with apocalyptic warnings of late. We need a new way to talk clearly about the rewards and pleasures of our digital experiences-one that’s rooted in our lived experience and also detangled from the hype of Silicon Valley.
[30] The other thing that makes me optimistic about our cognitive future is how much it resembles our cognitive past. In the sixteenth century, humanity faced a printed-paper wave of information overload-with the explosion of books that began with the codex and went into overdrive with Gutenberg’s movable type. As the historian Ann Blair notes, scholars were alarmed: How would they be able to keep on top of the flood of human expression? Who would separate the junk from what was worth keeping? The mathematician Gottfried Wilhelm Leibniz bemoaned “that horrible mass of books which keeps on growing,” which would doom the quality writers to “the danger of general oblivion” and produce “a return to barbarism.” Thankfully, he was wrong. Scholars quickly set about organizing the new mental environment by clipping their favorite passages from books and assembling them into huge tomes-florilegia, bouquets of text-so that readers could sample the best parts. They were basically blogging, going through some of the same arguments modern bloggers go through. (Is it enough to clip a passage, or do you also have to verify that what the author wrote was true? It was debated back then, as it is today.) The past turns out to be oddly reassuring, because a pattern emerges. Each time we’re faced with bewildering new thinking tools, we panic-then quickly set about deducing how they can be used to help us work, meditate, and create.
History also shows that we generally improve and refine our tools to make them better. Books, for example, weren’t always as well designed as they are now. In fact, the earliest ones were, by modern standards, practically unusable-often devoid of the navigational aids we now take for granted, such as indexes, paragraph breaks, or page numbers. It took decades-centuries, even-for the book to be redesigned into a more flexible cognitive tool, as suitable for quick reference as it is for deep reading. This is the same path we’ll need to tread with our digital tools. It’s why we need to understand not just the new abilities our tools give us today, but where they’re still deficient and how they ought to improve.
I have one caveat to offer. If you were hoping to read about the neuroscience of our brains and how technology is “rewiring” them, [I] will disappoint you.
This goes against the grain of modern discourse, I realize. In recent years, people interested in how we think have become obsessed with our brain chemistry. We’ve marveled at the ability of brain scanning-picturing our brain’s electrical activity or blood flow-to provide new clues as to what parts of the brain are linked to our behaviors. Some people panic that our brains are being deformed on a physiological level by today’s technology: spend too much time flipping between windows and skimming text instead of reading a book, or interrupting your conversations to read text messages, and pretty soon you won’t be able to concentrate on anything-and if you can’t concentrate on it, you can’t understand it either. In his book The Shallows, Nicholas Carr eloquently raised this alarm, arguing that the quality of our thought, as a species, rose in tandem with the ascendance of slow-moving, linear print and began declining with the arrival of the zingy,
flighty Internet. “I’m not thinking the way I used to think,” he worried.
I’m certain that many of these fears are warranted. It has always been difficult for us to maintain mental habits of concentration and deep thought; that’s precisely why societies have engineered massive social institutions (everything from universities to book clubs and temples of worship) to encourage us to keep it up. It’s part of why only a relatively small subset of people become regular, immersive readers, and part of why an even smaller subset go on to higher education. Today’s multitasking tools really do make it harder than before to stay focused during long acts of reading and contemplation. They require a high level of “mindfulness”-paying attention to your own attention. While I don’t dwell on the perils of distraction [here], the importance of being mindful resonates throughout these pages. One of the great challenges of today’s digital thinking tools is knowing when not to use them, when to rely on the powers of older and slower technologies, like paper and books.
[35] That said, today’s confident talk by pundits and journalists about our “rewired” brains has one big problem: it is very premature. Serious neuroscientists agree that we don’t really know how our brains are wired to begin with. Brain chemistry is particularly mysterious when it comes to complex thought, like memory, creativity, and insight. “There will eventually be neuroscientific explanations for much of what we do; but those explanations will turn out to be incredibly complicated,” as the neuroscientist Gary Marcus pointed out when critiquing the popular fascination with brain scanning. “For now, our ability to understand how all those parts relate is quite limited, sort of like trying to understand the political dynamics of Ohio from an airplane window above Cleveland.” I’m not dismissing brain scanning; indeed, I’m confident it’ll be crucial in unlocking these mysteries
in the decades to come. But right now the field is so new that it is rash to draw conclusions, either apocalyptic or utopian, about how the Internet is changing our brains. Even Carr, the most diligent explorer in this area, cited only a single brain-scanning study that specifically probed how people’s brains respond to using the Web, and those results were ambiguous.
The truth is that many healthy daily activities, if you scanned the brains of people participating in them, might appear outright dangerous to cognition. Over recent years, professor of psychiatry James Swain and teams of Yale and University of Michigan scientists scanned the brains of new mothers and fathers as they listened to recordings of their babies’ cries. They found brain circuit activity similar to that in people suffering from obsessive-compulsive disorder. Now, these parents did not actually have OCD. They were just being temporarily vigilant about their newborns. But since the experiments appeared to show the brains of new parents being altered at a neural level, you could write a pretty scary headline if you wanted: BECOMING A PARENT ERODES YOUR BRAIN FUNCTION! In reality, as Swain tells me, it’s much more benign. Being extra fretful and cautious around a newborn is a good thing for most parents: Babies are fragile. It’s worth the trade-off. Similarly, living in cities-with their cramped dwellings and pounding noise-stresses us out on a straightforwardly physiological level and floods our system with cortisol, as I discovered while researching stress in New York City several years ago. But the very urban density that frazzles us mentally also makes us 50 percent more productive, and more creative, too, as Edward Glaeser argues in Triumph of the City, because of all those connections between people. This is “the city’s edge in producing ideas.” The upside of creativity is tied to the downside of living in a sardine tin, or, as Glaeser puts it, “Density has costs as well as benefits.” Our digital environments likely offer a similar push and pull. We tolerate
their cognitive hassles and distractions for the enormous upside of being connected, in new ways, to other people.
I want to examine how technology changes our mental habits, but for now, we’ll be on firmer ground if we stick to what’s observably happening in the world around us: our cognitive behavior, the quality of our cultural production, and the social science that tries to measure what we do in everyday life. In any case, I won’t be talking about how your brain is being “rewired.”
Almost everything rewires it. . . . The brain you had before you read this paragraph? You don’t get that brain back. I’m hoping the trade-off is worth it.
The rise of advanced chess didn’t end the debate about man versus machine, of course. In fact, the centaur phenomenon only complicated things further for the chess world-raising questions about how reliant players were on computers and how their presence affected the game itself. Some worried that if humans got too used to consulting machines, they wouldn’t be able to play without them. Indeed, in June 2011, chess master Christoph Natsidis was caught illicitly using a mobile phone during a regular human-to-human match. During tense moments, he kept vanishing for long bathroom visits; the referee, suspicious, discovered Natsidis entering moves into a piece of chess software on his smartphone. Chess had entered a phase similar to the doping scandals that have plagued baseball and cycling, except in this case the drug was software and its effect cognitive.
[40] This is a nice metaphor for a fear that can nag at us in our everyday lives, too, as we use machines for thinking more and more. Are we losing some of our humanity? What happens if the Internet goes down: Do our brains collapse, too? Or is the question naive and irrelevant-as quaint as worrying about
whether we’re “dumb” because we can’t compute long division without a piece of paper and a pencil?
Certainly, if we’re intellectually lazy or prone to cheating and shortcuts, or if we simply don’t pay much attention to how our tools affect the way we work, then yes-we can become, like Natsidis, overreliant. But the story of computers and chess offers a much more optimistic ending, too. Because it turns out that when chess players were genuinely passionate about learning and being creative in their game, computers didn’t degrade their own human abilities. Quite the opposite: they helped them internalize the game much more profoundly and advance to new levels of human excellence.
Before computers came along, back when Kasparov was a young boy in the 1970s in the Soviet Union, learning grand-master-level chess was a slow, arduous affair. If you showed promise and you were very lucky, you could find a local grand master to teach you. If you were one of the tiny handful who showed world-class promise, Soviet leaders would fly you to Moscow and give you access to their elite chess library, which contained laboriously transcribed paper records of the world’s top games. Retrieving records was a painstaking affair; you’d contemplate a possible opening, use the catalog to locate games that began with that move, and then the librarians would retrieve records from thin files, pulling them out using long sticks resembling knitting needles. Books of chess games were rare and incomplete. By gaining access to the Soviet elite library, Kasparov and his peers developed an enormous advantage over their global rivals. That library was their cognitive augmentation.
But beginning in the 1980s, computers took over the library’s role and bested it. Young chess enthusiasts could buy CD-ROMs filled with hundreds of thousands of chess games.
Chess-playing software could show you how an artificial opponent would respond to any move. This dramatically increased the pace at which young chess players built up intuition. If you were sitting at lunch and had an idea for a bold new opening move, you could instantly find out which historic players had tried it, then war-game it yourself by playing against software. The iterative process of thought experiments-“If I did this, then what would happen?”-sped up exponentially.
Chess itself began to evolve. “Players became more creative and daring,” as Frederic Friedel, the publisher of the first popular chess databases and software, tells me. Before computers, grand masters would stick to lines of attack they’d long studied and honed. Since it took weeks or months for them to research and mentally explore the ramifications of a new move, they stuck with what they knew. But as the next generation of players emerged, Friedel was astonished by their unusual gambits, particularly in their opening moves. Chess players today, Kasparov has written, “are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t.”
[45] Most remarkably, it is producing players who reach grand master status younger. Before computers, it was extremely rare for teenagers to become grand masters. In 1958, Bobby Fischer stunned the world by achieving that status at fifteen. The feat was so unusual it was over three decades before the record was broken, in 1991. But by then computers had emerged, and in the years since, the record has been broken twenty times, as more and more young players became grand masters. In 2002, the Ukrainian Sergey Karjakin became one at the tender age of twelve.
So yes, when we’re augmenting ourselves, we can be smarter. We’re becoming centaurs. But our digital tools can also leave us smarter even when we’re not actively using them.
Joining the Conversation
1. Clive Thompson lists three shifts-infinite memory, dot connecting, and explosive publishing-that he believes have strongly affected our cognition. What exactly does he mean by these three shifts, and in what ways does he think they have changed our thinking?
2. Thompson starts paragraph 20 by saying, “Our tools are everywhere, linked with our minds, working in tandem.” What do you think? Does his statement reflect your own experience with technology?
3. In paragraphs 33-35, Thompson cites Nicholas Carr, whose views about technology differ from his. How does he respond to Carr-and how does acknowledging views he disagrees with help support his own position?
4. So what? Has Thompson convinced you that his topic matters? If so, how and where does he do so?
5. Write an essay reflecting on the ways digital technologies have influenced your own intellectual development, drawing from Thompson’s text and other readings in this chapter-and on your own experience-as support for your argument. Be sure to acknowledge views other than your own.