ZAPS Assignments 3
ZAPS is a set of interactive online experiments and demonstrations that will allow you to experience various psychological phenomena and serve as an additional tool to reinforce the theoretical basis behind each experiment and demonstration. All of the experiments will also be discussed in a real-world context. Your grade will be based on these summaries, NOT the grade provided by the ZAPS website when you finish the experiment.
Please answer the ZAPS question below:
ZAPS 5: Serial Position Effect—The goal of the current ZAPS is to understand how we store and retrieve information from memory.
In your summary, answer the following questions: What is the primacy effect? Why do we see the primacy effect? What is the recency effect? Why do we see the recency effect? Did you tend to remember the first few words and the last few words? What strategies did you use to remember the items?
ZAPS 6: False Memory Task—The goal of this ZAPS is to introduce you to the DRM paradigm and explain how schemas can influence our memory.
In your summary please report your results for the three conditions. Were your results similar to the reference results? Why or why not? Based on your reading this week, and the ZAPS, what is a schema? How can a schema result in a false memory?
|Criteria||The assignment does not address any aspects of the assignment as outlined.||The assignment addresses a few aspects of the assignment and indicates that you paid attention to the instructions.||The assignment addresses most of the aspects of the assignment and is supported by course material.||The assignment addresses all aspects of the assignment and demonstrates a thoughtful consideration of the subject matter and is supported by course material.|
Did you ever have a clear memory of an event only to find that someone you were with at the event remembers it differently? Has anything like the following ever happened to you?
· You and your friend are reminiscing about a great party the two of you went to last year.
· Your friend says that the party took place outside, but you remember being inside the entire night.
· Your friend says that your hair was long, but you think that can’t be right since you have been growing out your hair since last year, so it must have been short.
· Your friend starts laughing as she reminds you that you fell asleep on the sofa and that the other guests then poked you and took pictures. At first you have no memory of this, but as your friend speaks, you start feeling embarrassed as you suddenly remember the sensation of being prodded in your sleep and the burst of camera flashes that night.
The truth is that the party was actually outside. You were remembering another party, which took place indoors. Your hair was only an inch shorter then than it is now, though you believe it has grown a lot more in the past year than it actually has. And your friend made up the whole story about you falling asleep, getting poked, and having your picture taken. In fact, you only had a Diet Coke that night, but your mind tricked you into thinking it actually occurred as she had said.
These are all examples of the different forms of memory distortion—a collection of phenomena that demonstrate how our long-term memories are not always permanent. In this ZAPS lab, your memory will be tested. Click on the Experience tab above to proceed.
As you learned in the Introduction, memory is far from perfect. In this ZAPS lab, we will do a memory experiment in which you will see that your own memory is not necessarily flawless.
After you click “Start Trial,” a series of 12 words will appear one-by-one on the screen. Then, you will be asked to select from a new list those words that you believe appeared in the original list. You may take as much time as you want to select your words. When you have completed your selections, click “Submit” to try another series of words. There are six series total.
After reading a list of 12 words, I will be asked to _______ .
· √ select words on a new list that appeared in the original list
· write out all the words that appeared in the list
· arrange the words from the list alphabetically
· write a short story that uses as many words as possible from the list
You may have noticed that many of the words within each list were conceptually related to each other—for example: bed, rest, and awake. What you may not have noticed is that a centrally related word was absent from the initial list. For example, the word sleep did not actually appear in the initial list of sleep-related words. However, that centrally related word, called the critical lure, was present in the list of words to choose from.
In the Data tab we will show you whether you mistakenly remembered seeing the critical lure for each of the six word lists presented (highlighted in yellow), as well as how accurately you remembered seeing (or not) other words from the lists.
The graph shows the percentage of words you selected. The first column (“word shown”) shows the correctly selected words. These are words that you correctly identified as having been present in the original list. The second column (“word not shown, related”) shows whether you mistakenly remembered a highly related word (such as the lure) as having been presented in the list. The third column (“word not shown, unrelated”) shows whether you mistakenly remembered unrelated words as having been presented in the list.
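As a rough illustration of how these three percentages come about, consider the sketch below. The word lists, the selections, and the `score_trial` helper are all invented for illustration; the actual ZAPS scoring code is not shown in the lab.

```python
# Hypothetical scoring sketch for one DRM trial.

def score_trial(shown, recognition_list, lures, selections):
    """Percentage of each recognition-test category that was selected."""
    def pct(category):
        chosen = sum(1 for w in category if w in selections)
        return round(100 * chosen / len(category), 1)

    old = [w for w in recognition_list if w in shown]          # "word shown"
    related = [w for w in recognition_list if w in lures]      # critical lure(s)
    unrelated = [w for w in recognition_list
                 if w not in shown and w not in lures]         # unrelated foils
    return {
        "word shown": pct(old),
        "word not shown, related": pct(related),
        "word not shown, unrelated": pct(unrelated),
    }

shown = ["bed", "rest", "awake", "tired", "dream"]    # studied words
recognition = ["bed", "sleep", "chair", "rest", "dream", "window"]
lures = ["sleep"]                                     # critical lure, never studied
picks = {"bed", "sleep", "rest", "dream"}             # a typical false-memory response
print(score_trial(shown, recognition, lures, picks))
```

In this invented example the participant selects every studied word and the lure, but no unrelated foil, which mirrors the typical DRM result: hit rates and lure false-alarm rates are both high while unrelated false alarms stay low.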
Memories are imperfect; they can be forgotten or distorted. The purpose of this ZAPS lab experiment was to show how easily our minds can create false memories.
The task you completed comes from a test that researchers use to study memory illusions (Roediger & McDermott, 1995). Researchers find that participants generally recall seeing the critical lure with just as high a frequency as words that actually appear on the original list. Moreover, people report feeling very confident that they indeed saw the critical lure in the original list (Weber & Brewer, 2004). This is an important finding because it reminds us that the certainty with which someone states a claim cannot be used as a gauge for how truthful that claim is.
Because our memory is limited, we have to store the things we want to remember in an efficient way. One way to do this is to use a schema—a cognitive structure that helps us perceive, organize, process, and use information. For instance, you probably have a schema (a general knowledge structure) about what to expect when you enter an airport. You may also have event schemas (also known as scripts), such as for a birthday party: This might consist of the guests entering, unwrapping gifts, singing a birthday song, and eating cake. Our schemas help us navigate the world efficiently; without schemas, our world would be a very overwhelming place.
The words in this experiment were all meaningfully related to a relevant schema. The schema activation you experienced enabled you to create false memories by giving you enough information to make you believe that the lure word was also present. This type of memory distortion is referred to as suggestibility, which is defined as the development of false memories from misleading information.
One real-life application of the research on suggestibility and false memories is related to eyewitness testimony in court cases. Elizabeth Loftus has conducted a series of classic experiments showing that information presented after someone has witnessed a crime or an accident can distort or even overwrite the original memory.
For instance, in 1974, Loftus and Palmer conducted a classic study in which they asked undergraduates to watch a short film clip of a car accident. They then queried the students about their memory of the film clip. One group was asked, “How fast was the car going when it HIT the other car?” The other group was asked, “How fast was the car going when it SMASHED the other car?” Although both groups had seen the same accident on film, not only did the smashed group give a higher miles-per-hour (mph) estimate, they were also more likely to misremember having seen broken glass when asked about the film a week later.
Loftus claimed that post-event information led to suggestibility. Seeing the word smashed on the initial questionnaire influenced people to think that the accident was more severe, thus leading to higher mph ratings, and a tendency to remember broken glass (even though there had not been any). Research has also found that we tend to be most easily misled by facts that are consistent with our schema for an event; it should be easier to convince people that they (mistakenly) saw broken glass after an accident than that they saw a clown get out of one of the cars. This is another type of memory error known as memory bias, which is defined as the changing of memories to fit current beliefs or attitudes.
A third type of memory distortion may also cause some participants to remember broken glass. If a participant had seen a car crash with broken glass in her past, she might mistakenly apply this memory to the car crash that Loftus showed her. By misremembering the time and place that she saw the broken glass, the participant would be experiencing source misattribution. Source misattribution can happen with the time, place, people, or circumstances involved with a memory.
What were your reactions to your personal data from this ZAPS lab and to the data of your classmates? What, if anything, surprised you?
You will initially receive full credit for any answer, but your instructor may review your response later.
See if you are able to identify an example from your own life where your memory of an event seems to differ from somebody else’s memory of the same event. For example, you and a parent remember a family event differently. Or you and a friend might have different recollections about a social outing.
You will initially receive full credit for any answer, but your instructor may review your response later.
Answer the following questions to complete this ZAPS activity. Your performance in this section accounts for 10% of your grade.
Which of the following is a key finding from research using memory tests like the one in this ZAPS lab?
· Memory, although not perfect, is generally highly reliable.
· √ People believe the lure word was present in the list just as frequently as other words that actually did appear in the list.
· People very accurately remember the content of word lists.
· People rarely feel confident in their assessments of whether lure words appeared on the original lists.
A cross-country driver decides to eat at a local restaurant she has never heard of. She walks in and sees a counter that contains cash register machines; a menu hangs above the counter. Behind the counter, employees wearing headsets and paper hats hustle to and fro, pulling food from a service window and placing it onto trays. These images will most likely trigger a ________ that will lead the traveler to believe she should ________ .
· semantic association; order at the counter and then seat herself
· semantic association; wait by the door until a host shows her to her seat where a waiter will take her order
· √ schema; order at the counter and then seat herself
· schema; wait by the door until a host shows her to her seat where a waiter will take her order
Serial Position Effect
You are vacationing on the beach with a group of friends. You volunteer to walk to a food stand on the boardwalk to order lunch for everyone. Before you leave, you find yourself repeating each order to make sure you won’t forget any details by the time you reach the stand.
Why is it necessary to repeat these orders before leaving, rather than simply hearing them once and walking away? It’s because you learned in your psychology class that your working memory—the processing system that keeps information available for current use—can only maintain five to nine pieces of information for up to 20 to 30 seconds. You would not remember all the details of the lunch orders by the time you reached the food stand after hearing them just once. So you rehearsed everyone’s orders by repeating them to yourself. This transferred the information into your long-term memory, enabling you to access it when placing the orders a few minutes later.
In the Experience section of this ZAPS lab, your task will be to remember numerous pieces of information, much like the list of lunch orders in the example above. You will see, however, that the order in which the information is presented makes a difference in how well you remember it.
In each trial of this ZAPS lab, you will see 12 words on the screen, one after another. After these words are presented, 12 boxes will appear. Try to remember as many of the 12 words as possible and type them in the boxes. The order in which you fill in the words does not matter, and you can take as much time as you want to type in the words. However, be sure that your final answers do not contain any typing errors. Once you type in all the words you remembered, you can continue to the next trial. There are no practice trials; the experiment starts immediately. In total, you will see five trials with a series of 12 words each.
After viewing each series of words, I must:
· Fill in the words I remember without regard for typing errors.
· √ Fill in the words I remember in any order, being careful that I do not make any typing errors.
· Fill in the words I remember in the exact order in which they appeared.
In this Experience, you were presented with five trials of 12 words each. After each series of words, you recorded as many words as you could remember from that series. You probably began to notice a pattern after attempting to recall words in the first few series: most of the words you successfully recalled had occupied similar positions within their series of 12 words.
This phenomenon is known as the serial position effect. It is a form of memory bias in which our ability to recall a given item from a series depends on its relative position within that series.
In the graph, you will observe how the relative positioning of certain words within a series influences your ability to recall those words. On the x-axis, you will find the position of the word within a series, from one through 12. Plotted on the y-axis is the percentage of trials on which you recalled the word in each position.
What patterns do you think you will observe in your data from this Experience? For which positions of words within the five series do you hypothesize that you had the highest and lowest percentages of recall?
In viewing your data from the Experience section, you probably noticed a pattern in your responses. You probably weren’t able to remember all 12 items from any given series, and your highest percentages of recall were clustered around the words in the beginning of a series and those near the end. If we perform such an experiment with a large group of people and average all of their responses, the results will resemble the following graph:
This graph, called a serial position curve, demonstrates how people tend to recall items from the beginning and the end of a list. Because the serial position effect involves two different relative positions within a list, we refer to the phenomenon of better recall of the items at the beginning as the primacy effect and that of better recall at the end as the recency effect.
Why is it that we don’t simply remember the items from a list in the reverse order in which they were presented? Why is our recall for items in the middle of the list rather low? Researchers Rundus and Atkinson (1970) set out to explore this phenomenon. They found that our ability to mentally rehearse items from the beginning of a list explains our enhanced recollection of them. We cannot rehearse items from the middle of a list as frequently as those at the beginning, explaining our decreased recollection of items from the middle.
Rehearsing items over and over again leads to the storage of these items in long-term memory. For example, if the first word you hear in a memory experiment is apple, you may be able to rehearse “apple, apple, apple” before the next word—corner—comes along. Then, you may rehearse “corner, apple, corner” before the next word appears, and so on. However, how would the amount of rehearsal explain better recall of the items at the end of the list? It does not.
Rundus and Atkinson concluded that the recency effect must take place by means of a different cognitive mechanism than that of the primacy effect. While the primacy effect takes place when items enter long-term memory through frequent rehearsal, the recency effect occurs because the items near the end of a list are still present in working memory at the time of recall. In this way, we can easily retrieve the last few items presented in a series.
Under certain circumstances, the primacy and recency effects do not appear in the ways we would expect. Researchers found that if there is a delay between the presentation of a list and the participants’ recall of that list, the recency effect disappears, and only the primacy effect remains (Bjork et al., 1974). This occurs because the items from the beginning of the list remain in long-term memory, whereas the items at the end of the list have since disappeared from working memory at the time of recall. In contrast, people who can no longer form new long-term memories—a condition called anterograde amnesia—show the recency effect when there is no delay, but not the primacy effect (Carlesimo et al., 1996).
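This dual-store account can be caricatured in a few lines of code. The sketch below is a toy model, not the actual Rundus and Atkinson model, and every parameter value is invented: rehearsal time is split among the items currently held in a small buffer (so early items accumulate the most long-term strength, producing primacy), and items still in the buffer at recall get a working-memory boost unless recall is delayed (producing, or abolishing, recency).

```python
def recall_strength(n_items=12, buffer_size=4, slots=4, delayed=False):
    """Toy dual-store sketch: long-term strength grows with rehearsal;
    items still in the buffer at recall add working-memory strength."""
    strength = [0.0] * n_items
    buffer = []
    for i in range(n_items):
        buffer.append(i)
        if len(buffer) > buffer_size:
            buffer.pop(0)                  # oldest item leaves the buffer
        for item in buffer:                # rehearsal time is split among buffer
            strength[item] += slots / len(buffer)   # items, favoring early words
    if not delayed:                        # immediate recall: buffer contents are
        for rank, item in enumerate(buffer):        # still available, newest strongest
            strength[item] += 2 * (rank + 1)
    return strength

immediate = recall_strength()                 # U-shaped: primacy and recency
after_delay = recall_strength(delayed=True)   # primacy only
```

Running the sketch with `delayed=True` empties the working-memory contribution, reproducing the Bjork et al. (1974) pattern: the first positions stay strong while the last positions fall to the level of the middle.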
Suppose you are presented with a list of 50 words and you must immediately recall them. What do you hypothesize would happen to the primacy effect when recalling 50 items in comparison to recalling 12 items? Besides list length, what other factors might influence the primacy and recency effects during the recall of a list?
When you remember the first few items from a list better than items in the middle of the list, this effect takes place because the first items were still present in your working memory upon recall.
· True
· √ False
Which of the following best describes the serial position curve for an experiment conducted with a delay between the presentation of a list and participants’ recall of that list?
· √ high percentages of recall for the first few positions in the list and low recollection for all other items in the list
· high percentages of recall for the first few positions in the list, low recollection of the middle items of the list, and high recollection of items at the end of the list
· high percentages of recall in the middle of the list and low recollection of items at the beginning and end of the list
A friend tells you the seven-digit passcode to enter her home. How long will this information remain in your working memory if you do not rehearse it?
· 1 minute
· √ 20–30 seconds
· 3 minutes
· 5–7 seconds
Psych 105: ZAPS Labs
Lab 1- Split Brain
· Common misconception by media: that people are right or left-brain dominant.
· Misconception started with concept of lateralization (diff parts of brain have different functions; L & R hemispheres have own specializations)
· Lateralization dates back to the Roman Empire, when the Greek physician/surgeon Galen of Pergamon noticed gladiators suffered language impairments when injured on the left side of the skull.
· Left hemisphere = dominant for language; Right Hemisphere = global spatial information
· 2 hemispheres connected by corpus callosum (large bundle of axons)
· nerve impulses travel from left brain to right brain through corpus callosum
· “Split-brain” operation patients= doctors sever corpus callosum to prevent severe epileptic attacks from spreading to both sides of brain.
· 2 important brain facts:
· The right hemisphere of the brain interprets information presented in the left visual field, and vice versa.
· The right hemisphere of the brain controls the left hand, and vice versa.
· Split brain patients:
· have difficulty recognizing/naming objects when info is presented in left visual field (which is processed by right hemisphere)
· RH has enough language capability to comprehend simple words, but doesn’t have “phonological” (speech) capabilities to allow the word for the object to be spoken.
· So while RH can read the word and direct left hand to pick up correct object, the patient can’t state the name of object.
· When word presented in right visual field, patient’s LH can easily process word (since language abilities reside in LH), direct patient say word out loud, and cause right hand to find object.
· BUT when there is a mismatch between the visual field that the word is presented to and the hand the patient is asked to use, the patient is unable to grasp the hidden object behind the screen, even if the word was processed in the LH and easily recognized.
· Gazzaniga (first demonstrated this in the 1960s)
1. The word banana appears on the left screen, and the split-brain patient is told to use her left hand to select the object named on the screen. Will she be able to fetch the banana? (Answer = Yes)
2. The word banana appears on the left screen, and the split-brain patient is told to use her right hand to select the object named on the screen. Will she be able to fetch the banana? (Answer: No)
3. The word banana appears on the right screen, and the split-brain patient is told to use her left hand to select the object named on the screen. Will she be able to fetch the banana? (Answer: No)
4. The word banana appears on the right screen, and the split-brain patient is told to use her right hand to select the object named on the screen. Will she be able to fetch the banana? (Answer: Yes)
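The four question-and-answer pairs above all follow one rule, which can be sketched as a tiny lookup (a didactic simplification, not real neurology):

```python
# Each visual field projects to the opposite hemisphere, and each hand is
# controlled by the opposite hemisphere. With a severed corpus callosum,
# the patient can fetch the object only when the hemisphere that saw the
# word also controls the requested hand.

OPPOSITE = {"left": "right", "right": "left"}

def can_fetch(visual_field, hand):
    seeing_hemisphere = OPPOSITE[visual_field]   # contralateral projection
    moving_hemisphere = OPPOSITE[hand]           # contralateral motor control
    return seeing_hemisphere == moving_hemisphere

# The four cases from the questions above:
print(can_fetch("left", "left"))    # word on left screen, left hand -> True
print(can_fetch("left", "right"))   # -> False
print(can_fetch("right", "left"))   # -> False
print(can_fetch("right", "right"))  # -> True
```

The function reproduces all four answers: matched field and hand succeed, mismatched ones fail.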
Lab 2: Ponzo Illusion
· Our brains perceive depth in 2D images through depth cues.
· Our brains automatically & instantly apply these 2D patterns to the same mechanisms used to figure out spatial relationships btwn objects in the real 3D world.
· Retina: all direct info you visually see
· When the image of something reaches the retina, its size is dependent on the distance of the object from the observer.
· Retinal image will appear smaller the farther the observer is from the object.
· Once the brain receives the sensory info from the retina and interprets it, you perceive that the objects would be the same size in 3D (even though the yellow car seems farther than the red car).
· Goal- to adjust length of bottom line to match perceived length of top line in context of the ponzo illusion and then examine numerical error percentage to see how “off” people are.
· Ponzo illusion (optical illusion)
· Demonstrates how brain relies on “depth perception” to guess properties of object it cannot directly sense.
· When asked to adjust the bottom line to match the top, most people overadjust (making the bottom longer than the top).
· This is bc in ponzo illusion, the oblique (slanted) vertical lines provide brain with distance cues by making image appear 3D.
· we perceive 2 oblique lines (parallel, converging into distance – like railroad tracks) thus making top appear further than bottom.
· This means: when given 2 horizontal lines of same length, brain will perceive top line as longer.
· Explanation: even though 2 equally long horizontal lines take up same space on retina, our visual system perceives the top line to be longer – bc if top line was actually further away, it really would be longer than a closer line that took up same space on retina.
· Size constancy – brain’s use of contextual info
· Objects closer take up more retina than objects further away. BUT we know that nearby cars and far cars = same size.
· Why? Brain uses other distance cues to override fact that closer cars take up more visual space.
· “Judgement error” in ponzo occurs bc you’re exposed to artificial situation that tricks brain into improperly applying a visual heuristic (a rule of thumb/ size constancy) that is usually reliable so brain automatically trusts it.
· Müller-Lyer Illusion:
· the vertical lines don’t appear to be of equal length. The V-shaped lines at the ends of the vertical lines provide contextual depth info for the visual system.
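Both illusions can be seen as instances of size-distance scaling, sketched below. This is a simplification of Emmert’s law, and the multiplier values are invented: perceived size grows with assumed distance even when the retinal extent is identical.

```python
def perceived_length(retinal_extent, assumed_distance):
    """Size-distance scaling: perceived size ~ retinal extent x assumed distance."""
    return retinal_extent * assumed_distance

# Both Ponzo lines subtend the same retinal extent, but the converging
# "railroad track" cues make the top line seem farther away.
bottom_line = perceived_length(retinal_extent=1.0, assumed_distance=1.0)
top_line = perceived_length(retinal_extent=1.0, assumed_distance=1.3)  # illusory depth
print(top_line > bottom_line)  # the top line is perceived as longer
```

The same arithmetic explains the judgment error: to make the lines *look* equal, people lengthen the bottom line until the products match, so the physical lines end up unequal.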
1. A psychological scientist asks study participants to indicate whether the green line at the front corner of the wall pictured here is longer than, shorter than, or equal in length to the green line at the back corner of the wall. Participants regularly say that the front line is shorter than the back line (even though the two are perfectly equal in length). The participants’ brains are using ___________ to guide perception.
a. (answer choices: rulers, context cues about depth, previous experience, geometric insights; answer = context cues about depth)
2. The process we call perception involves our eyes sensing objects in our world and our brain interpreting the information that our eyes sense. Keeping this definition in mind, which of the following is a true statement about the Ponzo illusion?
a. Answer choices: our eyes lead us to perceive something that is not actually true; our eyes mistakenly sense information about line length; the Ponzo illusion is due to errors in both sensation and perception; the brain leads us to perceive something that is not actually true. Answer = the brain leads us to perceive something that is not actually true.
3. A real-life visual illusion is the “moon illusion.” When the moon is close to the horizon, the full moon seems larger than it does when high up in the sky. Rationally we know that the moon does not change size according to its position in the sky. Based on what you have learned about optical illusions, why do you think the moon illusion occurs?
a. Answer choices: repeated exposure to movies that show large moons as backdrops in romantic scenes trains the brain to misperceive the size of the moon in real life; only children who haven’t had a lot of opportunity to observe and perceive the moon are likely susceptible to the moon illusion; when the moon is near the horizon, objects on Earth, which are near the moon on your retina, lead to a misperception of depth and size – the brain discerns the low moon as closer and thus bigger than the moon high in the sky. Answer: the last one.
Lab 3- Attentional Blink
· Every day we continuously process info from surroundings. We take in more info than we can process because our attentional capacities at given moment are limited.
· We ignore certain stimuli and attend to others.
· In the alphanumeric sequences, each stimulus shown for 90 ms before next one replaced it. There were 4 different types based on distance between letters.
· Observations: 1. Able to recognize first target letter. 2. Recognition of target letter was affected by how many numbers appear between the two target letters.
· Reeves & Sperling – we process info episodically.
· Process 1st target letter as part of an attentional “episode” followed by suppression of attention (attentional blink) for about 180-450ms.
· If % of correct id’s for 2nd letter was significantly less than for 1st letter, you experienced attentional blink
· Your attention was occupied in processing the 1st letter so anything appearing after that suffered in comparison.
· Summary: when you perceived and attended to the first letter, your brain took time to fully process it. If the second letter followed the first one immediately or soon after, the time it took to process the first letter likely spilled over into the second letter’s critical period of processing. The overlap in processing between the two letters caused a “perceptual blind spot”, and prevented you from attending to the second letter.
· Our attentional system = selective attention .
· Ability to focus our attention on one event/piece of info/ while blocking out background noise/irrelevant stimuli.
· Aka “cocktail party phenomenon”
· Suggests that we can ignore irrelevant stimuli but important stimuli like our name being called is likely to stand out amongst background noise.
· Attentional blink = manifestation of selective attention.
· When we focus on one event, it takes a few moments to fully process it and shift to a novel stimulus.
· Ability to process any info (especially sequences where occurrence of AB is inconsistent) is affected by factors: alertness and motivation to complete task.
· Reason for AB not understood but tendency to engage in selective attention allows us to maximize processing info we deem most relevant.
· Divided attention = when our attentional resources are allocated to at least 2 pieces of info at once.
· Ulric Neisser & Robert Becklen had subjects watch a double-exposure movie – 2 movies overlapped on same screen- to evaluate attentional capacities.
· 1 group (selective attention group) – tasks with detecting important events in only one of the videos;
· 2nd group (divided attention group) told to detect events in both.
· Results: the divided attention group missed a lot more info and made 8× as many detection errors as the selective attention group.
· Selective attention followed by attentional blink can cause even more errors in processing.
· Optimum strategy for allocating attention = flexibility (which permits selective or divided attention depending on what the task demands)
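With the 90 ms presentation rate from this lab and the roughly 180-450 ms blink window, you can work out which target-to-target lags should be hardest. The onset arithmetic below is a simplification (real attentional-blink data also show "lag-1 sparing," which this toy model captures only because 90 ms falls before the window opens):

```python
STIM_MS = 90            # each item shown for 90 ms (from the lab)
BLINK_MS = (180, 450)   # approximate attentional-blink window after target 1

def in_blink(lag_items):
    """Does the second target's onset fall inside the blink window?
    lag_items = how many positions after the first target it appears."""
    onset_ms = lag_items * STIM_MS   # T2 onset relative to T1 onset
    return BLINK_MS[0] <= onset_ms <= BLINK_MS[1]

hard_lags = [lag for lag in range(1, 9) if in_blink(lag)]
print(hard_lags)  # lags whose onsets land 180-450 ms after the first target
```

Under these assumed numbers, a second target appearing two to five items after the first lands inside the blink and should often be missed, while earlier or later targets should be reported more accurately.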
1. Studying for an important exam while watching tv is not very efficient bc it is an example of: (selective attention, divided attention, attentional blink, cocktail party phenomenon)
a. Answer: divided attention
2. Our attentional systems cope with the overwhelming amount of info that we encounter on a daily basis by engaging in selective attention, and by extension, attentional blink. (True)
3. Being able to drive a car while you carry on a convo with a friend in passenger seat is explained by: (answer: ability to automatize driving and dedicate the majority of your attention to the conversation)
4. Which of the following scenarios is an example of attentional blink?
a. Answer: a waiter is telling 2 friends about the daily specials when a neighboring table erupts in laughter. The friends ask the waiter to repeat the last few items because their attention was diverted by the sudden loud laughs.
Lab 4 – Serial Position Effect
· working memory—the processing system that keeps information available for current use
· can only maintain 5-9 pieces of information for 20-30 seconds
· serial position effect – form of memory bias in which our ability to recall a given item from a series depends on its relative position within that series.
· graph = serial position curve
· demonstrates how people tend to recall items from beginning and end of a list.
· Serial position effect involves 2 diff. relative positions within a list
· Primacy effect- better recall of items at beginning of list
· Recency effect: better recall at the end.
· Why is our recall for items in the middle of the list lower?
· Rundus and Atkinson (1970) = found our ability to mentally rehearse items from beginning of a list explains our enhanced recollection of them.
· We cannot rehearse items from middle of list as frequently as those at the beginning (hence our decreased recollection of items in middle)
· Rehearsing items over and over again leads to the storage of these items in long-term memory.
· Rundus and Atkinson concluded that the recency effect must take place by means of a different cognitive mechanism than that of the primacy effect.
· While the primacy effect takes place when items enter long-term memory through frequent rehearsal, the recency effect occurs because the items near the end of a list are still present in working memory at the time of recall.
· In this way, we can easily retrieve the last few items presented in a series.
· If there is a delay in time between the presentation of a list and the participants’ recall of that list, the recency effect disappears, and only the primacy effect remains (Bjork et al., 1974).
· This occurs because the items from the beginning of the list remain in long-term memory, whereas the items at the end of the list have since disappeared from working memory at the time of recall.
· People who can no longer form new long-term memories—a condition called anterograde amnesia—show the recency effect when there is no delay, but not the primacy effect (Carlesimo et al., 1996).
1. True or False: When you remember the first few items from a list better than items in the middle of the list, this effect takes place because the first items were still present in your working memory upon recall.
a. False – Better memory for the first few items in a list, or the primacy effect, takes place because you can rehearse these items and transfer them to your long-term memory.
2. Which of the following best describes the serial position curve for an experiment conducted with a delay between the presentation of a list and participants’ recall of that list?
a. High percentages of recall for the first few positions in the list and low recollection for all other items in the list.
b. High percentages of recall for the first few positions in the list, low recollection of the middle items of the list, and high recollection of items at the end of the list.
c. High percentages of recall in the middle of the list and low recollection of items at the beginning and end of the list.
i. Answer: a. When there is a delay between the presentation of a list and recollection of that list, we still observe the primacy effect, but we no longer observe the recency effect because items at the end of a list are no longer in working memory at the time of recall.
3. A friend tells you the seven-digit passcode to enter her home. How long will this information remain in your working memory if you do not rehearse it?
a. Answer: 20-30 seconds.
Lab 5 – False Memory
· Memory distortion: a collection of phenomena that demonstrate how our long-term memories are not always permanent
· Critical lure: a centrally related word was absent from the initial list. For example, the word sleep did not actually appear in the initial list of sleep-related words. However, that centrally related word, called the critical lure, was present in the list of words to choose from.
· False memories: memories are imperfect; can be forgotten or distorted.
· Researchers find that participants generally recall seeing the critical lure with just as high a frequency as words that actually appear on the original list.
· People report feeling very confident that they indeed saw the critical lure in the original list (Weber & Brewer, 2004).
· This is an important finding because it reminds us that the certainty with which someone states a claim cannot be used as a gauge for how truthful that claim is.
· Because our memory is limited, we have to store the things we want to remember in an efficient way.
· One way to do this is to use a schema—a cognitive structure that helps us perceive, organize, process, and use information.
· You may also have event schemas (also known as scripts), such as for a birthday party: This might consist of the guests entering, unwrapping gifts, singing a birthday song, and eating cake.
· Our schemas help us navigate the world efficiently; without schemas, our world would be a very overwhelming place.
· The words in this experiment were all meaningfully related to a relevant schema. The schema activation you experienced enabled you to create false memories by giving you enough information to make you believe that the lure word was also present.
· Three related types of memory distortion:
· 1. Suggestibility – the development of false memories from misleading information.
· 2. Memory bias – the changing of memories to fit current beliefs or attitudes.
· 3. Source misattribution – misremembering the time, place, people, or circumstances involved with a memory.
1. Which of the following is a key finding from research using memory tests like the one in this ZAPS lab?
a. People rarely feel confident in their assessments of whether lure words appeared on the original lists
b. Memory, although not perfect, is generally highly reliable
c. People believe that the lure word was present in the list just as frequently as other words that actually did appear in the list.
d. People very accurately remember the content of word lists.
i. Answer: c
2. A cross-country driver decides to eat at a local restaurant she has never heard of. She walks in and sees a counter that contains cash register machines; a menu hangs above the counter. Behind the counter, employees wearing headsets and paper hats hustle to and fro, pulling food from a service window and placing it onto trays. These images will most likely trigger a ________ that will lead the traveler to believe she should ________ .
a. Schema; wait by the door until a host shows her to her seat where a waiter will take her order
b. Schema; order at the counter and then seat herself
c. Semantic association; wait by the door until a host shows her to her seat where a waiter will take her order
d. Semantic association; order at the counter and then seat herself.
i. Answer: b
Lab 6 – Sentence Verification
· semantic category knowledge—general knowledge of facts, ideas, meanings and concepts.
· Based on reaction times in sentence verification tasks, Collins and Quillian hypothesized that our knowledge of animals is organized as a hierarchical network.
· The semantic distance between two words is determined by the connections in the network and is defined as the shortest interval between these two words
· For instance, the semantic distance between “canary” and “bird” is 1, while between “canary” and “animal” it is 2. Thus, the semantic distance in the sentence “A canary has skin” is 2.
· Research indicates that people take longer to make a decision about a correct sentence as the semantic distance increases.
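The "shortest interval" definition of semantic distance can be sketched as a shortest-path search over a small network. The toy network below is an illustrative assumption loosely based on the Collins and Quillian animal hierarchy, not the lab's actual stimuli:

```python
from collections import deque

# Toy fragment of a Collins & Quillian-style network (illustrative
# assumption): each concept lists the concepts it is directly linked to.
network = {
    "canary": ["bird"],
    "ostrich": ["bird"],
    "bird": ["canary", "ostrich", "animal"],
    "fish": ["animal"],
    "animal": ["bird", "fish"],
}

def semantic_distance(start, goal):
    """Shortest number of links between two concepts (breadth-first search)."""
    frontier = deque([(start, 0)])
    visited = {start}
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for neighbor in network[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append((neighbor, dist + 1))
    return None  # no path between the concepts
```

Under this sketch, `semantic_distance("canary", "bird")` is 1 and `semantic_distance("canary", "animal")` is 2, matching the examples above; longer distances predict longer verification times.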
1. The network has a hierarchical structure.
a. If an item is found under one concept, which is itself nested within another higher concept, the original item belongs to both higher-level concepts.
i. because the concept “canary” belongs to the class of “bird”, then “canary” also belongs to the higher class of “animal” because “bird” belongs to “animal”.
2. The principle of inheritance applies within the network.
a. This means that features are only stored once and as high as possible in the hierarchy.
i. For instance, “lay eggs” is not stored at “canary” but at “bird”, since all birds lay eggs. Likewise, “breathes” is stored at the level of “animal”, since all animals breathe (whether with lungs or through gills).
3. typicality effect – Researchers routinely find that people react more quickly to sentences like “A robin is a bird” than to sentences like “A chicken is a bird”. This is because a robin is a “better” example, or a more typical exemplar for the category “bird”, than a chicken.
a. The typicality effect points out the limitations of Collins and Quillian’s model, which does not effectively account for the strength of category membership.
b. Strength of category membership is not simply a function of the distance from concept to concept within the hierarchy.
4. spreading activation model (Collins & Loftus, 1975) – its advantage is that it explains not only the typicality effect, but also many other effects that have been revealed through sentence verification tasks.
a. features need not be duplicated in multiple hierarchies; “red” is associated with “apple” and “fire engines”, as well as “roses” and “sunsets”.
1. According to the network proposed by Collins and Quillian, common characteristics that different breeds of dogs share (e.g.: fur, tail, sharp hearing) will appear once and as high up as possible in the network. What is this concept called?
b. The principle of inheritance
c. Typicality effect
d. Spreading activation
i. Answer: b
2. True or False: During flu season, people are more likely to ask, “Do you have any Kleenex?” than “Do you have tissues?” This is best explained by the typicality effect.
3. The Collins and Quillian model proposes that categorical information is organized hierarchically. What is one important difference between this model and a spreading activation model?
a. The relationship between concepts in a spreading activation model can strengthen or weaken depending on typicality and frequency of occurrence.
b. The relationships between concepts in a spreading activation model are determined by the principle of inheritance
c. There are no differences between the Collins and Quillian model and a spreading activation model.
d. The relationship between concepts in the Collins and Quillian model can strengthen or weaken depending on typicality and frequency of occurrence.
i. Answer: a
Lab 7 – Lexical Decision
· All the words you know are stored in your mental lexicon
· Mental lexicons organize words by meaning.
· Morpheme – smallest unit of language that carries a meaning (e.g., the “sub” in suburb and subconscious)
· David Meyer and Roger Schvaneveldt (1971) proposed a serial-decision model which suggests that the physical layout of the letter strings on the screen makes you decide on them sequentially.
· You first decide whether the top letter string is a word, and then you move onto the bottom letter string.
· In order to judge whether a string of letters is a real word, you have to search your mental lexicon to see if it is stored there.
· Each word in your lexicon has a threshold that must be reached for the word to be recognized.
· Common words (high-frequency words) have low thresholds and fast response times.
· Infrequent words require more activation, have higher thresholds, and produce slower response times.
· Repetition priming: a word seen or heard recently is recognized faster the next time it appears.
· Semantically related words also get partial activation. = semantic priming
· Explained by theory of spreading activation (when a concept is activated, activation spreads from that concept to nearby concepts)
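The spreading activation idea above can be made concrete with a toy simulation. The association network, decay rate, and activation values below are all invented for illustration; real models estimate these from reaction-time data:

```python
# Minimal sketch of spreading activation for semantic priming.
# The associations and the decay rate are made-up illustrative values.
associations = {
    "doctor": ["nurse", "hospital"],
    "nurse": ["doctor", "hospital"],
    "hospital": ["doctor", "nurse"],
    "bread": ["butter"],
    "butter": ["bread"],
}

def activate(prime, decay=0.5, depth=2):
    """Spread activation outward from a prime word, weakening at each step."""
    levels = {prime: 1.0}
    frontier = [prime]
    for step in range(depth):
        spread = decay ** (step + 1)  # activation weakens with distance
        next_frontier = []
        for word in frontier:
            for neighbor in associations.get(word, []):
                if neighbor not in levels:
                    levels[neighbor] = spread
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return levels

levels = activate("doctor")
# "nurse" receives partial activation from the prime "doctor", while
# "bread" receives none, so a lexical decision on "nurse" should be
# faster after seeing "doctor" (semantic priming).
```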
1. Research conducted on homographs (e.g., bat or minute) in lexical decision tasks predicts that you are likely to ________.
a. Activate multiple meanings of the words, and partially activate closely related words
b. Take longer to understand the word because it has an ambiguous meaning
c. Activate one meaning of the word, which is not necessarily decided by the context.
d. Access only the meaning of the word that is appropriate in the given context
i. Answer: a
2. True or False: Words are most likely stored in the mental lexicon by semantics, so that table would be closer to chair than it would be to tape.
a. Answer: true
3. Hearing or reading a word frequently increases the ease and speed of its recognition because of this phenomenon in lexical decision.
a. Frequency priming
b. Repetition priming
c. Semantic priming
d. Serial priming
i. Answer: b
Lab 8 – Analogical Representation
· Mental representation: a mental “copy” of some phenomenon present in the world
· Key components that form the basis of human thought
· Analogical representation: a mental image of an object that possesses certain physical attributes matching the actual physical attributes of the real-life object.
· Cooper (1975) asked participants to determine whether two nonsensical figures were the same or mirror images of each other.
· They showed participants both simple and complex nonsensical figures, and compared the reaction times of both types of trials.
· Found that there was no significant difference in their responses to simple versus complex nonsensical figures.
· Findings suggest that reaction time depends upon the relative angle of rotation between the two figures and not their complexity.
· Finke (1989) discovered that when we rotate an object mentally, it occurs much in the same way as when we rotate an object in real, physical space. He stated that visual imagery has transformational equivalence with rotating items in real space.
· This means that if it takes us longer to rotate an object 200 degrees than it does to rotate it 100 degrees in real space, it will also take us longer to rotate a mental representation of the same object 200 degrees versus 100 degrees.
· Symbolic representation: does not share physical qualities with the concept it represents because it is abstract in nature.
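Finke's transformational-equivalence claim amounts to mental rotation time growing linearly with the angle rotated, just as physical rotation does. A minimal sketch (the milliseconds-per-degree rate is a made-up constant for illustration, not a value from the lab):

```python
# Sketch of transformational equivalence: rotation time is proportional
# to the angle rotated, for mental images just as for physical objects.
MS_PER_DEGREE = 2.5  # hypothetical rate constant, for illustration only

def rotation_time(angle_degrees):
    """Predicted rotation time in milliseconds under a linear model."""
    return angle_degrees * MS_PER_DEGREE

# Under this model, rotating 200 degrees takes twice as long as rotating
# 100 degrees, and rotating 240 degrees takes longer than 40 degrees.
```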
1. Picture your childhood bedroom. The mental image you just created is an example of what type of mental representation?
a. Pictorial representation
b. Symbolic representation
c. Analogical representation
i. Answer: c
2. When rotating a glass pyramid 240 degrees with our hands, it takes a longer amount of time than it does to rotate it 40 degrees. According to Finke’s concept of transformational equivalence, it will take us _____________ amount of time to rotate a mental representation of a pyramid 240 degrees than it would to mentally rotate it 40 degrees.
a. A longer
b. A shorter
c. The same
i. Answer: a
3. True or False: It takes people longer to mentally rotate complex nonsensical figures than it does for them to rotate simple nonsensical figures.
i. Answer: false – As Cooper (1975) found, the relative difference in the angle between two figures determined the amount of time it took for participants to mentally rotate them. The complexity of the figures did not affect mental rotation time.
Lab 9 – Decision Making
· Decision making: the selecting of the best alternative from among several options
· Expected utility theory: the long-dominant theory of decision making; it holds that people calculate the “expected utility,” or value, of the possible choices for any decision they need to make and choose the option that maximizes the desired outcome
· Expected utility theory predicts what individuals would do if they were entirely rational; prospect theory describes what they actually do
· Prospect theory: people evaluate psychological prospects of the choices, rather than the objective expected values.
· Weigh the fear of losing $100 against the hope of gaining $125
· Loss aversion: people are generally more eager to avoid losses than to acquire equivalent gains
· Framing: the way a choice is worded (framed) changes the psychology of how people perceive the choice, and prospect theory says it is the value of the choices’ prospects that determine what people do, as opposed to the objective expected values.
· Framing effect example: whether a choice is framed in terms of gains or losses. People prefer lotteries, which frame the prospect in terms of potential gains, to straight-up bets, which frame the prospect in terms of how much one might lose.
· Anchoring: when making their estimates, people start from the “anchor”
· Causes one’s judgment to be affected by the results of a previous estimate, even though both judgments were intended to be made independently.
· When making decisions under conditions of uncertainty, we automatically apply heuristics (rules of thumb) to try to reduce the uncertainty.
· Representativeness heuristic: the description of someone is representative of a stereotype/category so our intuitive heuristic tells us it seems plausible.
· Directs us to choose the most plausible conclusion when we are unsure of which conclusion is most probable.
· Availability: go with what comes to mind more readily. Works sometimes but not all the time
· Tells us to replace a question we cannot realistically answer with a similar question whose answer is more readily available.
1. Consider the following gamble. A standard pack of playing cards, including 26 red cards (diamonds and hearts) and 26 black cards (spades and clubs), is shuffled, and the card on top of the deck is turned face up.
· If the face up card is red, the gambler wins $11
· If the face up card is black the gambler loses $9
Expected value theory predicts which:
a) Most people will accept the gamble
b) Most people will reject the gamble
c) People will be equally likely to accept or reject the gamble
I. Answer: a.
2. Consider the same gamble. Prospect theory predicts which?
a. Most people will accept the gamble
b. Most people will reject the gamble
c. People will be equally likely to accept or reject the gamble
I. Answer: b
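The contrast between the two predictions can be checked with a quick calculation. Treating prospect theory as simple loss weighting is a deliberate simplification for this sketch; the loss-aversion coefficient of about 2.25 is Tversky and Kahneman's (1992) estimate and is used here only for illustration:

```python
# Expected value of the card gamble versus a loss-averse evaluation.
P_RED = 26 / 52  # probability the face-up card is red
WIN = 11.0       # dollars won if the card is red
LOSS = 9.0       # dollars lost if the card is black
LAMBDA = 2.25    # loss-aversion coefficient (Tversky & Kahneman, 1992)

# Expected value theory: weigh outcomes by their objective probabilities.
expected_value = P_RED * WIN - (1 - P_RED) * LOSS
# The gamble is worth +$1.00 per play, so a rational agent accepts it.

# Simplified prospect-theoretic value: the loss looms 2.25x larger
# psychologically, so the same gamble now looks like a bad deal.
prospect_value = P_RED * WIN - (1 - P_RED) * LAMBDA * LOSS
```

This is why expected value theory predicts acceptance (answer a in question 1) while prospect theory predicts rejection (answer b in question 2).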
3. In a 1997 experiment, participants were first asked whether the Indian leader Mahatma Gandhi died before or after a certain age, then were asked to guess the precise age at which Gandhi died. People who were first asked whether or not Gandhi died at age 9 gave an estimate (50 years) much lower on average than those who were first asked whether or not he died at age 140 (67 years). This experiment is a perfect example of which of the following?
a. The anchoring effect
b. Availability heuristic
c. Loss aversion
d. Representativeness heuristic
I. Answer: a
Lab 10 – Sudden Insight
· Problem solving: using info available to achieve a goal
· Gestalt psychology = major focus was problem solving with insight
· Particularly interested in how cognitively restructuring a problem (seeing it in a new way) often leads to its solution
· Another way to successfully problem solve is by dividing task into subgoals.
· Mental set: the common tendency to persist with a tried-and-true strategy
· Can be useful, but can sometimes hinder us from finding a solution
1. Which of these would a follower of Gestalt psychology be concerned about the least?
a. Seeing problems in a new way
b. Whether a problem is trivial or non-trivial
c. Insight problems
d. The cognitive restructuring of problems
2. True or False. We are always conscious of how we arrive at insight problems
3. Psychologists who study problem solving are interested in puzzles and games because:
a. Psychology is hard work, so it’s important to relax and play games every so often
b. The big companies that make puzzles and games pay for all this research
c. How people try to achieve goals in puzzles and games is cognitively significant
d. Sudden insight only occurs when people are engaged in solving puzzles/games
Remembering Complex Events
Memory Errors, Memory Gaps
Where did you spend last summer? What country did you grow up in? Where were you five minutes ago? These are easy questions, and you effortlessly retrieve this information from memory the moment you need it. If we want to understand how memory functions, therefore, we need to understand how you locate these bits of information (and thousands of others just like them) so readily.
But we also need to account for some other observations. Sometimes, when you try to remember an episode, you draw a blank. On other occasions, you recall something, but with no certainty that you’re correct: “I think her nickname was Dink, but I’m not sure.” And sometimes, when you do recall a past episode, it turns out that your memory is mistaken. Perhaps a few details of the event were different from the way you recall them. Or perhaps your memory is completely wrong, misrepresenting large elements of the original episode. Worse, in some cases you can remember entire events that never happened at all! In this chapter, we’ll consider how, and how often, these errors arise. Let’s start with some examples.
Memory Errors: Some Initial Examples
In 1992, an El Al cargo plane lost power in two of its engines just after taking off from Amsterdam’s Schiphol Airport. The pilot attempted to return the plane to the airport but couldn’t make it. A few minutes later, the plane crashed into an 11-story apartment building in Amsterdam’s Bijlmermeer neighborhood. The building collapsed and burst into flames; 43 people were killed, including the plane’s entire crew.
Ten months later, researchers questioned 193 Dutch people about the crash, asking them in particular, “Did you see the television film of the moment the plane hit the apartment building?” More than half of the participants (107 of them) reported seeing the film, even though there was no such film. No camera had recorded the crash; no film (or any reenactment) was shown on television. The participants seemed to be remembering something that never took place (Crombag, Wagenaar, & van Koppen, 1996).
In a follow-up study, investigators surveyed another 93 people about the plane crash. These people were also asked whether they’d seen the (nonexistent) TV film, and then they were asked detailed questions about exactly what they had seen in the film: Was the plane burning when it crashed, or did it catch fire a moment later? In the film, did the plane come down vertically with no forward speed, or did it hit the building while still moving horizontally at a considerable speed? Two thirds of these participants reported seeing the film, and most of them were able to provide details about what they had seen. When asked about the plane’s speed, for example, only 23% said that they couldn’t remember. The others gave various responses, presumably based on their “memory” of the (nonexistent) film.
Other studies have produced similar results. There was no video footage of the car crash in which Princess Diana was killed, but 44% of the British participants in one study recalled seeing the footage (Ost, Vrij, Costall, & Bull, 2002). More than a third of the participants questioned about a nightclub bombing in Bali recalled seeing a (nonexistent) video, and nearly all these participants reported details about what they’d seen in the video (Wilson & French, 2006).
It turns out that more persistent questioning can lead some of these people to admit they actually don’t remember seeing the video. Even with persistent questioning, though, many participants continue to insist that they did see the video-and they offer additional information about exactly what they saw in the film (e.g., Patihis & Loftus, 2015; Smeets et al., 2006). Also, let’s emphasize that in all these studies, participants are thinking back to an emotional and much-discussed event; the researchers aren’t asking them to recall a minor occurrence.
Is memory more accurate when the questions come after a shorter delay? In a study by Brewer and Treyens (1981), participants were asked to wait briefly in the experimenter’s office prior to the procedure’s start. After 35 seconds, participants were taken out of this office and told that there actually was no experimental procedure. Instead, the study was concerned with their memory for the room in which they’d just been sitting. Participants’ descriptions of the office were powerfully influenced by their prior beliefs. Surely, most participants would expect an academic office to contain shelves filled with books. In this particular office, though, there were no books in view (see Figure 8.1). Even so, almost one third of the participants (9 of 30) reported seeing books in the office. Their recall, in other words, was governed by their expectations, not by reality. How could this happen? How could so many Dutch participants be wrong in their recall of the plane crash? How could intelligent, alert college students fail to remember what they’d seen in an office just moments earlier?
Memory Errors: A Hypothesis
In Chapters 6 and 7, we emphasized the importance of memory connections that link each bit of knowledge in your memory to other bits. Sometimes these connections tie together similar episodes, so that a trip to the beach ends up connected in memory to your recollection of other trips. Sometimes the connections tie an episode to certain ideas-ideas, perhaps, that were part of your understanding of the episode, or ideas that were triggered by some element within the episode.
It’s not just separate episodes and ideas that are linked in this way. Even for a single episode, the elements of the episode are stored separately from one another and are linked by connections. In fact, the storage is “modality-specific,” with the bits representing what you saw stored in brain areas devoted to visual processing, the bits representing what you heard stored in brain areas specialized for auditory processing, and so on (e.g., Nyberg, Habib, McIntosh, & Tulving, 2000; Wheeler, Peterson, & Buckner, 2000; also see Chapter 7, Figure 7.4, p. 245).
With all these connections in place-element to element, episode to episode, episode to related ideas-information ends up stored in memory in a system that resembles a vast spider web, with each bit of information connected by many threads to other bits elsewhere in the web. This was the idea that in Chapter 7 we described as a huge network of interconnected nodes. However, within this network there are no boundaries keeping the elements of one episode separate from elements of other episodes. The episodes, in other words, aren’t stored in separate “files,” each distinct from the others. What is it, therefore, that holds together the various bits within each episode? To a large extent, it’s simply the density of connections. There are many connections linking the various aspects of your “trip to the beach” to one another; there are fewer connections linking this event to other events.
As we’ve discussed, these connections play a crucial role in memory retrieval. Imagine that you’re trying to recall the restaurant you ate at during your beach trip. You’ll start by activating nodes in memory that represent some aspect of the trip-perhaps your memory of the rainy weather. Activation will then flow outward from there, through the connections you’ve established, and this will energize nodes representing other aspects of the trip. The flow of activation can then continue from there, eventually reaching the nodes you seek. In this way, the connections serve as retrieval paths, guiding your search through memory.
Obviously, then, memory connections are a good thing; without them, you might never locate the information you’re seeking. But the connections can also create problems. As you add more and more links between the bits of this episode and the bits of that episode, you’re gradually knitting these two episodes together. As a result, you may lose track of the “boundary” between the episodes. More precisely, you’re likely to lose track of which bits of information were contained within which event. In this way, you become vulnerable to what we might think of as “transplant” errors, in which a bit of information encountered in one context is transplanted into another context. In the same way, as your memory for an episode becomes more and more interwoven with other thoughts you’ve had about the event, it will become difficult to keep track of which elements were actually part of the episode itself, and which are linked to the episode merely because they were associated with the episode in your thoughts. This, too, can produce transplant errors, in which elements that were part of your thinking get misremembered as if they were actually part of the original experience.
Understanding Both Helps and Hurts Memory
It seems, then, that memory connections both help and hurt recollection. They help because the connections, serving as retrieval paths, enable you to locate information in memory. But connections can hurt because they sometimes make it difficult to see where the remembered episode stops and other, related knowledge begins. As a result, the connections encourage intrusion errors-errors in which other knowledge intrudes into the remembered event.
To see how these points play out, consider an early study by Owens, Bower, and Black (1979). In this study, half of the participants read the following passage:
Nancy arrived at the cocktail party. She looked around the room to see who was there. She went to talk with her professor. She felt she had to talk to him but was a little nervous about just what to say. A group of people started to play charades. Nancy went over and had some refreshments. The hors d’oeuvres were good, but she wasn’t interested in talking to the rest of the people at the party. After a while she decided she’d had enough and left the party.
Other participants read the same passage, but with a prologue that set the stage:
Nancy woke up feeling sick again, and she wondered if she really was pregnant. How would she tell the professor she had been seeing? And the money was another problem.
All participants were then given a recall test in which they were asked to remember the sentences as exactly as they could. Table 8.1 shows the results-the participants who had read the prologue (the Theme condition) recalled much more of the original story (i.e., they remembered the propositions actually contained within the story). This is what we should expect, based on the claims made in Chapter 6: The prologue provided a meaningful context for the remainder of the story, and this helped understanding. Understanding, in turn, promoted recall.
At the same time, the story’s prologue also led participants to include elements in their recall that weren’t mentioned in the original episode. In fact, participants who had seen the prologue made four times as many intrusion errors as did participants who hadn’t seen the prologue. For example, they might include in their recall something like “The professor had gotten Nancy pregnant.” This idea isn’t part of the story but is certainly implied, so it will probably be part of participants’ understanding of the story. It’s then this understanding (including the imported element) that is remembered.
The DRM Procedure
Similar effects, with memory connections both helping and hurting memory, can be demonstrated with simple word lists. For example, in many experiments, participants have been presented with lists like this one: “bed, rest, awake, tired, dream, wake, snooze, blanket, doze, slumber, snore, nap, peace, yawn, drowsy.” Immediately after hearing this list, participants are asked to recall as many of the words as they can.
As you surely noticed, the words in this list are all associated with sleep, and the presence of this theme helps memory: The list words are easy to remember. It turns out, though, that the word “sleep” is not itself included in the list. Nonetheless, research participants spontaneously make the connection between the list words and this associated word, and this connection almost always leads to a memory error. When the time comes for recall, participants are extremely likely to recall that they heard “sleep.” In fact, they’re just as likely to recall “sleep” as they are to recall the actual words on the list (see Figure 8.2). When asked how confident they are in their memories, participants are just as confident in their (false) recall of “sleep” as they are in their (correct) memory of genuine list words (Gallo, 2010; for earlier and classic papers in this arena, see Deese, 1957; Roediger & McDermott, 1995, 2000). This experiment (and many others like it) uses the DRM procedure, a bit of terminology that honors the investigators who developed it (James Deese, Henry Roediger III, and Kathleen McDermott). The procedure yields many errors even if participants are put on their guard before the procedure begins-that is, told about the nature of the lists and the frequency with which they produce errors (Gallo, Roberts, & Seamon, 1997; McDermott & Roediger, 1998). Apparently, the mechanisms leading to these errors are so automatic that people can’t inhibit them.
Schematic Knowledge
Imagine that you go to a restaurant with a friend. This setting is familiar for you, and you have some commonsense knowledge about what normally happens here. You’ll be seated; someone will bring menus; you’ll order, then eat; eventually, you’ll pay and leave. Knowledge like this is often referred to with the Greek word schema (plural: schemata).
Schemata summarize the broad pattern of what’s normal in a situation-and so your kitchen schema tells you that a kitchen is likely to have a stove but no piano; your dentist’s office schema tells you that there are likely to be magazines in the waiting room, that you’ll probably get a new toothbrush when you leave, and so on.
· Schemata help you in many ways. In a restaurant, for example, you’re not puzzled when someone keeps filling your water glass or when someone else drops by to ask, “How is everything?” Your schema tells you that these are normal occurrences in a restaurant, and you instantly understand how they fit into the broader framework. Schemata also help when the time comes to recall how an event unfolded. This is because there are often gaps in your recollection-either because you didn’t notice certain things in the first place, or because you’ve gradually forgotten some aspects of the experience. (We’ll say more about forgetting later in the chapter.) In either case, you can rely on your schemata to fill in these gaps. So, in thinking back to your dinner at Chez Pierre, you might not remember anything about the menus. Nonetheless, you can be reasonably sure that there were menus and that they were given to you early on and taken away after you placed your order. On this basis, you’re likely to include menus within your “recall” of the dinner, even if you have no memory of seeing the menus for this particular meal. In other words, you’ll supplement what you actually remember with a plausible reconstruction based on your schematic knowledge. And in most cases this after-the-fact reconstruction will be correct, since schemata do, after all, describe what happens most of the time.
Evidence for Schematic Knowledge
Clearly, then, schematic knowledge helps you, by guiding your understanding and enabling you to reconstruct things you can’t remember. But schematic knowledge can sometimes hurt you, by promoting errors in perception and memory. Moreover, the types of errors produced by schemata are quite predictable. As an example, imagine that you visit a dentist’s office, and this one happens not to have any magazines in the waiting room. It’s likely that you’ll forget this detail after a while, so what will happen when you later try to recall your trip to the dentist?
Odds are good that you’ll rely on schematic knowledge and “remember” that there were magazines (since, after all, there usually are some scattered around a waiting room). In this way, your recollection will make this dentist’s office seem more typical, more ordinary, than it actually was. Here’s the same point in more general terms. We’ve already said that schemata tell you what’s typical in a setting. Therefore, if you rely on schematic knowledge to fill gaps in your recollection, you’ll fill those gaps with what’s normally in place in that sort of situation. As a result, any reliance on schemata will make the world seem more “normal” than it really is and will make the past seem more “regular” than it actually was.
This tendency toward “regularizing” the past has been documented in many settings. The classic demonstration, however, comes from studies published long ago by British psychologist Frederick Bartlett. Bartlett presented his participants with a story taken from the folklore of Native Americans (Bartlett, 1932). When tested later, the participants did reasonably well in recalling the gist of the story, but they made many errors in recalling the particulars. The pattern of errors, though, was quite systematic: The details omitted tended to be ones that made little sense to Bartlett’s British participants. Likewise, aspects of the story that were unfamiliar were often changed into aspects that were more familiar; steps of the story that seemed inexplicable were supplemented to make the story seem more logical.
Overall, then, the participants’ memories seem to have “cleaned up” the story they had read-making it more coherent (from their perspective), more sensible. This is exactly what we would expect if the memory errors derived from the participants’ attempts to understand the story and, with that, their efforts toward fitting the story into a schematic frame. Elements that fit within the frame remained in their memories (or could be reconstructed later). Elements that didn’t fit dropped out of memory or were changed. In the same spirit, consider the Brewer and Treyens study mentioned at the start of this chapter-the study in which participants remembered seeing shelves full of books, even though there were none. This error was produced by schematic knowledge. During the event itself (while the participants were sitting in the office), schematic knowledge told the participants that academic offices usually contain many books, and this knowledge biased what the participants paid attention to. (If you’re already certain that the shelves contain books, why should you spend time looking at the shelves? This would only confirm something you already know-see Vo & Henderson, 2009.) Then, when the time came to recall the office, participants used their schema to reconstruct what the office must have contained-a desk, a chair, and of course lots of books. In this way, the memory for the actual office was eclipsed by generic knowledge about what a “normal” academic office contains.
Likewise, think back to the misremembered plane crash and the related studies of people remembering videos of other prominent events, even though there were no videos of these events. Here, too, the memory errors distort reality by making the past seem more regular, more typical, than it really was. After all, people often hear about major news events via a television broadcast or Internet coverage, and these reports usually include vivid video footage. So here, too, the past as remembered seems to have been assimilated into the pattern of the ordinary. The event as it unfolded was unusual, but the event as remembered becomes typical of its kind-just as we would expect if understanding and remembering were guided by our knowledge of the way things generally unfold.

Demonstration 8.1: Associations and Memory Error

This is a test of immediate memory. Read List 1; then close the list and try to write down, from memory, as many words as you can remember from the list. Then expand the list to read List 2, close the list, and try to write down as many of its words as you can remember. Then do the same for List 3. When you’re all done, read the material that follows.
List 1 List 2 List 3
Door Nose Sour
Glass Breathe Candy
Pane Sniff Sugar
Shade Aroma Bitter
Ledge Hear Good
Sill See Taste
House Nostril Tooth
Open Whiff Nice
Curtain Scent Honey
Frame Reek Soda
View Stench Chocolate
Breeze Fragrance Heart
Sash Perfume Cake
Screen Salts Tart
Shutter Rose Pie

Don’t read beyond this point until you’ve tried to recall each of the three lists!
Each of these lists is organized around a theme, but the word that best captures that theme is not included in the list. All of the words in List 1, for example, are strongly associated with the word “window,” but that word is not in the list. All of the words in List 2 are strongly associated with “smell,” and all in List 3 are strongly associated with “sweet”; but again, these theme words are not in the lists. In your recall of the lists, did you include “window” in List 1? “Smell” in List 2? “Sweet” in List 3?
This procedure, described in the chapter, is called the DRM procedure, in honor of the researchers who developed this paradigm (Deese, Roediger, and McDermott). In this situation, often as many as half of the people tested do make these specific errors-and with considerable confidence. Of course, the theme words are associated with the list in your memory, and it’s this association that leads many people into a memory error.
Perhaps you read through the material after reading the text’s description of the DRM procedure. Did you make the expected error anyway? Research suggests that these errors appear even when research participants are warned about the DRM pattern, just as you were. Did you show that pattern? Or did you manage to avoid the errors?
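The scoring logic of a DRM-style recall test can be sketched in a few lines of code. This is only an illustration: the word lists come from the demonstration above, but the scoring function and the sample “recalled” words are invented for the example, not part of any published procedure.

```python
# Hypothetical scoring for a DRM-style recall test. For each list, we
# count correctly recalled (studied) words and check whether the
# critical lure - the unpresented theme word - intruded into recall,
# which would count as a false memory.

LISTS = {
    "window": ["door", "glass", "pane", "shade", "ledge", "sill", "house",
               "open", "curtain", "frame", "view", "breeze", "sash",
               "screen", "shutter"],
    "smell": ["nose", "breathe", "sniff", "aroma", "hear", "see", "nostril",
              "whiff", "scent", "reek", "stench", "fragrance", "perfume",
              "salts", "rose"],
    "sweet": ["sour", "candy", "sugar", "bitter", "good", "taste", "tooth",
              "nice", "honey", "soda", "chocolate", "heart", "cake",
              "tart", "pie"],
}

def score_recall(lure, recalled):
    """Return (number of correct recalls, whether the unpresented lure intruded)."""
    studied = set(LISTS[lure])
    recalled = [word.lower() for word in recalled]
    hits = sum(1 for word in recalled if word in studied)
    false_memory = lure in recalled  # the lure was never on the list
    return hits, false_memory

# An invented participant who recalls three studied words from List 1
# plus the lure "window" - a classic DRM intrusion:
hits, intrusion = score_recall("window", ["door", "glass", "window", "curtain"])
```

Note that the lure counts as an intrusion precisely because it never appeared on the studied list; the strong associations that make each list feel coherent are what pull the theme word into recall.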
Demonstration adapted from McDermott, K., & Roediger, H. (1998). False recognition of associates can be resistant to an explicit warning to subjects and an immediate recognition probe. Journal of Memory and Language, 39, 508-520. See also Roediger, H., & McDermott, K. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory and Cognition, 21(4), 803-814.

The Cost of Memory Errors

There’s clearly a “good news, bad news” quality to our discussion so far. On the positive side, memory connections serve as retrieval paths, allowing you to locate information in storage. The connections also enrich your understanding, because they tie each of your memories into a context provided by other things you know. In addition, links to schematic knowledge enable you to supplement your perception and recollection with well-informed (and usually accurate) inference.
On the negative side, though, the same connections can undermine memory accuracy, and memory errors are troubling. As we’ve discussed in other contexts, you rely on memory in many aspects of life, and it’s unsettling that the memories you rely on may be wrong-misrepresenting how the past unfolded.
Eyewitness Errors

In fact, we can easily find circumstances in which memory errors are large in scale (not just concerned with minor details in the episode) and deeply consequential. For example, errors in eyewitness testimony (e.g., identifying the wrong person as the culprit or misreporting how an event unfolded) can potentially send an innocent person to jail and allow a guilty person to go free.
How often do eyewitnesses make mistakes? One answer comes from U.S. court cases in which DNA evidence, not available at the time of the trial, shows that the courts had convicted people who were, in truth, not guilty. There are now more than 350 of these exonerations, and the exonerees had (on average) spent more than a dozen years in jail for crimes they didn’t commit. Many of them were on death row, awaiting execution. When closely examined, these cases yield a clear message. Some of these men and women were convicted because of dishonest informants; some because analyses of forensic evidence had been botched. But by far the most common concern is eyewitness errors. In fact, according to most analyses, eyewitness errors account for at least three quarters of these false convictions-more than all other causes combined (e.g., Garrett, 2011; Reisberg, 2014).
Cases like these make it plain that memory errors, including misidentifications, are profoundly important. We’re therefore led to ask: Are there ways to avoid these errors? Or are there ways to detect the errors, so that we can decide which memories are correct and which ones are not?

Planting False Memories

An enormous number of studies have examined eyewitness memory-the sort of memory that police rely on when investigating crimes. In one of the earliest procedures, Loftus and Palmer (1974) showed participants a series of pictures depicting an automobile collision. Later, participants were asked questions about the collision, but the questions were phrased in different ways for different groups. Some participants were asked, for example, “How fast were the cars going when they hit each other?” A different group was asked, “How fast were the cars going when they smashed into each other?” The differences among these questions were slight, but had a substantial influence: Participants in the “hit” group estimated the speed to have been 34 miles per hour; those in the “smashed” group estimated 41 miles per hour-20% higher (see Figure 8.3). But what is critical comes next: One week later, the participants were asked in a perfectly neutral way whether they had seen any broken glass in the pictures. Participants who had initially been asked the “hit” question tended to remember (correctly) that no glass was visible; participants who had been asked the “smashed” question, though, often remembered (incorrectly) that glass was present. It seems, therefore, that the change of just one word within the initial question can have a significant effect-in this case, more than doubling the likelihood of memory error.
In other studies, participants have been asked questions that contain overt misinformation about an event. For example, they might be asked, “How fast was the car going when it raced by the barn?” when, in truth, no barn was in view. In still other studies, participants are exposed to descriptions of the target event allegedly written by “other witnesses.” They might be told, for example, “Here’s how someone else recalled the crime; does this match what you recall?” Of course, the “other witness” descriptions contained some misinformation, enabling researchers to determine if participants “pick up” the false leads (e.g., Paterson & Kemp, 2006; also Edelson, Sharon, Dolan, & Dudai, 2011). In other studies, researchers ask questions that require the participants themselves to make up some bit of misinformation. For example, participants could be asked, “In the video, was the man bleeding from his knee or from his elbow after the fall?” Even though it was clear in the video that the man wasn’t bleeding at all, participants are forced to choose one of the two options (e.g., Chrobak & Zaragoza, 2008; Zaragoza, Payment, Ackil, Drivdahl, & Beck, 2001). These procedures differ in important ways, but they are all variations on the same theme. In each case, the participant experiences an event and then is exposed to a misleading suggestion about how the event unfolded. Then some time is allowed to pass. At the end of this interval, the participant’s memory is tested. And in each of these variations, the outcome is the same: A substantial number of participants-in some studies, more than one third-end up incorporating the false suggestion into their memory of the original event.
Of course, some attempts at manipulating memory are more successful, some less so. It’s easier, for example, to plant plausible memories rather than implausible ones. (However, memories for implausible events can also be planted-see Hyman, 2000; Mazzoni, Loftus, & Kirsch, 2001; Pezdek, Blandon-Gitlin, & Gabbay, 2006; Scoboria, Mazzoni, Kirsch, & Jimenez, 2006; Thomas & Loftus, 2002.) Errors are also more likely if the post-event information supplements what the person remembers, in comparison to contradicting what the person would otherwise remember. It’s apparently easier, therefore, to “add to” a memory than it is to “replace” a memory (Chrobak & Zaragoza, 2013). False memories are also more easily planted if the research participants don’t just hear about the false event but, instead, are urged to imagine how the suggested event unfolded. In one study, participants were given a list of possible childhood events (going to the emergency room late at night; winning a stuffed animal at a carnival; getting in trouble for calling 911) and were asked to “picture each event as clearly and completely” as they could. This simple exercise was enough to increase participants’ confidence that the event had really occurred (Garry, Manning, Loftus, & Sherman, 1996; also Mazzoni & Memon, 2003; Sharman & Barnier, 2008; Shidlovski, Schul, & Mayo, 2014). Even acknowledging these variations, though, let’s emphasize the consistency of the findings. We can use subtle procedures (with slightly leading questions) to plant false information in someone’s memory, or we can use a more blatant procedure (demanding that the person make up the bogus facts). We can use pictures, movies, or live events as the to-be-remembered materials. In all cases it’s remarkably easy to alter someone’s memory, with the result that the past as the person remembers it can differ markedly from the past as it really was.
This is a widespread pattern, with numerous implications for how we think about the past and how we think about our reliance on our own memories. (For more on research in this domain, see Carpenter & Schacter, 2017; Cochran, Greenspan, Bogart, & Loftus, 2016; Frenda, Nichols, & Loftus, 2011; Laney, 2012; Loftus, 2017; Rich & Zaragoza, 2016. For research documenting similar memory errors in children, see, e.g., Bruck & Ceci, 1999, 2009; Reisberg, 2014.)
Are There Limits on the Misinformation Effect?

The studies just described reflect the misinformation effect-a term referring to memory errors that result from misinformation received after an event was experienced. What sorts of memory errors can be planted in this way?
We’ve mentioned studies in which participants remember broken glass when really there was none or remember a barn when there was no barn in view. Similar procedures have altered how people are remembered-and so, with just a few “suggestions” from the experimenter, participants remember clean-shaven men as bearded, young people as old, and fat people as thin (e.g., Christiaansen, Sweeney, & Ochalek, 1983; Frenda et al., 2011). It’s remarkably easy to produce these errors-with just one word (“hit” vs. “smashed”) being enough to alter an individual’s recollection. What happens, though, if we ramp up our efforts to plant false memories? Can we create larger-scale errors? In one study, college students were told that the investigators were trying to learn how different people remember the same experience. The students were then given a list of events that (they were told) had been reported by their parents; the students were asked to recall these events as well as they could, so that the investigators could compare the students’ recall with their parents’ (Hyman, Husband, & Billings, 1995).
Some of the events on the list actually had been reported by the participants’ parents. Other events were bogus-made up by the experimenters. One of the bogus events was an overnight hospitalization for a high fever; in a different experiment, the bogus event was attending a wedding reception and accidentally spilling a bowlful of punch on the bride’s family.
The college students were easily able to remember the genuine events (i.e., the events actually reported by their parents). In an initial interview, more than 80% of these events were recalled, but none of the students recalled the bogus events. However, repeated attempts at recall changed this pattern. By a third interview, 25% of the participants were able to remember the embarrassment of spilling the punch, and many were able to supply the details of this (entirely fictitious) episode. Other studies have shown similar results. Participants have been led to recall details of particular birthday parties that, in truth, they never had (Hyman et al., 1995); or an incident of being lost in a shopping mall, even though this event never took place; or a (fictitious) event in which they were the victim of a vicious animal attack (Loftus, 2003, 2004; also see, e.g., Chrobak & Zaragoza, 2008; Geraerts et al., 2009; Laney & Loftus, 2010).

Errors Encouraged through “Evidence”

Other researchers have taken a further step and provided participants with “evidence” in support of the bogus memory. In one procedure, researchers obtained a real childhood snapshot of the participant (see Figure 8.4A for an example) and, with a few clicks of a computer mouse, created a fictitious picture like the one shown in Figure 8.4B. With this prompt, many participants were led to a vivid, detailed recollection of the hot-air balloon ride-even though it never occurred (Wade, Garry, Read, & Lindsay, 2002). Another study used an unaltered photo showing the participants’ second-grade class (see Figure 8.5 for an example). This was apparently enough to persuade participants that the experimenters really did have information about their childhood. Therefore, when the experimenters “reminded” the participants about an episode of their childhood misbehavior, the participants took this reminder seriously.
The result: Almost 80% were able to “recall” the episode, often in detail, even though it had never happened (Lindsay, Hagen, Read, Wade, & Garry, 2004).

False Memories, False Confessions
It is clear that people can sometimes remember entire events that never took place. They can remember emotional episodes (like being lost in a shopping mall) that never happened. They can even remember their own transgressions (spilling the punch bowl, misbehaving in the second grade), even though these misdeeds never occurred.
One study pushed things still further, using a broad mix of techniques to encourage false memories (Shaw & Porter, 2015). The interviewer repeatedly asked participants to recall an event that (supposedly) she had learned about from their parents. She assured participants that she had detailed information about the (fictitious) event, and she applied social pressure with comments like “Most people are able to retrieve lost memories if they try hard enough.” She offered smiles and encouraging nods whenever participants showed signs of remembering the (bogus) target events. If participants couldn’t recall the target events, she showed signs of disappointment and said things like “That’s ok. Many people can’t recall certain events at first because they haven’t thought about them for such a long time.” She also encouraged participants to use a memory retrieval technique (guided imagery) that is known to foster false memories. With these (and other) factors in play, Shaw and Porter persuaded many of their participants that just a few years earlier the participants had committed a crime that led to police contact. In fact, many participants seemed able to remember an episode in which they had assaulted another person with a weapon and had then been detained by the police. This felony never happened, but many participants “recalled” it anyhow. Their memories were in some cases vivid and rich with detail, and on many measures indistinguishable from memories known to be accurate.
Let’s be clear, though, that this study used many forms of influence and encouragement. It takes a lot to pull memory this far off track! There has also been debate over just how many of the participants in this study truly developed false memories. Even so, the results show that it’s possible for a large number of people to have memories that are emotionally powerful, deeply consequential, and utterly false. (For discussion of Shaw and Porter’s study, see Wade, Garry, & Pezdek, 2017. Also see Brewin & Andrews, 2017, and then in response, Becker-Blease & Freyd, 2017; Lindsay & Hyman, 2017; McNally, 2017; Nash, Wade, Garry, Loftus, & Ost, 2017; Otgaar, Merckelbach, Jelicic, & Smeets, 2017; and Scoboria & Mazzoni, 2017.)

Avoiding Memory Errors

Evidence is clear that people do make mistakes-at times, large mistakes-in remembering the past. But people usually don’t make mistakes. In other words, you generally can trust your memory, because more often than not your recollection is detailed, long-lasting, and correct. This mixed pattern, though, demands a question: Is there some way to figure out when you’ve made a memory mistake and when you haven’t? Is there a way to decide which memories you can rely on and which ones you can’t?

Memory Confidence

In evaluating memories, people rely heavily on expressions of certainty or confidence. Specifically, people tend to trust memories that are expressed with confidence. (“I distinctly remember her yellow jacket; I’m sure of it.”) They’re more cautious about memories that are hesitant. (“I think she was wearing yellow, but I’m not certain.”) We can see these patterns when people are evaluating their own memories (e.g., when deciding whether to take action or not, based on a bit of recollection); we see the same patterns when people are evaluating memories they hear from someone else (e.g., when juries are deciding whether they can rely on an eyewitness’s testimony).
Evidence suggests, though, that a person’s degree of certainty is an uneven indicator of whether a memory is trustworthy. On the positive side, there are circumstances in which certainty and memory accuracy are highly correlated (e.g., Wixted, Mickes, Clark, Gronlund, & Roediger, 2015; Wixted & Wells, 2017). On the negative side, though, we can easily find exceptions to this pattern-including memories that are expressed with total certainty (“I’ll never forget that day; I remember it as though it were yesterday”) but that turn out to be entirely mistaken. In fact, we can find circumstances in which there’s no correspondence at all between how certain someone says she is, in recalling the past, and how accurate that recollection is likely to be. As a result, if we try to categorize memories as correct or incorrect based on someone’s confidence, we’ll often get it wrong. (For some of the evidence, see Busey, Tunnicliff, Loftus, & Loftus, 2000; Hirst et al., 2009; Neisser & Harsch, 1992; Reisberg, 2014; Wells & Quinlivan, 2009.) How can this be? One reason is that a person’s confidence in a memory is often influenced by factors that have no impact on memory accuracy. When these factors are present, confidence can shift (sometimes upward, sometimes downward) with no change in the accuracy level, with the result that any connection between confidence and accuracy can be strained or even shattered.
Participants in one study witnessed a (simulated) crime and later were asked if they could identify the culprit from a group of pictures. Some of the participants were then given feedback-“Good, you identified the suspect”; others weren’t. The feedback couldn’t possibly influence the accuracy of the identification, because the feedback arrived only after the identification had occurred. But the feedback did have a large impact on how confident participants said they’d been when making their lineup selection (see Figure 8.6), and so, with confidence inflated but accuracy unchanged, the linkage between confidence and accuracy was essentially eliminated. (Wells & Bradfield, 1998; also see Douglas, Neuschatz, Imrich, & Wilkinson, 2010; Semmler & Brewer, 2006; Wells, Olson, & Charman, 2002, 2003; Wright & Skagerberg, 2007.) Similarly, think about what happens if someone is asked to report on an event over and over. The repetitions don’t change the memory content-and so the accuracy of the report won’t change much from one repetition to the next. However, with each repetition, the recall becomes easier and more fluent, and this ease of recall seems to make people more confident that their memory is correct. So here, too, accuracy is unchanged but confidence is inflated-and thus there’s a gradual erosion, with each repetition, of the correspondence between accuracy and confidence. (For more on the disconnection between accuracy and confidence, see, e.g., Bradfield Douglas & Pavletic, 2012; Charman, Wells, & Joy, 2011.)
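The logic of the feedback studies can be made concrete with invented numbers: confirming feedback raises reported confidence while leaving accuracy untouched, because the feedback arrives only after the identification is made. The figures below are hypothetical, chosen solely to make the dissociation visible; they are not data from Wells & Bradfield (1998) or any other study.

```python
# Hypothetical witness data: each tuple is (confidence 0-100, was the
# identification correct?). The two groups make identical identifications;
# only the post-identification feedback differs, so accuracy is the same
# while reported confidence diverges.

no_feedback = [(40, True), (35, False), (50, True), (45, False)]
with_feedback = [(85, True), (80, False), (90, True), (88, False)]

def mean_confidence(group):
    """Average reported confidence for a group of witnesses."""
    return sum(conf for conf, _ in group) / len(group)

def accuracy(group):
    """Proportion of correct identifications in a group."""
    return sum(correct for _, correct in group) / len(group)

# Accuracy is 0.5 in both groups, yet mean confidence roughly doubles
# in the feedback group - so confidence no longer tracks accuracy.
```

The point of the sketch is simply that any statistic linking confidence to accuracy (a correlation, a calibration curve) is washed out when an external factor shifts confidence alone.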
In many settings, therefore, we cannot count on confidence as a means of separating accurate memories from inaccurate ones. In addition, other findings tell us that memory errors can be just as emotional, just as vivid, as accurate memories (e.g., McNally et al., 2004). In fact, research overall suggests that there simply are no indicators that can reliably guide us in deciding which memories to trust and which ones not to trust. For now, it seems that memory errors, when they occur, may often be undetectable.

Demonstration 8.2: Memory Accuracy and Confidence

As you have seen, a large part of Chapter 8 is concerned with the errors people make when they’re trying to recall the past. But how powerful are the errors? Here is one way to find out. In this demonstration, you will read a series of sentences. Be warned: The sentences are designed to be tricky and are similar to one another. Several of the sentences describe one scene; several describe other scenes. To make this challenging, though, the scenes are interwoven (and so you might get a sentence about Scene 1, then a sentence about Scene 2, then another about Scene 1, then one about Scene 3, and so on).
Try to remember the sentences, including their wording, as accurately as you can. Try this demonstration in a quiet setting, so that you can really focus on the sentences. Can you avoid making any mistakes?
To help you just a little, the memory test will come immediately after the sentences, so that there’s no problem created by a long delay. To help you even more, the memory test will be a recognition test, so that the sentences will be supplied for you, with no demand that you come up with the sentences on your own. Finally, to allow you to do your best, the memory test won’t force you into a yes-or-no format. Instead, it will allow you to express degrees of certainty. Specifically, in the memory test you’ll judge, first, whether or not each test sentence was included in the original list. Second, you’ll indicate how confident you are, using 0% to indicate “I’m really just guessing” and 100% to indicate “I’m totally certain.” Of course, you can use values between 0% and 100% to indicate intermediate levels of certainty.
In short, this is a demonstration designed to ask how good memory can be-with many factors in place to support performance: concrete, meaningful materials; ample warning about the nature of the materials; encouragement for you to give your best effort; immediate testing; recognition testing (not recall); and the option for you to “hedge your bets” by expressing your degree of certainty. Can we, in these ways, document nearly perfect memory?
Here are the sentences to memorize. Read them with care, because-as already mentioned-they are tricky to remember.
1. The girl broke the window on the porch.
2. The tree in the front yard shaded the man who was smoking his pipe.
3. The hill was steep.
4. The cat, running from the barking dog, jumped on the table.
5. The tree was tall.
6. The old car climbed the hill.
7. The cat running from the dog jumped on the table.
8. The girl who lives next door broke the window on the porch.
9. The car pulled the trailer.
10. The scared cat was running from the barking dog.
11. The girl lives next door.
12. The tree shaded the man who was smoking his pipe.
13. The scared cat jumped on the table.
14. The girl who lives next door broke the large window.
15. The man was smoking his pipe.
16. The old car climbed the steep hill.
17. The large window was on the porch.
18. The tall tree was in the front yard.
19. The car pulling the trailer climbed the steep hill.
20. The cat jumped on the table.
21. The tall tree in the front yard shaded the man.
22. The car pulling the trailer climbed the hill.
23. The dog was barking.
24. The window was large.
Now, close the list of sentences for the memory test.
Get a piece of paper and work through the list of test sentences. For each of the sentences: Was the sentence on the previous list? If so, write “Old.” Or is this a new sentence? If so, write “New.” Also, for each one, mark how confident you are, with 0% meaning “just guessing” and 100% indicating “totally certain.” Remember, you can also use values between 0% and 100% to indicate intermediate levels of certainty.

How well did you do? This is the moment at which we confess that there is a trick here: Every one of the test sentences was new. None of the test sentences were identical to the sentences used in the original presentation. For many of the test sentences, you probably (correctly) said “New” and were quite confident in your response. Which test sentences were these? Odds are good that you gave a high-confidence “New” response to a test sentence that mixed together elements from the different scenes. For example, you were probably confident and correct in rejecting “The old man who was smoking his pipe climbed the steep hill,” because the man with the pipe came from one scene (he was by the tree) and the steep hill came from a different scene (with the car climbing the hill). For these sentences, you could rely on your memory for the overall gist of the memory materials, and memory for gist tends to be quite good. On this basis, you easily (and accurately) rejected sentences that didn’t fit with that gist.
But for other test sentences, you probably said “Old” and may even have indicated 90% or 100% confidence that the sentences were familiar. But no matter how certain you were, you were wrong. Let’s be clear, therefore, that we cannot count on confidence as an indication of accurate memories. Even high-confidence recollection can be wrong.
As a separate implication, notice how hard it is to remember a sentence’s phrasing even in circumstances that are designed to help your memory. (Again, the testing was immediate. Recognition testing meant that you didn’t have to come up with sentences on your own. You were warned that the test would be difficult. You were trying to do well, and you’d been told that you should try to remember the sentence’s wording.) Even in this setting, errors (including high-confidence errors) can occur. Of course, one might argue that this is an acceptable pattern. After all, what you typically want to remember is the gist of a message, not the exact wording. Do you care whether you recall the exact phrasing of this paragraph? Or is it more important that you remember the point being made here? Nonetheless, there are situations in which you do want to remember the wording, and for that reason the results of this demonstration are troubling. There are also situations in which you might have misunderstood what you experienced, or your understanding might be incomplete. Those situations make it worrisome that what you remember seems to be dominated by your understanding, and not by the “raw materials” of your experience.
Demonstration adapted from Bransford, J. (1979). Human cognition: Learning, understanding and remembering, 1st ed. Belmont, CA: Wadsworth. © 1979 Wadsworth, a part of Cengage Learning, Inc. Reproduced by permission (www.cengage.com/permissions).

Forgetting

We’ve been discussing the errors people sometimes make in recalling the past, but of course there’s another way your memory can let you down: Sometimes you forget. You try to recall what was on the shopping list, or the name of an acquaintance, or what happened last week, and you simply draw a blank. Why does this happen? Are there things you can do to diminish forgetting?

The Causes of Forgetting

Let’s start with one of the more prominent examples of “forgetting”-which turns out not to be forgetting at all. Imagine meeting someone at a party, being told his name, and moments later realizing you don’t have a clue what his name is-even though you just heard it. This common (and embarrassing) experience is not the result of ultra-rapid forgetting. Instead, it stems from a failure in acquisition. You were exposed to the name but barely paid attention to it and, as a result, never learned it in the first place.
What about “real” cases of forgetting-cases in which you once knew the information but no longer do? For these cases, one of the best predictors of forgetting (not surprisingly) is the passage of time. Psychologists use the term retention interval to refer to the amount of time that elapses between the initial learning and the subsequent retrieval; as this interval grows, you’re likely to forget more and more of the earlier event (see Figure 8.7). One explanation for this pattern comes from the decay theory of forgetting, which proposes rather directly that memories fade or erode with the passage of time. Maybe this is because the relevant brain cells die off. Or maybe the connections among memories need to be constantly refreshed-and if they’re not refreshed, the connections gradually weaken.
A different possibility is that new learning somehow interferes with older learning. This view is referred to as interference theory. According to this view, the passage of time isn’t the direct cause of forgetting. Instead, the passage of time creates the opportunity for new learning, and it is the new learning that disrupts the older memories.
A third hypothesis blames retrieval failure. The idea here is that the “forgotten memory” is still in long-term storage, but the person trying to retrieve the memory simply cannot locate it. This proposal rests on the notion that retrieval from memory is far from guaranteed, and we argued in Chapter 7 that retrieval is more likely if your perspective at the time of retrieval matches the perspective in place at the time of learning. If we now assume that your perspective is likely to change as time goes by, we can make a prediction about forgetting: The greater the interval, the greater the likelihood that your perspective has changed, and therefore the greater the likelihood of retrieval failure. Which of these hypotheses is correct? It turns out that they all are. Memories do decay with the passage of time (e.g., Altmann & Schunn, 2012; Wixted, 2004; also Hardt, Nader, & Nadel, 2013; Sadeh, Ozubko, Winocur, & Moscovitch, 2016), so any theorizing about forgetting must include this factor. But there’s also no question that a great deal of “forgetting” is retrieval failure. This point is evident whenever you’re initially unable to remember some bit of information, but then, a while later, you do recall that information. Because the information was eventually retrieved, we know that it wasn’t “erased” from memory through either decay or interference. Your initial failure to recall information, then, must be counted as an example of retrieval failure.
Sometimes retrieval failure is partial: You can recall some aspects of the desired content, but not all. An example comes from the maddening circumstance in which you’re trying to think of a word but simply can’t come up with it. The word is, people say, on the “tip of their tongue,” and following this lead, psychologists refer to this as the TOT phenomenon. People experiencing this state can often recall the starting letter of the sought-after word and approximately what it sounds like. So, for example, a person might remember “it’s something like Sanskrit” in trying to remember “scrimshaw,” or “something like secant” in trying to remember “sextant” (Brown, 1991; Brown & McNeill, 1966; Harley & Brown, 1998; James & Burke, 2000; Schwartz & Metcalfe, 2011).
What about interference? In one early study, Baddeley and Hitch (1977) asked rugby players to recall the names of the other teams they had played against over the course of a season. The key here is that not all players made it to all games (because of illness, injuries, or schedule conflicts). This fact allows us to compare players for whom “two games back” means two weeks ago, to players for whom “two games back” means four weeks ago. In this way, we can look at the effects of retention interval (two weeks vs. four) with the number of intervening games held constant. Likewise, we can compare players for whom the game a month ago was “three games back” to players for whom a month ago means “one game back.” Now, we have the retention interval held constant, and we can look at the effects of intervening events. In this setting, Baddeley and Hitch reported that the mere passage of time accounts for very little; what really matters is the number of intervening events (see Figure 8.8). This is just what we would expect if interference, and not decay, is the major contributor to forgetting.
But why does memory interference occur? Why can’t the newly acquired information coexist with older memories? The answer has several parts, but one element is linked to issues we’ve already discussed: In many cases, newly arriving information gets interwoven with older information, producing a risk of confusion about which bits are old (i.e., the event you’re trying to remember) and which bits are new (i.e., information that you picked up after the event). In addition, in some cases, new information seems literally to replace old information-much as you no longer save the rough draft of one of your papers once the final draft is done. In this situation, the new information isn’t woven into the older memory; instead, it erases it.

Undoing Forgetting

Is there any way to undo forgetting and to recover seemingly lost memories? One option, often discussed, is hypnosis. The idea is that under hypnosis a person can “return” to an earlier event and remember virtually everything about the event, including aspects the person didn’t even notice (much less think about) at the time.
The reality, however, is otherwise. Hypnotized participants often do give detailed reports of the target event, but not because they remember more; instead, they’re just willing to say more in order to comply with the hypnotist’s instructions. As a result, their “memories” are a mix of recollection, guesses, and inferences-and, of course, the hypnotized individual cannot tell which of these is which (Lynn, Neuschatz, Fite, & Rhue, 2001; Mazzoni & Lynn, 2007; Spiegel, 1995). On the positive side, though, there are procedures that do seem to diminish forgetting, including the so-called cognitive interview. This procedure was designed to help police in their investigations and, specifically, is aimed at maximizing the quantity and accuracy of information obtained from eyewitnesses to crimes (Fisher & Schreiber, 2007; Memon, Meissner, & Fraser, 2010). The cognitive interview has several elements, including an effort toward context reinstatement-steps that put witnesses back into the mindset they were in at the time of the crime. (For more on context reinstatement, see Chapter 7.) In addition, the cognitive interview builds on the simple fact that retrieval of memories from long-term storage is more likely if a suitable cue is provided. The interview therefore offers a diverse set of retrieval cues, with the idea that the more cues provided, the greater the chance of finding one that triggers the target memory.
The cognitive interview is quite successful, both in the laboratory and in real crime investigations, producing more complete recollection without compromising accuracy. This success adds to the argument that much of what we call “forgetting” can be attributed to retrieval failure and can be undone simply by providing more support for retrieval.
Also, rather than undoing forgetting, perhaps we can avoid forgetting. The key here is simply to “revisit” a memory periodically. Each “visit” seems to refresh the memory, with the result that forgetting is much less likely. Researchers have examined this effect in several contexts, including one that’s pragmatically quite important: Students often have to take exams, and confronting the material on an exam is, of course, an occasion in which students “revisit” what they’ve learned. These revisits, we’ve just suggested, should slow forgetting, and on this basis, taking an exam can actually help students to hang on to the material they’ve learned. Several studies have confirmed this “testing effect”: Students have better long-term retention for materials they were tested on, compared with materials they weren’t tested on. (See, e.g., Carpenter, Pashler, & Cepeda, 2009; Halamish & Bjork, 2011; Healy, Jones, Lalchandani, & Tack, 2017; Karpicke, 2012; Karpicke & Blunt, 2011; McDaniel, Anderson, Derbish, & Morrisette, 2007; Pashler, Rohrer, Cepeda, & Carpenter, 2007; Rowland, 2014.)
We might mention that similar effects can be observed if students test themselves periodically, taking little quizzes that they’ve created on their own. Related effects emerge if students are occasionally asked questions that require a brief revisit to materials they’ve encountered (Brown, Roediger, & McDaniel, 2014). In fact, that’s the reason why this textbook includes Test Yourself questions; those questions will actually help readers to remember what they’ve read!

Demonstration 8.3: The Tip-of-the-Tongue Effect

Your memory usually serves you well, quickly and easily providing all the information you need. But your memory can also let you down. Sometimes you’ll fail to remember something because you paid insufficient attention to the information when you first met it. As a result, the information was never recorded in memory, and so of course you can’t remember it later. In other settings, the information was recorded in memory but has now been lost-perhaps through decay, or perhaps through interference. In still other cases, the information was recorded in memory and remains there, but you can’t find the information when you want it. This last pattern is what we call “retrieval failure”-an inability to locate information that is, in fact, still in storage.
Retrieval failure is our best explanation for cases in which you fail to remember something but later (when provided with a proper hint or suitable context) can recall the target information. The fact that your recall eventually succeeds tells us that the information was recorded and wasn’t lost. The initial failure to remember, therefore, has to be understood as a problem in retrieval.
Retrieval failure is often complete: You utterly forget that you were supposed to stop at the bank on the way home (but then remember, when you later reach into your empty pocket). You completely forget about a concert you attended years ago (but then remember, when you hear one of the band’s songs on Pandora or Spotify). Sometimes, however, retrieval failure is partial: You can recall part of the information you’re after, but not all of it. Consider, for example, the common (but maddening) state in which you’re trying to think of a word, but you just can’t come up with it. You’re sure you know the word, but, no matter how hard you try, you can’t recall it, and so the word remains, people say, “on the tip of your tongue.” In line with this expression, psychologists refer to this as the “TOT” effect. In this situation, you may eventually come up with the word-and so it plainly was in your memory. Your initial inability to recall the word, therefore, is another instance of retrieval failure-an inability to locate information still in storage.
In case you’ve never experienced the frustration of the TOT state, consider the following definitions. In each case, is the related word or name in your vocabulary? If it is, can you think of the word? If the word is in your vocabulary but you can’t think of it right now, can you recall what letter the word starts with? Can you remember how many syllables it has?
You may not know some of these terms at all; other terms will immediately spring to mind. But in at least some cases, you’re likely to end up in the frustrating state of having the word at the tip of your tongue but not being able to think of it.
1. The aromatic substance found in the gut of some whales, valued in earlier years for the manufacture of perfumes.
2. A tube-shaped instrument that is rotated to produce complex, symmetrical designs, created by light shining through mirrors inside the instrument.
3. A structure of open latticework, usually made of wood, used as a support for vines or other climbing plants.
4. The legendary Roman slave who was spared in the arena when the lion recognized him as the man who had removed a thorn from its paw.
5. An infectious, often fatal disease, often transmitted through contaminated animal substances and sometimes transmitted, in powder form, as an agent in germ warfare.
6. The scholarly study of word origins and word histories.
7. The American magician who died in 1926, famous for his escapes from chains, handcuffs, straitjackets, and padlocked boxes.
8. People who explore caves as a hobby or sport.
9. An instance of making a discovery by lucky accident.
10. An instrument for measuring wind velocity.
11. An Asian art form involving complex paper folding.
12. The formal term for the collection and study of postage stamps and related material.
13. A word or phrase that reads the same way backward or forward (e.g., “Madam I’m Adam”).
14. A building, usually stone, housing a large tomb or several tombs.
15. The verb meaning to give up the throne.
16. The sense of resentment often felt in response to an imagined insult.
17. The term for the three dots used to indicate a pause or an omission (…).
18. The length of leather used in older times for sharpening razors.
19. The accumulation of stones carried along, and eventually dropped, by a glacier.
20. Someone who makes maps.
21. Lasting only a very brief time.
Demonstration adapted from Brown, R., & McNeill, D. (1966). The “tip of the tongue” phenomenon. Journal of Verbal Learning and Verbal Behavior, 5, 325-337. See also James, L., & Burke, D. (2000). Phonological priming effects on word retrieval and tip-of-the-tongue experiences in young and older adults. Journal of Experimental Psychology: Learning, Memory and Cognition, 26, 1378-1391.

Memory: An Overall Assessment

We’ve now seen that people sometimes recall with confidence events that never took place, and sometimes forget information they’d hoped to remember. But we’ve also mentioned the positive side of things: how much people can recall, and the key fact that your memory is accurate far more often than not. Most of the time, it seems, you do recall the past as it truly was.
Perhaps most important, we’ve also suggested that memory’s “failings” may simply be the price you pay in order to gain crucial advantages. For example, we’ve argued that memory errors arise because the various episodes in your memory are densely interconnected with one another; it’s these interconnections that allow elements to be transplanted from one remembered episode to another. But we’ve also noted that these connections have a purpose: They’re the retrieval paths that make memory search possible. Therefore, to avoid the errors, you would need to restrict the connections; but if you did that, you would lose the ability to locate your own memories within long-term storage. The memory connections that lead to error also help you in other ways. Our environment, after all, is in many ways predictable, and it’s enormously useful for you to exploit that predictability. There’s little point, for example, in scrutinizing a kitchen to make sure there’s a stove in the room, because in the vast majority of cases there is. So why take the time to confirm the obvious? Likewise, there’s little point in taking special note that, yes, this restaurant does have menus and, yes, people in the restaurant are eating and not having their cars repaired. These, too, are obvious points, and it would be a waste of effort to give them special notice.
On these grounds, reliance on schematic knowledge is a good thing. Schemata guide your attention to what’s informative in a situation, rather than what’s self-evident (e.g., Gordon, 2006), and they guide your inferences at the time of recall. If this use of schemata sometimes leads you astray, that’s a small price to pay for the gain in efficiency that schemata allow. (For similar points see Chapter 4.)
In the same way, the blurring together of episodes may be a blessing, not a problem. Think, for example, about all the times when you’ve been with a particular friend. These episodes are related to one another in an obvious way, and so they’re likely to become interconnected in your memory. This will cause difficulties if you want to remember which episode is which and whether you had a particular conversation in this episode or in that one. But rather than lamenting this, maybe we should celebrate what’s going on here. Because of the “interference,” all the episodes will merge together in your memory, so that what resides in memory is one integrated package, containing all of your knowledge about your friend. As a result, rather than complaining about memory confusion, we should rejoice over the memory integration and “cross-referencing.” In all of these ways, then, our overall assessment of memory can be rather upbeat. We have, to be sure, discussed a range of memory errors, but these errors are in most cases a side product of mechanisms that otherwise help you-to locate your memories within storage, to be efficient in your contact with the world, and to form general knowledge. Thus, even with the errors, even with forgetting, it seems that human memory functions in a way that serves us extraordinarily well. (For more on the benefits produced by memory’s apparent limitations, see Howe, 2011; Nørby, 2015; Schacter, Guerin, & St. Jacques, 2011.)

Autobiographical Memory

Most of the evidence in Chapters 6 and 7 was concerned with memory for simple stimuli-such as word lists or short sentences. In this chapter, we’ve considered memories for more complex materials, and this has drawn our attention to the ways in which your knowledge (whether knowledge of a general sort or knowledge about related episodes) can both improve memory and also interfere with it.
In making these points, we’ve considered memories in which the research participant was actually involved in the remembered episode, and not just an external witness (e.g., the false memory that he committed a felony). We’ve also looked at studies that involved memories for emotional events (e.g., the plane crash discussed at the chapter’s start) and memory over the very long term (e.g., memories for childhood events “planted” in adult participants).
Do these three factors-involvement in the remembered event, emotion, and long delay-affect how or how well someone remembers? These factors are surely relevant to the sorts of remembering people do outside the laboratory, and all three are central for autobiographical memory. This is the memory that each of us has for the episodes and events of our lives, and this sort of memory plays a central role in shaping how each of us thinks about ourselves and, therefore, how we behave. (For more on the importance of autobiographical memory, see Baddeley, Aggleton, & Conway, 2002; Prebble, Addis, & Tippett, 2013; Steiner, Thomsen, & Pillemer, 2017. For more on the distinction between the types of memory, including biological differences between autobiographical memory and “lab memory,” see Cabeza & St. Jacques, 2007; Hodges & Graham, 2001; Kopelman & Kapur, 2001; Tulving, 1993, 2002.) Let’s explore how the three factors we’ve mentioned, each seemingly central for autobiographical memory, influence what we remember.

Memory and the Self

Having some involvement in an event (as opposed to passively witnessing it) turns out to have a large effect on memory, because, overall, information relevant to the self is better remembered than information that’s not self-relevant-a pattern known as the “self-reference effect” (e.g., Symons & Johnson, 1997; Westmacott & Moscovitch, 2003). This effect emerges in many forms, including an advantage in remembering adjectives that apply to you relative to adjectives that don’t, better memory for names of places you have visited relative to names of places you’ve never been, and so on (see Figure 8.9). But here, too, we can find memory errors, in part because your “memory” for your own life is (just like other memories) a mix of genuine recall and some amount of schema-based reconstruction. For example, consider the fact that most adults believe they’ve been reasonably consistent, reasonably stable, over their lifetimes.
They believe, in other words, that they’ve always been pretty much the same as they are now. This idea of consistency is part of their self-schema-the set of interwoven beliefs and memories that constitute people’s knowledge about themselves. When the time comes to remember the past, therefore, people will rely to some extent on this belief in their own consistency, so they’ll reconstruct their history in a biased way-one that maximizes the (apparent) stability of their lives. As a result, people often misremember their past attitudes and past romantic relationships, unwittingly distorting their personal history in a way that makes the past look more like the present than it really was. (See Conway & Ross, 1984; Holmberg & Holmes, 1994. For related results, see Levine, 1997; Marcus, 1986; McFarland & Buehler, 2012; Ochsner & Schacter, 2000; Ross & Wilson, 2003.) It’s also true that most of us would like to have a positive view of ourselves, including a positive view of how we’ve acted in the past. This, too, can shape memory. As one illustration, Bahrick, Hall, and Berger (1996) asked college students to recall their high school grades as accurately as they could, and the data showed a clear pattern of self-service. When students forgot a good grade, their (self-serving) reconstruction led them to the (correct) belief that the grade must have been a good one; consistent with this, 89% of the A’s were correctly remembered. But when students forgot a poor grade, reconstruction led them to the (false) belief that the grade must have been okay; as a result, only 29% of the D’s were correctly recalled. (For other mechanisms through which motivation can color autobiographical recall, see Conway & Holmes, 2004; Conway & Pleydell-Pearce, 2000; Molden & Higgins, 2012.)

Memory and Emotion

Another factor important for autobiographical memory is emotion.
Many of your life experiences are, of course, emotional, making you feel happy, or sad, or angry, or afraid, and in general emotion helps you to remember. One reason is emotion’s impact on memory consolidation-the process through which memories are biologically “cemented in place.” (See Hardt, Einarsson, & Nader, 2010; Wang & Morris, 2010; although also see Dewar, Cowan, & Della Sala, 2010.)
Whenever you experience an event or gain new knowledge, your memory for this new content is initially fragile and is likely represented in the brain via a pattern of neural activation. Over the next few hours, though, various biological processes stabilize this memory and put it into a more enduring form. This process-consolidation-takes place “behind the scenes,” without you thinking about it, but it’s crucial. If the consolidation is interrupted for some reason (e.g., because of extreme fatigue or injury), no memory is established and recall later will be impossible. (That’s because there’s no information in memory for you to retrieve; you can’t read text off a blank page!)
A number of factors can promote consolidation. For example, evidence is increasing that key steps of consolidation take place while you’re asleep-and so a good night’s rest actually helps you, later on, to remember things you learned while awake the day before. (See Ackermann & Rasch, 2014; Giuditta, 2014; Rasch & Born, 2013; Tononi & Cirelli, 2013; Zillmer, Spiers, & Culbertson, 2008.) Also, there’s no question that emotion enhances consolidation. Specifically, emotional events trigger a response in the amygdala, and the amygdala in turn increases activity in the hippocampus. The hippocampus is, as we’ve seen, crucial for getting memories established. (See Chapter 7; for reviews of emotion’s biological effects on memory, see Buchanan, 2007; Hoscheidt, Dongaonkar, Payne, & Nadel, 2010; Joels, Fernandez, & Roozendaal, 2011; Kensinger, 2007; LaBar, 2007; LaBar & Cabeza, 2006; Yonelinas & Ritchey, 2015. For a complication, though, see Figure 8.10.) Emotion also shapes memory through other mechanisms. An event that’s emotional is likely to be important to you, virtually guaranteeing that you’ll pay close attention as the event unfolds, and we know that attention and thoughtful processing help memory. Moreover, you tend to mull over emotional events in the minutes (or hours) following the event, and this is tantamount to memory rehearsal. For all these reasons, it’s not surprising that emotional events are well remembered (Reisberg & Heuer, 2004; Talmi, 2013).
Let’s note, though, that emotion doesn’t just influence how well you remember; it also influences what you remember. Specifically, in many settings, emotion seems to produce a “narrowing” of attention, so that all of your attention will be focused on just a few aspects of the scene (Easterbrook, 1959). This narrowing helps guarantee that these attended aspects will be firmly placed into memory, but it also implies that the rest of the event, excluded from the narrowed focus, won’t be remembered later (e.g., Gable & Harmon-Jones, 2008; Reisberg & Heuer, 2004; Steblay, 1992).
What exactly you’ll focus on, though, may depend on the specific emotion. Different emotions lead you to set different goals: If you’re afraid, your goal is to escape; if you’re angry, your goal is to deal with the person or issue that’s made you angry; if you’re happy, your goal may be to relax and enjoy! In each case, you’re more likely to pay attention to aspects of the scene directly relevant to your goal, and this will color how you remember the emotional event. (See Fredrickson, 2000; Harmon-Jones, Gable, & Price, 2013; Huntsinger, 2012, 2013; Kaplan, Van Damme, & Levine, 2012; Levine & Edelstein, 2009.)

Flashbulb Memories
One group of emotional memories seems special. These are the so-called flashbulb memories: memories of extraordinary clarity, typically for highly emotional events, retained despite the passage of many years. When Brown and Kulik (1977) introduced the term “flashbulb memory,” they pointed to the memories people had of the moment in 1963 when they first heard that President Kennedy had been assassinated. In the Brown and Kulik study, people interviewed more than a decade after that event remembered it “as though it were yesterday,” and many participants were certain they’d never forget that awful day. Moreover, participants’ recollection was quite detailed-with people remembering where they were at the time, what they were doing, and whom they were with. Indeed, many participants were able to recall the clothing worn by people around them, the exact words uttered, and the like.
Many other events have also produced flashbulb memories. For example, most Americans can clearly recall where they were when they heard about the attack on the World Trade Center in 2001; many people vividly remember what they were doing in 2009 when they heard that Michael Jackson had died; many Italians have clear memories of their country’s victory in the 2006 World Cup; and so on. (See Pillemer, 1984; Rubin & Kozin, 1984; also see Weaver, 1993; Winograd & Neisser, 1993.)
Remarkably, though, these vivid, high-confidence memories can contain substantial errors. Thus, when people say, “I’ll never forget that day . . . ,” they’re sometimes wrong. For example, Hirst et al. (2009) interviewed more than 3,000 people soon after the September 11 attack on the World Trade Center, asking how they first heard about the attack; who brought them the news; and what they were doing at the time. When these individuals were re-interviewed a year later, however, more than a third (37%) provided a substantially different account. Even so, the participants were strongly confident in their recollection (rating their degree of certainty, on a 1-to-5 scale, at an average of 4.4). The outcome was the same for participants interviewed three years after the attack-with 43% offering different accounts from those they had given initially. (For similar data, see Neisser & Harsch, 1992; also Hirst & Phelps, 2016; Rubin & Talarico, 2007; Schmidt, 2012; Talarico & Rubin, 2003.)
Other data, though, tell a different story, suggesting that some flashbulb memories are entirely accurate. Why should this be? Why are some flashbulb events remembered well, while others aren’t? The answer involves several factors, including how, how often, and with whom someone discusses the flashbulb event. In many cases, this discussion may encourage people to “polish” their reports, so that they’re offering their audience a “better,” more interesting narrative. After a few occasions of telling and re-telling this version of the event, the new version may replace the original memory. (For more on these issues, see Conway et al., 1994; Hirst et al., 2009; Luminet & Curci, 2009; Neisser, Winograd, & Weldon, 1991; Palmer, Schreiber, & Fox, 1991; Tinti, Schmidt, Sotgiu, Testa, & Curci, 2009; Tinti, Schmidt, Testa, & Levine, 2014.)
Notice, then, that an understanding of flashbulb memories requires us to pay attention to the social aspects of remembering. In many cases, people “share” memories with one another (and so, for example, I tell you about my vacation, and you tell me about yours). Likewise, in the aftermath of an important event, people often compare their recollections. (“Did you see how he ran when the alarm sounded!?”) In all cases, people are likely to alter their accounts in various ways, to allow for a better conversation. They may, for example, leave out mundane bits, or add bits to make their account more interesting or to impress their listeners. These new points about how the event is described will, in turn, often alter the way the event is later remembered.
In addition, people sometimes “pick up” new information in these conversations-if, for example, someone who was present for the same event noticed a detail that you missed. Often, this new information will be absorbed into other witnesses’ memory-a pattern sometimes referred to as “co-witness contamination.” Let’s note, though, that sometimes another person who witnessed the event will make a mistake in recalling what happened, and, after conversation, other witnesses may absorb this mistaken bit into their own recollection (Hope, Gabbert, & Fraser, 2013). In this way, conversations after an event can sometimes have a positive impact on the accuracy and content of a person’s eventual report, and sometimes a negative impact.
For all these reasons, then, it seems that “remembering” is not an activity shaped only by the person who holds the memory, and exploring this point will be an important focus for future research. (For early discussion of this broad issue, see Bartlett, 1932. For more recent discussion, see Choi, Kensinger, & Rajaram, 2017; Gabbert & Hope, 2013; Roediger & Abel, 2015.) Returning to flashbulb memories, though, let’s not lose track of the fact that the accuracy of these memories is uneven. Some flashbulb memories are marvelously accurate; others are filled with error. Therefore, the commonsense idea that these memories are somehow “burned into the brain,” and thus always reliable, is surely mistaken. In addition, let’s emphasize that from the point of view of the person who has a flashbulb memory, there’s no detectable difference between an accurate flashbulb memory and an inaccurate one: Either one will be recalled with great detail and enormous confidence. In each case, the memory can be intensely emotional. Apparently, memory errors can occur even in the midst of our strongest, most vivid recollections.
Flashbulb memories usually concern events that were strongly emotional. Sadly, though, we can also find cases in which people experience truly extreme emotion, and this leads us to ask: How are traumatic events remembered? If someone has witnessed wartime atrocities, can we count on the accuracy of their testimony in a war-crimes trial? If someone suffers through the horrors of a sexual assault, will the painful memory eventually fade?
Evidence suggests that most traumatic events are well remembered for many years. In fact, victims of atrocities often seem plagued by a cruel enhancement of memory, leaving them with extra-vivid and long-lived recollections of the terrible event (e.g., Alexander et al., 2005; Goodman et al., 2003; Peace & Porter, 2004; Porter & Peace, 2007; Thomsen & Berntsen, 2009). As a result, people who have experienced trauma sometimes complain about having “too much” memory and wish they remembered less. This enhanced memory can be understood in terms of a mechanism we’ve already discussed: consolidation. This process is promoted by the conditions that accompany bodily arousal, including the extreme arousal typically present in a traumatic event (Buchanan & Adolphs, 2004; Hamann, 2001; McGaugh, 2015). But this doesn’t mean that traumatic events are always well remembered. There are, in fact, cases in which people who’ve suffered through extreme events have little or no recall of their experience (e.g., Arrigo & Pezdek, 1997). We can also sometimes document substantial errors in someone’s recall of a traumatic event (Paz-Alonso & Goodman, 2008).
What factors are producing this mixed pattern? In some cases, traumatic events are accompanied by sleep deprivation, head injuries, or substance abuse, each of which can disrupt memory (McNally, 2003). In other cases, the memory-promoting effects of arousal are offset by the complex memory effects of stress. The key here is that the experience of stress sets off a cascade of biological reactions. These reactions produce changes throughout the body, and the changes are generally beneficial, helping the organism to survive the stressful event. However, the stress-produced changes are disruptive to some biological functions, and this can lead to a variety of problems (including medical problems caused by stress). How does the mix of stress reactions influence memory? The answer is complicated. Stress experienced at the time of an event seems to enhance memory for materials directly relevant to the source of the stress, but has the opposite effect-undermining memory-for other aspects of the event (Shields, Sazma, McCullough, & Yonelinas, 2017). Also, stress experienced during memory retrieval interferes with memory, especially if the target information was itself emotionally charged.
How does all this play out in situations away from the laboratory? One line of evidence comes from a study of soldiers who were undergoing survival training. As part of their training, the soldiers were deprived of sleep and food, and they went through a highly realistic simulation of a prisoner-of-war interrogation. One day later, the soldiers were asked to identify the interrogator from a lineup. Despite the extensive (40-minute) face-to-face encounter with the interrogator and the relatively short (one-day) retention interval, many soldiers picked the wrong person from the lineup. Soldiers who had experienced a moderate-stress interrogation picked the wrong person from a live lineup 38% of the time; soldiers who had experienced a high-stress interrogation (one that included a physical confrontation) picked the wrong person 56% of the time if tested with a live lineup, and 68% of the time if tested with a photographic lineup. (See Morgan et al., 2004; also see Deffenbacher, Bornstein, Penrod, & McGorty, 2004; Hope, Lewinski, Dixon, Blocksidge, & Gabbert, 2012; Valentine & Mesout, 2008.)

Repression and “Recovered” Memories

Some authors argue in addition that people defend themselves against extremely painful memories by pushing these memories out of awareness. Some writers suggest that the painful memories are “repressed”; others use the term “dissociation” to describe this self-protective mechanism. No matter what terms we use, the idea is that these painful memories (including, in many cases, memories for childhood abuse) won’t be consciously available but will still exist in a person’s long-term storage and in suitable circumstances can be “recovered”-that is, made conscious again. (For discussion, see Belli, 2012; Freyd, 1996, 1998; Terr, 1991, 1994.)
Most memory researchers, however, are skeptical about this proposal. As one consideration, painful events-including events that seem likely candidates for repression-seem typically to be well remembered, and this is the opposite of what we would expect if a self-protective mechanism were in place. In addition, some of the abuse memories reported as “recovered” may, in fact, have been remembered all along, and so they provide no evidence of repression or dissociation. In these cases, the memories had appeared to be “lost” because the person refused to discuss them for many years; “recovery” of these memories simply reflects the fact that the person is at last willing to talk about them. This sort of “recovery” can be extremely consequential-emotionally and legally-but doesn’t tell us anything about how memory works. Sometimes, though, memories do seem to be genuinely lost for a while and then recovered. But this pattern may not reveal the operation (and, eventually, the “lifting”) of repression or dissociation. Instead, this pattern may be the result of retrieval failure-a mechanism that can “hide” memories for periods of time, only to have them reemerge once a suitable retrieval cue is available. Here, too, the recovery may be of enormous importance for the person who is finally remembering the long-lost episodes; but again, this merely confirms the role of an already-documented memory mechanism, with no need for theorizing about repression.
In addition, we need to acknowledge the possibility that at least some recovered memories may, in fact, be false memories. After all, we know that false memories occur and that they’re more likely when someone is recalling the distant past than when they’re trying to remember recent events. It’s also relevant that many recovered memories emerge only with the assistance of a therapist who is genuinely convinced that a client’s psychological problems stem from long-forgotten episodes of childhood abuse. Even if therapists scrupulously avoid leading questions, their expectations might still lead them to shape their clients’ memory in other ways-for example, by giving signs of interest or concern if the clients hit on the “right” line of exploration, by spending more time on topics related to the alleged memories, and so on. In these ways, the climate within a therapeutic session could guide the client toward finding exactly the “memories” the therapist expects to find. Overall, then, the idea of a self-protective mechanism “hiding” painful memories from view is highly controversial. Some psychologists (often, those working in a mental health specialty) insist that they routinely observe this sort of self-protection, and other psychologists (generally, memory researchers) reject the idea that memories can be hidden in this way. It does seem clear, however, that at least some of these now-voiced memories are accurate and provide evidence for terrible crimes. As in all cases, though, the veracity of recollection cannot be taken for granted. This warning is important in evaluating any memory, but especially so for anyone wrestling with traumatic recollection.
(For discussions of this difficult-and sometimes angrily debated-issue, see, among others, Belli, 2012; Brewin & Andrews, 2014, 2016; Dalenberg et al., 2012; Geraerts et al., 2009; Giesbrecht, Lynn, Lilienfeld, & Merckelbach, 2008; Kihlstrom, 2006; Küpper, Benoit, Dalgleish, & Anderson, 2014; Loftus, 2017; Ost, 2013; Patihis, Lilienfeld, Ho, & Loftus, 2014; Pezdek & Blandon-Gitlin, 2017.)

Long, Long-Term Remembering
In the laboratory, a researcher might ask you to recall a word list you read just minutes ago or a film you saw a week ago. Away from the lab, however, people routinely try to remember events from years-perhaps decades-back. We’ve mentioned that these longer retention intervals are generally associated with a greater amount of forgetting. But, impressively, memories from long ago can sometimes turn out to be entirely accurate.
In an early study, Bahrick, Bahrick, and Wittlinger (1975; also Bahrick, 1984; Bahrick & Hall, 1991) tracked down the graduates of a particular high school-people who had graduated in the previous year, and the year before, and the year before that, and ultimately, people who had graduated 50 years earlier. These alumni were shown photographs from their own year’s high school yearbook, and for each photo they were given a group of names and had to choose the name of the person shown in the picture. The data for this “name-matching” task show remarkably little forgetting: performance was approximately 90% correct if tested 3 months after graduation, the same after 7 years, and the same after 14 years. In some versions of the test, performance was still excellent after 34 years (see Figure 8.11). As a different example, what about the material you’re learning right now? Five years from now, will you still remember what you’ve learned? How about a decade from now? Conway, Cohen, and Stanhope (1991, 1992) explored these questions, testing students’ retention of a cognitive psychology course taken years earlier. The results echo the pattern we’ve already seen. Some forgetting of names and specific concepts was observed during the first 3 years after the course. After the third year, however, performance stabilized, so that students tested after 10 years still remembered a fair amount-in fact, just as much as students tested after 3 years (see Figure 8.12). In an earlier section, we argued that the retention interval is crucial for memory and that memory gets worse as time goes by. The data now in front of us, though, indicate that how much the interval matters-that is, how quickly memories “fade”-may depend on how well established the memories were in the first place. The high school students in the Bahrick et al. study had seen their classmates day after day, for (perhaps) several years.
They therefore knew their classmates’ names very, very well-and this is why the passage of time had only a slight impact on their memories for the names. Likewise, students in the Conway et al. study had apparently learned their psychology quite well-and so they retained what they’d learned for a very long time. In fact, we first met this study in Chapter 6, when we mentioned that students’ grades in the course were good predictors of how much the students would still remember many years after the course was done. Here, too, the better the original learning, the slower the forgetting.
We can maintain our claim, therefore, that the passage of time is the enemy of memory: Longer retention intervals produce lower levels of recall. However, if the material is very well learned at the start, and also if you periodically “revisit” the material, you can dramatically diminish the impact of the passing years.

Demonstration 8.4: Memory for Words

This demonstration begins with a memory test. You’re going to read a list of words, then test your memory for what you just read.
Read through the following list at a comfortable pace. Then, close the list, and without looking at it, take a blank piece of paper and write down as many words from the list as you can.
Shy Paris Blonde
Texas Athletic Dramatic
Tall Clever Nevada
Helpful Perth Black
Indiana Balding Sensitive
Flabby Generous Wisconsin
Calm Italy Muscular
England Slender Talented
Strong Creative Melbourne
Modest Peru Smart
Vancouver Pretty Cheerful
Clumsy Daring Vermont
Stylish Montreal Pierced
How many words did you recall? Now, go back to the original list, and put a star next to the words that in some way are related to you-words that describe you, that clearly describe the opposite of who you are, or that name a place that you care about (perhaps because you’ve been there, or because you’ve always wanted to go there). How many of these self-referential words (now starred) are there? How many of them did you remember? How many words on the list aren’t starred (i.e., aren’t self-referential)? How many of them did you remember?
The best way to evaluate these numbers is to compare your results to those of others. Trade your results with a classmate. How many of the self-referential words did you remember, in comparison to how many of those same words were remembered by people for whom the words weren’t self-referential?
Even without that comparison, though, did you have a memory advantage for the self-referential words? Most people do, and this pattern is known as the “self-reference effect.” Notice, by the way, that we didn’t mention this effect at the start of this demonstration. That’s because we didn’t want to draw your attention to words that referred to you. If we had, then this “extra” attention itself would have influenced your memory.

How General Are the Principles of Memory?

There is certainly more to be said about autobiographical memory. For example, it can’t be surprising that people tend to remember significant turning points in their lives and often use these turning points as a means of organizing their autobiographical recall (Enz & Talarico, 2015; Rubin & Umanath, 2015). Perhaps related, there are also memory patterns associated with someone’s age. Specifically, most people recall very little from the early years of childhood (before age 3 or so; e.g., Akers et al., 2014; Bauer, 2007; Hayne, 2004; Howe, Courage, & Rooksby, 2009; Morrison & Conway, 2010). In contrast, people generally have clear and detailed memories of their late adolescence and early adulthood, a pattern known as the “reminiscence bump.” (See Figure 8.13; Conway & Haque, 1999; Conway, Wang, Hanyu, & Haque, 2005; Dickson, Pillemer, & Bruehl, 2011; Koppel & Rubin, 2016; Rathbone, Moulin, & Conway, 2008; Rathbone, O’Connor, & Moulin, 2017.) As a result, for many Americans, the last years of high school and the years they spend in college are likely to be the most memorable periods of their lives. But in terms of the broader themes of this chapter, where does our brief survey of autobiographical memory leave us? In many ways, this form of memory is similar to other sorts of remembering. Autobiographical memories can last for years and years, but so can memories that don’t refer directly to your own life.
Autobiographical remembering is far more likely if the person occasionally revisits the target memories; these rehearsals dramatically reduce forgetting. But the same is true in non-autobiographical remembering.
Autobiographical memory is also open to error, just as other forms of remembering are. We saw this in cases of flashbulb memories that turn out to be false. We’ve also seen that misinformation and leading questions can plant false autobiographical memories-about birthday parties that never happened and trips to the hospital that never took place (also see Brown & Marsh, 2008). Misinformation can even reshape memories for traumatic events, just as it can alter memories for trivial episodes in the laboratory (Morgan, Southwick, Steffan, Hazlett, & Loftus, 2013; Paz-Alonso & Goodman, 2008).
These facts strengthen the claim that has been emerging in our discussion over the last three chapters: Certain principles seem to apply to memory in general, no matter what is being remembered. All memories depend on connections. The connections promote retrieval. The connections also facilitate interference, because they allow one memory to blur into another. The connections can fade with the passage of time, producing memory gaps, and the gaps are likely to be filled via reconstruction based on generic knowledge. All these things seem to be true whether we’re talking about relatively recent memories or memories from long ago, emotional memories or memories of calm events, memories for complex episodes or memories for simple word lists. But this doesn’t mean that all principles of memory apply to all types of remembering. As we saw in Chapter 7, the rules that govern implicit memory may be different from those that govern explicit memory. And as we’ve now seen, some of the factors that play a large role in shaping autobiographical remembering (e.g., the role of emotion) may be irrelevant to other sorts of memory.
In the end, therefore, our overall theory of memory is going to need more than one level of description. We’ll need some principles that apply to only certain types of memory (e.g., principles specifically aimed at emotional remembering). But we’ll also need broader principles, reflecting the fact that some themes apply to memory of all sorts (e.g., the importance of memory connections). As the last three chapters have shown, these more general principles have moved us forward considerably in our understanding of memory in many different domains and have enabled us to illuminate many aspects of learning, of memory retrieval, and of the sources of memory error.

Demonstration 8.5: Childhood Amnesia

Each of us remembers many things about our lives, so that, overall, our memories are rich and detailed. There is, however, one well-documented limit on the memories we have: Think back to something that happened when you were 10 years old. (It will probably help to think about what grade you were in and who your teacher was. Can you remember anything about that year?) How about when you were 9 years old? When you were 8? When you were 7? What is the earliest event in your life that you can remember?
Many people have trouble remembering events that took place before they were 4 years old. Very few people can remember events that took place before they were 3. This pattern is so common that it often gets a name-childhood amnesia, or sometimes infantile amnesia. Do you fit this pattern? Can you remember any events from the first three years of your life? If you can, is it possible that you’re not remembering the event itself, but instead remembering family discussions about the event? Or remembering some photograph of the event? Several explanations have been offered for childhood amnesia, and probably each of them captures part of the truth. One important consideration, though, hinges on the young child’s understanding of the world. As the textbook chapter discusses, we typically remember events by associating them with other knowledge that we have. But, of course, this requires that you have that other knowledge, so that you can link the new information to it. Young children lack this other knowledge-they lack a scaffold to which they can attach new information, and this makes it difficult for them to establish new information in memory.
As a further exploration, ask some of your friends about the earliest events in their lives that they can remember. Several lines of evidence suggest that women can recall earlier life events than men, and that children who were quite verbal at an early age can recall earlier life events than children who were less verbal. Do these claims fit with your observations?

COGNITIVE PSYCHOLOGY AND EDUCATION
Remembering for the Long Term

Sometimes you need to recall things after a short delay-a friend tells you her address and you drive to her apartment an hour later, or you study for a quiz that you’ll take tomorrow morning. Sometimes, however, you want to remember things over a much longer time span-perhaps trying to recall things you learned months or years ago. This longer-term retention is certainly important in educational settings. Facts that you learn in high school may be crucial for your professional work later in life. Likewise, facts that you learn in your first year at college, or in your first year in a job, may be crucial in your third or fourth year. How, therefore, can we help people to remember things for the very long term?
The chapter has suggested a two-part answer to this question. First, you’re more likely to hang on to material that you learned very well in the first place. The chapter mentions one study in which people tried to recall the material they’d learned in a college course a decade earlier. In that study, students’ grades in the course were good predictors of how much the students would remember years after the course was done-and so, apparently, the better the original learning, the slower the forgetting. But long-term retention also depends on another factor-whether you occasionally “revisit” the material you’ve learned. Even a brief refresher can help enormously. In one study, students were quizzed on little factoids they had most likely learned at some prior point in their lives (Berger, Hall, & Bahrick, 1999)-for example, “Who was the first astronaut to walk on the moon?”; “Who wrote the fable about the fox and the grapes?” In many cases, the students knew these little facts but couldn’t recall them at that moment. In that situation, the students were given a quick reminder. The correct answer was shown to them for 5 seconds, with the simple instruction that they should look at the answer because they would need it later on.
Nine days after this reminder, participants were able to recall roughly half the answers. This obviously wasn’t perfect performance, but it was an enormous return (an improvement from 0% to 50%) from a very small investment (5 seconds of “study time”). And it’s likely that a second reminder a few days later, again lasting just 5 seconds, would have lifted their performance still further and allowed the participants to recall the items after an even longer delay.
One suggestion, then, is that testing yourself (perhaps with flashcards-with a cue on one side and an answer on the other) can be quite useful. Flashcards are often a poor way to learn material, because (as we’ve seen) learning requires thoughtful and meaningful engagement with the materials you’re trying to memorize, and running through a stack of flashcards probably won’t promote that thoughtful engagement. But using flashcards may be an excellent way to review material that is already learned-and so a way to avoid forgetting this material. Other, more substantial, forms of testing can also be valuable. Think about what happens each time you take a vocabulary quiz in your Spanish class. A question like “What’s the Spanish word for ‘bed’?” gives you practice in retrieving the word, and that practice promotes fluency in retrieval. In addition, seeing the word (cama) can itself refresh the memory, promoting retention.
The key idea here is the “testing effect.” This term refers to a consistent pattern in which students who have taken a test have better retention later on, in comparison to students who didn’t take the initial test. (See, e.g., Carpenter, Pashler, & Cepeda, 2009; Glass & Sinha, 2013; Halamish & Bjork, 2011; Karpicke, 2012; McDermott, Agarwal, D’Antonio, Roediger, & McDaniel, 2014; Pyc & Rawson, 2012.) This pattern has been documented with students of various ages (including high school and college students) and with different sorts of material.
The implications for students should be clear. It really does pay to go back periodically and review what you’ve learned-including material you learned earlier this academic year as well as material from previous years. The review doesn’t have to be lengthy or intense; in the first study described here, just a 5-second exposure was enough to decrease forgetting dramatically.
Finally, you shouldn’t complain if a teacher insists on giving frequent quizzes. Of course, quizzes can be a nuisance, but they serve two functions. First, they can help you assess your learning, so that you can judge whether-perhaps-you need to adjust your study strategies. Second, the quizzes actually help you retain what you’ve learned-for days, and probably months, and perhaps even decades after you’ve learned it.

COGNITIVE PSYCHOLOGY AND THE LAW
Jurors’ Memory

Throughout the textbook, we’ve covered many topics relevant to the question of what eyewitnesses to crimes-or crime victims-can or cannot remember. But memory is also relevant to the courts for another reason: Members of a jury sit and listen to hours (and, sometimes, many days) of courtroom testimony. Then, they move into the jury room, where, on the basis of their recollection of the testimony, they must evaluate the facts of the case and reach a verdict. But what if the jurors don’t remember the testimony they’ve heard? In some courtrooms, members of the jury are allowed to take notes during the trial, but in many jurisdictions they aren’t. Perhaps we should worry, therefore, about jurors’ memories just as much as we worry about witnesses’ memories.
Jurors’ memories are influenced by the same factors as any other memories. For example, we know in general that people try to fit complex events into a mental framework, or schema. Aspects of the event that fit well with this framework are likely to be remembered. Aspects that don’t fit with the framework may be forgotten or remembered in a distorted form, so that the recollection, now with its distorted content, does fit with the framework. This pattern has been documented in many settings, so it’s not surprising that it can also be demonstrated in jurors. To see how this plays out, bear in mind that in the opening phase of a trial, lawyers from each side have a chance to address the jury, and they use this opportunity to describe the case to come, foreshadowing what their arguments will be and, in some trials, what the evidence will show. Often, these presentations take the form of a story, describing in narrative form the sequence of events that is central for the trial. These stories can have a large impact on jurors-so that, for example, jurors generally remember more of the trial evidence if they have one of these stories in mind from the trial’s start. The reason is that jurors, listening to the trial testimony, can fit each new fact into the framework provided by the story, and this link to the framework supports memory.
For similar reasons, it’s not surprising that jurors will remember more of the trial evidence if we make it easier for them to fit the evidence into a story. Concretely, they’ll remember more if the testimony presents the trial evidence in “story sequence”-first, the earliest events in the story; then, later events; and so on.
But there’s also a downside. Once jurors have adopted a story about the trial, evidence that’s consistent with the story is more likely to be remembered; evidence inconsistent with the story is often forgotten. Also, jurors can sometimes “remember” evidence that actually wasn’t presented during the trial but that is consistent with the story!
These findings are just what we’d expect, based on what we know about memory in other settings. But these facts are also troubling, because we would obviously prefer that jurors remember all the evidence-and remember it accurately. Perhaps, therefore, we should seek changes in courtroom procedures so that, in the end, the jurors’ verdict will be based on an unbiased and complete recollection of the trial evidence. Also, it’s important that jurors work together, as a group, to reach their verdict. So perhaps, during their deliberations, jury members can remind one another about points that some of them may have forgotten. Likewise, we might hope that jurors can correct one another’s memory errors when they retire to the jury room to discuss the case.
This “memory repair” does happen to some extent, so deliberation does seem to improve jurors’ memory-but only to a small extent. Why is it small? In the jury room, the people most likely to speak up about the evidence are the people who are most confident that they recall the trial well. However, research tells us that confidence in one’s memory isn’t always an indication that the memory is accurate. Therefore, the people who speak up may not be the ones who remember the evidence correctly!
Overall, then, it seems that memory research highlights juror memory as yet another arena in which errors are possible. More positively, the research points to another subject on which efforts at improving the legal system might be possible-and are surely desirable.
Interconnections between Acquisition and Retrieval
Learning as Preparation for Retrieval

Putting information into long-term memory helps you only if you can retrieve that information later on. Otherwise, it would be like putting money into a savings account without the option of ever making withdrawals, or writing books that could never be read. But let’s emphasize that there are different ways to retrieve information from memory. You can try to recall the information (“What was the name of your tenth-grade homeroom teacher?”) or to recognize it (“Was the name perhaps Miller?”). If you try to recall the information, a variety of cues may or may not be available (you might be told, as a hint, that the name began with an M or rhymes with “tiller”).
In Chapter 6, we largely ignored these variations in retrieval. We talked as if material was well established in memory or was not, with little regard for how the material would be retrieved from memory. There’s reason to believe, however, that we can’t ignore these variations in retrieval, and in this chapter we’ll examine the interaction between how a bit of information was learned and how it is retrieved later.

Crucial Role of Retrieval Paths

In Chapter 6, we argued that when you’re learning, you’re making connections between the newly acquired material and other information already in your memory. These connections make the new knowledge “findable” later on. Specifically, the connections serve as retrieval paths: When you want to locate information in memory, you travel on those paths, moving from one memory to the next until you reach the target material.
These claims have an important implication. To see this, bear in mind that retrieval paths-like any paths-have a starting point and an ending point: The path leads you from Point A to Point B. That’s useful if you want to move from A to B, but what if you’re trying to reach B from somewhere else? What if you’re trying to reach Point B, but at the moment you happen to be nowhere close to Point A? In that case, the path linking A and B may not help you.
As an analogy, imagine that you’re trying to reach Chicago from somewhere to the west. For this purpose, what you need is some highway coming in from the west. It won’t help that you’ve constructed a wonderful road coming into Chicago from the east. That road might be valuable in other circumstances, but it’s not the path you need to get from where you are right now to where you’re heading.
Do retrieval paths in memory work the same way? If so, we might find cases in which your learning is excellent preparation for one sort of retrieval but useless for other types of retrieval-as if you’ve built a road coming in from one direction but now need a road from another direction. Do the research data show this pattern?

Context-Dependent Learning

Consider classic studies on context-dependent learning (Eich, 1980; Overton, 1985). In one such study, Godden and Baddeley (1975) asked scuba divers to learn various materials. Some of the divers learned the material while sitting on dry land; others learned it while underwater, hearing the material via a special communication set. Within each group, half of the divers were then tested while above water, and half were tested below (see Figure 7.2).
Underwater, the world has a different look, feel, and sound, and this context could easily influence what thoughts come to mind for the divers in the study. Imagine, for example, that a diver is feeling cold while underwater. This context will probably lead him to think “cold-related” thoughts, so those thoughts will be in his mind during the learning episode. In this situation, the diver is likely to form memory connections between these thoughts and the materials he’s trying to learn.
Let’s now imagine that this diver is back underwater at the time of the memory test. Most likely he’ll again feel cold, which may once more lead him to “cold-related” thoughts. These thoughts, in turn, are now connected (we’ve proposed) to the target materials, and that gives us what we want: The cold triggers certain thoughts, and because of the connections formed during learning, those thoughts can trigger the target memories.
Of course, if the diver is tested for the same memory materials on land, he might have other links, other memory connections, that will lead to the target memories. Even so, on land the diver will be at a disadvantage because the "cold-related" thoughts aren't triggered, so there will be no benefit from the memory connections that are now in place, linking those thoughts to the sought-after memories.
By this logic, we should expect that divers who learn material while underwater will remember the material best if they’re again underwater at the time of the test. This setting will enable them to use the connections they established earlier. In terms of our previous analogy, they’ve built certain highways, and we’ve put the divers into a situation in which they can use what they’ve built. And the opposite is true for divers who learned while on land; they should do best if tested on land. And that is exactly what the data show (see Figure 7.3).
Similar results have been obtained in other studies, including those designed to mimic the learning situation of a college student. In one experiment, research participants read a two-page article similar to the sorts of readings they might encounter in their college courses. Half the participants read the article in a quiet setting; half read it in noisy circumstances. When later given a short-answer test, those who read the article in quiet did best if tested in quiet: 67% correct answers, compared to 54% correct if tested in a noisy environment. Those who read the article in a noisy environment did better if tested in a noisy environment: 62% correct, compared to 46%. (See Grant et al., 1998; also see Balch, Bowman, & Mohler, 1992; Cann & Ross, 1989; Schab, 1990; Smith, 1985; Smith & Vela, 2001.)
In another study, Smith, Glenberg, and Bjork (1978) reported the same pattern when learning and testing took place in different rooms, with the rooms varying in appearance, sounds, and scent. In this study, though, there was an important twist: In one version of the procedure, the participants learned materials in one room and were tested in a different room. Just before testing, however, the participants were urged to think about the room in which they had learned: what it looked like and how it made them feel. When tested, these participants performed as well as those for whom there was no room change (Smith, 1979). What matters, therefore, is not the physical context but the psychological context, a result that's consistent with our account of this effect. As a result, you can get the benefits of context-dependent learning through a strategy of context reinstatement: re-creating the thoughts and feelings of the learning episode even if you're in a very different place at the time of recall. That's because what matters for memory retrieval is the mental context, not the physical environment itself.

Demonstration 7.1: Retrieval Paths and Connections

Often, the information you seek in memory is instantly available. For example, if you try to remember your father's name, or the capital of France, the information springs immediately into your mind. Other times, however, the retrieval of information is more difficult.
How well do you remember your childhood? For example, think back to the sixth grade: How many of your sixth-grade classmates do you remember? Try writing a list of all their names on a piece of paper. Do it now, before you read any farther.
Now, read the following questions:
· What house did you live in when you were in the sixth grade? Think about times that friends came over to your house. Does that help you remember more names?
· Were you involved in any sports in the sixth grade? Think about who played on the teams with you. Does that help you remember more names?
· Where did you sit in the classroom in sixth grade? Who sat at the desk on your left? Who sat at the desk on your right? In front of you? Behind? Does that help you remember more names?
· Did you ride the bus to school, or carpool, or walk? Were there classmates you often saw on your way to or from school? Does that help you remember more names?
· Was there anyone in the class who was always getting in trouble? Anyone who was a fabulous athlete? Anyone who was incredibly funny? Do these questions help you remember more names?
Chances are good that at least one of these strategies, which helped you "work your way back" to the names, did enable you to come up with some classmates you'd forgotten, and perhaps helped you to recall some names you hadn't thought about for years!
Apparently, these "extra" names were in your memory, even though you couldn't come up with them at first. Instead, you needed to locate the right retrieval path leading to the memory, the right connection. Once that connection was in your mind (or, more precisely, once you were at the right "starting point" for the path), it led you quickly to the target memory. This is just what we would expect, based on the claims in Chapter 7.

Encoding Specificity

The results we've been describing also illuminate a further point: what it is that's stored in memory. Let's go back to the scuba-diving experiment. The divers in this study didn't just remember the words they'd learned; apparently, they also remembered something about the context in which the learning took place. Otherwise, the data in Figure 7.3 (and related findings) make no sense: If the context left no trace in memory, there'd be no way for a return to the context to influence the divers later.
Here’s one way to think about this point, still relying on our analogy. Your memory contains both the information you were focusing on during learning, and the highways you’ve now built, leading toward that information. These highways- the memory connections-can of course influence your search for the target information; that’s what we’ve been emphasizing so far. But the connections can do more: They can also change the meaning of what is remembered, because in many settings “memory plus this set of connections” has a different meaning from “memory plus that set of connections.” This change in meaning, in turn, can have profound consequences for how you remember the past. In one of the early experiments exploring this point, participants read target words (e.g., “piano”) in one of two contexts: “The man lifted the piano” or “The man tuned the piano.” In each case, the sentence led the participants to think about the target word in a particular way, and it was this thought that was encoded into memory. In other words, what was placed in memory wasn’t just the word “piano.” Instead, what was recorded in memory was the idea of “piano as something heavy” or “piano as musical instrument.”
This difference in memory content became clear when participants were later asked to recall the target words. If they had earlier seen the "lifted" sentence, they were likely to recall the target word if given the cue "something heavy." The hint "something with a nice sound" was much less effective. But if participants had seen the "tuned" sentence, the result reversed: Now, the "nice sound" hint was effective, but the "heavy" hint wasn't (Barclay, Bransford, Franks, McCarrell, & Nitsch, 1974). In both cases, the cue was effective only if it was congruent with what was stored in memory.
Other experiments show a similar pattern, traditionally called encoding specificity (Tulving, 1983; also see Hunt & Ellis, 1974; Light & Carter-Sobell, 1970). This label reminds us that what you encode (i.e., place into memory) is indeed specific: not just the physical stimulus as you encountered it, but the stimulus together with its context. Then, if you later encounter the stimulus in some other context, you ask yourself, "Does this match anything I learned previously?" and you correctly answer no. And we emphasize that this "no" response is indeed correct. It's as if you had learned the word "other" and were later asked whether you'd been shown the word "the." In fact, "the" does appear as part of "other," because the letters t, h, and e do appear within "other." But it's the whole that people learn, not the parts. Therefore, if you've seen "other," it makes sense to deny that you've seen "the" (or, for that matter, "he" or "her"), even though all these letter combinations are contained within "other." Learning a list of words works in the same way. The word "piano" was contained in what the research participants learned, just as "the" is contained in "other." What was learned, however, wasn't just this word. Instead, what was learned was the broader, integrated experience: the word as the perceiver understood it. Therefore, "piano as musical instrument" isn't what participants learned if they saw the "lifted" sentence, so they were correct in asserting that this item wasn't on the earlier list (also see Figure 7.4).
Demonstration 7.2: Encoding Specificity

The textbook argues that the material in your memory is not just a reflection of the sights and sounds you've experienced. Instead, the material in your memory preserves a record of how you thought about these sights and sounds, how you interpreted and understood them. This demonstration, illustrating this point, is a little complicated because it has three separate parts. First, you'll read a list of words. Next, you should leave the demonstration and go do something else for 15 to 20 minutes: run some errands, perhaps, or do a bit of your reading for next week's class. After that, your memory will be tested.
Here is the list of words to be remembered. For each word, a short phrase or cue is provided to help you focus on what the word means. Read the phrase or cue out loud, then pause for a second, then read the word, then pause for another second to make sure you've really thought about the word. Then move on to the next. Ready? Begin.

· A day of the week: Thursday
· A large city: Tokyo
· A government leader: King
· A sign of happiness: Smile
· A type of bird: Cardinal
· A student: Pupil
· A famous psychologist: Freud
· A long word: Notwithstanding
· A menu item: Wine
· Has four wheels: Toyota
· A personality trait: Charm
· A part of a bird: Bill
· A vegetable: Cabbage
· A member of the family: Grandfather
· Associated with heat: Stove
· A happy time of year: Birthday
· A round object: Ball
· A part of a word: Letter
· Found in the jungle: Leopard
· A tool: Wrench
· A crime: Robbery
· Found next to a highway: Motel
· A baseball position: Pitcher
· A type of sports equipment: Racket
· Associated with cold: North
· Part of a building: Chimney
· Song accompaniment: Banjo
· Made of leather: Saddle
· Take to a birthday party: Present
· A tropical plant: Palm
· A girl's name: Susan
· A synonym for "big": Colossal
· A type of footgear: Boots
· Associated with lunch: Noon
· A man-made structure: Bridge
· Part of the intestine: Colon
· A weapon: Cannon
· A sweet food: Banana
· An assertion of possession: Mine
Now, what time is it? Close the list of words and go do something else for 15 minutes, then come back for the next part of this demonstration.
Next, we’re going to test your memory for the words you learned earlier. To guide your efforts at recall, a cue will be provided for each of the words. Sometimes the cue will be exactly the same as the cue you saw before, and sometimes it will be different. In all cases, though, the cue will be closely related to the target word. There are no misleading cues.
On a piece of paper, write down the word from the previous list that is related to the cue. Do not look at the previous list. If you can't recall some of the words, leave those items blank.
Here are the answers. Check which ones you got right.
These words are obviously in groups of three. For the second word in each group (“Tokyo,” “Cannon,” etc.), the cue is identical to the cue you saw on the very first list. How many of these (out of 13) did you get right?
For the first word in each group ("Smile," "Banana," etc.), the cue is closely linked to the one you saw at first ("A sign of happiness" was replaced with "A facial expression," and so on). How many of these (out of 13) did you get right?
For the third word in each group ("Mine," "Bridge," etc.), the cue actually changed the meaning of the target word. (On the first list, "Bridge" was "A man-made structure," not "A card game"; "Racket" was "A type of sports equipment," not "A type of noise.") How many of these (out of 13) did you get right? Most people do best with the identical cues and a little worse with the closely linked cues. Most people recall the fewest words with the cues that changed the meaning. Is this the pattern of your results? If so, your data fit with what the chapter describes as encoding specificity. This term reflects the fact that what goes into your memory isn't just the words; it's more specific than that: the words plus some record of what you thought about each word. As a result, what's in your memory is not (for example) the word "bridge." If that were your memory, a cue like "card game" might do the trick. Instead, what's in your memory is something like "structure used to get across a river," and to trigger that idea, you need a different cue.
Demonstration adapted from Thieman, T. J. (1984). Table 1, in A classroom demonstration of encoding specificity. Teaching of Psychology, 11(2), 102. Copyright 1984 Routledge. Reprinted by permission from the publisher (Taylor & Francis Group, http://www.informaworld.com).

The Memory Network

In Chapter 6, we introduced the idea that memory acquisition, and, more broadly, learning, involves the creation (or strengthening) of memory connections. In this chapter, we've returned to the idea of memory connections, building on the idea that these connections serve as retrieval paths guiding you toward the information you seek. But what are these connections? How do they work? And who (or what?) is traveling on these "paths"?
According to many theorists, memory is best thought of as a vast network of ideas. In later chapters, we'll consider how exactly these ideas are represented (as pictures? as words? in some more abstract format?). For now, let's just think of these representations as nodes within the network, just like the knots in a fisherman's net. (In fact, the word "node" is derived from the Latin word for knot, nodus.) These nodes are tied to each other via connections we'll call associations or associative links. Some people find it helpful to think of the nodes as being like light bulbs that can be turned on by incoming electricity, and to imagine the associative links as wires that carry the electricity.

Spreading Activation

Theorists speak of a node becoming activated when it has received a strong enough input signal. Then, once a node has been activated, it can activate other nodes: Energy will spread out from the just-activated node via its associations, and this will activate the nodes connected to the just-activated node.
To put all of this more precisely, nodes receive activation from their neighbors, and as more and more activation arrives at a particular node, the activation level for that node increases. Eventually the activation level will reach the node’s response threshold. Once this happens, we say that the node fires. This firing has several effects, including the fact that the node will now itself be a source of activation, sending energy to its neighbors and activating them. In addition, firing of the node will draw attention to that node; this is what it means to “find” a node within the network.
Activation levels below the response threshold, so-called subthreshold activation, also play an important role. Activation is assumed to accumulate, so that two subthreshold inputs may add together, in a process of summation, and bring the node to threshold. Likewise, if a node has been partially activated recently, it is in effect already “warmed up,” so that even a weak input will now be sufficient to bring it to threshold.
These claims mesh well with points we raised in Chapter 2, when we considered how neurons communicate with one another. Neurons receive activation from other neurons; once a neuron reaches its threshold, it fires, sending activation to other neurons. All of this is precisely parallel to the suggestions we’re describing here. Our current discussion also parallels claims offered in Chapter 4, when we described how a network of detectors might function in object recognition. In other words, the network linking memories to each other will resemble the networks we’ve described linking detectors to each other (e.g., Figures 4.9 and 4.10). Detectors, like memory nodes, receive their activation from other detectors; they can accumulate activation from different inputs, and once activated to threshold levels, they fire.
Returning to long-term storage, however, the key idea is that activation travels from node to node via associative links. As each node becomes activated and fires, it serves as a source for further activation, spreading onward through the network. This process, known as spreading activation, enables us to deal with a key question: How does one navigate through the maze of associations? If you start a search at one node, how do you decide where to go from there? The answer is that in most cases you don't "choose" at all. Instead, activation spreads out from its starting point in all directions simultaneously, flowing through whatever connections are in place.

Retrieval Cues

This sketch of the memory network leaves a great deal unspecified, but even so it allows us to explain some well-established results. For example, why do hints help you to remember? Why, for example, do you draw a blank if asked, "What's the capital of South Dakota?" but then remember if given the cue "Is it perhaps a man's name?" Here's one likely explanation. Mention of South Dakota will activate nodes in memory that represent your knowledge about this state. Activation will then spread outward from these nodes, eventually reaching nodes that represent the capital city's name. It's possible, though, that there's only a weak connection between the SOUTH DAKOTA nodes and the nodes representing PIERRE. Maybe you're not very familiar with South Dakota, or maybe you haven't thought about this state's capital for some time. In either case, this weak connection will do a poor job of carrying the activation, with the result that only a trickle of activation will flow into the PIERRE nodes, and so these nodes won't reach threshold and won't be "found."
Things will go differently, though, if a hint is available. If you’re told, “South Dakota’s capital is also a man’s name,” this will activate the MAN’S NAME node. As a result, activation will spread out from this source at the same time that activation is spreading out from the SOUTH DAKOTA nodes. Therefore, the nodes for PIERRE will now receive activation from two sources simultaneously, and this will probably be enough to lift the nodes’ activation to threshold levels. In this way, question- plus-hint accomplishes more than the question by itself (see Figure 7.5).
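The question-plus-hint logic can be sketched as a tiny simulation. Everything here (the node labels, the link strengths, and the threshold value) is an illustrative assumption chosen only to show how two subthreshold inputs summate; none of these numbers come from the chapter.

```python
# A minimal sketch of spreading activation with summation of
# subthreshold inputs. Link strengths and the threshold are
# arbitrary illustrative values.

THRESHOLD = 1.0

# Associative links: source node -> {neighbor: connection strength}
links = {
    "SOUTH DAKOTA": {"PIERRE": 0.6, "MOUNT RUSHMORE": 0.9},
    "MAN'S NAME":   {"PIERRE": 0.6, "PETER": 0.9},
}

def activation_received(target, sources):
    """Sum the activation a target node receives from fired source nodes."""
    return sum(links[s].get(target, 0.0) for s in sources)

# The question alone: only SOUTH DAKOTA fires, and its weak link
# delivers just a subthreshold trickle to PIERRE.
assert activation_received("PIERRE", ["SOUTH DAKOTA"]) < THRESHOLD

# Question plus hint: activation from two sources sums, so the
# PIERRE node now reaches threshold and "fires" (is found).
assert activation_received("PIERRE", ["SOUTH DAKOTA", "MAN'S NAME"]) >= THRESHOLD
```

Neither source alone brings PIERRE to threshold, but the two together do, which is the summation claim in miniature.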
Semantic Priming

The explanation we've just offered rests on a key assumption: namely, the summation of subthreshold activation. In other words, we relied on the idea that the insufficient activation received from one source can add to the insufficient activation received from another source. Either source of activation on its own wouldn't be enough, but the two can combine to activate the target nodes.
Can we document this summation more directly? In a lexical-decision task, research participants are shown a series of letter sequences on a computer screen. Some of the sequences spell words; other sequences aren't words (e.g., "blar," "plome"). The participants' task is to hit a "yes" button if the sequence spells a word and a "no" button otherwise. Presumably, they perform this task by "looking up" these letter strings in their "mental dictionary," and they base their response on whether or not they find the string in the dictionary. We can therefore use the participants' speed of response in this task as an index of how quickly they can locate the word in their memories.
In a series of classic studies, Meyer and Schvaneveldt (1971; Meyer, Schvaneveldt, & Ruddy, 1974) presented participants with pairs of letter strings, and participants had to respond "yes" if both strings were words and "no" otherwise. For example, participants would say "yes" in response to "chair, bread" but "no" in response to "house, fime." Also, if both strings were words, sometimes the words were semantically related in an obvious way (e.g., "nurse, doctor") and sometimes they weren't (e.g., "cake, shoe"). Of interest was how this relationship between the words would influence performance. Consider a trial in which participants see a related pair, like "bread, butter." To choose a response, they first need to "look up" the word "bread" in memory. This means they'll search for, and presumably activate, the relevant node, and in this way they'll decide that, yes, this string is a legitimate word. Then, they're ready for the second word. But in this sequence, the node for BREAD (the first word in the pair) has just been activated. This will, we've hypothesized, trigger a spread of activation outward from this node, bringing activation to other, nearby nodes. These nearby nodes will surely include BUTTER, since the association between "bread" and "butter" is a strong one. Therefore, once the BREAD node (from the first word) is activated, some activation should also spread to the BUTTER node.
From this base, think about what happens when a participant turns her attention to the second word in the pair. To select a response, she must locate "butter" in memory. If she finds this word (i.e., finds the relevant node), then she knows that this string, too, is a word, and she can hit the "yes" button. But the process of activating the BUTTER node has already begun, thanks to the (subthreshold) activation this node just received from BREAD. This should accelerate the process of bringing this node to threshold (since it's already partway there), and so it will require less time to activate. As a result, we expect quicker responses to "butter" in this context, compared to a context in which "butter" was preceded by some unrelated word. Our prediction, therefore, is that trials with related words will produce semantic priming. The term "priming" indicates that a specific prior event (in this case, presentation of the first word in the pair) will produce a state of readiness (and, therefore, faster responding) later on. There are various forms of priming (in Chapter 4, we discussed repetition priming). In the procedure we're considering here, the priming results from the fact that the two words in the pair are related in meaning; therefore, this is semantic priming.
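The time-to-threshold reasoning behind this prediction can be mocked up in a few lines. The numbers below (the per-step input and the residual activation left by the prime) are arbitrary assumptions chosen only to show the mechanism, not measured values.

```python
# A toy illustration of semantic priming: a node that already holds
# residual subthreshold activation needs less additional input to
# reach threshold, so the lexical decision should come faster.

THRESHOLD = 1.0
INPUT_PER_STEP = 0.2  # activation delivered per time step by reading the word

def steps_to_fire(residual):
    """Count time steps until a node starting at `residual` activation fires."""
    level, steps = residual, 0
    while level < THRESHOLD:
        level += INPUT_PER_STEP
        steps += 1
    return steps

unprimed = steps_to_fire(residual=0.0)  # "butter" after an unrelated word
primed = steps_to_fire(residual=0.4)    # "butter" after "bread" has spread activation

assert primed < unprimed  # the primed node reaches threshold sooner
```

The primed node starts "partway there," so fewer input steps are needed, which is the model's account of the faster responses on related-pair trials.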
The results confirm these predictions. Participants' lexical-decision responses were faster by almost 100 ms if the stimulus words were related (see Figure 7.6), just as we would expect, based on the model we're developing. (For other relevant studies, including some alternative conceptions of priming, see Hutchison, 2003; Lucas, 2000.)
Before moving on, though, we should mention that this process of spreading activation, with one node activating nearby nodes, is not the whole story for memory search. As one complication, people have some degree of control over the starting points for their memory searches, relying on the processes of reasoning (Chapter 12) and the mechanisms of executive control (Chapters 5 and 6). In addition, evidence suggests that once the spreading activation has begun, people have the option of "shutting down" some of this spread if they're convinced that the wrong nodes are being activated (e.g., Anderson & Bell, 2001; Johnson & Anderson, 2004). Even so, spreading activation is a crucial mechanism. It plays a central role in retrieval, and it helps us understand why memory connections are so important and so helpful.

Demonstration 7.3: Spreading Activation in Memory Search

On a piece of paper, list all of the men's first names you can think of that are also verbs. For example, you can Mark something on paper; you shouldn't Rob a bank. If you're willing to ignore the spelling, you can Neil before the queen and Phil a bucket. How many other men's names are also verbs? Spend a few minutes generating the list.
How do you search your memory to come up with these names? One possibility is that you first think of all the men's names that you know, and then from this list you select the names that work as verbs. A different possibility reverses this sequence: You first think of all the verbs that you know, and from this list you select the words that are also names. One last possibility is that you combine these steps, so that your two searches go on in parallel: In essence, you let activation spread out in your memory network from the MEN'S NAMES nodes, and at the same time you let activation spread out from the VERBS nodes. Then, you can just wait and see which nodes receive activation from both of these sources simultaneously. In fact, the evidence suggests that the third option (simultaneous activation from two sources) is the one you use. We can document this by asking a different group of people just to list all the verbs they know. When we do this, we find that some verbs come to mind only after a long delay, if at all. For example, if you're just thinking of verbs, the verb "rustle" may not pop into your thoughts. If, therefore, you were trying to think of verbs-that-are-also-names by first thinking about verbs and then screening them, you're unlikely to come up with "rustle" in your initial step (i.e., generating a list of verbs). Therefore, you won't think about "rustle" in this setting, and so you won't spot the fact that it's also a man's name ("Russell"). On this basis, this name won't be one of the names on your list.
The reverse is also true. If you’re just thinking about men’s names, the name “Russell” may not spring to mind, and so, if this is the first step in your memory search (i.e., first generate a list of names; then screen it, looking for verbs), you won’t come up with this name in the first place. Therefore, you won’t consider this name, won’t see that it’s also a verb, and won’t put it on your list.
It turns out, though, that relatively rare names and rare verbs are often part of your final output. This makes no sense if you're using a "two-step" procedure (first generate names, then screen them; or first generate verbs, then screen them), because the key words would never come up in the first step of this process. But the result does make sense if your memory search combines the two steps. In that case, even though these rare items are only weakly activated by the MEN'S NAMES nodes, and only weakly activated by the VERBS nodes, they are activated perfectly well if they can receive energy from both sources at the same time, and that is why these rare items come easily to mind. And, by the way, there are at least 50 men's names that are also verbs, so keep hunting for them! It may help to remember that Americans Bob for apples at Halloween. Yesterday, I Drew a picture and decided to Stu the beef for dinner. I can Don a suit, Mike a speaker, Rush to an appointment, Flip a pancake, or Jimmy a locked door. These are just some of the names that could be on your list!

Demonstration 7.4: Semantic Priming

As Chapter 7 describes, searching through long-term memory relies heavily on a process of spreading activation, with currently activated nodes sending activation outward to their neighbors. If this spread brings enough activation to the neighbors, then those nodes will themselves become activated. However, even if these nodes don't receive enough activation to become activated themselves, the subthreshold activation still has important effects.
Here is a list of anagrams (words for which we’ve scrambled up the letters). Can you unscramble them to figure out what each of the words is?
Did you get them all? Continue in order to see the answers.
The answers, in no particular order, are "sea," "shirt," "victor," "island," "mountain," "wave," "pilot," and... what? The last anagram in the list actually has two solutions: It could be an anagram for the boat used in North America to explore lakes and streams, or it could be an anagram for the body of water that sharks and whales and sea turtles live in. Which of these two solutions came to your mind? If you happen to be a devoted paddler, then the word "canoe" may have come rapidly into your thoughts. But the odds are good that "ocean" is the word that came to mind for you. Why is this? Several of the other words in this series ("sea," "island," "mountain," "wave") are semantically associated with "ocean." Therefore, when you solved these earlier anagrams, you activated nodes for these words, and the activation spread outward from there to the neighboring nodes, including, probably, OCEAN. As a result, the word "ocean" was already primed when you turned to the last anagram, making it likely that this word, and not the legitimate alternative, would come into your thoughts as you unscrambled NOCAE.

Different Forms of Memory Testing

Let's pause to review. In Chapter 6, we argued that learning involves the creation or strengthening of connections. This is why memory is promoted by understanding (because understanding consists, in large part, of seeing how new material is connected to other things you know). We also proposed that these connections later serve as retrieval paths, guiding your search through the vast warehouse that is memory. In this chapter, we've explored an important implication of this idea: that (like all paths) the paths through memory have both a starting point and an end point. Therefore, retrieval paths will be helpful only if you're at the appropriate starting point; this, we've proposed, is the basis for the advantage produced by context reinstatement.
And, finally, we’ve now started to lay out what these paths really are: connections that carry activation from one memory to another.
This theoretical base also helps us with another issue: the impact of different forms of memory testing. Both in the laboratory and in day-to-day life, you often try to recall information from memory. This means that you’re presented with a retrieval cue that broadly identifies the information you seek, and then you need to come up with the information on your own: “What was the name of that great restaurant your parents took us to?”; “Can you remember the words to that song?”; “Where were you last Saturday?” In other circumstances, you draw information from your memory via recognition. This term refers to cases in which information is presented to you, and you must decide whether it’s the sought-after information: “Is this the man who robbed you?”; “I’m sure I’ll recognize the street when we get there”; “If you let me taste that wine, I’ll tell you if it’s the same one we had last time.”
These two modes of retrieval, recall and recognition, are fundamentally different from each other. Recall requires memory search because you have to come up with the sought-after item on your own; you need to locate that item within memory. As a result, recall depends heavily on the memory connections we've been emphasizing so far. Recognition, in contrast, often depends on a sense of familiarity. Imagine, for example, that you're taking a recognition test, and the fifth word on the test is "butler." In response to this word, you might find yourself thinking, "I don't recall seeing this word on the list, but this word feels really familiar, so I guess I must have seen it recently. Therefore, it must have been on the list." In this case, you don't have source memory; that is, you don't have any recollection of the source of your current knowledge. But you do have a strong sense of familiarity, and you're willing to make an inference about where that familiarity came from. In other words, you attribute the familiarity to the earlier encounter, and thanks to this attribution you'll probably respond "yes" on the recognition test.

Familiarity and Source Memory

We need to be careful about our terms here, because source memory is actually a type of recall. Let's say, for example, that you hear a song on the radio and say, "I know I've heard this song before because it feels familiar and I remember where I heard it." In this setting, you're able to remember the source of your familiarity, and that means you're recalling when and where you encountered the song. On this basis, we don't need any new theory to talk about source memory, because we can use the same theory that we'd use for other forms of recall. Hearing the song was the retrieval cue that launched a search through memory, a search that allowed you to identify the setting in which you last encountered the song.
That search (like any search) was dependent on memory connections, and would be explained by the spreading activation process that we’ve already described.
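The search-through-connections idea described above can be sketched computationally. The following is a minimal illustration only, not the chapter’s model: the network, the node names, the link strengths, and the threshold are all invented for the example. Activation starts at a retrieval cue and flows along weighted connections, decaying as it crosses weaker links; nodes that receive enough activation count as “retrieved.”

```python
# Toy spreading-activation sketch. A retrieval cue activates its node;
# activation then flows along weighted memory connections, decaying as it
# crosses weaker links. All names and weights here are invented.

NETWORK = {
    "song-on-radio": {"road-trip": 0.9, "dentist-office": 0.3},
    "road-trip": {"last-summer": 0.8},
    "dentist-office": {"last-week": 0.7},
    "last-summer": {},
    "last-week": {},
}

def spread(cue, activation=1.0, threshold=0.25):
    """Return the nodes whose received activation reaches the threshold."""
    levels = {cue: activation}
    frontier = [cue]
    while frontier:
        node = frontier.pop()
        for neighbor, strength in NETWORK[node].items():
            passed = levels[node] * strength   # activation decays across weak links
            if passed > levels.get(neighbor, 0.0):
                levels[neighbor] = passed
                frontier.append(neighbor)
    return {n: round(a, 2) for n, a in levels.items() if a >= threshold}

print(spread("song-on-radio"))
```

On this toy network, the strongly connected episode (“road-trip,” and through it “last-summer”) is retrieved, while the weakly connected path through “dentist-office” decays below threshold before reaching “last-week”; this mirrors the point that recall succeeds or fails depending on the strength of the relevant connections.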
But what about familiarity? What does this sort of remembering involve? As a start, let’s be clear that familiarity is truly distinct from source memory. This is evident in the fact that the two types of memory are independent of each other-it’s possible for an event to be familiar without any source memory, and it’s possible for you to have source memory without any familiarity. This independence is evident when you’re watching a movie and realize that one of the actors is familiar, but (sometimes with considerable frustration, and despite a lot of effort) you can’t recall where you’ve seen that actor before. Or you’re walking down the street, see a familiar face, and find yourself asking, “Where do I know that woman from? Does she work at the grocery store I shop in? Is she the driver of the bus I often take?” You’re at a loss to answer these questions; all you know is that the face is familiar.
In cases like these, you can’t “place” the memory; you can’t identify the episode in which the face was last encountered. But you’re certain the face is familiar, even though you don’t know why-a clear example of familiarity without source memory. The inverse case is less common, but it too can be demonstrated. For example, in Chapter 2 we discussed Capgras syndrome. Someone with this syndrome might have detailed, accurate memories of what friends and family members look like, and probably remembers where and when these other people were last encountered. Even so, when these other people are in view they seem hauntingly unfamiliar. In this setting, there is source memory without familiarity. (For further evidence-and a patient who, after surgery, has intact source memory but disrupted familiarity-see Bowles et al., 2007; also see Yonelinas & Jacoby, 2012.)
We can also document the difference between source memory and familiarity in another way. In many studies, (neurologically intact) participants have been asked, during a recognition test, to make a “remember/know” distinction. This involves pressing one button (to indicate “remember”) if they actually recall the episode of encountering a particular item, and pressing a different button (“know”) if they don’t recall the encounter but just have a broad feeling that the item must have been on the earlier list. With one response, participants are indicating that they have a source memory; with the other, they’re indicating an absence of source memory. Basically, a participant using the “know” response is saying, “This item seems familiar, so I know it was on the earlier list even though I don’t remember the experience of seeing it” (Gardiner, 1988; Hicks & Marsh, 1999; Jacoby, Jones, & Dolan, 1998).
Researchers can use fMRI scans to monitor participants’ brain activity while they’re taking these memory tests, and the scans indicate that “remember” and “know” judgments depend on different brain areas. The scans show heightened activity in the hippocampus when participants indicate that they “remember” a particular test item, suggesting that this brain structure is crucial for source memory. In contrast, “know” responses are associated with activity in a different area-the anterior parahippocampus, with the implication that this brain site is crucial for familiarity. (See Aggleton & Brown, 2006; Diana, Yonelinas, & Ranganath, 2007; Dobbins, Foley, Wagner, & Schacter, 2002; Eldridge, Knowlton, Furmanski, Bookheimer, & Engel, 2000; Montaldi, Spencer, Roberts, & Mayes, 2006; Wagner, Shannon, Kahn, & Buckner, 2005. Also see Rugg & Curran, 2007; Rugg & Yonelinas, 2003.)
Familiarity and source memory can also be distinguished during learning. If certain brain areas (e.g., the rhinal cortex) are especially active during learning, then the stimulus is likely to seem familiar later on. In contrast, if other brain areas (e.g., the hippocampal region) are particularly active during learning, there’s a high probability that the person will indicate source memory for that stimulus when tested later (see Figure 7.7). (See, e.g., Davachi & Dobbins, 2008; Davachi, Mitchell, & Wagner, 2003; Ranganath et al., 2003.)
We still need to ask, though, what’s going on in these various brain areas to create the relevant memories. Activity in the hippocampus is probably helping to create the memory connections we’ve been discussing all along, and it’s these connections, we’ve suggested, that promote source memory. But what about familiarity? What “record” does it leave in memory? The answer to this question leads us to a very different sort of memory.

Demonstration 7.5: Studying for Different Types of Tests

Chapter 7 emphasizes that recollection and familiarity are distinct types of memory-each obeys its own type of rules, and each is supported by its own brain circuits. With this context, think about the fact that when a teacher in school, or a professor in college, announces a test, students often ask about the format of the test. Will the test include multiple-choice questions? True-false questions? Short-answer questions? Essay questions?
Spend a minute thinking through whether the promised format of an upcoming test influences your study strategies. Does the promised format influence how hard you study? Does it influence how you study or what you focus on during your studying?
Ask a few friends the same questions. Do they want to know, in advance of a test, what the test’s format will be? Does it influence how they prepare for the test?
Then, as one last step: If you believe you study in the same way for different formats, is this consistent with the evidence in the chapter, distinguishing recollection and familiarity? If you think you study differently for true-false or multiple-choice tests (both tests that hinge on recognition) than you do for short-answer or essay tests (both hinging on recall), do your choices about study strategy line up with what the chapter says about recognition and recall? We might mention that in a study done years ago, half of the participants were told that their memories would be assessed via a recall test; half were told they would be given a recognition test. Then, when the test actually took place, half of each group got the test format they expected; half did not. The data showed that participants did better with the recall test if this is what they’d expected (62% vs. 40%), and participants did better with the recognition test if that’s what they’d been led to expect (87% vs. 67%). Can you explain what’s going on here? Are there perhaps lessons for your own study strategies?
For more on this, see Tversky, B. (1973). Encoding processes in recognition and recall. Cognitive Psychology, 5, 275-287.

Implicit Memory

How can we find out if someone remembers a previous event? The obvious path is to ask her-“How did the job interview go?”; “Have you ever seen Casablanca?”; “Is this the book you told me about?” But at the start of this chapter, we talked about a different approach: We can expose someone to an event, and then later re-expose her to the same event and assess whether her response on the second encounter is different from the first. Specifically, we can ask whether the first encounter somehow primed the person-got her ready-for the second exposure. If so, it would seem that the person must retain some record of the first encounter-she must have some sort of memory.

Memory without Awareness

In a number of studies, participants have been asked to read through a list of words, with no indication that their memories would be tested later on. (They might be told that they’re merely checking the list for spelling errors.) Then, sometime later, the participants are given a lexical-decision task: They are shown a series of letter strings and, for each, must indicate (by pressing one button or another) whether the string is a word or not. Some of the letter strings in the lexical-decision task are duplicates of the words seen in the first part of the experiment (i.e., they were on the list participants had checked for spelling), enabling us to ask whether the first exposure somehow primed the participants for the second encounter. In these experiments, lexical decisions are quicker if the person has recently seen the test word; that is, lexical decision shows the pattern that in Chapter 4 we called “repetition priming” (e.g., Oliphant, 1983). Remarkably, this priming is observed even when participants have no recollection of having encountered the stimulus words before.
To demonstrate this, we can show participants a list of words and then test them in two different ways. One test assesses memory directly, using a standard recognition procedure: “Which of these words were on the list I showed you earlier?” The other test is indirect and relies on lexical decision: “Which of these letter strings form real words?” In this procedure, the two tests will yield different results. At a sufficient delay, the direct memory test is likely to show that the participants have completely forgotten the words presented earlier; their recognition performance is essentially random. According to the lexical-decision results, however, the participants still remember the words-and so they show a strong priming effect. In this situation, then, participants are influenced by a specific past experience that they seem (consciously) not to remember at all-a pattern that some researchers refer to as “memory without awareness.” A different example draws on a task called word-stem completion. In this task, participants are given three or four letters and must produce a word with this beginning. If, for example, they’re given cla-, then “clam” or “clatter” would be acceptable responses, and the question of interest for us is which of these responses the participants produce. It turns out that people are more likely to offer a specific word if they’ve encountered it recently; once again, this priming effect is observed even if participants, when tested directly, show no conscious memory of their recent encounter with that word (Graf, Mandler, & Haden, 1982).
Results like these lead psychologists to distinguish two types of memory. Explicit memories are those usually revealed by direct memory testing-testing that urges participants to remember the past. Recall is a direct memory test; so is a standard recognition test. Implicit memories, however, are typically revealed by indirect memory testing and are often manifested as priming effects. In this form of testing, participants’ current behavior is demonstrably influenced by a prior event, but they may be unaware of this. Lexical decision, word-stem completion, and many other tasks provide indirect means of assessing memory. (See, for example, Mulligan & Besken, 2013; for a different perspective on these data, though, see Cabeza & Moscovitch, 2012.)
How exactly is implicit memory different from explicit memory? We’ll say more about this question before we’re done; but first we need to say more about how implicit memory feels from the rememberer’s point of view. This will lead us back into our discussion of familiarity and source memory.

False Fame

In a classic research study, Jacoby, Kelley, Brown, and Jasechko (1989) presented participants with a list of names to read out loud. The participants were told nothing about a memory test; they thought the experiment was concerned with how they pronounced the names. Some time later, during the second step of the procedure, the participants were shown a new list of names and asked to rate each person on this list according to how famous each one was. The list included some real, very famous people; some real but not-so-famous people; and some fictitious names that the experimenters had invented. Crucially, the fictitious names were of two types: Some had occurred on the prior (“pronunciation”) list, and some were simply new names. A comparison between those two types will indicate how the prior familiarization (during the pronunciation task) influenced the participants’ judgments of fame. For some participants, the “famous” list was presented right after the “pronunciation” list; for other participants, there was a 24-hour delay between these two steps. To see how this delay matters, imagine that you’re a participant in the immediate-testing condition: When you see one of the fictitious-but-familiar names, you might decide, “This name sounds familiar, but that’s because I just saw it on the previous list.” In this situation, you have a feeling that the (familiar) name is distinctive, but you also know why it’s distinctive-because you remember your earlier encounter with the name. In other words, you have both a sense of familiarity and a source memory, so there’s nothing here to persuade you that the name belongs to someone famous, and you respond accordingly.
But now imagine that you’re a participant in the other condition, with the 24-hour delay. Because of the delay, you may not recall the earlier episode of seeing the name in the pronunciation task. But the broad sense of familiarity remains anyway, so in this setting you might say, “This name rings a bell, and I have no idea why. I guess this must be a famous person.” And this is, in fact, the pattern of the data: When the two lists are presented one day apart, participants are likely to rate the made-up names as being famous.
Apparently, the participants in this study noted (correctly) that some of the names did “ring a bell” and so did trigger a certain feeling of familiarity. The false judgments of fame, however, come from the way the participants interpreted this feeling and what conclusions they drew from it. Basically, participants in the 24-hour-delay condition forgot the real source of the familiarity (appearance on a recently viewed list) and instead filled in a bogus source (“Maybe I saw this person in a movie?”). And it’s easy to see why they made this misattribution. After all, the experiment was described to them as being about fame, and other names on the list were actually those of famous people. From the participants’ point of view, therefore, it was reasonable to infer in this setting that any name that “rings a bell” belongs to a famous person. We need to be clear, though, that this misattribution is possible only because the feeling of familiarity produced by these names was relatively vague, and therefore open to interpretation. The suggestion, then, is that implicit memories may leave people with only a broad sense that a stimulus is somehow distinctive-that it “rings a bell” or “strikes a chord.” What happens after this depends on how they interpret that feeling.

Implicit Memory and the “Illusion of Truth”

How broad is this potential for misinterpreting an implicit memory? Participants in one study heard a series of statements and had to judge how interesting each statement was (Begg, Anas, & Farinacci, 1992). As an example, one sentence was “The average person in Switzerland eats about 25 pounds of cheese each year.” (This is false; the average in 1992, when the experiment was done, was closer to 18 pounds.) Another was “Henry Ford forgot to put a reverse gear in his first automobile.” (This is true.)
After hearing these sentences, the participants were presented with some more sentences, but now they had to judge the credibility of these sentences, rating them on a scale from certainly true to certainly false. However, some of the sentences in this “truth test” were repeats from the earlier presentation, and the question of interest is how sentence credibility is influenced by sentence familiarity. The result was a propagandist’s dream: Sentences heard before were more likely to be accepted as true; that is, familiarity increased credibility. (See Begg, Armour, & Kerr, 1985; Brown & Halliday, 1990; Fiedler, Walther, Armbruster, Fay, & Naumann, 1996; Moons, Mackie, & Garcia-Marques, 2009; Unkelbach, 2007.) This effect was found even when participants were warned in advance not to believe the sentences in the first list. In one procedure, participants were told that half of the statements had been made by men and half by women. The women’s statements, they were told, were always true; the men’s, always false. (Half the participants were told the reverse.) Then, participants rated how interesting the sentences were, with each sentence attributed to either a man or a woman: for example, “Frank Foster says that house mice can run an average of 4 miles per hour” or “Gail Logan says that crocodiles sleep with their eyes open.” Later, participants were presented with more sentences and had to judge their truth, with these new sentences including the earlier assertions about mice, crocodiles, and so forth.
Let’s focus on the sentences initially identified as being false-in our example, Frank’s claim about mice. If someone explicitly remembers this sentence (“Oh yes-Frank said such and such”), then he should judge the assertion to be false (“After all, the experimenter said that the men’s statements were all lies”). But what about someone who lacks this explicit memory? This person will have no conscious recall of the episode in which he last encountered this sentence (i.e., will have no source memory), and so he won’t know whether the assertion came from a man or a woman. He therefore can’t use the source as a basis for judging the truthfulness of the sentence. But he might still have an implicit memory for the sentence left over from the earlier exposure (“Gee, that statement rings a bell”), and this might increase his sense of the statement’s credibility (“I’m sure I’ve heard that somewhere before; I guess it must be true”). This is exactly the pattern of the data: Statements plainly identified as false when they were first heard still created the so-called illusion of truth; that is, these statements were subsequently judged to be more credible than sentences never heard before. The relevance of this result to the political arena or to advertising should be clear. A newspaper headline might inquire, “Is Mayor Wilson a crook?” Or the headline might declare, “Known criminal claims Wilson is a crook!” In either case, the assertion that Wilson is a crook would become familiar. The Begg et al. data indicate that this familiarity will, by itself, increase the likelihood that you’ll later believe in Wilson’s dishonesty. This will be true even if the paper merely raised the question; it will be true even if the allegation came from a disreputable source. Malicious innuendo does, in fact, produce nasty effects. (For related findings, see Ecker, Lewandowsky, Chang, & Pillai, 2014.)
Attributing Implicit Memory to the Wrong Source
Apparently, implicit memory can influence us (and, perhaps, bias us) in the political arena. Other evidence suggests that implicit memory can influence us in the marketplace-and can, for example, guide our choices when we’re shopping (e.g., Northup & Mulligan, 2013, 2014). Yet another example involves the justice system, and it’s an example with troubling implications. In an early study by Brown, Deffenbacher, and Sturgill (1977), research participants witnessed a staged crime. Two or three days later, they were shown “mug shots” of individuals who supposedly had participated in the crime. But as it turns out, the people in these photos were different from the actual “criminals”-no mug shots were shown for the truly “guilty” individuals. Finally, after four or five more days, the participants were shown a lineup and asked to select the individuals seen in Step 1-namely, the original crime (see Figure 7.8).
The data in this study show a pattern known as source confusion. The participants correctly realized that one of the faces in the lineup looked familiar, but they were confused about the source of the familiarity. They falsely believed they had seen the person’s face in the original “crime,” when, in truth, they’d seen that face only in a subsequent photograph. In fact, the likelihood of this error was quite high, with 29% of the participants (falsely) selecting from the lineup an individual they had seen only in the mug shots. (Also see Davis, Loftus, Vanous, & Cucciare, 2008; Kersten & Earles, 2017. For examples of similar errors that interfere with real-life criminal investigations, see Garrett, 2011. For a broader discussion of eyewitness errors, see Reisberg, 2014.)

Demonstration 7.6: Priming from Implicit Memory

Imagine that yesterday you read a particular word-“couch,” for example. This encounter with the word can change how you react to the word when you see it today. This will be true even if your memory contains no explicit record of yesterday’s event, so that you have no conscious memory of having read that particular word. Even without an explicit record, your unconscious memory can lead you to interpret the word differently the next time you meet it, or it can lead you to recognize the word more quickly. Implicit memories can also change your emotional response to a word. The emotional effect probably won’t be enough to make you laugh out loud or shed a tear when you see the word, but it may be enough to make the word seem more attractive to you than it would have been without the priming.
These implicit memory effects are, however, difficult to translate into quick demonstrations, because a classroom (or do-at-home) demonstration is likely to leave you with both an implicit and an explicit memory of the stimulus materials, and the explicit memory will overshadow the implicit memory. In other words, if an experience does leave you with an explicit memory, this record might lead you to overrule the implicit memory. Thus, your implicit memory might pull you toward a particular response, but your explicit memory might allow you to refuse that response and lead you to a different one instead-perhaps a response that’s not even close to the one favored by the implicit memory. We can, however, demonstrate something close to these effects. For example, write a short sentence using each of the following words.
Wind Bottle Close
Read Record Refuse
Dove Foot Desert
Pet Tear Lead
Write your sentences before you read on!
Several of these words can be used in more than one way or have more than one meaning (think about how the “wind” blows and also how you “wind” up some types of toys). How did you use these items in your sentences? This use is likely to be guided to some extent by your memory: If you recently read a sentence like “He dove into the pool,” you’re more likely to use “dove” to indicate the activity rather than the bird. This effect will work even if you didn’t especially notice the word when you first saw it, and even if, now, you have no conscious recollection of recently seeing the word. In other words, these priming effects depend on implicit memory, not explicit memory.
It turns out that the opening paragraphs of this demonstration used the word “read” in the past tense, priming you to use the word in the past tense. Did you, in your sentence? The early paragraphs in this demonstration also primed you to use “record” as a noun, not a verb; “tear” as the name for the thing that comes out of your eye, not an action; “close” as an adjective, not a noun or a verb; and “refuse” as a verb, not a noun. You were also primed by the opening paragraphs to think of “lead” as a verb, not a noun. Did the priming work for you, guiding how you used the test words in the sentences you composed? Did you notice the primes? Did you remember them?
Let’s be clear, though, that these priming effects don’t work every time, simply because a number of other factors, in addition to priming, also influence how you use these words. Even so, the probability of your using the word a certain way is often changed by the prime, and so most people do show these priming effects.

Theoretical Treatments of Implicit Memory

One message coming from these studies is that people are often better at remembering that something is familiar than they are at remembering why it is familiar. This explains why it’s possible to have a sense of familiarity without source memory (“I’ve seen her somewhere before, but I can’t figure out where!”) and also why it’s possible to be correct in judging familiarity but mistaken in judging source.
In addition, let’s emphasize that in many of these studies participants are being influenced by memories they aren’t aware of. In some cases, participants realize that a stimulus is somehow familiar, but they have no memory of the encounter that produced the familiarity. In other cases, they don’t even have a sense of familiarity for the target stimulus; nonetheless, they’re influenced by their previous encounter with the stimulus. For example, experiments show that participants often prefer a previously presented stimulus over a novel stimulus, even though they have no sense of familiarity with either stimulus. In such cases, people have no idea that their preference is being guided by memory (Murphy, 2001; also Montoya, Horton, Vevea, Citkowicz, & Lauber, 2017).
It does seem, then, that the phrase “memory without awareness” is appropriate, and it does make sense to describe these memories as implicit memories. But how can we explain this form of unconscious “remembering”?

Processing Fluency
Our discussion in Chapters 4 and 5 has laid the foundation for a proposal about implicit memory. Let’s build the argument in steps.
When a stimulus arrives in front of your eyes, it triggers certain detectors, and these trigger other detectors, and these still others, until you recognize the object. (“Oh, it’s my stuffed bear, Blueberry.”) We can think of this sequence as involving a “flow” of activation that moves from detector to detector. We could, if we wished, keep track of this flow and in this way identify the “path” that the activation traveled through the network. Let’s refer to this path as a processing pathway-the sequence of detectors, and the connections between detectors, that the activation flows through in recognizing a specific stimulus.
In the same way, we’ve proposed in this chapter that remembering often involves the activation of a node, and this node triggers other, nearby, nodes so that they become activated; they trigger still other nodes, leading eventually to the information you seek in memory. So here, too, we can speak of a processing pathway-the sequence of nodes, and connections between nodes, that the activation flows through during memory retrieval.
We’ve also said that the use of a processing pathway strengthens that pathway. This is because the baseline activation level of nodes or detectors increases if the nodes or detectors have been used frequently in the past, or if they’ve been used recently. Likewise, connections (between detectors or nodes) grow stronger with use. For example, by thinking about the link between, say, “Jacob” and “Boston,” you can strengthen the connection between the corresponding nodes, and this will help you remember that your friend Jacob comes from Boston.

Now, let’s put the pieces together. Use of a processing pathway strengthens the pathway. As a result, the pathway will be a bit more efficient, a bit faster, the next time you use it. Theorists describe this fact by saying that use of a pathway increases the pathway’s processing fluency-that is, the speed and ease with which the pathway will carry activation.
In many cases, this is all the theory we need to explain implicit memory effects. Consider implicit memory’s effect on lexical decision. In this procedure, you first are shown a list of words, including the word “bubble.” Then, we ask you to do the lexical-decision task, and we find that you’re faster for words (like “bubble”) that had been included in the earlier list. This increase in speed provides evidence for implicit memory, and the explanation is straightforward. When we show you “bubble” early in the experiment, you read the word, and this involves activation flowing through the appropriate processing pathway for this word. This warms up the pathway, and as a result the functioning will be more fluent the next time you use it. Of course, when “bubble” shows up later as part of the lexical-decision task, it’s handled by the same (now more fluent) pathway, and so the word is processed more rapidly-exactly the outcome that we’re trying to explain.

For other implicit-memory effects, though, we need a further assumption-namely, that people are sensitive to the degree of processing fluency. That is, just as people can tell whether they’ve lifted a heavy carton or a lightweight one, or whether they’ve answered an easy question (“What’s 2 + 2?”) or a harder one (“What’s 17 × 19?”), people also have a broad sense of when they have perceived easily and when they have perceived only by expending more effort. They likewise know when a sequence of thoughts was particularly fluent and when the sequence was labored.
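The fluency account of repetition priming can be sketched in a few lines. This is only an illustration of the logic, not a psychological model: the baseline processing time (600 ms) and the strengthening rate are invented numbers chosen to make the effect visible.

```python
# Sketch of repetition priming via processing fluency: each use of a word's
# processing pathway strengthens it, so the next use of that pathway is
# faster. Baseline time and strengthening rate are invented for illustration.

class Pathway:
    def __init__(self, baseline_ms=600.0, gain=0.15):
        self.time_ms = baseline_ms
        self.gain = gain  # fraction of processing time shaved off per use

    def process(self):
        """Carry activation through the pathway once; use makes it more fluent."""
        elapsed = self.time_ms
        self.time_ms *= (1 - self.gain)  # the pathway is "warmed up" by use
        return elapsed

pathways = {"bubble": Pathway(), "candle": Pathway()}

pathways["bubble"].process()             # earlier exposure: reading "bubble" on the list

primed = pathways["bubble"].process()    # lexical decision on the primed word
unprimed = pathways["candle"].process()  # lexical decision on an unprimed word
print(primed < unprimed)                 # prints True: the primed pathway is faster
```

The earlier exposure to “bubble” leaves no explicit trace in this sketch, only a faster pathway, which is exactly the sense in which repetition priming can count as memory without awareness.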
This fluency, however, is perceived in an odd way. For example, when a stimulus is easy to perceive, you don’t experience something like “That stimulus sure was easy to recognize!” Instead, you merely register a vague sense of specialness. You feel that the stimulus “rings a bell.” No matter how it is described, though, this sense of specialness has a simple cause-namely, the detection of fluency, created by practice.
There’s one complication, however. What makes a stimulus feel “special” may not be fluency itself. Instead, people seem sensitive to changes in fluency (e.g., they notice if it’s a little harder to recognize a face this time than it was in the past). People also seem to notice discrepancies between how easy (or hard) it was to carry out some mental step and how easy (or hard) they expected it to be (Wanke & Hansen, 2015; Whittlesea, 2002). In other words, a stimulus is registered as distinctive, or “rings a bell,” when people detect a change or a discrepancy between experience and expectations. To see how this matters, imagine that a friend unexpectedly gets a haircut (or gets new eyeglasses, or adds or removes some facial hair). When you see your friend, you realize immediately that something has changed, but you’re not sure what. You’re likely to ask puzzled questions (“Are those new glasses?”) and get a scornful answer. (“No, you’ve seen these glasses a hundred times over the last year.”) Eventually your friend tells you what the change is-pointing out that you failed to notice that he’d shaved off his mustache (or some such).
What’s going on here? You obviously can still recognize your friend, but your recognition is less fluent than in the past because of the change in your friend’s appearance, and you notice this change -but then are at a loss to explain it (see Figure 7.9).
On all of these grounds, we need another step in our hypothesis, but it’s a step we’ve already introduced: When a stimulus feels special (because of a change in fluency, or a discrepancy between the fluency expected and the fluency experienced), you often want to know why. Thus the vague feeling of specialness (again, produced by fluency) can trigger an attribution process, as you ask, “Why did that stimulus stand out?”
In many circumstances, you’ll answer this question correctly, and so the specialness will be (accurately) interpreted as familiarity and attributed to the correct source. (“That woman seems distinctive, and I know why: It’s the woman I saw yesterday in the dentist’s office.”) Often, you make this attribution because you have the relevant source memory-and this memory guides you in deciding why a stimulus (a face, a song, a smell) seems to stand out. In other cases, you make a reasonable inference, perhaps guided by the context. (“I don’t remember where I heard this joke before, but it’s the sort of joke that Conor is always telling, so I bet it’s one of his and that’s why the joke is familiar.”) In other situations, though, things don’t go so smoothly, and so, as we have seen, people sometimes misinterpret their own processing fluency, falling prey to the errors and illusions we have been discussing.

The Nature of Familiarity
All of these points provide us, at last, with a proposal for what “familiarity” is, and the proposal is surprisingly complex. You might think that familiarity is simply a feeling that’s produced more or less directly when you encounter a stimulus you’ve met before. But the research findings described in the last few sections point toward a different proposal, namely, that “familiarity” is more like a conclusion that you draw than a feeling triggered by a stimulus. Specifically, the evidence suggests that a stimulus will seem familiar whenever the following list of requirements is met: First, you have encountered the stimulus before. Second, because of that prior encounter (and the “practice” it provided), your processing of that stimulus is now faster and more efficient; there is, in other words, an increase in processing fluency. Third, you detect that increased fluency, and this leads you to register the stimulus as somehow distinctive or special. Fourth, you try to figure out why the stimulus seems special, and you reach a particular conclusion, namely, that the stimulus has this distinctive quality because it’s a stimulus you’ve met before in some prior episode (see Figure 7.10).
Let’s be clear, though, that none of these steps happens consciously-you’re not aware of seeking an interpretation or trying to explain why a stimulus feels distinctive. All you experience consciously is the end product of all these steps: the sense that a stimulus feels familiar. Moreover, this conclusion about a stimulus isn’t one you draw capriciously; instead, you’re likely to arrive at this conclusion and decide a stimulus is familiar only when you have supporting information. Thus, imagine that you encounter a stimulus that “rings a bell.” As we mentioned before, you’re likely to decide the stimulus is familiar if you also have an (explicit) source memory, so that you can recall where and when you last encountered that stimulus. You’re also more likely to decide a stimulus is familiar if the surrounding circumstances support it. For example, if you’re asked, “Which of these words were on the list you saw earlier?” the question itself gives you a cue that some of the words were recently encountered, and so you’re more likely to attribute fluency to that encounter.
The fact remains, though, that judgments like these sometimes go astray, which is why we need this complicated theory. We’ve considered several cases in which a stimulus is objectively familiar (you’ve seen it recently) but doesn’t feel familiar, just as our theory predicts. In these cases, you detect the fluency but attribute it to some other source. (“That melody is lovely” rather than “The melody is familiar.”) In other words, you go through all of the steps shown in the top of Figure 7.10 except for the last two: You don’t attribute the fluency to a specific prior event, and so you don’t experience a sense of familiarity. We can also find the opposite sort of case, in which a stimulus is not familiar (i.e., you’ve not seen it recently) but feels familiar anyhow, and this, too, fits with the theory. This sort of illusion of familiarity can be produced if the processing of a completely novel stimulus is more fluent than you expected, perhaps because (without telling you) we’ve sharpened the focus of a computer display or presented the stimulus for a few milliseconds longer than other stimuli you’re inspecting (Jacoby & Whitehouse, 1989; Whittlesea, 2002; Whittlesea, Jacoby, & Girard, 1990). Cases like these can lead to the situation shown in the bottom half of Figure 7.10. And as our theory predicts, these situations do produce an illusion: Your processing of the stimulus is unexpectedly fluent; you seek an attribution for this fluency, and you’re fooled into thinking the stimulus is familiar, so you say you’ve seen the stimulus before, when in fact you haven’t. This illusion is a powerful confirmation that the sense of familiarity does rest on processes like the ones we’ve described. (For more on fluency, see Besken & Mulligan, 2014; Griffin, Gonzalez, Koehler, & Gilovich, 2012; Hertwig, Herzog, Schooler, & Reimer, 2008; Lanska, Olds, & Westerman, 2013; Oppenheimer, 2008; Tsai & Thomas, 2011.
For a glimpse of what fluency amounts to in the nervous system, see Knowlton & Foerde, 2008.)
The Hierarchy of Memory Types
Clearly, we’re often influenced by the past without being aware of that influence. We often respond differently to familiar stimuli than we do to novel stimuli, even if we have no subjective feeling of familiarity. On this basis, it seems that our conscious recollection seriously underestimates what’s in our memories, and research has documented many ways in which unconscious memories influence what we do, think, and feel.
In addition, the data are telling us that there are two different kinds of memory: one type (“explicit”) is conscious and deliberate; the other (“implicit”) is typically unconscious and automatic. These two broad categories can be further subdivided, as shown in Figure 7.11. Explicit memories can be subdivided into episodic memories (memory for specific events) and semantic memory (more general knowledge). Implicit memory is often divided into four subcategories, as shown in the figure. Our emphasis here has been on one of the subtypes, priming, largely because of its role in producing the feeling of familiarity. However, the other subtypes of implicit memory are also important and can be distinguished from priming both in terms of their functioning (i.e., they follow somewhat different rules) and in terms of their biological underpinnings.
Some of the best evidence for these distinctions, though, comes from the clinic, not the laboratory. In other words, we can learn a great deal about these various types of memory by considering individuals who have suffered different forms of brain damage. Let’s look at some of that evidence.
Amnesia
As we have already mentioned, a variety of injuries or illnesses can lead to a loss of memory, or amnesia. Some forms of amnesia are retrograde, meaning that they disrupt memory for things learned prior to the event that initiated the amnesia (see Figure 7.12). Retrograde amnesia is often caused by blows to the head; the afflicted person is unable to recall events that occurred just before the blow. Other forms of amnesia have the reverse effect, causing disruption of memory for experiences after the onset of amnesia; these are cases of anterograde amnesia. (Many cases of amnesia involve both retrograde and anterograde memory loss.)
Disrupted Episodic Memory, but Spared Semantic Memory
Studies of amnesia can teach us many things. For example, do we need all the distinctions shown in Figure 7.11? Consider the case of Clive Wearing, whom we met in the opening to Chapter 6. (You can find more detail about Wearing’s case in an extraordinary book by his wife; see Wearing, 2011.) Wearing’s episodic memory is massively disrupted, but his memory for generic information, as well as his deep love for his wife, seems to be entirely intact. Other patients show the reverse pattern: disrupted semantic memory but preserved episodic knowledge. One patient, for example, suffered damage (from encephalitis) to the front portion of her temporal lobes. As a consequence, she lost her memory of many common words, important historical events, famous people, and even the fundamental traits of animate and inanimate objects. “However, when asked about her wedding and honeymoon, her father’s illness and death, or other specific past episodes, she readily produced detailed and accurate recollections” (Schacter, 1996, p. 152; also see Cabeza & Nyberg, 2000). (For more on amnesia, see Brown, 2002; Clark & Maguire, 2016; Kopelman & Kapur, 2001; Nadel & Moscovitch, 2001; Riccio, Millin, & Gisquet-Verrier, 2003.)
These cases (and other evidence too; see Figure 7.13) provide the double dissociation that demands a distinction between episodic and semantic memory. It’s observations like these that force us to the various distinctions shown in Figure 7.11. (For evidence, though, that episodic and semantic memory are intertwined in important ways, see McRae & Jones, 2012.)
We’ve already mentioned the patient known as H.M. His memory loss was the result of brain surgery in 1953, and over the next 55 years (until his death in 2008) H.M. participated in a vast number of studies. Some people suggest he was the most-studied individual in the entire history of psychology (which is one of the reasons we’ve returned to his case several times). In fact, the data gathering continued after H.M.’s death, with careful postmortem scrutiny of his brain. (For a review of H.M.’s case, see Corkin, 2013; Milner, 1966, 1970; also O’Kane, Kensinger, & Corkin, 2004; Skotko et al., 2004; Skotko, Rubin, & Tupler, 2008.)
After his surgery, H.M. was still able to recall events that took place before the surgery-and so his amnesia was largely anterograde, not retrograde. But the amnesia was severe. Episodes he had experienced after the surgery, people he had met, stories he had heard-all seemed to leave no lasting record, as though nothing new could get into his long-term storage.
H.M. could hold a mostly normal conversation (because his working memory was still intact), but his deficit became instantly clear if the conversation was interrupted. If you spoke with him for a while, then left the room and came back 3 or 4 minutes later, he seemed to have totally forgotten that the earlier conversation ever took place. If the earlier conversation was your first meeting with H.M., he would, after the interruption, be certain he was now meeting you for the very first time.
A similar amnesia has been found in patients who have been longtime alcoholics. The problem isn’t the alcohol itself; the problem instead is that alcoholics tend to have inadequate diets, getting most of their nutrition from whatever they’re drinking. It turns out, though, that most alcoholic beverages are missing several key nutrients, including vitamin B1 (thiamine). As a result, longtime alcoholics are vulnerable to problems caused by thiamine deficiency, including the disorder known as Korsakoff’s syndrome (Rao, Larkin, & Derr, 1986; Ritchie, 1985). Patients suffering from Korsakoff’s syndrome seem similar to H.M. in many ways. They typically have no problem remembering events that took place before the onset of alcoholism. They can also maintain current topics in mind as long as there’s no interruption. New information, though, if displaced from the mind, seems to be lost forever. Korsakoff’s patients who have been in the hospital for decades will casually mention that they arrived only a week ago; if asked the name of the current president or events in the news, they unhesitatingly give answers appropriate for two or three decades earlier, whenever the disorder began (Marslen-Wilson & Teuber, 1975; Seltzer & Benson, 1974).
Anterograde Amnesia: What Kind of Memory Is Disrupted?
At the chapter’s beginning, we alluded to other evidence that complicates this portrait of anterograde amnesia, and it’s evidence that brings us back to the distinction between implicit and explicit memory. As it turns out, some of this evidence has been available for a long time. In 1911, the Swiss psychologist Edouard Claparède (1911/1951) reported the following incident. He was introduced to a young woman suffering from Korsakoff’s amnesia, and he reached out to shake her hand. However, Claparède had secretly positioned a pin in his own hand so that when they clasped hands the patient received a painful pinprick. (Modern investigators would regard this experiment as a cruel violation of a patient’s rights, but ethical standards were much, much lower in 1911.) The next day, Claparède returned and reached out to shake hands with the patient. Not surprisingly, she gave no indication that she recognized Claparède or remembered anything about the prior encounter. (This confirms the diagnosis of amnesia.) But just before their hands touched, the patient abruptly pulled back and refused to shake hands with Claparède. He asked her why, and after some confusion the patient said vaguely, “Sometimes pins are hidden in people’s hands.”
What was going on here? On the one side, this patient seemed to have no memory of the prior encounter with Claparède. She certainly didn’t mention it in explaining her refusal to shake hands, and when questioned closely about the earlier encounter, she showed no knowledge of it. But, on the other side, she obviously remembered something about the painful pinprick she’d gotten the previous day. We see this clearly in her behavior.
A related pattern occurs with other Korsakoff’s patients. In one of the early demonstrations of this point, researchers used a deck of cards like those used in popular trivia games. Each card contained a question and some possible answers, in a multiple-choice format (Schacter, Tulving, & Wang, 1981). The experimenter showed each card to a Korsakoff’s patient, and if the patient didn’t know the answer, he was told it. Then, outside of the patient’s view, the card was replaced in the deck, guaranteeing that the same question would come up again in a few minutes.
When the question did come up again, the patients in this study were likely to get it right, and so apparently had learned the answer in the previous encounter. Consistent with their diagnosis, though, the patients had no recollection of the learning: They were unable to explain why their answers were correct. They didn’t say, “I know this bit of trivia because the same question came up just five minutes ago.” Instead, patients were likely to say things like “I read about it somewhere” or “My sister once told me about it.”
Many studies show similar results. In setting after setting, Korsakoff’s patients are unable to recall episodes they’ve experienced; they seem to have no explicit memory. But if they’re tested indirectly, we see clear indications of memory, and so these patients seem to have intact implicit memories. (See, e.g., Cohen & Squire, 1980; Graf & Schacter, 1985; Moscovitch, 1982; Schacter, 1996; Schacter & Tulving, 1982; Squire & McKee, 1993.) In fact, in many tests of implicit memory, amnesic patients seem indistinguishable from ordinary individuals.
Can There Be Explicit Memory without Implicit?
We can also find patients with the reverse pattern: intact explicit memory, but impaired implicit memory. One study compared a patient who had suffered brain damage to the hippocampus but not the amygdala with a second patient who had the opposite pattern: damage to the amygdala but not the hippocampus (Bechara et al., 1995). These patients were exposed to a series of trials in which a particular stimulus (a blue light) was reliably followed by a loud boat horn, while other stimuli (green, yellow, or red lights) were not followed by the horn. Later on, the patients were exposed to the blue light on its own, and their bodily arousal was measured; would they show a fright reaction in response to this stimulus? In addition, the patients were asked directly, “Which color was followed by the horn?”
The patient with damage to the hippocampus did show a fear reaction to the blue light-assessed via the skin conductance response (SCR), a measure of bodily arousal. As a result, his data on this measure look just like results for control participants (i.e., people without brain damage; see Figure 7.14). However, when asked directly, this patient couldn’t recall which of the lights had been associated with the boat horn.
In contrast, the patient with damage to the amygdala showed the opposite pattern. She was able to report that just one of the lights had been associated with the horn and that the light’s color had been blue, demonstrating fully intact explicit memory. When presented with the blue light, however, she showed no fear response.
Optimal Learning
Before closing this chapter, let’s put these amnesia findings into the broader context of the chapter’s main themes. Throughout the chapter, we’ve suggested that we cannot make claims about learning or memory acquisition without some reference to how the learning will be used later on. For example, whether it’s better to learn underwater or on land depends on where you will be tested. Whether it’s better to learn while listening to jazz or while sitting in a quiet room depends on the acoustic background of the memory test environment.
These ideas are echoed in the neuropsychology data. Specifically, it would be misleading to say that brain damage (whether from Korsakoff’s syndrome or some other source) ruins someone’s ability to create new memories. Instead, brain damage is likely to disrupt some types of learning but not others, and how this matters for the person depends on how the newly learned material will be accessed. Thus, someone who suffers hippocampal damage will probably appear normal on an indirect memory test but seem amnesic on a direct test, while someone who suffers amygdala damage will probably show the reverse pattern. All these points are enormously important for our theorizing about memory, but they also have a practical implication. Right now, you are reading this material and presumably want to remember it later on. You’re also encountering new material in other settings (perhaps in other classes you’re taking), and surely you want to remember that as well. How should you study all of this information if you want the best chances of retaining it for later use?
At one level, the message from this chapter might be that the ideal form of learning would be one that’s “in tune with” the approach to the material that you’ll need later. If you’re going to be tested explicitly, you want to learn the material in a way that prepares you for that form of retrieval. If you’ll be tested underwater or while listening to music, then, again, you want to learn the material in a way that prepares you for that context and the mental perspective it produces. If you’ll need source memory, then you want one type of preparation; if you’ll need familiarity, you might want a different type of preparation.
The problem, though, is that during learning, you often don’t know how you’ll be approaching the material later: what the retrieval environment will be, whether you’ll need the information implicitly or explicitly, and so on. As a result, maybe the best strategy would be to use multiple learning perspectives. To revisit our earlier analogy, imagine that you know at some point in the future you’ll want to reach Chicago, but you don’t know yet whether you’ll be approaching the city from the north, the south, or the west. In that case, your best bet might be to build multiple highways, so that you can reach your goal from any direction. Memory works the same way. If you initially think about a topic in different ways and in relation to many other ideas, then you’ll establish many paths leading to the target material, and so you’ll be able to access that material from many different perspectives. The practical message from this chapter, then, is that this multiperspective approach may provide the optimal learning strategy.
Demonstration 7.7: Unconscious “Motor Memories”
One of the important messages in Chapter 7 is that some memories seem not to be conscious and are revealed only through indirect testing. The chapter focuses on one type of unconscious memories, that is, memories typically revealed through priming effects of one sort or another. But there are other types of unconscious memories, including ones that seem to be represented through habitual motions.
For example, you probably have keys that open a number of locked doors, but do you remember which way each key turns in each door? Many people “recall” this information by pantomiming the relevant action, pretending to insert a key into an imagined lock. Can you use this strategy to remember which way the key turns in your own front door? Or in the lock of some other door that you often have to open?
As you go through your daily routine, you likely also have to turn various knobs to open cabinets or to turn on lights. Without moving your hands, can you recall which way the knobs turn? Can you help yourself remember by pantomiming the action?
In the same way, do you remember where the individual letters are on a keyboard, whether it’s the keyboard for your computer or the one you use on your smartphone? Where is the key for “j”? The key for “e”? Again, many people “recall” this information by pretending to type something, using an imagined keyboard in the air. Can you use this strategy to remember where various keys are located? How is this type of memory similar to what the chapter calls “explicit” memory? How is this type of memory different from explicit memory? How is it similar to, and different from, the sort of priming effects described in the book under the broad banner of “implicit” memory?
COGNITIVE PSYCHOLOGY AND EDUCATION
Familiarity Can Be Treacherous
Sometimes you see a picture of someone and immediately say, “Gee, she looks familiar!” This seems like a simple and direct reaction to the picture, but the chapter describes how complicated familiarity really is. Indeed, the chapter makes it clear that we can’t think of familiarity just as a “feeling” somehow triggered by a stimulus. Instead, familiarity seems more like a conclusion that you draw at the end of a many-step process. As a result of these complexities, errors about familiarity are possible: cases in which a stimulus feels familiar even though it’s not, or cases in which you correctly realize that the stimulus is familiar but then make a mistake about why it’s familiar.
These points highlight the dangers, for students, of relying on familiarity. As one illustration, consider the advice that people sometimes give for taking a multiple-choice test. They tell you, “Go with your first inclination” or “Choose the answer that feels familiar.” In some cases these strategies will help, because sometimes the correct answer will indeed feel familiar. But in other cases these strategies can lead you astray, because the answer you’re considering may seem familiar for a bad reason. What if your professor once said, “One of the common mistakes people make is to believe . . .” and then talked about the claim summarized in the answer you’re now considering? Alternatively, what if the answer seems familiar because it resembles the correct answer but is, in some crucial way, different from the correct answer (and therefore mistaken)? In either of these cases, your sense of familiarity might lead you to a wrong answer.
Even worse, one study familiarized people with phrases like “the record for tallest pine tree.” Because of this exposure, these people were later more likely to accept as true a longer phrase, such as “the record for tallest pine tree is 350 feet.” Why? Because they realized that (at least) part of the sentence was familiar and therefore drew the reasonable inference that they must have encountered the entire sentence at some previous point. The danger here should be obvious: On a multiple-choice test, part of an incorrect option may be an exact duplicate of some phrase in your reading; if so, relying on familiarity will get you into trouble! (And, by the way, this claim about pines is false; the tallest pine tree, a sugar pine, is only about 273 feet tall.)
As a different concern, think back to the end-of-chapter essay for Chapter 6. There, we noted that one of the most common study strategies used by students is to read and reread their notes, or read and reread the textbook. This strategy turns out not to help memory very much, and other strategies are demonstrably better. But, in addition, the rereading strategy can actually hurt you. Thanks to the rereading, you become more and more familiar with the materials, which makes it easy to interpret this familiarity as mastery. But this is a mistake, and because of the mistake, familiarity can sometimes lead students to think they’ve mastered material when they haven’t, causing them to end their study efforts too soon. What can you do to avoid all these dangers? You’ll do a much better job of assessing your own mastery if, rather than relying on familiarity, you give yourself some sort of quiz (perhaps one you find in the textbook, or one that a friend creates for you). More broadly, it’s valuable to be alert to the various complexities associated with familiarity. After all, you don’t want to ignore familiarity, because sometimes it’s all you’ve got. If you really don’t know the answer to a multiple-choice question but option B seems somehow familiar, then choosing B may be your only path forward. But given the difficulties we’ve mentioned here, it may be best to regard familiarity just as a weak clue about the past and not as a guaranteed indicator. That attitude may encourage the sort of caution that will allow you to use familiarity without being betrayed by it.
COGNITIVE PSYCHOLOGY AND THE LAW
The “Cognitive Interview”
Police investigations often depend on eyewitness reports, but what can the police do if witnesses insist they can’t remember the event and can’t answer the police questions? Are there steps we can take to help witnesses remember?
A number of exotic procedures have been proposed to promote witness recollection, including hypnosis and the use of memory-enhancing medications. Evidence suggests, however, that these procedures provide little benefit (and may, in some settings, actually harm memory). Indeed, “hypnotically enhanced memory” is inadmissible as trial evidence in most jurisdictions.
However, there is a much more promising approach. The “cognitive interview” is a technique developed by psychologists with the aim of improving eyewitness memory; a parallel procedure has been developed for interviewing children who have been witnesses to crimes. A related procedure is used in England (the so-called P.E.A.C.E. procedure) for questioning suspects.
A considerable quantity of evidence suggests that the cognitive interview is successful-it does help people to remember more. It’s gratifying, then, that the cognitive interview has been adopted by a number of police departments as their preferred interview technique. How does the cognitive interview work? Let’s start with the fact that sometimes you cannot remember things simply because you didn’t notice them in the first place, and so no record of the desired information was ever placed in long-term storage. In this situation, no procedure-whether it’s the cognitive interview, or hypnosis, or simply trying really hard to recall-can locate information that isn’t there to be located. You cannot get water out of an empty bottle, and you cannot read words off a blank page. In the same way, you cannot recall information that was never placed in memory to begin with.
In other cases, though, the gaps in your recollection have a different source. The desired information is in memory, but you’re unable to find it. (We have more to say about this point in Chapter 8, when we discuss theories of forgetting.) To overcome this problem, the cognitive interview relies on context reinstatement. The police investigator urges the witness to think back to the setting of the target event: How did the witness feel at the time of the crime? What was the physical setting? What was the weather? As Chapter 7 discusses, these steps are likely to put the witness back into the same mental state, the same frame of mind, that he or she had at the time of the crime-and in many cases, these steps will promote recall.
Moreover, the chapter’s discussion of retrieval paths leads to the idea that sometimes you’ll recall a memory only if you approach the memory from the right angle-using the proper retrieval path. But how do you choose the proper path? The cognitive interview builds on the simple idea that you don’t have to choose. Instead, you can try recalling the events from lots of different angles (via lots of different paths) in order to maximize your chances of finding a path that leads to the desired information.
For example, the cognitive interview encourages witnesses first to recount the event from its start to its end, and then recount it in reverse sequence from the end back to the beginning. Sometimes witnesses are also encouraged to take a different spatial perspective: “You just told me what you saw; try to remember what Joe would have seen, from where he was standing.”
In short, the cognitive interview builds on principles that are well established in research: the role of context reinstatement, for example, or the importance of retrieval paths. It’s no surprise, therefore, that the cognitive interview is effective; the procedure capitalizes on mechanisms that we know to be helpful.
The cognitive interview was, as we’ve said, designed to help law-enforcement professionals in their investigations. Note, though, that the principles involved here are general ones, and they can be useful in many other settings. Imagine a physician trying to get as complete a medical history as possible: “When did the rash first show up? Is it worse after you’ve eaten certain foods?” Or think about a novice repair person trying to recall what she learned in training: “Did they tell me anything about this particular error code?” Or think about your own situation when you’re trying to recall, say, what you read in the library last week. The ideas built into the cognitive interview are useful in these settings as well; in fact, they’re useful for anyone who needs to draw as much information from memory as possible.
COGNITIVE PSYCHOLOGY AND THE LAW
In-Court Identifications
Imagine that you witness a crime. The police suspect that Robby Robber was the perpetrator, so they place Robby’s picture onto a page together with five other photos, and they show you this “photospread.” You point to Robby’s photo and say, “That might be the guy, but I’m not sure.” The police can’t count this as a positive identification; but, based on other evidence, they become convinced that Robby is guilty, so he’s arrested and brought to trial.
During the trial, you’re asked to testify, and when you’re on the stand, the prosecutor asks, “Do you see the perpetrator in the courtroom?” You answer yes, and so the prosecutor asks you to indicate who the robber is. You point to Robby and say, “That’s him-the man at the defense table.”
In-court identifications (I.D.s), like the one just described, are dramatic and enormously persuasive for juries. But, in truth, in-court I.D.s are problematic for several reasons. First, research tells us that people are often better at realizing that a face is familiar than they are at recalling why the face is familiar. In the case just described, therefore, you might (correctly) realize that Robby’s face is familiar and sensibly conclude that you’ve seen his face before. But then you might make an error about where you’d seen his face before, mistakenly concluding that Robby looks familiar because you saw him during the crime, when in actuality he looks familiar only because you’d seen his picture in the photospread! This error is sometimes referred to as “unconscious transference” because the face is, in your memory, unconsciously “transferred” from one setting to another. (You actually saw him in the photospread, but in your memory you “transfer” him into the original crime, memory’s version of a “cut-and-paste” operation.)
Second, notice that in our hypothetical case you had made a tentative identification from the photospread. You had, in effect, made a commitment to a particular selection, and it’s generally difficult to set aside this commitment in order to get a fresh start in a later identification procedure. In this way, your in-court I.D. of Robby is likely to be influenced by your initial selection from the photospread, even if you made your initial selection with little confidence.
Third, in-court identifications are inevitably suggestive. In the courtroom, it’s obvious from the seating arrangement who the defendant is. The witness also knows that the police and prosecution believe the defendant is guilty. These facts, by themselves, put some pressure on the witness to make an identification of the defendant-especially if the defendant looks in any way familiar.
Fourth, the justice system works at a much slower speed than any of us would wish, with the result that trials often take place many months (or years) after a crime. Therefore, there has been ample opportunity for a witness’s memory of the crime to fade. As a result, the witness doesn’t have much of an “internal anchor” (a good and clear memory) to guide the identification, making her or him all the more vulnerable to the effects of suggestion or the effect of the earlier commitment.
In light of these concerns, many researchers would argue that in-court identifications have little value as evidence. In addition, note that some of these concerns also apply to out-of-court identifications. I testified in one trial in which the victim claimed that the defendant looked familiar, and she was almost certain that he was the man who had robbed her. It turned out, though, that the man had an excellent alibi. What was the basis for the victim’s (apparently incorrect) identification? For years, the defendant had, for his morning run, used a jogging path that went right by the victim’s house. It seems likely, therefore, that the defendant looked familiar to the victim because she had seen him during his run-and that she had then unconsciously (and mistakenly) transferred his face into her memory of the crime. It’s crucial that the courts and police investigators do all that they can to avoid these problems. Specifically, if a witness thinks that the defendant “looks familiar,” it’s important to ask whether there might be some basis for the familiarity other than the crime itself. With steps like these, we can use what we know about memory to improve the accuracy of eyewitness identifications-and, in that way, improve the accuracy and the efficiency of the criminal justice system.
The Acquisition of Memories and the Working-Memory System
Acquisition, Storage, and Retrieval How does new information-whether it’s a friend’s phone number or a fact you hope to memorize for the bio exam-become established in memory? Are there ways to learn that are particularly effective? Then, once information is in storage, how do you locate it and “reactivate” it later? And why does search through memory sometimes fail-so that, for example, you forget the name of that great restaurant downtown (but then remember the name when you’re midway through a mediocre dinner someplace else)?
In tackling these questions, there’s a logical way to organize our inquiry. Before there can be a memory, you need to gain, or “acquire,” some new information. Therefore, acquisition-the process of gaining information and placing it into memory-should be our first topic. Then, once you’ve acquired this information, you need to hold it in memory until the information is needed. We refer to this as the storage phase. Finally, you remember. In other words, you somehow locate the information in the vast warehouse that is memory and you bring it into active use; this is called retrieval. This organization seems logical; it fits, for example, with the way most “electronic memories” (e.g., computers) work. Information (“input”) is provided to a computer (the acquisition phase). The information then resides in some dormant form, generally on the hard drive or perhaps in the cloud (the storage phase). Finally, the information can be brought back from this dormant form, often via a search process that hunts through the disk (the retrieval phase). And there’s nothing special about the computer comparison here; “low-tech” information storage works the same way. Think about a file drawer: information is acquired (i.e., filed), rests in this or that folder, and then is retrieved.
Guided by this framework, we’ll begin our inquiry by focusing on the acquisition of new memories, leaving discussion of storage and retrieval for later. As it turns out, though, we’ll soon find reasons for challenging this overall approach to memory. In discussing acquisition, for example, we might wish to ask: What is good learning? What guarantees that material is firmly recorded in memory? As we’ll see, evidence indicates that what counts as “good learning” depends on how the memory is to be used later on, so that good preparation for one kind of use may be poor preparation for a different kind of use. Claims about acquisition, therefore, must be interwoven with claims about retrieval. These interconnections between acquisition and retrieval will be the central theme of Chapter 7. In the same way, we can’t separate claims about memory acquisition from claims about memory storage. This is because how you learn (acquisition) depends on what you already know (information in storage). We’ll explore this important relationship in both this chapter and Chapter 8.
We begin, though, in this chapter, by describing the acquisition process. Our approach will be roughly historical. We’ll start with a simple model, emphasizing data collected largely in the 1970s. We’ll then use this as the framework for examining more recent research, adding refinements to the model as we proceed.
Demonstration 6.1: Primacy and Recency Effects
The text describes a theoretical model in which working memory and long-term memory are distinct from each other, each governed by its own principles. But what’s the evidence for this distinction? Much of the evidence comes from an easily demonstrated data pattern.
Read the following list of 25 words out loud, at a speed of roughly one second per word. (Before you begin, you might start tapping your foot at roughly one tap per second, and then keep tapping your foot as you read the list; that will help you keep up the right rhythm.)
Now, close the list so you can’t see it anymore, and write down as many words from the list as you can remember, in any order.
Open the list, and compare your recall with the actual list. How many words did you remember? Which words did you remember?
· Chances are good that you remembered the first three or four words on the list. Did you? The textbook chapter explains why this is likely.
· Chances are also good that you remembered the final three or four words on the list. Did you? Again, the textbook chapter explains why this is likely.
· Even though you were free to write down the list in any order you chose, it’s very likely that you started out by writing the words you’d just read-that is, the first words you wrote were probably the last words you read on the list. Is that correct?
The chapter doesn’t explain this last point, but the reason is straightforward. At the end of the list, the last few words you’d read were still in your working memory, simply because you’d just been thinking about these words, and nothing else had come along yet to bump these items out of working memory. The minute you think about something else, though, that “something else” will occupy working memory and will displace these just-heard words. With that base, imagine what would happen if, at the very start of your recall, you tried to remember, say, the first words on the list. This effort will likely bring those words into your thoughts, and so now these words are in working memory-bumping out the words that were there and potentially causing you to lose track of those now-displaced words. To avoid this problem, you probably started your recall by “dumping” your working memory’s current contents (the last few words you read) onto the recall sheet. Then, with the words preserved in this way, it didn’t matter if you displaced them from working memory, and you were freed to go to work on the other words from the list.
· Finally, it’s likely that one or two of the words on the list really “stuck” in your memory, even though the words were neither early in the list (and so didn’t benefit from primacy) nor late on the list (and so didn’t benefit from recency). Which words (if any) stuck in your memory in this way? Why do you think this is? Does this fit with the theory in the text?
The Route into Memory
For many years, theorizing in cognitive psychology focused on the process through which information was perceived and then moved into memory storage-that is, on the process of information acquisition. One early proposal was offered by Waugh and Norman (1965). Later refinements were added by Atkinson and Shiffrin (1968), and their version of the proposal came to be known as the modal model. Figure 6.1 provides a simplified depiction of this model.
Updating the Modal Model
According to the modal model, when information first arrives, it is stored briefly in sensory memory. This form of memory holds on to the input in “raw” sensory form-an iconic memory for visual inputs and an echoic memory for auditory inputs. A process of selection and interpretation then moves the information into short-term memory-the place where you hold information while you’re working on it. Some of the information is then transferred into long-term memory, a much larger and more permanent storage place. This proposal captures some important truths, but it needs to be updated in several ways. First, the idea of “sensory memory” plays a much smaller role in modern theorizing, so modern discussions of perception (like our discussion in Chapters 2 and 3) often make no mention of this memory. (For a recent assessment of visual sensory memory, though, see Cappiello & Zhang, 2016.) Second, modern proposals use the term working memory rather than “short-term memory,” to emphasize the function of this memory. Ideas or thoughts in this memory are currently activated, currently being thought about, and so they’re what you’re currently working on. Long-term memory (LTM), in contrast, is the vast repository that contains all of your knowledge and all of your beliefs-most of which you aren’t thinking about (i.e., aren’t working on) at this moment.
The modal model also needs updating in another way. Pictures like the one in Figure 6.1 suggest that working memory is a storage place, sometimes described as the “loading dock” just outside of the long-term memory “warehouse.” The idea is that information has to “pass through” working memory on the way into longer-term storage. Likewise, the picture implies that memory retrieval involves the “movement” of information out of storage and back into working memory.
In contrast, contemporary theorists don’t think of working memory as a “place” at all. Instead, working memory is (as we will see) simply the name we give to a status. Therefore, when we say that ideas are “in working memory,” we simply mean that these ideas are currently activated and being worked on by a specific set of operations.
We’ll have more to say about this modern perspective before we’re through. It’s important to emphasize, though, that contemporary thinking also preserves some key ideas from the modal model, including its claims about how working memory and long-term memory differ from each other. Let’s identify those differences. First, working memory is limited in size; long-term memory is enormous. In fact, long-term memory has to be enormous, because it contains all of your knowledge-including specific knowledge (e.g., how many siblings you have) and more general themes (e.g., that water is wet, that Dublin is in Ireland, that unicorns don’t exist). Long-term memory also contains all of your “episodic” knowledge-that is, your knowledge about events, including events early in your life as well as more recent experiences.
Second, getting information into working memory is easy. If you think about a particular idea or some other piece of content, then you’re “working on” that idea or content, and so this information-by definition-is now in your working memory. In contrast, we’ll see later in the chapter that getting information into long-term memory often involves some work. Third, getting information out of working memory is also easy. Since (by definition) this memory holds the ideas you’re thinking about right now, the information is already available to you. Finding information in long-term memory, in contrast, can sometimes be difficult and slow-and in some settings can fail completely.
Fourth, the contents of working memory are quite fragile. Working memory, we emphasize, contains the ideas you’re thinking about right now. If your thoughts shift to a new topic, therefore, the new ideas will enter working memory, pushing out what was there a moment ago. Long-term memory, in contrast, isn’t linked to your current thoughts, so it’s much less fragile-information remains in storage whether you’re thinking about it right now or not.
We can make all these claims more concrete by looking at some classic research findings. These findings come from a task that’s quite artificial (i.e., not the sort of memorizing you do every day) but also quite informative.
Working Memory and Long-Term Memory: One Memory or Two?
In many studies, researchers have asked participants to listen to a series of words, such as “bicycle, artichoke, radio, chair, palace.” In a typical experiment, the list might contain 30 words and be presented at a rate of one word per second. Immediately after the last word is read, the participants must repeat back as many words as they can. They are free to report the words in any order they choose, which is why this task is called a free recall procedure. People usually remember 12 to 15 words in this test, in a consistent pattern. They’re very likely to remember the first few words on the list, something known as the primacy effect, and they’re also likely to remember the last few words on the list, a recency effect. The resulting pattern is a U-shaped curve describing the relation between positions within the series-or serial position-and the likelihood of recall (see Figure 6.2; Baddeley & Hitch, 1977; Deese & Kaufman, 1957; Glanzer & Cunitz, 1966; Murdock, 1962; Postman & Phillips, 1965).
Explaining the Recency Effect
What produces this pattern? We’ve already said that working memory contains the material someone is working on at just that moment. In other words, this memory contains whatever the person is currently thinking about; and during the list presentation, the participants are thinking about the words they’re hearing. Therefore, it’s these words that are in working memory. This memory, however, is limited in size, capable of holding only five or six words. Consequently, as participants try to keep up with the list presentation, they’ll be placing the words just heard into working memory, and this action will bump the previous words out of working memory. As a result, as participants proceed through the list, their working memories will, at each moment, contain only the half dozen words that arrived most recently. Any words that arrived earlier than these will have been pushed out by later arrivals.
Of course, the last few words on the list don’t get bumped out of working memory, because no further input arrives to displace them. Therefore, when the list presentation ends, those last few words stay in place. Moreover, our hypothesis is that materials in working memory are readily available-easily and quickly retrieved. When the time comes for recall, then, working memory’s contents (the list’s last few words) are accurately and completely recalled.
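The displacement account just described behaves like a small first-in, first-out buffer, and it can be sketched in a few lines of Python. This is only a toy model: the buffer size of six and the word labels are illustrative assumptions, not fixed values from the text.

```python
from collections import deque

# Toy model of the recency account: working memory as a small buffer
# in which each newly presented word bumps out the oldest one.
BUFFER_SIZE = 6  # assumed capacity of roughly half a dozen items

def present_list(words, buffer_size=BUFFER_SIZE):
    wm = deque(maxlen=buffer_size)  # deque discards the oldest item when full
    for word in words:
        wm.append(word)
    return list(wm)  # whatever survived presentation

words = [f"word{i:02d}" for i in range(1, 31)]  # a 30-word list
print(present_list(words))  # only the final six words remain available
```

Because nothing arrives after the last word, the final few items are never displaced, which is exactly the source of the recency effect described above.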
The key idea, then, is that the list’s last few words are still in working memory when the list ends (because nothing has arrived to push out these items), and we know that working memory’s contents are easy to retrieve. This is the source of the recency effect.
Explaining the Primacy Effect
The primacy effect has a different source. We’ve suggested that it takes some work to get information into long-term memory (LTM), and it seems likely that this work requires some time and attention. So let’s examine how participants allocate their attention to the list items. As participants hear the list, they do their best to be good memorizers, and so when they hear the first word, they repeat it over and over to themselves (“bicycle, bicycle, bicycle”)-a process known as memory rehearsal. When the second word arrives, they rehearse it, too (“bicycle, artichoke, bicycle, artichoke”). Likewise for the third (“bicycle, artichoke, radio, bicycle, artichoke, radio”), and so on through the list. Note, though, that the first few items on the list are privileged. For a brief moment, “bicycle” is the only word participants have to worry about, so it has 100% of their attention; no other word receives this privilege. When “artichoke” arrives a moment later, participants divide their attention between the first two words, so “artichoke” gets only 50% of their attention-less than “bicycle” got, but still a large share of the participants’ efforts. When “radio” arrives, it has to compete with “bicycle” and “artichoke” for the participants’ time, and so it receives only 33% of their attention. Words arriving later in the list receive even less attention. Once six or seven words have been presented, the participants need to divide their attention among all these words, which means that each one receives only a small fraction of the participants’ focus.
As a result, words later in the list are rehearsed fewer times than words early in the list-a fact that can be confirmed simply by asking participants to rehearse out loud (Rundus, 1971).
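The attention-splitting arithmetic above (100%, then 50%, then 33%, and so on) can be tallied directly. The sketch below is a toy calculation only; the six-word list length is an assumed example, and real rehearsal is of course not this mechanical.

```python
# Toy tally of the total rehearsal share each list position receives,
# assuming attention is split evenly among all words presented so far.
def rehearsal_shares(n_words):
    totals = [0.0] * n_words
    for step in range(1, n_words + 1):
        share = 1.0 / step        # attention divided among `step` words
        for pos in range(step):   # every word heard so far gets a share
            totals[pos] += share
    return totals

# The first word accumulates 100% + 50% + 33% + ...; the last gets one share.
print([round(s, 2) for s in rehearsal_shares(6)])  # → [2.45, 1.45, 0.95, 0.62, 0.37, 0.17]
```

The steep falloff from the first position to the later ones mirrors the rehearsal advantage that, on this account, produces the primacy effect.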
This view of things leads immediately to our explanation of the primacy effect-that is, the observed memory advantage for the early list items. These early words didn’t have to share attention with other words (because the other words hadn’t arrived yet), so more time and more rehearsal were devoted to them than to any others. This means that the early words have a greater chance of being transferred into LTM-and so a greater chance of being recalled after a delay. That’s what shows up in these classic data as the primacy effect.
Testing Claims about Primacy and Recency
This account of the serial-position curve leads to many predictions. First, we’re claiming that the recency portion of the curve is coming from working memory, while other items on the list are being recalled from LTM. Therefore, manipulations of working memory should affect recall of the recency items but not items earlier in the list. To see how this works, consider a modification of our procedure. In the standard setup, we allow participants to recite what they remember immediately after the list’s end. But instead, we can delay recall by asking participants to perform some other task before they report the list items-for example, we can ask them to count backward by threes, starting from 201. They do this for just 30 seconds, and then they try to recall the list.
We’ve hypothesized that at the end of the list working memory still contains the last few items heard from the list. But the task of counting backward will itself require working memory (e.g., to keep track of where you are in the counting sequence). Therefore, this chore will displace working memory’s current contents; that is, it will bump the last few list items out of working memory. As a result, these items won’t benefit from the swift and easy retrieval that working memory allows, and, of course, that retrieval was the presumed source of the recency effect. On this basis, the simple chore of counting backward, even if only for a few seconds, will eliminate the recency effect. In contrast, the counting backward should have no impact on recall of the items earlier in the list: These items are (by hypothesis) being recalled from long-term memory, not working memory, and there’s no reason to think the counting task will interfere with LTM. (That’s because LTM, unlike working memory, isn’t dependent on current activity.) Figure 6.3 shows that these predictions are correct. An activity interpolated, or inserted, between the list and recall essentially eliminates the recency effect, but it has no influence elsewhere in the list (Baddeley & Hitch, 1977; Glanzer & Cunitz, 1966; Postman & Phillips, 1965). In contrast, merely delaying the recall for a few seconds after the list’s end, with no interpolated activity, has no impact. In this case, participants can continue rehearsing the last few items during the delay and so can maintain them in working memory. With no new materials coming in, nothing pushes the recency items out of working memory, and so, even with a delay, a normal recency effect is observed.
We’d expect a different outcome, though, if we manipulate long-term memory rather than working memory. In this case, the manipulation should affect all performance except for recency (which, again, is dependent on working memory, not LTM). For example, what happens if we slow down the presentation of the list? Now, participants will have more time to spend on all of the list items, increasing the likelihood of transfer into more permanent storage. This should improve recall for all items coming from LTM. Working memory, in contrast, is limited by its size, not by ease of entry or ease of access. Therefore, the slower list presentation should have no influence on working-memory performance. Research results confirm these claims: Slowing the list presentation improves retention of all the pre-recency items but does not improve the recency effect (see Figure 6.4).
Other variables that influence long-term memory have similar effects. Using more familiar or more common words, for example, would be expected to ease entry into long-term memory and does improve pre-recency retention, but it has no effect on recency (Sumby, 1963).
It seems, therefore, that the recency and pre-recency portions of the curve are influenced by distinct sets of factors and obey different principles. Apparently, then, these two portions of the curve are the products of different mechanisms, just as our theory proposed. In addition, fMRI scans suggest that memory for early items on a list depends on brain areas (in and around the hippocampus) that are associated with long-term memory; memory for later items on the list does not show this pattern (Talmi, Grady, Goshen-Gottstein, & Moscovitch, 2005; also Eichenbaum, 2017; see Figure 6.5). This provides further confirmation for our memory model.
A Closer Look at Working Memory
Earlier, we counted four fundamental differences between working memory and LTM-the size of these two stores, the ease of entry, the ease of retrieval, and the fact that working memory is dependent on current activity (and therefore fragile) while LTM is not. These are all points proposed by the modal model and preserved in current thinking. As we’ve said, though, investigators’ understanding of working memory has developed over the years. Let’s examine the newer conception in more detail.
The Function of Working Memory
Virtually all mental activities require the coordination of several pieces of information. Sometimes the relevant bits come into view one by one, so that you need to hold on to the early-arrivers until the rest of the information is available, and only then weave all the bits together. Alternatively, sometimes the relevant bits are all in view at the same time-but you still need to hold on to them together, so that you can think about the relations and combinations. In either case, you’ll end up with multiple ideas in your thoughts, all activated simultaneously, and thus several bits of information in the status we describe as “in working memory.” (For more on how you manage to focus on these various bits, see Oberauer & Hein, 2012.) Framing things in this way makes it clear how important working memory is: You use it whenever you have multiple ideas in your mind, multiple elements that you’re trying to combine or compare. Let’s now add that people differ in the “holding capacity” of their working memories. Some people are able to hold on to (and work with) more elements, and some with fewer. How does this matter? To find out, we first need a way of measuring working memory’s capacity, to determine if your memory capacity is above average, below, or somewhere in between. The procedure for obtaining this measurement, however, has changed over the years; looking at this change will help clarify what working memory is, and what working memory is for.
Digit Span
For many years, the holding capacity of working memory was measured with a digit-span task. In this task, research participants hear a series of digits read to them (e.g., “8, 3, 4”) and must immediately repeat them back. If they do so successfully, they’re given a slightly longer list (e.g., “9, 2, 4, 0”). If they can repeat this one without error, they’re given a still longer list (“3, 1, 2, 8, 5”), and so on. The procedure continues until the participant starts to make errors-something that usually happens when the list contains more than seven or eight items. The number of digits the person can echo back without errors is referred to as that person’s digit span.
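The adaptive procedure just described (lengthen the list until the participant errs) can be sketched as a short loop. The “participant” here is a simulation with a fixed recall capacity; that stand-in, and the starting list length of three, are assumptions for illustration only.

```python
import random

# Sketch of the digit-span procedure: present ever-longer digit lists
# until recall fails; the span is the longest list repeated without error.

def simulated_recall(digits, capacity=7):
    """A hypothetical participant: perfect recall up to `capacity` digits."""
    return digits if len(digits) <= capacity else None

def digit_span(capacity=7, start_len=3):
    length = start_len
    while True:
        digits = [random.randint(0, 9) for _ in range(length)]
        if simulated_recall(digits, capacity) != digits:
            return length - 1  # last length recalled without error
        length += 1

print(digit_span())  # → 7, matching the simulated capacity
```

A real participant would fail probabilistically rather than at a sharp cutoff, but the loop captures the logic of the test.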
Procedures such as this imply that working memory’s capacity is typically around seven items-at least five and probably not more than nine. These estimates have traditionally been summarized by the statement that this memory holds “7 plus-or-minus 2” items (Chi, 1976; Dempster, 1981; Miller, 1956; Watkins, 1977).
However, we immediately need a refinement of these measurements. If working memory can hold 7 plus-or-minus 2 items, what exactly is an “item”? Can people remember seven sentences as easily as seven words? Seven letters as easily as seven equations? In a classic paper, George Miller (one of the founders of the field of cognitive psychology) proposed that working memory holds 7 plus-or-minus 2 chunks (Miller, 1956). The term “chunk” doesn’t sound scientific or technical, and that’s useful because this informal terminology reminds us that a chunk doesn’t hold a fixed quantity of information. Instead, Miller proposed, working memory holds 7 plus-or-minus 2 packages, and what those packages contain is largely up to the individual person. The flexibility in how people “chunk” input can easily be seen in the span test. Imagine that we test someone’s “letter span” rather than their “digit span,” using the procedure already described. So the person might hear “R, L” and have to repeat this sequence back, and then “F, C, H,” and so on. Eventually, let’s imagine that the person hears a much longer list, perhaps one starting “H, O, P, T, R, A, S, L, U, . . .” If the person thinks of these as individual letters, she’ll only remember 7 of them, more or less. But she might reorganize the list into “chunks” and, in particular, think of the letters as forming syllables (“HOP, TRA, SLU, . . .”). In this case, she’ll still remember 7 plus-or-minus 2 items, but the items are syllables, and by remembering the syllables she’ll be able to report back at least a dozen letters and probably more.
How far can this process be extended? Chase and Ericsson (1982; Ericsson, 2003) studied a remarkable individual who happens to be a fan of track events. When he hears numbers, he thinks of them as finishing times for races. The sequence “3, 4, 9, 2,” for example, becomes “3 minutes and 49.2 seconds, near world-record mile time.” In this way, four digits become one chunk of information. This person can then retain 7 finishing times (7 chunks) in memory, and this can involve 20 or 30 digits! Better still, these chunks can be grouped into larger chunks, and these into even larger chunks. For example, finishing times for individual racers can be chunked together into heats within a track meet, so that, now, 4 or 5 finishing times (more than a dozen digits) become one chunk. With strategies like this and a lot of practice, this person has increased his apparent memory span from the “normal” 7 digits to 79 digits. However, let’s be clear that what has changed through practice is merely this person’s chunking strategy, not the capacity of working memory itself. This is evident in the fact that when tested with sequences of letters, rather than numbers, so that he can’t use his chunking strategy, this individual’s memory span is a normal size-just 6 consonants. Thus, the 7-chunk limit is still in place for this man, even though (with numbers) he’s able to make extraordinary use of these 7 slots.
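A minimal sketch can make the arithmetic of chunking concrete: the same digit stream, packaged either digit-by-digit or as four-digit “race time” chunks. The digit string and the minutes-seconds formatting are illustrative assumptions; only the 7-chunk limit comes from the text.

```python
# Illustrative sketch: the same digits held as individual chunks
# vs. as four-digit "finishing time" chunks (the runner's strategy).
SPAN_LIMIT = 7  # chunks held in working memory, per Miller (1956)

digits = "34925101348320874153961247380916"  # an arbitrary 32-digit stream

# Strategy 1: each digit is its own chunk -> only ~7 digits retained.
as_digits = list(digits)[:SPAN_LIMIT]

# Strategy 2: every 4 digits become one "finishing time" chunk.
def to_time_chunk(four):
    return f"{four[0]}:{four[1]}{four[2]}.{four[3]}"  # e.g. "3:49.2"

chunks = [to_time_chunk(digits[i:i + 4]) for i in range(0, len(digits), 4)]
as_times = chunks[:SPAN_LIMIT]  # 7 chunks now encode 28 digits

print(as_digits)  # 7 chunks holding 7 digits
print(as_times)   # 7 chunks holding 28 digits
```

Both strategies respect the same 7-chunk limit; only the amount of information packed into each chunk differs, which is exactly the point of the Chase and Ericsson case.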
Chunking provides one complication in our measurement of working memory’s capacity. Another-and deeper-complication grows out of the very nature of working memory. Early theorizing about working memory, as we said, was guided by the modal model, and this model implies that working memory is something like a box in which information is stored or a location in which information can be displayed. The traditional digit-span test fits well with this idea. If working memory is like a box, then it’s sensible to ask how much “space” there is in the box: How many slots, or spaces, are there in it? This is precisely what the digit span measures, on the idea that each digit (or each chunk) is placed in its own slot.
We’ve suggested, though, that the modern conception of working memory is more dynamic-so that working memory is best thought of as a status (something like “currently activated”) rather than a place. (See, e.g., Christophel, Klink, Spitzer, Roelfsema, & Haynes, 2017; also Figure 6.6.) On this basis, perhaps we need to rethink how we measure this memory’s capacity-seeking a measure that reflects working memory’s active operation.
Modern researchers therefore measure this memory’s capacity in terms of operation span, a measure of working memory when it is “working.” There are several ways to measure operation span, with the types differing in what “operation” they use (e.g., Bleckley, Foster, & Engle, 2015; Chow & Conway, 2015). One type is reading span. To measure this span, a research participant might be asked to read aloud a series of sentences, like these:
Due to his gross inadequacies, his position as director was terminated abruptly.
It is possible, of course, that life did not arise on Earth at all.
Immediately after reading the sentences, the participant is asked to recall each sentence’s final word-in this case, “abruptly” and “all.” If she can do this with these two sentences, she’s asked to do the same task with a group of three sentences, and then with four, and so on, until the limit on her performance is located. This limit defines the person’s working-memory capacity, or WMC. (However, there are other ways to measure operation span; see Figure 6.7.)
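The reading-span procedure can be sketched as a loop over sentence groups of growing size. Everything simulated here is an assumption for illustration: the extra sentences, and a stand-in participant who can recall at most a fixed number of final words.

```python
# Sketch of the reading-span procedure: recall each sentence's final
# word, with group size growing until recall fails.

def final_words(sentences):
    return [s.rstrip(".").split()[-1] for s in sentences]

def reading_span(sentence_groups, recall_limit=3):
    """Largest group size for which every final word is recalled.
    The participant is simulated as recalling at most `recall_limit` words."""
    span = 0
    for group in sentence_groups:
        targets = final_words(group)
        recalled = targets[:recall_limit]  # simulated capacity limit
        if recalled == targets:
            span = len(group)
        else:
            break
    return span

groups = [
    ["His position as director was terminated abruptly."],
    ["His position as director was terminated abruptly.",
     "Life did not arise on Earth at all."],
    ["His position as director was terminated abruptly.",
     "Life did not arise on Earth at all.",
     "The lecture ended sooner than anyone expected."],
    ["His position as director was terminated abruptly.",
     "Life did not arise on Earth at all.",
     "The lecture ended sooner than anyone expected.",
     "The storm finally passed over the valley."],
]
print(reading_span(groups))  # → 3
```

Note the dual demand the loop mirrors: the sentences must be processed while their final words are stored, which is what makes this an operation-span measure rather than a simple span test.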
Let’s think about what this task involves: storing materials (the ending words) for later use in the recall test, while simultaneously working with other materials (the full sentences). This juggling of processes, as the participant moves from one part of the task to the next, is exactly what working memory must do in day-to-day life. Therefore, performance in this test is likely to reflect the efficiency with which working memory will operate in more natural settings.
Is operation span a valid measure-that is, does it measure what it’s supposed to? Our hypothesis is that someone with a higher operation span has a larger working memory. If this is right, then someone with a higher span should have an advantage in tasks that make heavy use of this memory. Which tasks are these? They’re tasks that require you to keep multiple ideas active at the same time, so that you can coordinate and integrate various bits of information. So here’s our prediction: People with a larger span (i.e., a greater WMC) should do better in tasks that require the coordination of different pieces of information. Consistent with this claim, people with a greater WMC do have an advantage in many settings-in tests of reasoning, assessments of reading comprehension, standardized academic tests (including the verbal SAT), tasks that require multitasking, and more. (See, e.g., Ackerman, Beier, & Boyle, 2002; Butler, Arrington, & Weywadt, 2011; Daneman & Hannon, 2001; Engle & Kane, 2004; Gathercole & Pickering, 2000; Gray, Chabris, & Braver, 2003; Redick et al., 2016; Salthouse & Pink, 2008. For some complications, see Chow & Conway, 2015; Harrison, Shipstead, & Engle, 2015; Kanerva & Kalakoski, 2016; Mella, Fagot, Lecert, & de Ribaupierre, 2015.)
These results convey several messages. First, the correlations between WMC and performance provide indications about when it’s helpful to have a larger working memory, which in turn helps us understand when and how working memory is used. Second, the link between WMC and measures of intellectual performance provides an intriguing hint about what we’re measuring with tests (like the SAT) that seek to measure “intelligence.” We’ll return to this issue in Chapter 13 when we discuss the nature of intelligence. Third, it’s important that the various correlations are observed with the more active measure of working memory (operation span) but not with the more traditional (and more static) span measure. This point confirms the advantage of the more dynamic measures and strengthens the idea that we’re now thinking about working memory in the right way: not as a passive storage box, but instead as a highly active information processor.

The Rehearsal Loop

Working memory’s active nature is also evident in another way: in the actual structure of this memory. The key here is that working memory is not a single entity but is instead a system built of several components (Baddeley, 1986, 1992, 2012; Baddeley & Hitch, 1974; also see Logie & Cowan, 2015). At the center of the working-memory system is a set of processes we discussed in Chapter 5: the executive control processes that govern the selection and sequence of thoughts. In discussions of working memory, these processes have been playfully called the “central executive,” as if there were a tiny agent embedded in your mind, running your mental operations. Of course, there is no agent, and the central executive is just a name we give to the set of mechanisms that do run the show.
The central executive is needed for the “work” in working memory; if you have to plan a response or make a decision, these steps require the executive. But in many settings, you need less than this from working memory. Specifically, there are settings in which you need to keep ideas in mind, not because you’re analyzing them right now but because you’re likely to need them soon. In this case you don’t need the executive. Instead, you can rely on the executive’s “helpers,” leaving the executive free to work on more difficult matters.

Let’s focus on one of working memory’s most important helpers, the articulatory rehearsal loop. To see how the loop functions, try reading the next few sentences while holding on to these numbers: “1, 4, 6, 3.” Got them? Now read on. You’re probably repeating the numbers over and over to yourself, rehearsing them with your inner voice. But this takes very little effort, so you can continue reading while doing this rehearsal. Nonetheless, the moment you need to recall the numbers (what were they?), they’re available to you.
In this setting, the four numbers were maintained by working memory’s rehearsal loop, and with the numbers thus out of the way, the central executive could focus on the processes needed for reading. That is the advantage of this system: With mere storage handled by the helpers, the executive is available for other, more demanding tasks.
To describe this sequence of events, researchers would say that you used subvocalization-silent speech-to launch the rehearsal loop. This production by the “inner voice” produced a representation of the target numbers in the phonological buffer, a passive storage system used for holding a representation (essentially an “internal echo”) of recently heard or self-produced sounds. In other words, you created an auditory image in the “inner ear.” This image started to fade away after a second or two, but you then subvocalized the numbers once again to create a new image, sustaining the material in this buffer. (For a glimpse of the biological basis for the “inner voice” and “inner ear,” see Figure 6.8.)
Many lines of evidence confirm this proposal. For example, when people are storing information in working memory, they often make “sound-alike” errors: Having heard “F,” they’ll report back “S.” When trying to remember the name “Tina,” they’ll slip and recall “Deena.” The problem isn’t that people mis-hear the inputs at the start; similar sound-alike confusions emerge if the inputs are presented visually. So, having seen “F,” people are likely to report back “S”; they aren’t likely in this situation to report back the similar-looking “E.”
What produces this pattern? The cause lies in the fact that for this task people are relying on the rehearsal loop, which involves a mechanism (the “inner ear”) that stores the memory items as (internal representations of) sounds. It’s no surprise, therefore, that errors, when they occur, are shaped by this mode of storage.
As a test of this claim, we can ask people to take the span test while simultaneously saying “Tah-Tah-Tah” over and over, out loud. This concurrent articulation task obviously requires the mechanisms for speech production. Therefore, those mechanisms are not available for other use, including subvocalization. (If you’re directing your lips and tongue to produce the “Tah-Tah-Tah” sequence, you can’t at the same time direct them to produce the sequence needed for the subvocalized materials.) How does this constraint matter? First, note that our original span test measured the combined capacities of the central executive and the loop. That is, when people take a standard span test (as opposed to the more modern measure of operation span), they store some of the to-be-remembered items in the loop and other items via the central executive. (This is a poor use of the executive, underutilizing its talents, but that’s okay here because the standard span task doesn’t require anything beyond mere storage.)
With concurrent articulation, though, the loop isn’t available for use, so we’re now measuring the capacity of working memory without the rehearsal loop. We should predict, therefore, that concurrent articulation, even though it’s extremely easy, should cut memory span drastically. This prediction turns out to be correct. Span is ordinarily about seven items; with concurrent articulation, it drops by roughly a third-to four or five items (Chincotta & Underwood, 1997; see Figure 6.9).
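The capacity arithmetic here can be sketched with toy numbers. Note that the split between executive and loop below is purely an illustrative assumption of this sketch; only the totals (roughly seven items normally, four or five with articulation) come from the text.

```python
# Illustrative model: standard span = executive storage + loop storage.
EXECUTIVE_ITEMS = 4.5  # assumed share held via the central executive
LOOP_ITEMS = 2.5       # assumed share held in the rehearsal loop

normal_span = EXECUTIVE_ITEMS + LOOP_ITEMS  # about 7 items in total

# Concurrent articulation blocks the loop, so only the executive's share remains,
# a drop of roughly a third:
span_with_articulation = EXECUTIVE_ITEMS
```

With these assumed values, span falls from about 7 to about 4.5 items, matching the rough proportions reported in the text.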
Second, with visually presented items, concurrent articulation should eliminate the sound-alike errors. Repeatedly saying “Tah-Tah-Tah” blocks use of the articulatory loop, and it’s in this loop, we’ve proposed, that the sound-alike errors arise. This prediction, too, is correct: With concurrent articulation and visual presentation of the items, sound-alike errors are largely eliminated.
The Working-Memory System
As we have mentioned, your working memory contains the thoughts and ideas you’re working on right now, and often this means you’re trying to keep multiple ideas in working memory all at the same time. That can cause difficulties, because working memory has only a small capacity. That’s why working memory’s helpers are so important: they substantially increase working memory’s capacity. Against this backdrop, it’s not surprising that the working-memory system relies on other helpers in addition to the rehearsal loop. For example, the system also relies on the visuospatial buffer, used for storing visual materials such as mental images, in much the same way that the rehearsal loop stores speech-based materials. (We’ll have more to say about mental images in Chapter 11.) Baddeley (the researcher who launched the idea of a working-memory system) has also proposed another component of the system: the episodic buffer. This component is proposed as a mechanism that helps the executive organize information into a chronological sequence-so that, for example, you can keep track of a story you’ve just heard or a film clip you’ve just seen (e.g., Baddeley, 2000, 2012; Baddeley & Wilson, 2002; Baddeley, Eysenck, & Anderson, 2009). The role of this component is evident in patients with profound amnesia who seem unable to put new information into long-term storage, but who still can recall the flow of narrative in a story they just heard. This short-term recall, it seems, relies on the episodic buffer-an aspect of working memory that’s unaffected by the amnesia. In addition, other helpers can be documented in some groups of people. Consider people who have been deaf since birth and communicate via sign language. We wouldn’t expect these individuals to rely on an “inner voice” and an “inner ear”-and they don’t. Instead, they seem to rely on a different helper for working memory: They use an “inner hand” (and covert sign language) rather than an “inner voice” (and covert speech). As a result, they are disrupted if they’re asked to wiggle their fingers during a memory task (similar to a hearing person saying “Tah-Tah-Tah”), and they also tend to make “same hand-shape” errors in working memory (similar to the sound-alike errors made by the hearing population).

The Central Executive

What can we say about the main player within the working-memory system-the central executive? In our discussion of attention (in Chapter 5), we argued that executive control processes are needed to govern the sequence of thoughts and actions; these processes enable you to set goals, make plans for reaching those goals, and select the steps needed for implementing those plans. Executive control also helps whenever you want to rise above habit or routine, in order to “tune” your words or deeds to the current circumstances.
For purposes of the current chapter, though, let’s emphasize that the same processes control the selection of ideas that are active at any moment in time. And, of course, these active ideas (again, by definition) constitute the contents of working memory. It’s inevitable, then, that we would link executive control with this type of memory. With all these points in view, we’re ready to move on. We’ve now updated the modal model (Figure 6.1) in important ways, and in particular we’ve abandoned the notion of a relatively passive short-term memory serving largely as a storage container. We’ve shifted to a dynamic conception of working memory, with the proposal that this term is merely the name for an organized set of activities-especially the complex activities of the central executive together with its various helpers.

But let’s also emphasize that in this modern conception, just as in the modal model, working memory is quite fragile. Each shift in attention brings new information into working memory, and the newly arriving material displaces earlier items. Storage in this memory, therefore, is temporary. Obviously, then, we also need some sort of enduring memory storage, so that we can remember things that happened an hour, or a day, or even years ago. Let’s turn, therefore, to the functioning of long-term memory.

Demonstration 6.2: Chunking

The text mentions the benefits of chunking, and these benefits are easy to demonstrate. First, let’s measure your memory span in the normal way: Cover the list of letters below with your hand or a piece of paper. Now, slide your hand or paper down, to reveal the first row of letters. Read the row silently, pausing briefly after you read each letter. Then, close your eyes, and repeat the row aloud. Open your eyes. Did you get it right? If so, do the same with the next row, and keep going until you hit a row that’s too long-that is, a row for which you make errors. Count the items in that row. This count is your letter span.
Now, we’ll do the exercise again, but this time, with rows containing letter pairs, not letters. Using the same procedure, at what row do you start to make errors?
EL ZA IN
ET LO JA RE
CA OM DO IG FU
AT YE OR CA VI TA
EB ET PI NU ES RA SU
RI NA FO ET HI ER WU AG
UR KA TE PO AG UF WO SA KI
SO HU JA IT WO FU CE YO FI UT
It’s likely that your span measured with single letters was 6 or 7, or perhaps 8. It’s likely that your span measured with letter pairs was a tiny bit smaller, perhaps 5 or 6 pairs-but that means you’re now remembering 10 or 12 letters. If we focus on the letter count, therefore, your memory span seems to have increased from the first test to the second. But that’s the wrong way to think about this. Instead, your memory span is constant (or close to it). What’s changing is how you use that span-that is, how many letters you cram into each “chunk.”
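The arithmetic of this point can be sketched in a few lines of Python. The span of 6 chunks is an assumed round number for illustration (the text reports spans of roughly 5 to 8); the point is only that letters recalled = chunks held × letters per chunk.

```python
SPAN_IN_CHUNKS = 6  # assumed roughly constant capacity, measured in chunks

def letters_recalled(letters_per_chunk):
    """Letters held when each chunk packs this many letters together."""
    return SPAN_IN_CHUNKS * letters_per_chunk

# Single letters vs. letter pairs: the span in chunks stays the same,
# but the letter count doubles when each chunk holds two letters.
print(letters_recalled(1))  # 6 letters
print(letters_recalled(2))  # 12 letters
```

The apparent “growth” in span comes entirely from the second factor, not from any change in the number of chunks held.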
Now, one more step: Read the next sentence to yourself, then close your eyes, and try repeating the sentence back.
The tyrant passed strict laws limiting the citizens’ freedom.

Could you do this? Were you able to repeat the sentence? If so, notice that your memory now seems able to hold 51 letters. Again, if we focus on letter count, your memory span is growing at an astonishing speed! But, instead, let’s count chunks. The phrase “The tyrant” is probably just one chunk; likewise “strict laws” and “the citizens’.” Therefore, this sentence really just contains six chunks-and so is easily within your memory span!

Demonstration 6.3: The Articulatory Rehearsal Loop

Chapter 6 introduces the notion of the articulatory rehearsal loop, one of the key “helpers” within the working-memory system. As the chapter describes, many lines of evidence document the existence of this loop, but one type of evidence is especially easy to demonstrate. The demonstration is mentioned in the chapter, but here is a more elaborate version.
Read these numbers and think about them for a moment, so that you’ll be able to recall them in a few seconds: 8257. Now, while you’re holding on to these numbers, read the following paragraph:
You should, right now, be rehearsing those numbers while you are reading this paragraph, so that you’ll be able to recall them when you’re done with the paragraph. You are probably storing the numbers in your articulatory rehearsal loop, saying the numbers over and over to yourself. Using the loop in this way requires little effort or attention, leaving the central executive free to work on the concurrent task of reading these sentences-identifying the words, assembling them into phrases, and figuring out what the phrases mean. As a result, with the loop holding the numbers and the executive doing the reading, there is no conflict and no problem. Therefore, this combination is relatively easy.
Now, what were those numbers? Most people can recall them with no problem, for the reasons just described. They read-and understood-the passage, and holding on to the numbers caused no difficulty at all. Did you understand the passage? Can you summarize it, briefly, in your own words?
Next, try a variation: Again, you will place four numbers in memory, but then you’ll immediately start saying “Tah-Tah-Tah” over and over out loud, while reading a passage. Ready? The numbers are: 3 8 1 4. Start saying “Tah-Tah-Tah” and read on.
Again, you should be rehearsing the numbers as you read, and also repeating “Tah-Tah-Tah” over and over out loud. The repetitions of “Tah-Tah-Tah” demand little thought, but they do require the neural circuits and the muscles that are needed for speaking, and with these resources tied up in this fashion, they’re not available for use in the rehearsal loop. As a result, you don’t have the option of storing the four numbers in the loop. That means you need to find some other means of remembering the numbers, and that’s likely to involve the central executive. As a result, the executive needs to do two things at once-hold on to the numbers, and read the passage.
Now, what were those numbers? Many people in this situation find they’ve forgotten the numbers. Others can recall the numbers but find this version of the task (in which the executive couldn’t rely on the rehearsal loop) much harder, and they may report that they actually found themselves skimming the passage, not reading it. Again, can you summarize the paragraph you just read? Glance back over the paragraph to see if your summary is complete: Did you miss something? You may have, because many people report that in this situation their attention hops back and forth, so that they read a little, think about the numbers, read some more, think about the numbers again, and so on-an experience they didn’t have without the “Tah-Tah-Tah.”
Finally, we need one more condition: Did the “Tah-Tah-Tah” disrupt your performance because (as proposed) it occupied your rehearsal loop? Or was this task simply a distraction, disrupting your performance because saying “Tah-Tah-Tah” over and over was irritating, or perhaps embarrassing? To find out, let’s try one more task: Close your fist, but leave your thumb sticking out, and position your hand so that you’re making the conventional “thumbs-down” signal. With your hand in this shape, tap your thumb, over and over, on the top of your head. Keep doing this while reading the following passage. Once again, though, hold these numbers in your memory as you read: 7 2 4 5.
In this condition, you’re again producing a rhythmic activity as you read, although it’s tapping rather than repeating a syllable. If the problem in the previous condition was distraction, you should be distracted in the same way here. And you probably look ridiculous tapping your head in this fashion. If the problem in the previous condition was embarrassment, you should again be embarrassed here. On either of these grounds, this condition should be just as hard as the previous one. But if the problem in the previous condition depended instead on the repeated syllables blocking you from using your articulatory loop, that won’t be a problem here, and this condition should be easier than the previous one.
What were the numbers? This condition probably was easy-allowing us to reject the idea that the problem lies in distraction or embarrassment. Instead, use of the articulatory loop really is the key!
Demonstration adapted from Baddeley, A. (1986). Working memory. Oxford, England: Clarendon Press.

Demonstration 6.4: Sound-Based Coding

The chapter mentions that people often make sound-based errors when holding information in working memory. This is because working-memory storage relies in part on an auditory buffer-the so-called inner ear. The inner ear, in turn, relies on mechanisms ordinarily used for hearing, mechanisms that are involved when you’re listening to actual, out-in-the-world sounds. The use of these mechanisms essentially guarantees that things that sound alike in actual hearing will also sound alike in the inner ear-and this produces the confusions that we see in our data, with people remembering that they saw an “F” (and thus the sound “eff”), for example, when they really saw an “S” (“ess”).
This proposal about the inner ear also has other implications, and we can use those implications as further tests of the proposal. For example, if sound-alike items are confusable with each other in memory, then these items may actually be harder to remember, compared to items that don’t sound alike. Is this the case? Read the list of letters below out loud, and then cover the list with your hand. Think about the list for 15 seconds or so, and then write it down. How many did you get right?
Here’s the list of letters:
E C V T G D B
Now, do the same with this list-read it aloud quickly, and then cover it. Think about it for 15 seconds, and then write it down.
F R J A L O Q
Again, how many did you get right?
It’s possible that this demonstration won’t work for you-because it’s possible you’ll recall both lists perfectly! But if you were equally accurate with the two lists, did you have to work harder for one list than for the other? And if you made errors in your recall, which list produced more errors?
Most people find the first (sound-alike) list more difficult and are more likely to make errors with that list than with the second one. This is just what we’d expect if working memory relies on some sort of sound-based code.

Entering Long-Term Storage: The Need for Engagement

We’ve already seen an important clue regarding how information gets established in long-term storage: In discussing the primacy effect, we suggested that the more an item is rehearsed, the more likely you are to remember that item later. To pursue this point, though, we need to ask what exactly rehearsal is and how it might work to promote memory.

Two Types of Rehearsal

The term “rehearsal” doesn’t mean much beyond “thinking about.” In other words, when a research participant rehearses an item on a memory list, she’s simply thinking about that item-perhaps once, perhaps over and over; perhaps mechanically, or perhaps with close attention to what the item means. Therefore, there’s considerable variety within the activities that count as rehearsal, and psychologists find it useful to sort this variety into two broad types.
As one option, people can engage in maintenance rehearsal, in which they simply focus on the to-be-remembered items themselves, with little thought about what the items mean or how they relate to one another. This is a rote, mechanical process, recycling items in working memory by repeating them over and over. In contrast, relational, or elaborative, rehearsal involves thinking about what the to-be-remembered items mean and how they’re related to one another and to other things you already know.
Relational rehearsal is vastly superior to maintenance rehearsal for establishing information in long-term memory. In fact, in many settings maintenance rehearsal provides no benefit at all. As an informal demonstration of this point, consider the following experience (although, for a formal demonstration of this point, see Craik & Watkins, 1973). You’re watching your favorite reality show on TV. The announcer says, “To vote for Contestant #4, text 4 to 21523 from your mobile phone!” You reach into your pocket for your phone but realize you left it in the other room. So you recite the number to yourself while scurrying for your phone, but then, just before you dial, you see that you’ve got a text message. You pause, read the message, and then you’re ready to dial, but . . . you don’t have a clue what the number was. What went wrong? You certainly heard the number, and you rehearsed it a couple of times while moving to grab your phone. But despite these rehearsals, the brief interruption from reading the text message seems to have erased the number from your memory. However, this isn’t ultra-rapid forgetting. Instead, you never established the number in memory in the first place, because in this setting you relied only on maintenance rehearsal. That kept the number in your thoughts while you were moving across the room, but it did nothing to establish the number in long-term storage. And when you try to dial the number after reading the text message, it’s long-term storage that you need.
The idea, then, is that if you think about something only in a mindless and mechanical way, the item won’t be established in your long-term memory. Similarly, long-lasting memories aren’t created simply by repeated exposures to the items to be remembered. If you encounter an item over and over but, at each encounter, barely think about it (or think about it only in a mechanical way), then this, too, won’t produce a long-term memory. As a demonstration, consider the ordinary penny. Adults in the United States have probably seen pennies tens of thousands of times. Adults in other countries have seen their own coins just as often. If sheer exposure is what counts for memory, people should remember perfectly what these coins look like.
But, of course, most people have little reason to pay attention to the penny. Pennies are a different color from the other coins, so they can be identified at a glance without further scrutiny. And, if it’s scrutiny that matters for memory-or, more broadly, if we remember what we pay attention to and think about-then memory for the coin should be quite poor. The evidence on this point is clear: People’s memory for the penny is remarkably bad. For example, most people know that Lincoln’s head is on the “heads” side, but which way is he facing? Is it his right cheek that’s visible or his left? What other markings are on the coin? Most people do very badly with these questions; their answers to the “Which way is he facing?” question are close to random (Nickerson & Adams, 1979). And performance is similar for people in other countries remembering their own coins. (Also see Bekerian & Baddeley, 1980; Rinck, 1999, for a much more consequential example.)
As a related example, consider the logo that identifies Apple products-the iPhone, the iPad, or one of the Apple computers. Odds are good that you’ve seen this logo hundreds and perhaps thousands of times, but you’ve probably had no reason to pay attention to its appearance. The prediction, then, is that your memory for the logo will be quite poor-and this prediction is correct. In one study, only 1 of 85 participants was able to draw the logo correctly-with the bite on the proper side, the stem tilted the right way, and the dimple properly placed in the logo’s bottom (Blake, Nazarian, & Castel, 2015; see Figure 6.10). And-surprisingly-people who use an Apple computer (and therefore see the logo every time they turn on the machine) perform at a level not much better than people who use a PC.
The Need for Active Encoding

Apparently, it takes some work to get information into long-term memory. Merely having an item in front of your eyes isn’t enough-even if the item is there over and over and over. Likewise, repeatedly thinking about an item doesn’t, by itself, establish a memory. That’s evident in the fact that maintenance rehearsal seems ineffective at promoting memory.
Further support for these claims comes from studies of brain activity during learning. In several procedures, researchers have used fMRI recording to keep track of the moment-by-moment brain activity in participants who were studying a list of words (Brewer, Zhao, Desmond, Glover, & Gabrieli, 1998; Wagner, Koutstaal, & Schacter, 1999; Wagner et al., 1998; also see Levy, Kuhl, & Wagner, 2010). Later, the participants were able to remember some of the words they had learned but not others, which allowed the investigators to return to their initial recordings and compare brain activity during the learning process for words that were later remembered and words that were later forgotten. Figure 6.11 shows the results, with a clear difference during the initial encoding between these two types of words. Greater levels of brain activity (especially in the hippocampus and regions of the prefrontal cortex) were reliably associated with greater probabilities of retention later on.
These fMRI results are telling us, once again, that learning is not a passive process. Instead, activity is needed to lodge information into long-term memory, and, apparently, higher levels of this activity lead to better memory. But this raises some new questions: What is this activity? What does it accomplish? And if-as it seems-maintenance rehearsal is a poor way to memorize, what type of rehearsal is more effective?

Incidental Learning, Intentional Learning, and Depth of Processing

Consider a student taking a course in college. The student knows that her memory for the course materials will be tested later (e.g., in the final exam). And presumably she’ll take various steps to help herself remember: She may read through her notes again and again; she may discuss the material with friends; she may try outlining the material. Will these various techniques work-so that she’ll have a complete and accurate memory when the exam takes place? And notice that the student is taking these steps in the context of wanting to memorize; she wants to do well on the exam! How does this motivation influence performance? In other words, how does the intention to memorize influence how or how well material is learned? In an early experiment, participants in one condition heard a list of 24 words; their task was to remember as many as they could. This is intentional learning-learning that is deliberate, with an expectation that memory will be tested later. Other groups of participants heard the same 24 words but had no idea that their memories would be tested. This allows us to examine the impact of incidental learning-that is, learning in the absence of any intention to learn. One of the incidental-learning groups was asked simply, for each word, whether the word contained the letter e. A different incidental-learning group was asked to look at each word and to report how many letters it contained. Another group was asked to consider each word and to rate how pleasant it seemed.
Later, all the participants were tested-and asked to recall as many of the words as they could. (The test was as expected for the intentional-learning group, but it was a surprise for the other groups.) The results are shown in Figure 6.12A (Hyde & Jenkins, 1969). Performance was relatively poor for the “Find the e” and “Count the letters” groups but appreciably better for the “How pleasant?” group. What’s striking, though, is that the “How pleasant?” group, with no intention to memorize, performed just as well as the intentional-learning (“Learn these!”) group. The suggestion, then, is that the intention to learn doesn’t add very much; memory can be just as good without this intention, provided that you approach the materials in the right way.
This broad pattern has been reproduced in many other experiments (to name just a few: Bobrow & Bower, 1969; Craik & Lockhart, 1972; Hyde & Jenkins, 1973; Jacoby, 1978; Lockhart, Craik, & Jacoby, 1976; Parkin, 1984; Slamecka & Graf, 1978). As one example, consider a study by Craik and Tulving (1975). Their participants were led to do incidental learning (i.e., they didn’t know their memories would be tested). For some of the words shown, the participants did shallow processing-that is, they engaged the material in a superficial way. Specifically, they had to say whether the word was printed in CAPITAL letters or not. (Other examples of shallow processing would be decisions about whether the words are printed in red or in green, high or low on the screen, etc.) For other words, the participants had to do a moderate level of processing: They had to judge whether each word shown rhymed with a particular cue word. Finally, for other words, participants had to do deep processing. This is processing that requires some thought about what the words mean; specifically, Craik and Tulving asked whether each word shown would fit into a particular sentence.
The results are shown in Figure 6.12B. Plainly, there is a huge effect of level of processing, with deeper processing (i.e., more attention to meaning) leading to better memory. In addition, Craik and Tulving (and many other researchers) have confirmed the Hyde and Jenkins finding that the intention to learn adds little. That is, memory performance is roughly the same in conditions in which participants do shallow processing with an intention to memorize, and in conditions in which they do shallow processing without this intention. Likewise, the outcome is the same whether people do deep processing with the intention to memorize or without. In study after study, what matters is how people approach the material they’re seeing or hearing. It’s that approach-that manner of engagement-that determines whether memory will be excellent or poor later on. The intention to learn seems, by itself, not to matter.

Demonstration 6.5: Remembering Things You Hadn’t Noticed

As you move around in the world, you pass by many of the same sights-the same buildings, the same furniture-over and over. But usually you have no reason to take note of these things, and so, despite the repeated exposures, you may have little or no memory for these often-passed places or objects. The chapter mentions a couple of examples, but let’s put this idea to the test.
To promote public safety, in case of a fire it will be immensely valuable if you can quickly grab a fire extinguisher and put the fire out before it has a chance to spread. With this idea in mind, many public buildings have fire extinguishers positioned throughout the building so that this safety equipment will be near at hand at the moment of need. Take a minute. Can you list five or six places where you pass a fire extinguisher each day? Then, over the next few days, do your best to be alert to where the fire extinguishers really are. How many didn’t make it onto your list, probably because you never noticed them in the first place?
In the same fashion, automatic external defibrillators (AED devices) are positioned throughout public buildings so that, in case of need, you’ll be able to grab one quickly and use it to save someone who has suffered a cardiac arrest. Here, timing is crucial, and it’s important that you find and use the defibrillator quickly. Do you know where the defibrillators are in the buildings you often spend time in? Again, take a minute. Can you list the locations? Each one is clearly marked with an AED sign (with a picture of a lightning bolt on a heart).
Once more, try over the next days to notice where the AED signs and devices are. How many of them didn’t you remember because you had no reason to notice them?
One more test case: Most readers of this book are college or university students; most are taking several courses, and most of these courses have at least one textbook. Take a moment and describe what’s on the cover of each of your textbooks. Is there artwork of any sort? If so, is it an abstract design or a representation of some sort? If it’s a representation, is it a photograph or some other form of visual art? What text is shown on the book’s cover? The odds are good that you’ll do rather poorly on at least one of these exercises-remembering the fire extinguisher locations, or the defibrillators, or the book covers. But don’t despair. This isn’t an indication that you have a terrible memory. It’s simply a confirmation that you, like most people, don’t remember things that you had no reason to notice. However, bear in mind that you want to know where the fire extinguishers and defibrillators are; someone’s life could depend on you having these memories! So take a moment to notice these objects as you move around!
For more on this, see Hargis, M. B., McGillivray, S., & Castel, A. D. (2017). Memory for textbook covers: When and why we remember a book by its cover. Applied Cognitive Psychology, 32, 39-46.

Demonstration 6.6: The Effects of Unattended Exposure

How does information get entered into long-term storage? One idea is that mere exposure is enough-so that if an object is in front of your eyes over and over and over, you’ll learn exactly what the object looks like. However, this claim is false. Memories are created through a process of active engagement with materials; mere exposure is insufficient.
This point is easy to demonstrate, but for the demonstration you’ll need to ask one or two friends a few questions. (You can’t just test yourself, because the textbook chapter gives away the answer, and so your memory is already altered by reading the chapter.)
Approach a friend who hasn’t, as far as you know, read the Cognition text, and ask your friend these questions:
1. Which president is shown on the Lincoln penny? (It will be troubling if your friend gets this wrong!)
2. What portion of the president’s anatomy is shown on the “heads” side of the coin? (It will be even more troubling if your friend gets this one wrong.)
3. Is the head facing forward, or is it visible only in profile? (This question, too, will likely be very easy.)
4. If the head is in profile, is it facing to the viewer’s right, so that you can see the right ear and right cheek, or is it facing to the viewer’s left, so that you can see the left ear and left cheek?
(For Canadian or British readers, you can ask the same questions about your nation’s penny-assuming that pennies are still around when you run the test. Of course, your penny shows, on the “heads” side, the profile of the monarch who was reigning when the penny was issued. But the memory questions-and the likely outcome-are otherwise the same.)
Odds are good that half of the people you ask will say “facing right” and half will say “facing left.” In other words, people trying to remember this fact about the penny are no more accurate than they would be if they answered at random.
Now, a few more questions. Is your friend wearing a watch? If so, reach out and put your hand on his or her wrist, so that you hide the watch from view. Now ask your friend:
5. Does your watch have all the numbers, from 1 through 12, on it? Or is it missing some of the numbers? If so, which numbers does it have?
6. What style of numbers is used? Ordinary numerals or Roman numerals?
7. What style of print is used for the numbers? An italic? A “normal” vertical font? A font that’s elaborate in some way, or one that’s relatively plain?

How accurate are your friends? Chances are excellent that many of your friends will answer these questions incorrectly-even though they’ve probably looked at their watches over and over and over during the years in which they’ve owned the watch.
By the way, this demonstration may yield different results if you test women than if you test men. Women are often encouraged to think of wristwatches as jewelry and so are more likely to think about what the watch looks like. Men are encouraged to take a more pragmatic view of their watches-and so they often regard them as timepieces, not jewelry. As a result, women are more likely than men to notice, and think about, the watch’s appearance, and thus to remember exactly what their watches look like!
Even with this last complication, the explanation for all of these findings is straightforward. Information is not recorded in your memory simply because the information has been in front of your eyes at some point. Instead, information is recorded into memory only if you pay attention to that information and think about it in some way. People have seen pennies thousands of times, but they’ve not had any reason to think about Lincoln’s position. Likewise, they’ve looked at their watches many, many times but probably haven’t had a reason to think about the details of the numbers. As a result, and despite an enormous number of “learning opportunities,” these unattended details are simply not recorded into memory.

Finally, one last point. You own many books, and those books are often in your view-for example, when they’re sitting on your desk. But you usually have no reason to pay attention to a book’s cover, and so, by the logic we’re considering here, you likely will have little (or no) memory for the cover. This point was confirmed in a 2017 study: Students in a college course were asked what image was shown on the cover of the course’s textbook. Students were given four options for what the image might be, and so, if they were guessing randomly, they would have been right 25% of the time. In fact, performance was better than this-but not by much. Only 39% of the students chose the right answer. (Said differently, almost two-thirds of the students chose the wrong answer!) And now one ironic point: The textbook used as the “test stimulus” in this study was the 7th edition of the textbook you’re using-that is, Reisberg’s Cognition text. This leads to an obvious question: Do you remember what the cover of the book looks like?
Demonstration adapted from Nickerson, R., & Adams, M. (1979). Long-term memory for a common object. Cognitive Psychology, 11, 287-307. Also see Hargis, M. B., McGillivray, S., & Castel, A. D. (2017). Memory for textbook covers: When and why we remember a book by its cover. Applied Cognitive Psychology, 32, 39-46.

Demonstration 6.7: Depth of Processing
Many experiments show that deep processing-paying attention to an input’s meaning, or its implications-helps memory. In contrast, materials that receive only shallow processing tend not to be well remembered. This contrast is reliable and powerful, and also easily demonstrated.
The following is a list of questions accompanied by single words. Some of the questions concern categories. For example, the question might ask: “Is a type of vehicle? Truck” For this question, the answer would be yes.
Some of the questions involve rhyme. For example, the question might ask: “Rhymes with chair? Horse.” Here, the answer is no.
Still other questions concern spelling patterns-in particular, the number of vowels in the word. For example, if asked “Has three vowels? Chair,” the answer would again be no.
Go through the list of questions at a comfortable speed, and say “yes” or “no” aloud in response to each question.
Rhymes with angle? Speech Rhymes with coffee? Chapel
Is a type of silverware? Brush Has one vowel? Sonnet
Has two vowels? Cheek Rhymes with rich? Witch
Is a thing found in a garden? Flame Is a type of insect? Roach
Has two vowels? Flour Has one vowel? Twig
Is a rigid object? Honey Rhymes with bin? Grin
Rhymes with elder? Knife Rhymes with fill? Drill
Has three vowels? Sheep Is a human sound? Moan
Rhymes with merit? Copper Has two vowels? Claw
Rhymes with shove? Glove Is a type of entertainer? Singer
Is a boundary dispute? Monk Rhymes with candy? Bear
Rhymes with star? Jar Has four vowels? Cherry
Has two vowels? Cart Is a type of plant? Tree
Is a container for liquid? Clove Rhymes with pearl? Earl
Is something sold on street corners? Robber Has two vowels? Pool
Is a part of a ship? Mast Is a part of an airplane? Week
Has four vowels? Fiddle Has one vowel? Pail
This list contained 12 of each type of question-12 rhyme questions, 12 spelling questions, and 12 questions concerned with meaning. Was this task easy or hard? Most people have no trouble at all with this task, and they give correct answers to every one of the questions.
Each of these questions had a word provided with it, and you needed that word to answer the question. How many of these “answer words” do you remember? Without looking back at the list, write down as many of the answer words as you can remember on a piece of paper.
Now, go back and check your answers. First, put a checkmark alongside the word if it did in fact occur in the earlier list-so that your recall is correct.
Second, for each of the words you remembered, do the following:
Put an S next to the word you recalled if that word appeared in a spelling question (i.e., asking about number of vowels).
Put an R next to the word you recalled if that word appeared in one of the rhyme questions.
Put an M next to the word you recalled if that word appeared in one of the questions concerned with meaning. How many S words did you recall? How many R words? How many M words?
It’s close to certain that you remembered relatively few S words, more of the R words, and even more of the M words. In fact, you may have recalled most of the 12 M words. Is this the pattern of your recall? If so, then you just reproduced the standard level-of-processing effect, with deeper processing (attention to meaning) reliably producing better recall, for the reasons described in the textbook chapter.
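The S/R/M bookkeeping in this demonstration is simple enough to sketch in code. The following is a minimal Python tally; the labels cover only a hypothetical subset of the answer words (chosen from the list printed earlier), and the function name is ours:

```python
from collections import Counter

# "S" = spelling question, "R" = rhyme question, "M" = meaning question.
# Only a few of the 36 answer words are labeled here, for illustration.
condition = {
    "witch": "R", "grin": "R", "drill": "R",
    "twig": "S", "pool": "S", "cart": "S",
    "roach": "M", "tree": "M", "singer": "M",
}

def score_recall(recalled_words, condition):
    """Tally how many recalled words came from each question type,
    ignoring intrusions (words that weren't on the list)."""
    tally = Counter()
    for word in recalled_words:
        if word in condition:  # the "checkmark" step: word really occurred
            tally[condition[word]] += 1
    return tally

# "banana" is an intrusion and is not counted.
print(score_recall(["tree", "witch", "singer", "banana"], condition))
```

The typical result would show the M count highest and the S count lowest, mirroring the level-of-processing effect.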
Demonstration adapted from Craik, F., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104, 269-294.

The Role of Meaning and Memory Connections
The message so far seems clear: If you want to remember the sentences you’re reading in this text or the materials you’re learning in the training sessions at your job, you should pay attention to what these materials mean. That is, you should try to do deep processing. And if you do deep processing, it won’t matter if you’re trying hard to memorize the materials (intentional learning) or merely paying attention to the meaning because you find the material interesting, with no plan for memorizing (incidental learning).
But what lies behind these effects? Why does attention to meaning lead to good recall? Let’s start with a broad proposal; we’ll then fill in the evidence for this proposal.

Connections Promote Retrieval
Perhaps surprisingly, the benefits of deep processing may not lie in the learning process itself. Instead, deep processing may influence subsequent events. More precisely, attention to meaning may help you by facilitating retrieval of the memory later on. To understand this point, consider what happens whenever a library acquires a new book. On its way into the collection, the new book must be catalogued and shelved appropriately. These steps happen when the book arrives, but the cataloguing doesn’t literally influence the arrival of the book into the building. The moment the book is delivered, it’s physically in the library, catalogued or not, and the book doesn’t become “more firmly” or “more strongly” in the library because of the cataloguing.
Even so, the cataloguing is crucial. If the book were merely tossed on a random shelf somewhere, with no entry in the catalogue, users might never be able to find it. Without a catalogue entry, users of the library might not even realize that the book was in the building. Notice, then, that cataloguing happens at the time of arrival, but the benefit of cataloguing isn’t for the arrival itself. (If the librarians all went on strike, so that no books were being catalogued, books would continue to arrive, magazines would still be delivered, and so on. Again: The arrival doesn’t depend on cataloguing.) Instead, the benefit of cataloguing is for events that happen after the book’s arrival-cataloguing makes it possible (and maybe makes it easy) to find the book later on. The same is true for the vast library that is your memory (cf. Miller & Springer, 1973). The task of learning is not merely a matter of placing information into long-term storage. Learning also needs to establish some appropriate indexing; it must pave a path to the newly acquired information, so that this information can be retrieved at some future point. Thus, one of the main chores of memory acquisition is to lay the groundwork for memory retrieval.
But what is it that facilitates memory retrieval? There are, in fact, several ways to search through memory, but a great deal depends on memory connections. Connections allow one memory to trigger another, and then that memory to trigger another, so that you’re “led,” connection by connection, to the sought-after information. In some cases, the connections link one of the items you’re trying to remember to some of the other items; if so, finding the first will lead you to the others. In other settings, the connections might link some aspect of the context-of-learning to the target information, so that when you think again about the context (“I recognize this room-this is where I was last week”), you’ll be led to other ideas (“Oh, yeah, I read the funny story in this room”). In all cases, though, this triggering will happen only if the relevant connections are in place-and establishing those connections is a large part of what happens during learning.
This line of reasoning has many implications, and we can use those implications as a basis for testing whether this proposal is correct. But right at the start, it should be clear why, according to this account, deep processing (i.e., attention to meaning) promotes memory. The key is that attention to meaning involves thinking about relationships: “What words are related in meaning to the word I’m now considering? What words have contrasting meaning? What is the relationship between the start of this story and the way the story turned out?” Points like these are likely to be prominent when you’re thinking about what some word (or sentence or event) means, and these points will help you to find (or, perhaps, to create) connections among your various ideas. It’s these connections, we’re proposing, that really matter for memory.

Elaborate Encoding Promotes Retrieval
Notice, though, that on this account, attention to meaning is not the only way to improve memory. Other strategies should also be helpful, provided that they help you to establish memory connections. As an example, consider another classic study by Craik and Tulving (1975). Participants were shown a word and then shown a sentence with one word left out. Their task was to decide whether the word fit into the sentence. For example, they might see the word “chicken” and then the sentence “She cooked _______.” The appropriate response would be yes, because the word does fit in this sentence. After a series of these trials, there was a surprise memory test, with participants asked to remember all the words they had seen. But there was an additional element in this experiment. Some of the sentences shown to participants were simple, while others were more elaborate. For example, a more complex sentence might be: “The great bird swooped down and carried off the struggling ________.” Sentences like this one produced a large memory benefit-words were much more likely to be remembered if they appeared with these rich, elaborate sentences than if they had appeared in the simpler sentences (see Figure 6.13).
Apparently, then, deep and elaborate processing leads to better recall than deep processing on its own. Why? The answer hinges on memory connections. Maybe the “great bird swooped” sentence calls to mind a barnyard scene with the hawk carrying away a chicken. Or maybe it calls to mind thoughts about predator-prey relationships. One way or another, the richness of the sentence offers the potential for many connections, as it calls other thoughts to mind, each of which can be connected to the target sentence. These connections, in turn, provide potential retrieval paths-paths that can, in effect, guide your thoughts toward the content to be remembered. All of this seems less likely for the simpler sentences, which will evoke fewer connections and so establish a narrower set of retrieval paths. Consequently, words associated with these sentences are less likely to be recalled later on.

Organizing and Memorizing

Sometimes, we’ve said, memory connections link the to-be-remembered material to other information already in memory. In other cases, the connections link one aspect of the to-be-remembered material to another aspect of the same material. Such connections ensure that if any part of the material is recalled, then all will be recalled.
In all settings, though, the connections are important, and that leads us to ask how people go about discovering (or creating) these connections. More than 70 years ago, a psychologist named George Katona argued that the key lies in organization (Katona, 1940). Katona’s argument was that the processes of organization and memorization are inseparable: You memorize well when you discover the order within the material. Conversely, if you find (or impose) an organization on the material, you will easily remember it. These suggestions are fully compatible with the conception we’re developing here, since what organization provides is memory connections.

Mnemonics

For thousands of years, people have longed for “better” memories and, guided by this desire, people in the ancient world devised various techniques to improve memory-techniques known as mnemonic strategies. In fact, many of the mnemonics still in use date back to ancient Greece. (It’s therefore appropriate that these techniques are named in honor of Mnemosyne, the goddess of memory in Greek mythology.)
How do mnemonics work? In general, these strategies provide some way of organizing the to-be-remembered material. For example, one broad class of mnemonic, often used for memorizing sequences of words, links the first letters of the words into some meaningful structure. Thus, children rely on ROY G. BIV to memorize the sequence of colors in the rainbow (red, orange, yellow…), and they learn the lines in music’s treble clef via “Every Good Boy Deserves Fudge” or “. .. Does Fine” (the lines indicate the musical notes E, G, B, D, and F). Biology students use a sentence like “King Philip Crossed the Ocean to Find Gold and Silver” (or: “.. . to Find Good Spaghetti”) to memorize the sequence of taxonomic categories: kingdom, phylum, class, order, family, genus, and species.
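The first-letter scheme is mechanical enough to express in a few lines of code. The sketch below (plain Python; the function name is our own invention) extracts initials from a word list, and then verifies that the taxonomy sentence works because its capitalized words share initials with the categories:

```python
def initials(words):
    """Collect the first letter of each word, uppercased."""
    return [w[0].upper() for w in words]

# The rainbow colors yield the familiar ROY G. BIV string.
rainbow = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]
print("".join(initials(rainbow)))  # ROYGBIV

# The mnemonic sentence's capitalized words line up, initial for initial,
# with kingdom, phylum, class, order, family, genus, species.
taxonomy = ["kingdom", "phylum", "class", "order", "family", "genus", "species"]
sentence = "King Philip Crossed the Ocean to Find Gold and Silver"
capitalized = [w for w in sentence.split() if w[0].isupper()]
assert initials(capitalized) == initials(taxonomy)
```

The point of the check is the mnemonic’s whole trick: the meaningful sentence and the arbitrary list share nothing but their sequence of initials.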
Other mnemonics involve the use of mental imagery, relying on “mental pictures” to link the to-be-remembered items to one another. (We’ll have much more to say about “mental pictures” in Chapter 11.) For example, imagine a student trying to memorize a list of word pairs. For the pair eagle-train, the student might imagine the eagle winging back to its nest with a locomotive in its beak. Classic research evidence indicates that images like this can be enormously helpful. It’s important, though, that the images show the objects in some sort of relationship or interaction-again highlighting the role of organization. It doesn’t help just to form a picture of an eagle and a train sitting side-by-side (Wollen, Weber, & Lowry, 1972; for another example of a mnemonic, see Figure 6.14).
A different type of mnemonic provides an external “skeleton” for the to-be-remembered materials, and mental imagery can be useful here, too. Imagine that you want to remember a list of largely unrelated items-perhaps the entries on your shopping list, or a list of questions you want to ask your adviser. For this purpose, you might rely on one of the so-called peg-word systems. These systems begin with a well-organized structure, such as this one:
One is a bun.
Two is a shoe.
Three is a tree.
Four is a door.
Five is a hive.
Six are sticks.
Seven is heaven.
Eight is a gate.
Nine is a line.
Ten is a hen.

This rhyme provides ten “peg words” (“bun,” “shoe,” etc.), and in memorizing something you can “hang” the materials to be remembered on these “pegs.” Let’s imagine that you want to remember the list of topics you need to discuss with your adviser. If you want to discuss your unhappiness with chemistry class, you might form an association between chemistry and the first peg, “bun.” You might picture a hamburger bun floating in an Erlenmeyer flask. If you also want to discuss your plans for after graduation, you might form an association between some aspect of those plans and the next peg, “shoe.” (You could think about how you plan to pay your way after college by selling shoes.) Then, when meeting with your adviser, all you have to do is think through that silly rhyme again. When you think of “one is a bun,” it’s highly likely that the image of the flask (and therefore of chemistry lab) will come to mind. With “two is a shoe,” you’ll be reminded of your job plans. And so on.

Hundreds of variations on these techniques-the first-letter mnemonics, visualization strategies, peg-word systems-are available. Some variations are taught in self-help books (you’ve probably seen the ads-“How to Improve Your Memory!”); some are taught as part of corporate management training. But all the variations use the same basic scheme. To remember a list with no apparent organization, you impose an organization on it by using a tightly organized skeleton or scaffold. And crucially, these systems all work. They help you remember individual items, and they also help you remember those items in a specific sequence. Figure 6.15 shows some of the data from one early study; many other studies confirm this pattern (e.g., Bower, 1970, 1972; Bower & Reitman, 1972; Christen & Bjork, 1976; Higbee, 1977; Roediger, 1980; Ross & Lawrence, 1968; Yates, 1966).
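Viewed as a data structure, a peg-word system is just a fixed, pre-memorized scaffold onto which new items are zipped by position. A minimal sketch (the agenda items here are invented for illustration):

```python
# The ten pegs from the "one is a bun" rhyme, in order.
PEGS = ["bun", "shoe", "tree", "door", "hive",
        "sticks", "heaven", "gate", "line", "hen"]

def hang_on_pegs(items):
    """Pair each to-be-remembered item with the peg for its list position."""
    return {i + 1: (PEGS[i], item) for i, item in enumerate(items)}

agenda = ["chemistry class", "job plans", "summer housing"]
for number, (peg, item) in hang_on_pegs(agenda).items():
    # In practice, you'd form a vivid interacting image of the peg and item.
    print(f"{number}: picture '{item}' interacting with a {peg}")
```

Because the pegs are numbered, the scaffold preserves order as well as content-which is why these systems help you recall items in a specific sequence, not just recall them at all.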
All of this strengthens our central claim: Mnemonics work because they impose an organization on the materials you’re trying to memorize. And, consistently and powerfully, organizing improves recall.
Given the power of mnemonics, students are well advised to use these strategies in their studies. In fact, for many topics there are online databases containing thousands of useful mnemonics- helping medical students to memorize symptom lists, chemistry students to memorize the periodic table, neuroscientists to remember the brain’s anatomy, and more.
Bear in mind, though, that there’s a downside to the use of mnemonics in educational settings. When using a mnemonic, you typically focus on just one aspect of the material you’re trying to memorize-for example, just the first letter of the word to be remembered-and so you may cut short your effort toward understanding this material, and likewise your effort toward finding multiple connections between the material and other things you know. To put this point differently, mnemonic use involves a trade-off. If you focus on just one or two memory connections, you’ll spend little time thinking about other possible connections, including those that might help you understand the material. This trade-off will be fine if you don’t care very much about the meaning of the material. (Do you care why, in taxonomy, “order” is a subset of “class,” rather than the other way around?) But the trade-off is troubling if you’re trying to memorize material that is meaningful. In this case, you’d be better served by a memory strategy that seeks out multiple connections between the material you’re trying to learn and things you already know. This effort toward multiple links will help you in two ways. First, it will foster your understanding of the material to be remembered, and so will lead to better, richer, deeper learning. Second, it will help you retrieve this information later. We’ve already suggested that memory connections serve as retrieval paths, and the more paths there are, the easier it will be to find the target material later.
For these reasons, mnemonic use may not be the best approach in many situations. Still, the fact remains that mnemonics are immensely useful in some settings (what were those rainbow colors?), and this confirms our initial point: Organization promotes memory.
Understanding and Memorizing

So far, we’ve said a lot about how people memorize simple stimulus materials-lists of randomly selected words, or colors that have to be learned in the right sequence. In our day-to-day lives, however, we typically want to remember more meaningful, more complicated, material. We want to remember the episodes we experience, the details of the rich scenes we’ve observed, or the many-step arguments we’ve read in a book. Do the same memory principles apply to these cases?
The answer is clearly yes (although we’ll have more to say about this issue in Chapter 8). In other words, your memory for events, pictures, or complex bodies of knowledge is enormously dependent on your being able to organize the material to be remembered. With these more complicated materials, though, we’ve suggested that your best bet for organization isn’t some arbitrary skeleton like those used in mnemonics. Instead, the best organization of these complex materials is generally dependent on understanding. That is, you remember best what you understand best. There are many ways to show that this is true. For example, we can give people a paragraph to read and test their comprehension by asking questions about the material. Sometime later, we can test their memory. The results are clear: The better the participants’ understanding of a sentence or a paragraph, if questioned immediately after viewing the material, the greater the likelihood that they will remember the material after a delay (for classic data on this topic, see Bransford, 1979).
Likewise, consider the material you’re learning right now in the courses you’re taking. Will you remember this material 5 years from now, or 10, or 20? The answer depends on how well you understand the material, and one measure of understanding is the grade you earn in a course. With full and rich understanding, you’re likely to earn an A; with poor understanding, your grade is likely to be lower. This leads to a prediction: If understanding is (as we’ve proposed) important for memory, then the higher someone’s grade in a course, the more likely that person is to remember the course contents, even years later. This is exactly what the data show, with A students remembering the material quite well, and C students remembering much less (Conway, Cohen, & Stanhope, 1992). The relationship between understanding and memory can also be demonstrated in another way by manipulating whether people understand the material or not. For example, in an early experiment by Bransford and Johnson (1972, p. 722), participants read this passage:
The procedure is actually quite simple. First you arrange items into different groups. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities, that is the next step; otherwise you are pretty well set. It is important not to overdo things. That is, it is better to do too few things at once than too many. In the short run, this may not seem important but complications can easily arise. A mistake can be expensive as well. At first, the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity for this task in the immediate future, but then, one never can tell. After the procedure is completed one arranges the materials into different groups again. Then they can be put into their appropriate places. Eventually they will be used once more and the whole cycle will then have to be repeated. However, that is part of life.
You’re probably puzzled by the passage, and so are most research participants. The story is easy to understand, though, if we give it a title: “Doing the Laundry.” In the experiment, some participants were given the title before reading the passage; others were not. Participants in the first group easily understood the passage and were able to remember it after a delay. Participants in the second group, reading the same words, weren’t confronting a meaningful passage and did poorly on the memory test. (For related data, see Bransford & Franks, 1971; Sulin & Dooling, 1974. For another example, see Figure 6.16.)
Similar effects can be documented with nonverbal materials. Consider the picture shown in Figure 6.17. At first it looks like a bunch of meaningless blotches; with some study, though, you may discover a familiar object. Wiseman and Neisser (1974) tested people’s memory for this picture. Consistent with what we’ve seen so far, their memory was good if they understood the picture-and bad otherwise. (Also see Bower, Karlin, & Dueck, 1975; Mandler & Ritchey, 1977; Rubin & Kontis, 1983.)
The Study of Memory Acquisition

This chapter has largely been about memory acquisition. How do we acquire new memories? How is new information, new knowledge, established in long-term memory? In more pragmatic terms, what is the best, most effective way to learn? We now have answers to these questions, but our discussion has indicated that we need to place these questions into a broader context-with attention on the substantial contribution from the memorizer, and also a consideration of the interconnections among acquisition, retrieval, and storage.

The Contribution of the Memorizer

Over and over, we’ve seen that memory depends on connections among ideas, connections fostered by the steps you take in your effort toward organizing and understanding the materials you encounter. Hand in hand with this, it appears that memories are not established by sheer contact with the items you’re hoping to remember. If you’re merely exposed to the items without giving them any thought, then subsequent recall of those items will be poor. These points draw attention to the huge role played by the memorizer. If, for example, we wish to predict whether this or that event will be recalled, it isn’t enough to know that someone was exposed to the event. Instead, we need to ask what the person was doing during the event. Did she only do maintenance rehearsal, or did she engage the material in some other way? If the latter, how did she think about the material? Did she pay attention to the appearance of the words or to their meaning? If she thought about meaning, was she able to understand the material? These considerations are crucial for predicting the success of memory.
The contribution of the memorizer is also evident in another way. We’ve argued that learning depends on making connections, but connections to what? If you want to connect the to-be-remembered material to other knowledge, to other memories, then you need to have that other knowledge-you need to have other (potentially relevant) memories that you can “hook” the new material on to. This point helps us understand why sports fans have an easy time learning new facts about sports, why car mechanics can easily learn new facts about cars, and why memory experts easily memorize new information about memory. In each situation, the person enters the learning situation with a considerable advantage-a rich framework that the new materials can be woven into. But, conversely, if someone enters a learning situation with little relevant background, then there’s no framework, nothing to connect to, and learning will be more difficult. Plainly, then, if we want to predict someone’s success in memorizing, we need to consider what other knowledge the individual brings into the situation.

The Links among Acquisition, Retrieval, and Storage

These points lead us to another important theme. The emphasis in this chapter has been on memory acquisition, but we’ve now seen that claims about acquisition cannot be separated from claims about storage and retrieval. For example, why is memory acquisition improved by organization? We’ve suggested that organization provides retrieval paths, making the memories “findable” later on, and this is a claim about retrieval. Therefore, our claims about acquisition are intertwined with claims about retrieval.
Likewise, we just noted that your ability to learn new material depends, in part, on your having a framework of prior knowledge to which the new materials can be tied. In this way, claims about memory acquisition need to be coordinated with claims about the nature of what is already in storage.
These interactions among acquisition, knowledge, and retrieval are crucial for our theorizing. But the interactions also have important implications for learning, for forgetting, and for memory accuracy. The next two chapters explore some of those implications. COGNITIVE PSYCHOLOGY AND EDUCATION How Should I Study? Throughout your life, you encounter information that you hope to remember later-whether you’re a student taking courses or an employee in training for a new job. In these and many other settings, what helpful lessons can you draw from memory research?
For a start, bear in mind that the intention to memorize, on its own, has no effect. Therefore, you don’t need any special “memorizing steps.” Instead, you should focus on making sure you understand the material, because if you do, you’re likely to remember it.
As a specific strategy, it’s useful to spend a moment after a class, or after you’ve done a reading assignment, to quiz yourself about what you’ve just learned. You might ask questions like these: “What are the new ideas here?”; “Do these new ideas fit with other things I know?”; “Do I know what evidence or arguments support the claims here?” Answering questions like these will help you find meaningful connections within the material you’re learning, and between this material and other information already in your memory. In the same spirit, it’s often useful to rephrase material you encounter, putting it into your own words. Doing this will force you to think about what the words mean-again, a good thing for memory. Surveys suggest, however, that most students rely on study strategies that are much more passive than this-in fact, far too passive. Most students try to learn materials by simply rereading the textbook or reading over their notes several times. The problem with these strategies should be obvious: As the chapter explains, memories are produced by active engagement with materials, not by passive exposure.
As a related point, it’s often useful to study with a friend-so that he or she can explain topics to you, and you can do the same in return. This step has several advantages. In explaining things, you’re forced into a more active role. Working with a friend is also likely to enhance your understanding, because each of you can help the other to understand bits you’re having trouble with. You’ll also benefit from hearing your friend’s perspective on the materials. This additional perspective offers the possibility of creating new connections among ideas, making the information easier to recall later on.
Memory will also be best if you spread your studying out across multiple occasions-using spaced learning (e.g., spreading out your learning across several days) rather than massed learning (essentially, “cramming” all at once). It also helps to vary your focus while studying-working on your history assignment for a while, then shifting to math, then over to the novel your English professor assigned, and then back to history. There are several reasons for this, including the fact that spaced learning and a changing focus will make it likely that you’ll bring a somewhat different perspective to the material each time you turn to it. This new perspective will let you see connections you didn’t see before; and-again-these new connections provide retrieval paths that can promote recall.
Spaced learning also has another advantage. With this form of learning, some time will pass between the episodes of learning. (Imagine, for example, that you study your sociology text for a while on Tuesday night and then return to it Thursday, so that two days go by between these study sessions.) This situation allows some amount of forgetting to take place, and that’s actually helpful because now each episode of learning will have to take a bit more effort, a bit more thought. This stands in contrast to massed learning, in which your second and third passes through the material may only be separated by a few minutes. In this setting, the second and third passes may feel easy enough so that you zoom through them, with little engagement in the material.
Note an ironic point here: Spaced learning may be more difficult (because of the forgetting in between sessions), but this difficulty leads to better learning overall. Researchers refer to this as “desirable difficulty”-difficulty that may feel obnoxious when you’re slogging through the material you hope to learn but that is nonetheless beneficial, because it leaves you, once learning is complete, with a more long-lasting memory.
What about mnemonic strategies, such as a peg-word system? These are enormously helpful-but often at a cost. When you’re first learning something new, focusing on a mnemonic can divert your time and attention away from efforts at understanding the material, and so you’ll end up understanding the material less well. You’ll also be left with only the one or two retrieval paths that the mnemonic provides, not the multiple paths created by comprehension. In some circumstances these drawbacks aren’t serious-and so, for example, mnemonics are often useful for memorizing dates, place names, or particular bits of terminology. But for richer, more meaningful material, mnemonics may hurt you more than they help. Mnemonics can be more helpful, though, after you’ve understood the new material. Imagine that you’ve thoughtfully constructed a many-step argument or a complex derivation of a mathematical formula. Now, imagine that you hope to re-create the argument or the derivation later on-perhaps for an oral presentation or on an exam. In this situation, you’ve already achieved a level of mastery and you don’t want to lose what you’ve gained. Here, a mnemonic (like the peg-word system) might be quite helpful, allowing you to remember the full argument or derivation in its proper sequence.
Finally, let’s emphasize that there’s more to say about these issues. Our discussion here (like Chapter 6 itself) focuses on the “input” side of memory-getting information into storage, so that it’s available for use later on. There are also steps you can take that will help you to locate information in the vast warehouse of your memory, and still other steps that you can take to avoid forgetting materials you’ve already learned. Discussion of those steps, however, depends on materials we’ll cover in Chapters 7 and 8. COGNITIVE PSYCHOLOGY AND THE LAW
The Video-Recorder View
One popular conception of memory is called the “video-recorder view.” According to this commonsense view, everything in front of your eyes gets recorded into memory, much as a video camera records everything in front of the lens. This view seems to be widely held, but it is simply wrong. As Chapter 6 discusses, information gets established in memory only if you pay attention to it and think about it. Mere exposure isn’t enough.
Wrong or not, the video-recorder view influences how many people think about memory-including eyewitness memory. For example, many people believe that it’s possible to hypnotize an eyewitness and then “return” the (hypnotized) witness to the scene of the crime. The idea is that the witness will then be able to recall minute details-the exact words spoken, precisely how things appeared, and more. All this would make sense if memory were like a video recorder. In that case, hypnosis would be similar to rewinding the tape and playing it again, with the prospect of noting things on the “playback” that had been overlooked during the initial event. However, none of this is correct. There is no evidence that hypnosis improves memory. Details that were overlooked the first time are simply not recorded in memory, and neither hypnosis nor any other technique can bring them back. Our memories are also selective in a way that video recorders are not. You’d worry, of course, if your DVD player or the video recorder on your phone had gaps in the playback, skipping every other second or missing half the image. But memories, in contrast, often have gaps in them. People recall what they paid attention to, and if someone can’t recall every part of an event, this merely tells us that the person wasn’t paying attention to everything. (And, in light of what we know about attention, the person couldn’t have paid attention to everything.) It’s inevitable, then, that people (including eyewitnesses to crimes) will remember some aspects of the event but not others, and we shouldn’t use the incompleteness of someone’s memory as a basis for distrusting what the person does recall.
In addition, storage in memory is in some ways superior to storage in a video recorder, and in some ways worse. Humans are wonderfully able to draw information from many sources and integrate this information to produce knowledge that sensibly represents the full fabric of what they’ve learned. In this way, humans are better than recorders-because recorders are, in essence, stuck with whatever they recorded at the outset, with no way to update a recording if, for example, new information demands a reinterpretation of earlier events. At the same time, the recorder does preserve the initial event, “uncontaminated” by later knowledge. Here’s an illustration of how these points play out. Participants in one study read a description of an interaction between a man and a woman. Half of the participants later learned that the man ended up proposing marriage to the woman. Other participants were told that the man later assaulted the woman. All participants were then asked to recall the original interaction as accurately as they could. Those who learned of the subsequent marriage recalled the original interaction as more romantic than it actually was; those who learned of the subsequent assault recalled the interaction as more aggressive than it was. In both cases, knowledge of events after the to-be-remembered interaction was apparently integrated with recall of the interaction itself. If we were to put this positively, we would say that memory for the interaction was appropriately “updated” in light of subsequent information. (After all, why should you hold on to a view of this interaction that’s now been made obsolete by later-arriving information?) But if we were to put the same point negatively, we might say that memory for the interaction was “distorted” by knowledge of later events.
Other examples, contrasting our memories with video recorders, are easy to find. The overall idea, though, is that once we understand how memory works, we gain more realistic expectations about what eyewitnesses will or will not be able to remember. With these realistic expectations, we’ll be in a much better position to evaluate and understand eyewitness evidence.