Short Paper 1: Client Results

Second part of previous assignment. 


Module Eight Assignment Guidelines 

Overview: This assignment will allow you to consider ways to deliver the results of an assessment in an ethical and strength-based manner. You will be using the results from a previous assignment and transforming them into a transcript that could be used with a real-life client.

Prompt: Before you begin this assignment, revisit the short paper you wrote for Module Five, in which you analyzed the results of Bob’s intelligence and achievement testing. Specifically, you identified his strengths and weaknesses related to the WRAT-4 and WASI-2. Elements of your paper included Bob’s strengths and weaknesses, how his strengths and weaknesses applied to his overall functioning, and suggestions or recommendations for him.


For this assignment, you will be using the elements from that paper and turning them into a written or verbal transcript, as if you were delivering the results to Bob in real life. This must be done in an ethical manner, with the client’s best interests at the forefront of the delivery. You will be providing a review of the results in layman’s terms, using strength-based and nonjudgmental language and focusing on the summary of results, the use of strength-based language, the summary of recommendations, and an accurate portrayal of the findings.


Your assignment must be submitted as a written transcript, an audio recording, or a video. There is no page requirement or time requirement for this assignment as long as all critical elements are addressed. Remember that your intended audience is Bob and not your instructor, so speak directly to Bob when delivering the results. You should use the terms “you,” “your,” and “yours.”

Specifically, the following critical elements must be addressed:

I. Summary of Results: Results from Module Five Short Paper are summarized in a manner that is organized and ethical.

II. Use of Strength-Based Language: Appropriate, ethical language is used to speak to the patient.

III. Summary of Recommendations: Recommendations from Module Five Short Paper are summarized in a manner that is organized and ethical.

IV. Accurate Portrayal of Findings: Results and recommendations are accurately portrayed to the patient.


Analyzing a Sample Intelligence-Achievement Report

The Sample Intelligence-Achievement Report articulates Bob’s scores on the Wide Range Achievement Test 4 (WRAT-4) and the Wechsler Abbreviated Scale of Intelligence 2 (WASI-2). On the WASI-2, Bob’s Full Scale IQ score (FSIQ-4) was established to be average. Average scores on the subscales of this test indicate that the individual’s intellectual abilities and performance are typical relative to peers of a similar age; such scores suggest that the individual should be able to exhibit what is considered normal intellectual performance. Bob’s abilities on most of the subscales are average, including his Verbal Comprehension Index (his knowledge of English word definitions and his verbal reasoning abilities) and his Perceptual Reasoning Index (his nonverbal problem-solving abilities). However, Bob’s score in visual-spatial skills falls within the low average range, and this is his first weakness. It means that Bob may struggle on tasks that rely on visual-spatial processing; for example, in unfamiliar visual environments he may not perform as well as other peers of his age.


On the other hand, the WRAT-4 is used to evaluate fundamental academic skills (Keat & Ismail, 2011). On several subscales of this test, Bob exhibits average performance compared with peers of the same age: Word Reading (standard score of 99), Sentence Comprehension (standard score of 93), and the Reading Composite (standard score of 95). However, Bob’s standard score of 78 in Spelling falls within the borderline range, which suggests that he is likely to perform well below his peers. This is clearly a weakness for Bob and is reflective of potentially poor performance on English word-spelling tasks. Another weakness appears in Math Computation (standard score of 83), which indicates that Bob will most likely perform worse than his peers, especially on tasks involving increasingly complex mathematical problems.
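The descriptive labels used above (average, low average, borderline) follow the standard-score metric with a mean of 100 and a standard deviation of 15. As a rough illustration only, the Python sketch below maps scores onto the commonly published Wechsler-style bands; the cut-points are conventional values and the function name is mine, not figures taken from Bob’s report.

```python
def describe_standard_score(score):
    """Map a standard score (mean 100, SD 15) onto the commonly used
    qualitative bands; cut-points follow the conventional Wechsler-style scheme."""
    bands = [
        (130, "very superior"),
        (120, "superior"),
        (110, "high average"),
        (90, "average"),
        (80, "low average"),
        (70, "borderline"),
    ]
    for cutoff, label in bands:
        if score >= cutoff:
            return label
    return "extremely low"

# Bob's WRAT-4 standard scores as reported above
scores = {"Word Reading": 99, "Sentence Comprehension": 93,
          "Reading Composite": 95, "Spelling": 78, "Math Computation": 83}
for subtest, score in scores.items():
    print(f"{subtest}: {score} ({describe_standard_score(score)})")
# e.g., Spelling: 78 (borderline); Math Computation: 83 (low average)
```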

As already mentioned, average scores on the subscales of both the WASI-2 and the WRAT-4 indicate that Bob shows normal intellectual ability relative to his peers. These average scores are not characterized here as strengths, because a strength is a relative characterization: Bob would have needed to score in the above-average range or higher on at least one of these measures to earn that label. However, it is clear that he has weaknesses in specific areas, especially those that require visual-spatial processing skills. Because Bob does not have any strength that can be distinguished from the average scores discussed above, this analysis outlines how his weaknesses may affect his overall functioning. Bob’s comparative scores in the two areas of nonverbal abilities show that he may struggle among his peers. The WRAT-4 has outlined his weaknesses in both spelling and math computation. These weaknesses are likely to affect his functioning in academic environments, because spelling and math computation recur across numerous academic areas. This disadvantage may see him struggle in an academic environment and potentially perform lower than his peers.

Based on this analysis, several recommendations can be offered to Bob to help his situation. To begin with, there are specific behavioral interventions that help individuals sharpen their visual-spatial skills, and these can be recommended for Bob to improve his abilities in this area. Additionally, his spelling skills can be improved through behavioral activities that target that particular competency. Similarly, there are specific mathematics interventions that can be used with Bob to improve his computational skills (Codding et al., 2007).

References

Codding, R. S., Shiyko, M., Russo, M., Birch, S., Fanning, E., & Jaspen, D. (2007). Comparing mathematics interventions: Does initial level of fluency predict intervention effectiveness? Journal of School Psychology, 45(6), 603-617.

Keat, O. B., & Ismail, K. B. (2011). The relationship between cognitive processing and reading. Asian Social Science, 7(10), 44.

HUMAN PERFORMANCE

Delivering effective performance feedback: The strengths-based approach

Herman Aguinis, Ryan K. Gottfredson, Harry Joo

Kelley School of Business, Indiana University, 1309 E. Tenth Street, Bloomington, IN 47405-1701, U.S.A.

Business Horizons (2012) 55, 105—111


Keywords: Human resource management; Performance management; Performance appraisal; Employee development; Job performance; Feedback

Abstract: Performance feedback has significant potential to benefit employees in terms of individual and team performance. Moreover, effective performance feedback has the potential to enhance employee engagement, motivation, and job satisfaction. However, managers often are not comfortable giving performance feedback, and such feedback, if improperly relayed, causes more harm than good. In this installment of HUMAN PERFORMANCE, we describe a shift from traditional weaknesses-based feedback (which relies on negative commentary focused on employees’ shortcomings) to the more constructive approach of strengths-based feedback (which relies on employee affirmation and encouragement). We explain why a strengths-based approach to performance feedback is superior to the weaknesses-centered approach, and offer nine research-based recommendations on how to deliver effective performance feedback employing a strengths-based method. © 2011 Kelley School of Business, Indiana University. All rights reserved.

“Success is achieved by developing our strengths, not by eliminating our weaknesses.” (Marilyn vos Savant)

1. Building up vs. breaking down

A key responsibility of successful managers is to help their employees improve job performance on an ongoing basis (Aguinis, Joo, & Gottfredson, 2011). Managers carry out this responsibility by implementing performance management systems that are designed to align performance at the individual, unit, and organizational levels. Notably, performance feedback is a critical component of all performance management systems (Aguinis, 2009; DeNisi & Kluger, 2000). Performance feedback can be defined as information about an employee’s past behaviors with respect to established standards of employee behaviors and results. The goals of performance feedback are to improve individual and team performance, as well as employee engagement, motivation, and job satisfaction (Aguinis, 2009).

Unfortunately, managers are often uncomfortable giving performance feedback (Aguinis, 2009), and such feedback often does more harm than good in terms of helping employees improve their performance (DeNisi & Kluger, 2000). For example, Kluger and DeNisi (1996) conducted an extensive literature review and concluded that in more than one-third of the cases, performance feedback actually resulted in decreased performance across the 131 studies they analyzed. Furthermore, employees involved in a qualitative study said the following about the feedback that they had received: “The feedback meeting is a conflict meeting,” “It was devastating,” “The process was a waste of time,” and “Feedback equals criticism and it is not nice” (Bouskila-Yam & Kluger, 2011). The discrepancy between performance feedback’s intended and actual consequences constitutes a major concern to employees, managers, and organizations.

Although managers share an intuitive understanding that feedback plays a crucial role in improving individual and team performance, many managers do not know how to deliver feedback effectively. More specifically, managers quite frequently provide feedback in a manner that is excessively focused on employees’ weaknesses. Yet, the same managers are typically unaware that such weaknesses-based feedback often fails to improve employee performance. To fully reap the benefits of using feedback, managers should instead primarily rely on a strengths-based approach to feedback that consists of identifying employees’ areas of positive behavior and results that stem from their knowledge, skills, or talents. Next, we describe the traditional weaknesses-centered approach to feedback, the novel strengths-based approach, and why the strengths-based approach is superior. We close with a set of nine research-based recommendations on how to give effective performance feedback using a strengths-based approach.

2. The traditional weaknesses-based approach to feedback

Under the weaknesses-based approach to feedback, managers identify their employees’ weaknesses (e.g., deficiencies in terms of their job performance, knowledge, and skills); provide negative feedback on what the employees are doing wrong or what the employees did not accomplish; and, finally, ask them to improve their behaviors or results by overcoming their weaknesses. The rationale behind weaknesses-based feedback is that weaknesses are areas where employees have potential to improve, and it is assumed that informing them of these problems will motivate them to improve their performance. In other words, the assumption is that, absent such communication, employees will not improve their performance (Steelman & Rutkowski, 2004).

Because employees’ weaknesses can be detrimental to not only individual but also team and organizational performance, managers often point out what the employee did wrong and why the employee needs to improve. Such negative feedback can be illustrated with the following conversation between Tony, a branch manager at a bank, and Lisa, a teller at the bank:

Tony: Lisa, you haven’t been greeting customers by saying, “Hi, welcome to XYZ Bank.” We’ve talked about this a number of times now.

Lisa: I haven’t done it a couple of times, but I’m getting better.

Tony: Okay; well, then, I need you to do even better. We need to make sure that we receive high customer service rankings so that we can get a big bonus at the end of the year.

Lisa: (Thinking to herself: He hasn’t paid any attention to what I have been doing. I’ve been greeting almost all of my customers the way that he has asked. He never acknowledges me when I do things right and takes it for granted, but he sure is quick to point out any relative shortcomings. What a jerk!)

Although weaknesses-based feedback informs employees that certain behaviors and results are inappropriate or inadequate, several studies have concluded that such feedback entails unintended negative consequences. For example, negative feedback and criticism often lead to employee dissatisfaction, defensive reactions, a decreased desire to improve individual performance, and less actual improvement in the same (Burke, Weitzel, & Weir, 1978; Jawahar, 2010; Kay, Meyer, & French, 1965). Negative feedback is also frequently perceived as being inaccurate, and is unlikely to be accepted by the person receiving it (Fedor, Eder, & Buckley, 1989; Ilgen, Fisher, & Taylor, 1979; Steelman & Rutkowski, 2004). When feedback is focused on employee weaknesses, those giving the feedback generally adopt negative views of and attitudes toward the employees being evaluated (Gardner & Schermerhorn, 2004). These negative consequences help explain the general lack of empirical support for the benefits of feedback and why many managers have not experienced significant success in using feedback to boost employee performance (Kluger & DeNisi, 1996). Next, we describe an alternative and superior approach to feedback.

3. The superior strengths-based approach to feedback

Under the strengths-based approach to feedback, managers identify their employees’ strengths in terms of their exceptional job performance, knowledge, skills, and talents; provide positive feedback on what the employees are doing to succeed based on such strengths; and, finally, ask them to maintain or improve their behaviors or results by making continued or more intensive use of their strengths. The reasons behind strengths-based feedback are that employee strengths are of great potential for growth and development, and that highlighting how these strengths can generate success on the job motivates employees to intensify the use of their strengths to produce even more positive behaviors and results (Buckingham & Clifton, 2001).

In contrast to weaknesses-based feedback, strengths-based feedback enjoys a significant number of advantages with few, if any, negative consequences. For example, strengths-based feedback enhances individual well-being and engagement (Clifton & Harter, 2003; Seligman, Steen, Park, & Peterson, 2005). This effect is particularly noteworthy because employee engagement is negatively related to turnover (r = -.30) and positively related to business-unit performance (r = .38) (Clifton & Harter, 2003). Strengths-based feedback also tends to increase employees’ desire to improve their productivity (Jawahar, 2010) and heightens actual productivity (Clifton & Harter, 2003). Moreover, employees experience increased job satisfaction, perceptions of fairness, and motivation to improve job performance when their managers adopt helpful and constructive attitudes that are typical under the strengths-based approach (Burke et al., 1978; Seligman & Csikszentmihalyi, 2000).

Put simply: Given its documented advantages, the strengths-based approach to providing feedback is a superior alternative to the weaknesses-based approach. As is the case with many other management practices, however, execution is key (Bossidy & Charan, 2002). For instance, managers can make the mistake of being too vague, thereby limiting the potential performance and job satisfaction-related benefits that such feedback can have on employees.

So, what can managers do to improve the effectiveness of performance feedback? To answer this question, we provide nine research-based recommendations on how to deliver feedback focused on a strengths-based approach.

4. Research-based recommendations for implementing a strengths-based approach to performance feedback

Table 1 presents a summary of our nine recommendations. Based on the earlier discussion, our first recommendation is to focus on a strengths-based approach. The strengths-based approach involves identifying strengths, providing positive feedback on how employees are using their strengths to exhibit desirable behaviors and achieve beneficial results, and asking them to maintain or improve their behaviors or results by making continued or more intensive use of their strengths.

The second recommendation is to not completely abandon a discussion of weaknesses, but concentrate on employees’ knowledge (i.e., facts and lessons learned) and skills (i.e., steps of an activity) rather than talents (i.e., naturally or mainly innately recurring patterns of thought, feeling, and behavior). The feedback should be focused thus because knowledge and skills can be learned and improved, while talents are typically inherent to the individual. Given this recommendation, what are managers to do when an employee’s inappropriate behaviors or inadequate results stem from weaknesses in certain talents rather than weaknesses in knowledge and skills? Our next recommendation addresses this issue.

The third recommendation is that managers adopt a strengths-based approach to managing their employees’ talent weaknesses. In doing so, managers can follow Buckingham and Clifton’s (2001) five suggestions. The first suggestion is to help employees improve a bit on the desired talents. But, keep in mind that employees are unlikely to substantially improve the talents that they lack. The second suggestion is that both managers and employees should design a support system that will serve as a crutch for talent weaknesses. For example, employees who engage in public speaking can remain calm by imagining that the audience members are naked. According to Buckingham and Clifton’s third suggestion, managers should encourage their employees to see how their strongest talents can compensate for their talent weaknesses. For example, if an employee possesses the talent of responsibility yet struggles in networking because he possesses few social talents, then help the employee see that networking is an important responsibility. To follow the fourth suggestion, make it easier for employees to work with partners who possess the talents that the employees lack. The fifth and final suggestion is to prevent employees from engaging in tasks that strongly require talents they lack. Ways to implement this last suggestion include re-designing jobs for employees who are deficient in certain talents or giving other employees the responsibilities that require talents certain employees lack.


Table 1. Nine recommendations for delivering effective performance feedback focusing on a strengths-based approach

1. Adopt the strengths-based approach as the primary means of providing feedback
   • Identify employees’ strengths.
   • Provide positive feedback on how employees are using their strengths to exhibit desirable behaviors and achieve beneficial results.
   • Ask employees to maintain or improve their behaviors or results by making continued or more intensive use of their strengths.

2. Closely link any negative feedback to employees’ knowledge and skills rather than talents
   • Focus weaknesses-based feedback on knowledge and skills (which are more changeable) rather than talents (which are more difficult to acquire).

3. Adopt a strengths-based approach to managing employees’ talent weaknesses
   • Help employees improve a bit on the desired talents, with an understanding that employees are unlikely to substantially improve the talents that they lack.
   • Create a support system that will serve as a crutch for a talent weakness.
   • Encourage employees to see how their strongest talents can compensate for their talent weaknesses.
   • Make it easier for employees to work with partners who possess the talents that they lack.
   • Re-design jobs for employees who are deficient in certain talents, and give other employees the responsibilities that require talents that certain employees lack.

4. Make sure the person providing feedback is familiar with the employee and the employee’s job requirements
   • Make sure you are familiar with the employee’s knowledge, skills, and talents.
   • Make sure you are familiar with the employee’s job requirements and work context.

5. Choose an appropriate setting when giving feedback
   • Deliver feedback in a private setting.

6. Deliver the feedback in a considerate manner
   • Provide at least three pieces of positive feedback for every piece of negative feedback.
   • Start the feedback session by asking the employee what is working.
   • Allow employees to participate in the feedback process.

7. Provide feedback that is specific and accurate
   • Avoid making general statements such as “Good job!”
   • Evaluate and give feedback closely based on concrete evidence.

8. Tie feedback to important consequences at various levels throughout the organization
   • Explain that the behaviors exhibited and results achieved by the employee have an important impact not only on the employee in terms of rewards or disciplinary measures, but also on the team, unit, or even organization.

9. Follow up
   • Provide specific directions by including a development plan and checking up on any progress that is made after a certain period of time.

The fourth recommendation in Table 1 is that the person providing feedback needs to be familiar with the individual reviewee’s knowledge, skills, and talents, as well as his or her job requirements (Fulk, Brief, & Barr, 1985; Kinicki, Prussia, Wu, & McKee-Ryan, 2004; Landy, Barnes, & Murphy, 1978; Steelman & Rutkowski, 2004). This is important because the credibility of the feedback provider can be quickly lost if feedback is given improperly. An example of feedback coming from a source with insufficient familiarity is when a district manager, who is not involved in the day-to-day operations of a work group and does not know the job requirements and work context very well, visits a local office and provides feedback that is based on hearsay or indirect third-party information.

Our fifth specific recommendation is to choose an appropriate setting when giving feedback, as the setting/location in which feedback is delivered truly matters. Specifically, feedback should be relayed in a private rather than public setting. Receiving feedback in front of coworkers can be very demeaning and detrimental to the employee. Also, although most people do not have a problem receiving strengths-based feedback in public, managers should take into account that certain individuals may be uncomfortable in the spotlight of public praise or recognition. Regardless of the approach, public feedback will not result in positive consequences if given in the wrong setting.

Our sixth recommendation is to deliver feedback in a considerate manner (Steelman & Rutkowski, 2004). One way of doing so is to maintain an optimal ratio between strengths- and weaknesses-based feedback. That is, a manager should provide at least three pieces of positive feedback for every piece of negative feedback (Bouskila-Yam & Kluger, 2011). Another way of providing feedback in a considerate manner is to start the feedback by asking the employee what is working (Foster & Lloyd, 2007). Doing so allows the employee to feel more hopeful regarding their future and remain less defensive when negative feedback is given (Foster & Lloyd, 2007). Finally, we also encourage managers to allow employees to participate in the feedback process. Employees’ satisfaction with their given feedback increases and their defensiveness decreases when they have an active role in the feedback process (Cawley, Keeping, & Levy, 1998).
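The 3:1 ratio above is a simple arithmetic rule. As a rough illustration, the Python sketch below checks whether a planned feedback session satisfies it; the tagging of session notes as positive or negative, and the function name, are hypothetical conveniences rather than anything the article prescribes.

```python
def meets_positivity_ratio(feedback_items, min_ratio=3.0):
    """Check whether positive feedback outnumbers negative feedback
    by at least min_ratio to 1 (the 3:1 guideline from Recommendation 6)."""
    positives = sum(1 for kind, _ in feedback_items if kind == "positive")
    negatives = sum(1 for kind, _ in feedback_items if kind == "negative")
    if negatives == 0:
        return True  # no negative feedback trivially satisfies the guideline
    return positives / negatives >= min_ratio

# Hypothetical session notes for a planned feedback meeting
session = [
    ("positive", "Fastest transaction times of all tellers in April"),
    ("positive", "Greets customers consistently"),
    ("positive", "Maintains a good balance between speed and service quality"),
    ("negative", "Referral count is below the monthly goal of 15"),
]
print(meets_positivity_ratio(session))  # True: 3 positive items vs. 1 negative
```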

Our seventh recommendation is that feedback should be specific and accurate. It should center on certain work behaviors and results, as well as the situations in which these were observed (Goodman, Wood, & Hendrickx, 2004). Avoid making general statements such as “Good job,” “You’re struggling today,” or “Pick up the pace.” Lack of specificity will result in failure to get the message through (Aguinis, 2009). In addition to being specific, feedback must be accurate (Elicker, Levy, & Hall, 2006; Steelman & Rutkowski, 2004). One way to maximize accuracy is to rely on concrete evidence (Jawahar, 2010).

Under our eighth recommendation, we encourage managers to give feedback that ties employee behaviors and results to other important consequences at various levels throughout the organization (Aguinis, 2009). Specifically, the person providing feedback should explain that the behaviors exhibited and results achieved by the employee have an important impact on not only the employee in terms of rewards or disciplinary measures, but also that person’s team, unit, and even organization (Aguinis, 2009). If employees’ behaviors and results are not explained as being closely linked to other important outcomes, employees might develop the impression that their positive behaviors and results produced by their strengths are not sufficiently beneficial or important; they may similarly think that their negative behaviors and results are not particularly detrimental or significant.

Finally, our ninth recommendation is to follow up on feedback (Aguinis, 2009). Doing so entails providing specific directions to the employee through a development plan, as well as checking up on any progress that is made after a certain period of time. Via such diligence, employees will recognize that the feedback should be taken seriously.

5. How it’s done: The nine principles of effective performance feedback at play

How would our recommended principles of feedback play out in an actual feedback session? Recall the conversation between Tony and Lisa that we used previously to provide an example of concepts related to feedback. Now, consider the following vignette in which Tony has been informally observing Lisa’s performance and decides to provide feedback, both because of things she did well and areas in which she could improve when interacting with customers:

Tony: Lisa, after helping the remaining customers in line, will you come talk to me in my office? I want to compliment you on the great work you have been doing. I also want to talk about areas in which you can improve to become even better.

(10 minutes later)

Tony: Come in, Lisa; have a seat. As I mentioned earlier, I want to talk to you about some of the great things that you’ve been doing lately, as well as areas where you can improve. I’d like this time to be about how I can help you be your very best.

Lisa: I hope I have been doing well. I’ve been trying.

Tony: I can tell. Specifically, in what ways do you feel like you’ve been standing out?

Lisa: Well, maybe it’s just me, but I hate it when our customers have to wait in line. Because of this, I really try my best to work quickly so that people don’t have to wait so long.

Tony: That’s really good. In fact, our monthly figures show that of all the tellers during the month of April, you conducted the most transactions. How does that make you feel?

Lisa: Really? I even took a few vacation days last month.


Tony: And because of your great work, we have a $50 gift card for you.

Lisa: Wow, thanks!

Tony: Obviously, you’re great at being quick and efficient when working with customers. How do you feel this affects the quality of interactions that you have with them?

Lisa: I’m not sure. I can see that I could probably be more engaging, but I figure our customers just want to get in and get out. I mean, I always make sure that I greet them and ask how their day is going. So, I feel like I have a good balance between speed and quality.

Tony: I like how you are maintaining such a good balance; that’s why you’re one of our most accomplished employees. At the same time, I want to fulfill my duty of helping you become even better, so I’d appreciate your reflection on our monthly teller goal of 15 referrals for new bank accounts, checking accounts, and credit cards. Last month you had four referrals, and so far this month you’ve acquired two. How do you feel you’re doing in this area?

Lisa: I guess I’m not doing as well as I probably could. I get so concerned with moving people through the line that I forget to ask them if they want to start up new accounts.

Tony: I see. So it seems that you are more likely to ask for referrals when there isn’t a line, but when there is a line, you have a tendency to not ask for referrals. I want you to remember that your monthly bonus and the bank’s overall yearly bonus are tied directly to the number of referrals you get. I want you to be happy with your bonuses, so what do you think you can do better?

Lisa: Now that I think about it, I do typically ask for referrals when there isn’t a line. I don’t know. I always see the prompting on the computer screen before I end a transaction, but I just don’t want to inconvenience the people standing in line.

Tony: Preventing customer inconvenience is an important aspect of the job. So, what if, rather than asking people at the end of transactions whether they’re interested in a new account, you instead ask them while you are running their transactions?

Lisa: Hmm, that’s actually a good idea. I always just think about it after I am done with the transaction. Let me give it a shot the next few days and see how it goes.

Tony: Great. I’ll follow up with you at the end of the week. Why don’t we plan on having another conversation like this before you go to lunch on Friday?

Lisa: That sounds good. I’ll look forward to it. Thanks!

In this vignette, Tony followed nearly all of the recommendations for effective strengths-based feedback. He began the interview by praising and discussing in detail Lisa’s strengths, but he did not shy away from discussing her weaknesses, either. Tony emphasized how Lisa can use her strengths to improve performance even further, and demonstrated that he was familiar with the work Lisa was doing. By establishing a proper setting in which to provide his feedback, Tony guaranteed that the conversation was confidential, thereby limiting any defensiveness on Lisa’s part. To assure Lisa that he was providing credible feedback, Tony was considerate and very specific. Although Tony did not discuss three positive pieces of feedback for each piece of negative feedback, he did provide Lisa with a reward in the form of a gift card, which probably made her more open to the weaknesses-based feedback that he provided. In addition, Tony’s feedback was based on concrete evidence; for example, he was able to motivate Lisa to mention when she had a tendency to ask for referrals and when she did not. Tony also discussed how Lisa’s lack of referrals tied into specific rewards, demonstrating that referrals were important to her as well as to the bank. Finally, Tony gave Lisa some time to improve her behavior and then established when he could follow up with her.

6. Conclusion

The purpose of performance feedback is to improve individual and team performance, as well as employee engagement, motivation, and job satisfaction. In this article, we described two alternative approaches to feedback: the traditional weaknesses-based approach and the superior strengths-based approach. There are significant negative consequences associated with the exclusive use of the weaknesses-based approach. Accordingly, managers should primarily adopt a strengths-based approach, which focuses on what employees do well and encourages the continued and further use of these strengths. Table 1 provides a summary of nine specific recommendations on how to deliver feedback using a strengths-based approach. Following these recommendations will not only improve future performance, but also make it easier for managers to deliver feedback that will result in important benefits for employees, managers, and organizations.

References

Aguinis, H. (2009). Performance management (2nd ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Aguinis, H., Joo, H., & Gottfredson, R. K. (2011). Why we hate performance management—and why we should love it. Business Horizons, 54(6), 503-507.

Bossidy, L., & Charan, R. (2002). Execution: The discipline of getting things done. New York: Crown Publishing.

Bouskila-Yam, O., & Kluger, A. N. (2011). Strength-based performance appraisal and goal setting. Human Resource Management Review, 21(2), 137-147.

Buckingham, M., & Clifton, D. O. (2001). Now, discover your strengths. New York: The Free Press.

Burke, R. J., Weitzel, W., & Weir, T. (1978). Characteristics of effective employee performance review and development interviews: Replication and extension. Personnel Psychology, 31(4), 903-919.

Cawley, B. D., Keeping, L. M., & Levy, P. E. (1998). Participation in the performance appraisal process and employee reactions: A meta-analytic review of field investigations. Journal of Applied Psychology, 83(4), 615-633.

Clifton, D. O., & Harter, J. K. (2003). Investing in strengths. In K. S. Cameron, J. E. Dutton, & R. E. Quinn (Eds.), Positive organizational scholarship: Foundations of a new discipline (pp. 111-121). San Francisco: Berrett-Koehler.

DeNisi, A. S., & Kluger, A. N. (2000). Feedback effectiveness: Can 360-degree appraisals be improved? Academy of Management Executive, 14(1), 129-139.

Elicker, J. D., Levy, P. E., & Hall, R. J. (2006). The role of leader-member exchange in the performance appraisal process. Journal of Management, 32(4), 531-551.

Fedor, D. B., Eder, R. W., & Buckley, M. R. (1989). The contributory effects of supervisor intentions on subordinate feedback responses. Organizational Behavior and Human Decision Processes, 44(3), 396-414.

Foster, S. L., & Lloyd, P. J. (2007). Positive psychology principles applied to consulting psychology at the individual and group level. Consulting Psychology Journal: Practice and Research, 59(1), 30-40.

Fulk, J., Brief, A. P., & Barr, S. H. (1985). Trust-in-supervisor and perceived fairness and accuracy of performance evaluations. Journal of Business Research, 13(4), 301-313.

Gardner, W. L., & Schermerhorn, J. R., Jr. (2004). Unleashing individual potential: Performance gains through positive organizational behavior and authentic leadership. Organizational Dynamics, 33(3), 270-281.

Goodman, J. S., Wood, R. E., & Hendrickx, M. (2004). Feedback specificity, exploration, and learning. Journal of Applied Psychology, 89(2), 248-262.

Ilgen, D. R., Fisher, C. D., & Taylor, M. S. (1979). Consequences of individual feedback on behavior in organizations. Journal of Applied Psychology, 64(4), 349-371.

Jawahar, I. M. (2010). The mediating role of appraisal feedback reactions on the relationship between rater feedback-related behaviors and ratee performance. Group and Organization Management, 35(4), 494-526.

Kay, E., Meyer, H. H., & French, J. R. P., Jr. (1965). Effects of threat in a performance appraisal interview. Journal of Applied Psychology, 49(5), 311-317.

Kinicki, A. J., Prussia, G. E., Wu, B., & McKee-Ryan, F. M. (2004). A covariance structure analysis of employees’ response to performance feedback. Journal of Applied Psychology, 89(6), 1057-1069.

Kluger, A. N., & DeNisi, A. S. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284.

Landy, F. J., Barnes, J. L., & Murphy, K. R. (1978). Correlates of perceived fairness and accuracy of performance evaluation. Journal of Applied Psychology, 63(6), 751-754.

Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55(1), 5-14.

Seligman, M. E. P., Steen, T. A., Park, N., & Peterson, C. (2005). Positive psychology progress: Empirical validation of interventions. American Psychologist, 60(5), 410-421.

Steelman, L. A., & Rutkowski, K. A. (2004). Moderators of employee reactions to negative feedback. Journal of Managerial Psychology, 19(1), 6-18.


Ethical Considerations in Writing Psychological Assessment Reports

Mark H. Michaels, Private Practice

In this article, the author addresses the ethical questions and decisions evaluators face when writing psychological assessment reports. Issues related to confidentiality, clinical judgment, harm, labeling, release of test data, and computer usage are addressed. Specific suggestions on how to deal with ethical concerns when writing reports are discussed, as well as areas in need of further research. © 2005 Wiley Periodicals, Inc. J Clin Psychol 62: 47-58, 2006.

Keywords: ethics; report writing; assessment

As the final product, and often the only communication about an evaluation, the psychological report is a powerful tool for influencing change or making decisions about the individual being evaluated. The impact of such an evaluation can be life changing, such as employment decisions, or simply informative, such as what psychiatric symptoms are most prominent. Because the psychological report is often given immense weight, care must be taken to ensure any written work is completed with due respect to the ethical obligations involved. Some ethical issues, such as requests by employers for confidential information regarding an employee’s evaluation, are fairly straightforward. Ethical decisions in report writing, however, are less distinct and more subtle. Decisions are made all throughout the process about matters such as the wording of reports or what data to include.

Some guidance in making these ethical decisions can be found in the Ethical Principles of Psychologists and Code of Conduct (EPPCC; American Psychological Association [APA], 2002). However, ethical standards delineated by diverse sources do not always coincide. For example, the Standards for Educational and Psychological Testing (SEPT; American Educational Research Association [AERA], APA, & National Council on Measurement in Education, 1999) state:


When test scores are used to make decisions about a test taker or to make recommendations to a test taker or a third party, the test taker or the legal representative is entitled to obtain a copy of any report of scores or test interpretation, unless the right has been waived or is prohibited by law or court order.

This is not completely consistent with the EPPCC standard 9.04, which states that “. . . Psychologists may refrain from releasing test data to protect a client/patient or others from substantial harm or misuse or misrepresentation of the data or the test . . .” (p. 14). Updates of ethical codes, such as the 2002 EPPCC revision, typically supersede previous versions. However, when various codes are not consistently comparable, or when guidelines are offered by separate groups, newer codes may conflict with, rather than supersede, previous practices. When addressing ethical questions, especially when faced with disparate ethical guidelines, clinicians should make decisions with due deliberation of several general considerations.

This article will address ethical questions that fall within three general areas: the balance between (a) providing information and protecting client welfare, (b) providing information and protecting client confidentiality, and (c) utilizing information that may be of assistance and ensuring information is reliable and valid.

Beneficence and Autonomy

Bricklin (2001) raises a critical issue of autonomy and beneficence. She underscores the dilemma inherent in decisions about what and how to share information. Providing information respects a client’s right to know (autonomy), while not providing information that would be potentially harmful or disturbing protects that individual’s welfare (beneficence). Though there is no one approach that will universally balance these disparate aspirations, there are several considerations that help inform the evaluator’s approach to writing a report.

Harm

One especially significant consideration when writing a report is how conclusions or included data may harm the individual. Directly, a report can cause harm if it leads to negative consequences for the individual. A few immediate consequences that can result from the information added to a report include being denied employment, required to stand trial, or denied health care services. Conversely, harm to others may be prevented even if the client does not obtain a desired outcome. For example, an unqualified individual being denied a public safety position may ultimately benefit others in the community. In general, harm to a client may manifest in two primary ways: through a direct impact on the individual’s emotional state, or indirectly by modifying how others behave toward that individual.

Smith (1978) discusses two problems that can arise when a client reads a report about himself or herself: misuse of the knowledge obtained and impaired trust in the clinician. Trusting the evaluator, provided that the clinician’s sole role is as tester, is unlikely to be a problem in most cases. Harm from the information included in the report, however, continues to be relevant long after any contact with the evaluator is concluded. Although there is little direct information about harm caused by obtaining knowledge from psychological reports specifically, information regarding psychiatric records may enlighten this issue.


One way in which harm may be manifest is through the impact of report content on others’ perceptions of the individual, including health care providers, school personnel, or others. Markham (2003) found nursing staff rated their experience with patients receiving certain diagnoses more negatively. Socall and Holtgraves (1992) found that greater rejection was evident toward people labeled as mentally ill, even when exhibiting similar behavior as non-ill individuals. Similar findings regarding differential perceptions of students have been found in school settings (Schwartz & Wilkinson, 1987).

A second aspect of harm is how emotional distress may be created for the client when presented with information about them. Though there is little direct research on how seeing report information impacts clients, there are some research findings that bear on this issue. Kosky and Burns (1995) found that, for the most part, patients’ access to their own records created no problems, though this was not true for all individuals. Roth, Wolford, and Meisel (1980) note how limited access to records can be beneficial. Specifically, when patients were allowed to view their records, though not keep copies, reactions were generally positive. Others have also found that access to records can facilitate rapport and client cooperation (Doel & Lawson, 1986; Golodetz, Ruess, & Milhous, 1976). Bernadt, Gunning, and Quenstedt (1991) found that 28% of patients were upset after seeing a clinical summary. They also found differing reactions depending on diagnosis. Kantrowitz (2004) found that patients had varied, though generally positive, reactions when reading about themselves.

Interestingly, Kantrowitz (2004) also noted that some of the writers acknowledged that knowing patients were going to be reading what was written modified what was included. Although she was investigating written work about treatment, it is easy to imagine how knowing that a client will read an assessment report would affect the content as well. Specific investigation of this area would be illuminating.

A concern specifically raised by Smith (1978) is that report information may be misused. For example, misunderstanding or inaccurate application of IQ scores, diagnoses, or personality descriptions may be utilized to limit access to services or funding. Smith (1978) specifically argues that misuses may also involve prematurely gained self-knowledge, perhaps leading to treatment resistance. In addition, misinterpretation of technical terms can lead to erroneous conclusions about the individual. Consistent understanding of technical terms has been found to be absent even among clinicians (Rucker, 1967). Given this, it is easy to imagine how technical information could be inaccurately applied by those not already well versed in psychological principles and jargon.

Labeling

Diagnoses of mental retardation, psychiatric illness, or other personal challenges can be stigmatizing (Hayne, 2003). For example, psychiatric, as compared to medical, patients have been found to be viewed more unfavorably (Fryer & Cohen, 1988). Standard 8.8 of the Standards for Educational and Psychological Testing (AERA et al., 1999) specifies that if labels are employed, the least stigmatizing label be used. This presents a dilemma when omitting a label such as a diagnosis might deny the individual resources. In such a situation, beneficence can be assigned to both providing and not providing a label. For example, providing a diagnosis may benefit the client by ensuring external resources, but not providing a diagnosis may be of benefit by avoiding emotional distress. This highlights the complicated considerations involved in weighing beneficence and autonomy.

In addition to diagnoses, labeling can occur in subtle ways as well. Comments on cognitive weaknesses or “poor coping skills” can be construed as congenital flaws rather than as stylistic differences or as areas needing additional training. The report writer must carefully consider how evaluation results are presented, as well as the intended and potential audiences, when offering any information that may be construed negatively.

An ancillary point is where a diagnosis may fall on the continuum of severity. Providing a more severe diagnosis may allow an individual to receive or afford needed services, while a more benign diagnosis might rescue the person from potentially prejudicial labeling. Provided the decision follows EPPCC section 6.06, regarding accuracy of information given to payers for services, this decision should be made with the best interest of the individual in mind. This choice does require some judgment and may present a struggle for many clinicians.

Caution about labeling is particularly relevant for evaluations of minors; such results may be viewed by numerous individuals on a treatment team and by parents (Howe & Miramontes, 1992). Moreover, evaluation comments can become incorporated into other documents (e.g., Individual Education Plans) without the context of the original report, and then transferred along with the child’s records from year to year.

Intelligence Quotient Scores

Like diagnosis, IQ represents technical information that may be misconstrued by untrained individuals. Providing intelligence test scores has long been a point of debate (Kaufman & Lichtenberger, 2002; Lezak, 1988). This debate bears directly on EPPCC standard 9.04. Inclusion of scores allows for easy comparison, either normatively or to past evaluation findings. In contrast, IQ scores can easily become the focus, with subsequent discussion of cognitive strengths and weaknesses being lost. For example, providing a full-scale IQ may result in a child’s exclusion from an accelerated academic program, even if the report subsequently explains the limited accuracy of the single score given verbal and nonverbal differences or subtest scatter. The evaluator’s decision must weigh the relative benefit of having a score included against the potential drawbacks.

Given the well-documented increase in IQ over time (Flynn, 1998), IQ scores from older tests (e.g., tests normed more than 10 years ago) are likely to be inaccurate. For example, in a few years, scores on the Wechsler Adult Intelligence Scale (third edition) are anticipated to be 3 points higher than when the test was first published in 1997. Accuracy of IQ test scores takes on new magnitude when decisions are being made about life and death consequences, such as forensic evaluation in death penalty cases (Ceci, Scullin, & Kanaya, 2003). Using a range of error helps to ameliorate this problem, but providing a numeric score could be considered inaccurate enough to raise ethical questions. The problem of increasing IQ scores may fall in a gray area when considering what would constitute outdated test results (EPPCC section 9.08: Obsolete Tests). Not taking the age of normative data into account may result in improper use of test results.
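As a rough numerical illustration of the two points above, the Python sketch below applies the frequently cited Flynn-effect drift of about 0.3 IQ points per year and an error band based on a standard error of measurement taken from a test manual; the drift rate, the SEM, the example score, and the function names are illustrative assumptions, not values from the article.

```python
def flynn_adjusted_score(obtained_score, years_since_norming, drift_per_year=0.3):
    """Subtract the estimated norm inflation accumulated since the test was normed.
    The ~0.3 points/year drift is a commonly cited Flynn-effect estimate."""
    return obtained_score - drift_per_year * years_since_norming

def confidence_band(score, sem, z=1.96):
    """Return a 95% confidence band around an obtained score, given the
    standard error of measurement (SEM) reported in the test manual."""
    return (score - z * sem, score + z * sem)

# Illustrative values only: a score of 72 obtained on norms that are 10 years old,
# with an assumed SEM of 2.5
adjusted = flynn_adjusted_score(72, years_since_norming=10)   # 69.0
low, high = confidence_band(adjusted, sem=2.5)                # roughly 64.1 to 73.9
print(adjusted, (round(low, 1), round(high, 1)))
```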

Bricklin (2001) states that when considering autonomy and beneficence, autonomy usually takes precedence. Utilizing this guiding idea must be tempered, however, when a choice may advance both principles to some extent. In the example of providing a diagnosis, including a label in the report both respects the individual’s right to know and their welfare, at least in part, by facilitating provision of resources. Including intelligence test scores, in particular, may span multiple ethical questions, being relevant to both potential harm and to client confidentiality.

Beneficence and Confidentiality

Psychological evaluations contain some of the most intimate and influential information one can obtain about another person, and care should be taken to ensure the report is only shared with appropriate consent or as required by law. Confidentiality is a concern when developing the content of the report as well. Section 4.04 of the EPPCC states, “Psychologists include in written and oral reports and consultations, only information germane to the purpose for which the communication was made” (p. 7). For example, if notable information emerges during an evaluation that was not part of the original referral question, should that information be included in the report? On one hand, such information could be very helpful to the referral source, and ultimately to the individual. On the other hand, providing information that was not requested may violate the individual’s right to confidentiality (EPPCC section 4.04: Privacy). Consent to release information would solve this dilemma in most cases; however, there may be instances when including information would not be desired by the client, such as in some personnel or forensic evaluations. This ethical problem may be best addressed by clarifying in advance how such potential findings will be handled. If permitted by law, psychologists may release information without consent to provide needed professional services (EPPCC section 4.05: Disclosures). It is questionable, however, whether this would apply to the inclusion of information in a report, despite the potential utility of that information. In the absence of advance clarification and consent, evaluators should be cautious and keep information in reports germane to the referral question.

Release of Test Data

Inclusion of raw data within a report has given rise to a significant debate (Matarazzo, 1995; Naugle & McSweeny, 1995). The Standards for Educational and Psychological Testing generally encourage omission of test data (AERA et al., 1999). Although it has been argued that raw data should be routinely appended to neuropsychological reports, Naugle and McSweeny (1995) point out potential ethical violations, particularly of Standard 2.02 of the Ethical Principles of Psychologists (APA, 1992) regarding misuse of test information and section 5.03 regarding privacy. Though renumbered with the 2002 ethics code revision (APA, 2002), the sections noted by Naugle and McSweeny (1995) remain. Moreover, a new section addressing the inclusion of test data clarifies that, in the absence of a release from the client, data should be provided only as required by law or court order. Notably, reference to the qualifications of those receiving information was deleted.

In the Statement on the Disclosure of Test Data (SDTD) by the American Psychological Association Committee on Psychological Tests and Assessment (1996), several considerations on the disclosure of raw test data are identified. These include consent to release information, disclosure to unqualified individuals, test security and copyright obligations, and conformity with legal statutes, regulatory mandates, and organizational rules. Some of these considerations do not directly parallel those in the EPPCC. For example, the SDTD discourages release of information to “unqualified” individuals, though the EPPCC has no such admonition. Evaluators should reflect on all positions carefully before deciding how much, if any, information is disclosed.

Although differences may occur across jurisdictions, in general, legal and ethical release of test information cannot be done without the client’s consent (APA, 2002). Obtaining consent to release the report would effectively allow release of all relevant data regardless of form. In some instances, however, such as when the client is an organization rather than an individual, release of any test scores to the individual tested might not be authorized.

Perhaps the most compelling concern with regard to raw data is the intended reader (Pieniadz & Kelland, 2001), especially for clinicians who are working in the legal arena. It is not uncommon that raw data be requested in psycho-legal evaluations. Moreover, the test items or questions that form the basis for an individual’s responses are also sometimes called into question. Although the limitations on who is qualified to have access to raw data have changed with the 2002 revision of the EPPCC, the evaluator should still be aware of who is requesting the release of data. Though qualifications are no longer addressed in the EPPCC, test data may still be withheld (a) if data may be misused or misrepresented, and (b) to protect the client or others from substantial harm (EPPCC section 9.04). Hence, release of data to unqualified individuals still may present an ethical transgression (APA, 1996).

Even prior to the revision of the APA ethics code, some argued that release of data did not represent an ethical problem (Matarazzo, 1995). Recent revision of the ethics code and introduction of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) requirements may have actually lessened concerns about release of data. For example, Erard (2004) notes that release of test data presents even less of a dilemma, especially because the EPPCC no longer requires data be released only to qualified individuals. However, he also notes that the changes do not fully clarify this question, and suggests that clinicians continue to follow the Specialty Guidelines for Forensic Psychologists (Committee on Ethical Guidelines for Forensic Psychologists, 1991) in taking reasonable steps to ensure data are interpreted by qualified professionals. Presently, clinicians may be wise to choose the more cautious approach.

Release of Test Procedures and Materials

It is standard practice for test publishers to require clinicians’ agreement not to release any information about a test or test materials to unqualified persons. This agreement is typically a prerequisite for a test publisher to allow use of that instrument. Moreover, psychologists are generally discouraged from releasing information on ethical grounds and are required to respect copyright laws (APA, 1996). However, psychologists are frequently asked to provide information about the contents of a test. This may be for comparison to more current evaluation results, to clarify the basis of the evaluator’s conclusions, or for opposing parties in legal action to challenge the results or the test itself. Whatever the purpose, maintaining the copyright or proprietary rights to the test material may conflict with legal and clinical needs. The APA SDTD (1996) states:

It is prudent for psychologists to be familiar with the terms of their test purchase or lease agreements with test publishers as well as reasonably informed about relevant provisions of the federal copyright laws. Psychologists may wish to consult with test publishers and/or knowledgeable experts to resolve possible conflicts before releasing specific test materials to ensure that the copyright and proprietary interests of the test publisher are not compromised.

The Statement also suggested that individuals consider the audience that might receive the test materials, and obtain permission of test publishers before reprinting or copying any test material. Additionally, the EPPCC specifically distinguishes between test data and test materials, and encourages psychologists to make reasonable efforts to “maintain the integrity and security of test materials and other assessment techniques” (p. 14).

Knapp and VandeCreek (2001) note that in forensic reports it is particularly important to substantiate findings. A clinician may feel it is important to include relevant examples of responses or specific data to substantiate conclusions. For example, listing Minnesota Multiphasic Personality Inventory-2 (MMPI-2) critical items the individual endorsed is a powerful way of communicating about that person’s functioning. In these instances, release of test material is not supplemental to the report, but directly included in the report content. However, maintaining test security is an ethical requirement (EPPCC section 9.11: Test Security). It is reasonable for evaluators to exert the same caution applicable to release of appended test data when deciding whether to include such material within the report narrative. Of course, the evaluator can choose not to include any specific information about an instrument within the report itself. This approach circumvents having to address this question unless a separate direct request is made for test materials.

In her discussion of the decision-making process, Bricklin (2001) states that one consideration in resolving ethical dilemmas is whether there are “compelling reasons to deviate from the standard.” This process is made more challenging when disparate standards are, themselves, at variance. Any deviation from confidentiality standards can be even more problematic, as many jurisdictions require confidentiality by law. Again, choices that enhance all competing interests may serve the client best. In the absence of this possibility, striving to achieve the first objective set forth in the EPPCC (General Principle A) should be the foremost guiding principle. That is, do no harm.

Validity and Utility

One specific aspect of providing information ethically involves the use of data that may be of limited reliability or validity. Section 9.02 of the EPPCC states, “Psychologists use assessment instruments whose validity and reliability have been established for use with members of the population tested. When such validity or reliability has not been established, psychologists describe the strengths and limitations of test results and interpretation” (p. 13).

Further, client statements and behavior during an evaluation are considered to be test data (EPPCC section 9.04: Release of Test Data). Many reports include client statements, information from collateral sources, or data from less reliable instruments. This may occur as background data or within the evaluation results. Providing information about a client’s statements or behavior may stretch the limits of section 9.02, but may also have great utility in helping the reader understand the individual. Hence, the psychologist must be cautious to ensure, or at least explain the limitations of, the validity of statements in all sections of the report. These decisions are crucial, as the referral source may not distinguish which information is more reliable or valid once it has been combined into a report.

Clarifying the limitations of an evaluator’s observations may be a delicate venture. For example, “clarifying” the limited reliability of a client’s statement that she never drinks alcohol may suggest the individual is lying. The evaluator needs to weigh the benefit of incorporating less-reliable information against the drawback of violating EPPCC section 9.02. Judging how critical questionably valid information is in clarifying assessment findings may be the yardstick for determining if a compelling reason for violating the standard is present.

Computer-Aided Assessment and Ethics

One area that has become increasingly relevant to report writing is computer-based test interpretation (CBTI). Many programs incorporate, along with computer scoring, interpretive statements organized in a format similar to portions of a written report. Though test publishers usually include a disclaimer that these statements are not to be considered a final report, even aggregate incorporation of the statements may risk a breach of ethics. Neither the individual’s unique characteristics (Butcher, Perry, & Hahn, 2004), nor combinations of information from different sources are incorporated in generating these statements. The algorithmic basis for statement generation may also not be available to the evaluator. Nevertheless, evaluators are still ethically bound to ensure the information utilized is accurate (EPPCC section 9.09: Interpretation Services). The level of detail necessary to make this determination has yet to be established, representing an important area of future research.

Ensuring the accuracy of information is also challenging because the basis for interpretive statements may not be clear (Lichtenberger, this issue, 2006, pp. 19–32). Matarazzo (1986) notes that many computer interpretation programs follow from an expert’s judgment, and disparate opinions are rarely included. Providing only one analysis does not mean that interpretations are inaccurate, but it does make the evaluator’s job of discerning interpretive precision more challenging. Butcher, Perry, and Atlis (2000) review several studies that address the accuracy of CBTI interpretations. They conclude that most studies supported the accuracy of interpretation, though as many as 50% of interpretive statements will not apply to a specific client. These findings are not universal, though (Feldstein et al., 1999).

Questions about the validity of interpretive statements and computer algorithms are not easily addressed within the narrative of a written report. However, deciding how much weight to put on specific results can ultimately affect the contents of the written document.

Taking CBTI statements at face value, such as wholesale pasting of statements into a report, would likely be considered unethical. Attempting to explain the limitations of such data may result in the evaluator writing more about the interpretive process than about the client. In the end, incorporation of CBTIs may be best addressed by integrating computer-generated information as merely one source of data, analogous to all other data generated from the evaluation. Any interpretation written in the report would thus reflect an amalgam of information that converges on a particular conclusion. In this way, any limitations of computer-generated statements’ reliability or validity are at least tempered. Including CBTI interpretations only after confirming with other sources, such as reference texts or even clinical experience, would also be helpful.
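
By way of illustration only, the following minimal Python sketch shows one way an evaluator (or a report-preparation tool) might operationalize the convergence principle just described: a CBTI statement is carried forward into a draft report only if at least one independent source supports a similar conclusion. The Finding class, the keyword-overlap heuristic, and the example data are hypothetical assumptions, not features of any published CBTI system or a procedure endorsed by the sources cited here.

from dataclasses import dataclass

@dataclass
class Finding:
    statement: str  # an interpretive statement in plain language
    source: str     # e.g., "CBTI", "interview", "history", "other test"

def corroborated(cbti_finding: Finding, other_findings: list) -> bool:
    """Return True only if some non-CBTI finding overlaps the CBTI statement
    (a crude keyword match; in practice this is clinical judgment, not string matching)."""
    keywords = set(cbti_finding.statement.lower().split())
    for finding in other_findings:
        if finding.source != "CBTI" and keywords & set(finding.statement.lower().split()):
            return True
    return False

# Hypothetical example data
cbti_output = [
    Finding("elevated anxiety is likely", "CBTI"),
    Finding("possible thought disorder", "CBTI"),
]
independent_sources = [
    Finding("client reports persistent anxiety at work", "interview"),
]

draft_statements = [f.statement for f in cbti_output
                    if corroborated(f, independent_sources)]
print(draft_statements)  # only the corroborated statement remains

Such a mechanical filter cannot replace the evaluator's judgment; it merely illustrates treating CBTI output as one data source among several rather than as finished report text.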

How to Address Ethical Questions in the Written Report

Provided written reports are kept to a readable length (Harvey, this issue, 2006, pp. 5–18), evaluators have limited space in which to offer information or suggestions. Some decisions about what to include or leave out are required, including choices about background information, previous test results, interpretive statements, diagnoses, recommendations, and raw data. Perhaps the most significant of these are choices regarding interpretation of a person’s cognitive and personality functioning. For example, incorporating statements about a person’s weaknesses or problematic behavior could lead to negative perceptions by others, or emotional distress for the individual. Decisions about including any interpretive statements should be governed by the guiding principles of autonomy, beneficence, confidentiality, and, above all, nonmaleficence. Moreover, inclusion of any information should minimize intrusion on privacy (EPPCC section 4.04).

Decisions about what to incorporate can follow several steps. The first consideration is whether information included will harm the client. If so, it is better to leave that information out or reword the explanation such that it is less likely to cause distress or lead to labeling. Second, information should not be included if it will clearly or very likely breach confidentiality. Clinicians should take reasonable care to avoid including data that go beyond the agreed-upon scope of the evaluation, even if that information may be of help to the client. Perhaps the wisest approach is to ensure the individual is informed and provides consent to all findings being discussed, even if those findings end up being adverse or disagreeable. Finally, information should be included if it will be of benefit to the individual, provided doing so does not compromise the prior considerations.
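
Purely as a sketch of the stepwise screening described above, and not a procedure drawn from the EPPCC or the cited authors, the same three considerations can be written out in Python; the Item fields and the example entries are hypothetical.

from dataclasses import dataclass

@dataclass
class Item:
    text: str
    may_harm: bool         # step 1: could inclusion harm or label the client?
    within_scope: bool     # step 2: within the agreed-upon scope of the evaluation?
    benefits_client: bool  # step 3: would inclusion benefit the client?

def include_in_report(item: Item) -> bool:
    if item.may_harm:          # leave out, or reword before reconsidering
        return False
    if not item.within_scope:  # avoid exceeding the consented scope
        return False
    return item.benefits_client

# Hypothetical candidate content
candidates = [
    Item("Word reading is a relative strength", may_harm=False, within_scope=True, benefits_client=True),
    Item("Unrelated family detail mentioned in passing", may_harm=False, within_scope=False, benefits_client=True),
]
print([c.text for c in candidates if include_in_report(c)])

In practice these judgments are clinical rather than Boolean, and harmful material may be reworded rather than simply omitted; the sketch only makes the ordering of the considerations explicit.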

Language

As significant as whether information is provided is how it is provided. Specifically, how interpretations and opinions are worded can have a significant impact on the conclusions a reader may draw. For example, stating that an individual has a “weakness in simultaneous processing” has a more negative, and hence potentially deleterious, connotation than stating that the individual “learns more effectively in a step-by-step manner.” The content and style of any statements can moderate, or exacerbate, the impact a report may have on perceptions of the client by the referral source or even by the client.

Writing clearly and precisely has been advocated by many authors (Harvey, this issue, 2006, pp. 5–18; Lichtenberger, Mather, Kaufman, & Kaufman, 2004; Ownby, 1997). Additionally, it is preferable that information be provided in a positive way, focusing on the individual’s strengths (Snyder, Ritschel, Rand, & Berg, this issue, 2006, pp. 33–46). Utilizing precise and thoughtful language helps address the issues of harm, labeling, and confidentiality. Moreover, emotional distress is likely to be minimized if the information presented is focused on capabilities rather than liabilities.

Presenting the Report

Allen et al. (1986) note that a report may generate an in-person discussion of test findings. Providing verbal feedback along with written information has been demonstrated to enhance therapeutic rapport and client self-perception (Allen, Montgomery, Tubman, Frazier, & Escovar, 2003); it also allows for questions to be answered thoroughly. Although it does not mandate that psychologists present information in person, section 9.10 of the EPPCC encourages psychologists to take “reasonable steps to ensure that explanations of results are given to the individual” (p. 14). Providing verbal feedback along with the written report, rather than merely a copy of the report, seems preferable given the potential shortcomings of the latter approach (Kantrowitz, 2004).

The need to provide feedback to a client raises a final ethical question: Should a copy of the report be given to a client? The competing positions of refraining from releasing information (section 9.04) and of providing information (section 9.10) may present an ethical challenge when the request is for a copy of the report rather than merely an explanation of results. Although it has been argued that access to one’s own record enhances treatment in some ways (Doel & Lawson, 1986), the full effect of such releases has yet to be established with regard to psychological reports specifically. Clarifying this question will help inform clinicians’ decisions about balancing the competing interests of autonomy and beneficence.

Future Needs

The present discussion is not meant to be comprehensive. There will inevitably be variations or elaborations of the ideas discussed in this review when working with specific populations or questions. Custody, organizational, and workers’ compensation evaluations each present unique report writing challenges (Ackerman, this issue, 2006, pp. 59–72). Still, many of the concepts currently identified are fundamental to all reports. Some questions raised in this discussion require more information before a comprehensive list of options for ethical resolution can be generated. These include:

1. What impact does the release of reports or interpretive information have on clients? This includes the release to others as well as to the client directly.

2. Does knowledge that a client will read a report change the content included?

3. How do clinicians take confidentiality into account when deciding to incorporate specific conclusions or use specific wording in reports?

4. How do clinicians address and evaluate the accuracy of computer programs’ interpretive algorithms?

5. What impact does the Internet have on computer-aided interpretation and narrative generation, as well as confidentiality of report documents and test stimuli? This reflects a broader question about confidentiality of reports in electronic media.

6. One final, if only tangentially related, emerging question is how much the proliferation of readily available, professionally developed tests that yield narrative reports (e.g., e-harmony.com-style evaluations) changes the way consumers view psychological reports in general.

Answering these questions will go a long way toward improving the basis for making sound ethical decisions when writing psychological reports.

References

Ackerman, M.J. (2006). Forensic report writing. Journal of Clinical Psychology, 62(1), 59–72.

Allen, A., Montgomery, M., Tubman, J., Frazier, L., & Escovar, L. (2003). The effects of assessment feedback on rapport-building and self-enhancement process. Journal of Mental Health Counseling, 25, 165–182.

Allen, J.G., Lewis, L., Blum, S., Voorhees, S., Jernigan, S., & Peebles, M.J. (1986). Informing psychiatric patients and their families about neuropsychological assessment findings. Bulletin of the Menninger Clinic, 50, 64–74.

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

American Psychological Association. (1992). Ethical principles of psychologists and code of conduct. American Psychologist, 47, 1597–1611.

American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57, 1060–1073.

American Psychological Association, Committee on Psychological Tests and Assessment. (1996). Statement on the disclosure of test data. Washington, DC: Author.

Bernadt, M., Gunning, L., & Quenstedt, M. (1991). Patients’ access to their own psychiatric records. British Medical Journal, 303, 967.

Bricklin, P. (2001). Being ethical: More than obeying the law and avoiding harm. Journal of Personality Assessment, 77, 195–202.

Butcher, J.N., Perry, J., & Hahn, J. (2004). Computers in clinical assessment: Historical developments, present status, and future challenges. Journal of Clinical Psychology, 60, 331–345.

Butcher, J.N., Perry, J.N., & Atlis, M.M. (2000). Validity and utility of computer-based test interpretation. Psychological Assessment, 12, 6–18.

Ceci, S.J., Scullin, M., & Kanaya, T. (2003). The difficulty of basing death penalty eligibility on IQ cutoff scores for mental retardation. Ethics & Behavior, 13, 11–17.

Committee on Ethical Guidelines for Forensic Psychologists. (1991). Specialty guidelines for forensic psychologists. Law and Human Behavior, 15, 655–665.

Doel, M., & Lawson, B. (1986). Open records: The client’s right to partnership. British Journal of Social Work, 16, 407–430.

Erard, R.E. (2004). Release of test data under the 2002 ethics code and the HIPAA privacy rule: A raw deal or just a half-baked idea? Journal of Personality Assessment, 82, 23–30.

Feldstein, S.N., Keller, F.R., Portman, R.E., Durham, R.L., Klebe, K.J., & Davis, H.P. (1999). A comparison of computerized and standard versions of the Wisconsin card sorting test. Clinical Neuropsychologist, 13, 303–313.

Flynn, J.R. (1998). IQ gains over time: Toward finding the causes. In U. Neisser (Ed.), The rising curve: Long term gains in IQ and related measures. Washington, DC: American Psychological Association.

Fryer, J.H., & Cohen, L. (1988). Effects of labeling patients “psychiatric” or “medical”: Favorability of traits ascribed by hospital staff. Psychological Reports, 62, 779–793.

Golodetz, A., Ruess, J., & Milhous, R.L. (1976). The right to know: Giving the patient his medical record. Archives of Physical Medicine and Rehabilitation, 57, 78–81.

Harvey, V.S. (2006). Variables affecting the clarity of psychological reports. Journal of Clinical Psychology, 62(1), 5–18.

Hayne, Y.M. (2003). Experiencing psychiatric diagnosis: Client perspectives on being named mentally ill. Journal of Psychiatric & Mental Health Nursing, 10, 722–729.

Health Insurance Portability and Accountability Act, Pub. L. No. 104-191 (1996).

Howe, K.R., & Miramontes, O.B. (1992). The ethics of special education. New York: Teachers College Press.

Kantrowitz, J.L. (2004). Writing about patients: II. Patients’ reading about themselves and their analysts’ perceptions of its effect. Journal of the American Psychoanalytic Association, 52, 101–123.

Kaufman, A.S., & Lichtenberger, E.O. (2002). Assessing adolescent and adult intelligence (2nd ed.). Boston: Allyn & Bacon.

Knapp, S., & VandeCreek, L. (2001). Ethical issues in personality assessment in forensic psychology. Journal of Personality Assessment, 77, 242–254.

Kosky, N., & Burns, T. (1995). Patient access to psychiatric records: Experience in an inpatient unit. Psychiatric Bulletin, 19, 87–90.

Lezak, M.D. (1988). IQ: R.I.P. Journal of Clinical and Experimental Neuropsychology, 10, 351–361.

Lichtenberger, E.O. (2006). Computer utilization and clinical judgment in psychological assessment reports. Journal of Clinical Psychology, 62(1), 19–32.

Lichtenberger, E.O., Mather, N., Kaufman, N.L., & Kaufman, A.S. (2004). Essentials of assessment report writing. New York: Wiley.

Markham, D. (2003). Attitudes towards patients with a diagnosis of ‘borderline personality disorder’: Social rejection and dangerousness. Journal of Mental Health (UK), 12, 595–612.

Matarazzo, J.D. (1986). Computerized clinical psychological test interpretations: Unvalidated plus all mean and no Sigma. American Psychologist, 41, 14–24.

Matarazzo, R.G. (1995). Psychological report standards in neuropsychology. The Clinical Neuropsychologist, 9, 249–250.

Naugle, R.I., & McSweeny, A.J. (1995). On the practice of routinely appending neuropsychological data to reports. The Clinical Neuropsychologist, 9, 245–247.

Ownby, R.L. (1997). Psychological reports: A guide to report writing in professional psychology (3rd ed.). New York: Wiley.

Pieniadz, J., & Kelland, D.Z. (2001). Reporting scores in neuropsychological assessments: Ethicality, validity, practicality, and more. In C.G. Armengol, E. Kaplan, & E.J. Moes (Eds.), The consumer-oriented neuropsychological report (pp. 123–140). Lutz, FL: PAR.

Roth, L.H., Wolford, J., & Meisel, A. (1980). Patient access to records: Tonic or toxic? American Journal of Psychiatry, 137, 592–596.

Rucker, C.N. (1967). Technical language in the school psychologist’s report. Psychology in the Schools, 4, 146–150.

Schwartz, N.H., & Wilkinson, W.K. (1987). Perceptual influence of psychoeducational reports. Psychology in the Schools, 24, 127–135.

Smith, W.H. (1978). Ethical, social, and professional issues in patients’ access to psychological test reports. Bulletin of the Menninger Foundation, 42, 150–155.

Snyder, C.R., Ritschel, L.A., Rand, K.L., & Berg, C.J. (2006). Balancing psychological assessments: Including strengths and hope in client reports. Journal of Clinical Psychology, 62(1), 33–46.

Socall, D.W., & Holtgraves, T. (1992). Attitudes toward the mentally ill: The effects of label and beliefs. Sociological Quarterly, 33, 435–445.
