Test Review (Gordon Diagnostic System)

I’ve added an example of how it should look when completed. I also added the actual reviews from two reviewers. All the data is collected; the questions below will need to be answered. PLEASE LOOK AT THE EXAMPLE THAT I ATTACHED AND READ OVER THE ACTUAL REVIEW.
1) The Test: cost, time to take the test, theory behind the test, number of items, age appropriateness, and any other information relevant to teaching me about the test (approximately one page, double spaced)
2) Reviewer #1: norm sample, practicality and cultural fairness, validity, reliability, final comments (at a minimum, one page double spaced)
3) Reviewer #2: norm sample, practicality and cultural fairness, validity, reliability, final comments (at a minimum, one page double spaced)

4) Your thoughts on norm sample, practicality and cultural fairness, validity, reliability, and final comments about using the test. Why or why not? (At a minimum, one page double spaced.) I want your thoughts based on specific information and not just opinions such as “I don’t like the GRE’s” or “I don’t think it’s fair to subject students to standardized testing.” I want to know what you think about the norm sample, practicality and cultural fairness, validity, and reliability based specifically on what you learned from both reviewers and any other source.

Tests and Measurements
Achievement Test Review
Review of the Becker Work Adjustment Profile 2
Reviewer 1: James T. Austin & Stephanie D. Tischendorf
Reviewer 2: Pam Lindsey
Description: The Becker Work Adjustment Profile 2 was created to measure the vocational competency of people with disabilities in their work environments. By assessing the work habits, attitudes, and skills of people with special needs, it aims to target problem areas and assess the level of supports needed. It is targeted for those ages 13 and older who are disabled, including those who are mentally retarded, physically disabled, emotionally disturbed, learning disabled, and/or economically disadvantaged. The instrument is completed by a rater-observer who, according to the user’s manual and test booklet, has “closely observed the daily work behavior of the client and has knowledge of the individual’s work adjustment”. Austin & Tischendorf describe the Becker Work Adjustment Profile 2 (BWAP:2) as a “restandardization of the 1989 Work Adjustment Profile (as the items have not changed).” They also offer information about the test’s first version from 1989, which was a revision of a rating scale developed as part of a 1965 Federal grant. Reviewer 2, Pam Lindsey, points out that vocational competence is an element of rehabilitation for those with disabilities; thus this test is useful for those working with people with disabilities and helping them to be successful in the workplace. By knowing a client’s vocational competence, we can help address problem areas that affect the client’s job performance.
Types of Items:

The first page of the instrument is used to gather information on the patient’s background (name, sex, date, grade, date of birth, age in years, IQ, school/facility, primary disability, secondary disability, and name and title of evaluator). Pages 2 through 12 include 63 ratings that are divided into four categories: Work Habits and Attitudes is the first category, with 10 rating items; Interpersonal Relations is the second, with 12 items; Cognitive Skills is the third, with 19 items; and Work Performance Skills is the last, with 22 items. Pages 13 and 14 provide information on the results of the test and psychometric information. The rater scores each domain and enters the score at the end of each domain. Vocational personnel familiar with the daily demands of the job and the individual being assessed complete the questionnaire (Pam Lindsey).
Scoring Information:
The items are rated on a 0-4 scale, with 0 being a negative score and 4 being a positive score. Four domain scores and a total composite score (Broad Work Adjustment) are used in creating a profile of the client and a work placement that can be linked to work support needs. As stated above, the rater scores each domain and enters the score at the end of each domain. Raw scores are translated to T scores and percentile ranks by disability category. Again, this information can be found on page 13 of the questionnaire booklet. Also found on page 13 are raw scores, percentiles, work placement and work support needs, and other useful information for interpreting the results of the questionnaire.
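The raw-score-to-T-score translation described above follows the standard T-score convention (mean 50, standard deviation 10). A minimal sketch, using invented norm-group values for illustration (the BWAP:2 itself uses printed norm tables by disability category rather than this formula directly):

```python
# Hypothetical illustration of a raw-score-to-T-score conversion.
# The norm mean and SD below are invented, not actual BWAP:2 values.

def to_t_score(raw, norm_mean, norm_sd):
    """Convert a raw domain score to a T score (mean 50, SD 10)."""
    z = (raw - norm_mean) / norm_sd
    return 50 + 10 * z

# Invented example: a domain raw score of 32 against a hypothetical
# norm group with mean 28 and SD 5.
print(to_t_score(32, 28, 5))  # 58.0
```

A T score of 58 would sit a little over three-quarters of a standard deviation above the norm-group mean, which is why the same raw score can yield different T scores depending on which disability-category norms are used.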
Technical Aspects/Psychometric Properties: A user’s manual is included with the exam that Austin & Tischendorf feel is quite “extensive”. The user’s manual serves several purposes. It defines the four major methods of client work evaluation (work sample, job analysis, standardized tests, and situational assessment), with this test falling into the situational assessment category. It also discusses vocational competence and how it relates to work adjustment, provides information on administering, scoring, and using the test, and presents technical evidence of reliability and validity.
The BWAP:2 was normed against 4,019 individuals with various disabilities, although Austin & Tischendorf felt the sample was weighted towards those with MR. The norm group is categorized by diagnostic category (disability) and gender, and the sample is geographically diverse. All normative data are found in Appendix A of the user’s manual.
Reliability: Several estimates of reliability of the BWAP:2 are reported, derived from subsamples based on diagnostic category: internal consistency estimates for the domains (.80 to .93) and for the BWA total score (.87 to .91), retest estimates over a 2-week interval (range .82 to .96 across domains), standard errors of measurement (.91 to 5.84), and interrater reliability estimated by pairs of raters for a sample of 117 adults in 3 sheltered workshops (.82 to .89 across domains, .87 for Broad Work Adjustment). Both Austin & Tischendorf and Pam Lindsey felt that these numbers reflected multiple estimates of reliability at normally accepted values.
Validity: The user’s manual also provides information on validity. Construct and criterion-related validity data were stated to be sufficient. To measure criterion-related validity, 167 people with MR had their scores on the BWAP:2 compared with their scores on the AAMR Adaptive Behavior Scale, which measures vocational adjustment and adaptive behavior in its test takers. The evidence from this comparison was used as proof that the criterion-related validity was satisfactory. Internal consistency, test-retest, and interrater reliability studies were conducted, and all were shown to be adequate and/or stable. Pam Lindsey felt that “Overall, technical data appear to support adequate reliability and validity”. Austin & Tischendorf, however, felt that the task of comparing the AAMR Adaptive Behavior Scale and the BWAP:2 really showed convergent validity, as it did not demonstrate how the instrument predicts success in some external outcome.
Possible Accommodations and Modifications: Austin & Tischendorf felt that the inclusion of an economically disadvantaged subgroup needs to be better supported to be beneficial. Another modification idea from Austin & Tischendorf was to increase the length of the retest period to more than 2 weeks in an attempt to strengthen the reliability evidence of the test. They also felt that the standardization sample, or norm group, was not broken down enough; they wrote that it would be better if it were additionally broken down by age, ethnicity, and sex. Currently, it is broken down into mean age by disability. They also thought statistically significant differences with regard to these breakdowns (age/ethnicity/sex) should be included.
Practical Applications and Uses: As stated in the description, the BWAP:2 was made to assess the work habits, attitudes, and skills of those with disabilities. By doing this, we can assess the level of support these individuals need and help them get that support so they can perform a job to the best of their ability. It should be used as part of a larger assessment process and not solely to assess one’s capability of performing or holding a job. Instead, it should be used to help find weak areas that need help or services.
Clinical Recommendations and Cautions for Use: Austin & Tischendorf caution that the instrument emphasizes mental retardation and may be dated when compared with current theories and approaches to learning disabilities. They also recommend using modern test theory and confirmatory factor analysis in analyzing data and in appraising the construct validity of the four different domains and the BWA composite score. They also felt that it is important that vocational competence be standardized across all raters in order to establish the validity of observational data.
Summary of Reviewers:
In general, Pam Lindsey provides a less extensive review of the BWAP:2 than do James T. Austin & Stephanie D. Tischendorf. Pam Lindsey seems to give the basics needed to get a picture of the BWAP:2. She mentions the bias that comes along with having an observer rating the client or test taker, and that perceptions and prejudices can get in the way. However, she feels that the instrument is valuable in helping professionals measure the vocational competence of persons with disabilities, target areas that need special attention, and assist them in building a rehabilitation plan that will be appropriate for their special needs. She feels that it is technically adequate and that scores could be compared with other measures, such as adaptive behavior or cognitive abilities. She also feels it is not meant to be an assessment of an individual’s ability to work or be successful on the job, but rather to be part of a bigger plan or process to help the professional target areas of strength and weakness so clients can get the support that they need.
Austin & Tischendorf offer a much more in-depth and critical review of the BWAP:2. While Pam Lindsey felt that overall the data showed satisfactory reliability and validity, Austin & Tischendorf felt that comparing the AAMR Adaptive Behavior Scale and the BWAP:2 did not prove that the BWAP:2 could predict success in some external outcome, which is essentially the definition of criterion-related validity. Pam Lindsey also did not offer many ideas in terms of modifications or problems with the test. Austin & Tischendorf felt the norm group was heavily weighted towards those with MR, while the test is for those who are mentally retarded, physically disabled, emotionally disturbed, learning disabled, and/or economically disadvantaged, and thus should have a norm group more equally weighted across the different disabilities. They felt that the inclusion of an economically disadvantaged subgroup should be better supported to be beneficial, and that the test-retest period should be made longer and compared with the 2-week retest results in order to strengthen the reliability evidence of the test. They also called for a more extensive breakdown of scores (age/ethnicity/sex) to be incorporated into the BWAP:2. In terms of final thoughts about the BWAP:2, Austin & Tischendorf felt the test was easily administered and scored by raters with “ample opportunity to observe the focal individual” (Austin & Tischendorf). They felt that several improvements had been made from its previous version. They closed by stating the following:
“Any issues that remain do not preclude a recommendation to use the BWAP:2 but rather suggest continuing to develop its knowledge base” (Austin & Tischendorf).
My Opinion of the Instrument/Reviewers: Overall, I felt that the Becker Work Adjustment Profile 2 could be a very useful tool for helping those with disabilities be the best that they can be. I agreed with both reviewers that the reliability is adequate; however, I sided with Austin & Tischendorf on the issue of proving that the BWAP:2 has criterion-related validity. I do not see how comparing the BWAP:2 with another test proves that a client would be successful in the future. I also agreed with Pam Lindsey about the bias that can come along with having an observer rate someone else, and the interrater reliability issues that go along with this. However, I also agree with Pam Lindsey that this comes with the territory of all observational instruments, and I feel it should be kept in mind when interpreting scores. Also, the BWAP:2 does have adequate interrater reliability at .82 to .89 across domains, which I think should put most of those considering using the BWAP:2 at ease.
I also feel that the reliability and validity are sufficient and are an advantage of using this test. I think that the goals of this test also really make it special or worth using, because it looks for strengths and weaknesses in the client’s vocational skills, habits, and attitudes. As a rehabilitation counselor, this is a huge goal. Having a way to find these weaknesses and helping a client work on them or receive help can improve a client’s job stability and self-esteem, and help them feel more socially accepted by others because they are performing up to standards. As Pam Lindsey mentioned, I think it is very important with all tests to remember that a test should be used as part of a larger process, one that includes talking to and learning about the client and their experiences and what they might already know about themselves, other tests or inventories, and information about the disability. Tests should never be thought of as an unquestionable source of information. It is also important to look at how appropriate the test might be for the client. From what I have read about the test, it may not be the best test for someone with learning disabilities, as it seems a little dated in that area; however, depending on the case, there may be useful information to gather by using the BWAP:2. Every situation is different, and it is part of the rehabilitation counselor’s job to figure out if a test is appropriate for the client.
References:
Information from the reviews of James T. Austin & Stephanie D. Tischendorf, and Pam Lindsey
The Becker Work Adjustment Profile 2
http://ovidsp.tx.ovid.com.gate.lib.buffalo.edu/spa/ovidweb.cgi
Accession Number | 13191523 |
Classification Code | Behavior-Assessment [19] |
Database | Mental Measurements Yearbook |
Mental Measurements Yearbook | The Thirteenth Mental Measurements Yearbook 1998 |
Title | The Gordon Diagnostic System. |
Acronym | GDS. |
Authors | Gordon, Michael. |
Purpose | Aids in the evaluation of Attention Deficit Hyperactivity Disorder. Is also used in the neuropsychological assessment of disorders such as subclinical hepatic encephalopathy, AIDS dementia complex, post concussion syndrome, closed head injury, and neurotoxicity. |
Publisher | Gordon Systems, Inc, PO Box 746, DeWitt, NY 13214 |
Publisher Name | Gordon Systems, Inc |
Date of Publication | 1982-1996 |
Population | Children, adolescents, and adults |
Scores | 11 tests: Standard Vigilance Task, Standard Distractibility Test, Delay Task, Preschool Delay Task, Preschool Vigilance “0” Task, Preschool Vigilance “1” Task, Vigilance “3/5” Task, Adult Vigilance Task, Adult Distractibility Task, Auditory Vigilance Task, Auditory Interference Task. |
Administration | Individual |
Manual | Instruction manual, 1996, 102 pages. |
Price | 1996 price data: $1,595 per GDS III microprocessor-based portable unit including all tasks, capacity for automatic output to a printer, instruction manual, interpretive guide, 50 record forms, 4 issues of ADHD/Hyperactivity Newsletter, and 1-year warranty; $200 plus shipping per 2-month trial rental; $299 plus shipping per GDS compatible printer; $30 per 50 GDS record forms; $399 plus shipping per optional auditory module. |
Cross References | See T4:1051 (6 references). |
Time | (9) minutes per task. |
Reviewers | Harrington, Robert G. (University of Kansas); Oehler-Stinnett, Judy J.(Oklahoma State University). |
Review Indicator | 2 Reviews Available |
Comments | A micro-processor-based unit that administers tests of attention and impulse control. |
Full Text | Review of the Gordon Diagnostic System by ROBERT G. HARRINGTON, Professor of Educational Psychology and Research, University of Kansas, Lawrence, KS: The Gordon Diagnostic System (GDS) is a portable, solid-state, child-proof, microprocessor-based unit operating independently of a microcomputer (Post, Burko, & Gordon, 1990), designed to administer a series of three game-like tests. The GDS has been used primarily to provide a behavior-based measure of the vigilance or sustained attention span and self-control (Gordon, 1987) of children, adolescents, and adults with attention deficit/hyperactivity disorder. Vigilance and behavioral inhibition have been considered two of the central components in the diagnosis of attention deficit disorder in children and adults. The device also can be used to monitor responses to stimulant medication (Barkley, Fisher, Newby, & Breen, 1988; Brown & Sexton, 1988), as well as in the evaluation of AIDS-related complex (Saykin et al., 1990), closed head injury (Risser & Hamsher, 1990), Fragile X Syndrome (Hagerman, Murphy, & Wittenberger, 1988), and Alzheimer’s disease (Gordon, Beeber, & Mettelman, 1987). The GDS provides a reading or printout of the number of correct responses, incorrect responses, and failures to respond and comes complete with parallel forms of each task for retesting. The GDS has been cleared as a medical device by the Food and Drug Administration. Computerized assessment of ADD/Hyperactivity has arisen as a result of concern with the unreliability of diagnostic decisions based upon subjective clinical judgments, informal interviews, and rating scales standardized on small samples of clinic-referred children (Gordon, 1986, 1987). Approximately 1,300 nonhyperactive boys and girls, 4-16 years of age, were included in the standardization. Norms also are available for college students, adults, and geriatric populations. 
“An additional 1100 hyperactive and nonhyperactive protocols from various subject populations, including deaf, blind, emotionally disturbed, learning disabled, and Spanish-speaking have also been gathered” (Gordon, 1987, p. 57). One limitation of this standardization sample is that the selection was limited mostly to the upstate area of New York and thus the representativeness of the sample must be called into question. Currently, research is being conducted to extend the standardization of the GDS to represent a Puerto Rican sample (Bauermeister, 1986; Bauermeister, Berrios, Jimenez, Acevedo, & Gordon, 1990). Normative data are presented in Threshold Tables, which show score ranges demarcating Normal, Borderline, and Abnormal ranges of performance by age (4-5, 6-7, 8-11, 12-16 years). The author claims that the norms are not presented by sex or socioeconomic status because these variables are not correlated with GDS performance, but this finding is curious because other research has clearly shown a much higher prevalence of ADD/hyperactivity in males than in females (Ross & Ross, 1982). In studies conducted by the author of the GDS, tasks on the GDS have been found to have moderate but significant levels of test-retest reliability over 2 to 45 days (r = .60 or higher) and stability over a one-year time period (r = .52 or higher) (Gordon & Mettelman, 1988). The GDS also appears to correlate moderately but significantly with other neuropsychological instruments (Grant, Ilai, Nussbaum, & Bigler, 1990), behavior-based measures (McClure & Gordon, 1984), and a variety of teacher and parent ratings of attention deficit hyperactivity disorders (Gordon, Mettelman, Smith, & Irwin, 1990). 
In an independent study using a sample of 119 ADHD males aged 6 years to 12 years, 11 months, only two tasks, the number of correct responses for the Vigilance and Distractibility Tasks, correlated consistently with other measures (the WISC-R, the WRAT-R Arithmetic, the Beery Test of Visual and Motor Integration, and various sensory-motor variables from the Halstead-Reitan neuropsychological battery) (Grant, Ilai, Nussbaum, & Bigler, 1990). Gordon defends this lack of concurrent validity with other major measures used in ADD diagnosis by contending that the GDS makes a unique contribution in the measurement of attention, not assessed by more traditional tests. Gordon also argues that efforts to validate the utility of the GDS have been limited by disagreement among professionals with regard to a consensus definition of ADD/Hyperactivity (Gordon, Di Niro, & Mettelman, 1988). Despite these arguments it would seem clear that this research suggests that continuous performance tests such as the GDS may be useful, but are insufficient alone in the diagnosis of difficulties in impulsivity or sustained attention in children. Scores from the GDS, such as the Efficiency Ratio (Gordon, 1979) and the Delay Task (McClure & Gordon, 1984; Barkley, 1991), have demonstrated some discriminant validity with regard to distinguishing accurately between groups classified as hyperactive and normal. In another study of school-referred children, the GDS discriminated among children classified as ADD and those identified as reading-disabled, overanxious, and normal (Gordon & McClure, 1983). In a recent study (Wherry et al., 1993) research failed to support the discriminant validity of any GDS score regardless of whether the Child Behavior Checklist-Teacher Report Form or the ADHD Rating Scale was used as a criterion measure. These authors concluded that teacher rating forms should remain the “gold standard” for identifying ADHD youngsters. 
In one recent study concerns were raised about the extent to which the GDS may underidentify children who are classified by parent and teacher reports as ADHD (DuPaul, Anastopoulos, Shelton, Guevremont, & Metevia, 1992). In fact, it has been found that although the GDS will classify a normal child as ADHD in only 2% of the cases (Gordon, Mettelman, & Di Niro, 1989), it will produce false negative classifications anywhere from 15 to 35% of the time depending on the age of the child, criteria for subject selection, and the combination of scores employed (Gordon et al., 1989; Trommer, Hoeppner, Lorber, & Armstrong, 1988). On the other hand, the Vigilance Task Commission Score has been found to be particularly sensitive to the effects of stimulant medication, especially at higher doses (Barkley & Edelbrock, 1986). There is definitely need for further validity studies. As Gordon (1987) himself has indicated, the mere computerization of a measure does not preclude the need for “meaningful studies of validity” (p. 54). Furthermore, because many of the validity studies related to the GDS have been conducted by its developers, Barkley (1991) has suggested that there is a need for validity studies replicated by other independent researchers. To evaluate consumer satisfaction with the GDS, Gordon (1994) sent a survey to a sample of 475 GDS users who were randomly selected from a list of 900 users. He found that the GDS was used most frequently in private practice to evaluate ADD in children and adolescents (89%) and adults (48%). Most used the GDS as part of a multifaceted test battery that included standardized behavior checklists of parents (89%) and teachers (86%), achievement and intelligence tests (75%), formal observations (50%), and interviews with the child and parent (33%). 
Users indicated that the GDS agreed with other clinical information in about 73% of cases and when it disagreed 92% of the clinicians saw the discrepancy as a justification for further evaluation rather than as test error. Eighty-four percent of clinicians felt that the GDS provided opportunities for direct observations of a child’s actual behavior in a standardized situation that requires attention and self-control. A somewhat disconcerting finding was that half the sample used the GDS, at least in part, because its objectivity helped “sell” the diagnosis to parents and schools. Eighty percent of the respondents marked either a 4 or 5, indicating that they were likely moderately or very confident with the final diagnosis of ADD when the GDS was used for confirmation as a part of a multidisciplinary battery. In summary, Continuous Performance Tests (CPTs) have a long history (Rosvold, Mirsky, Sarason, Bronsone, & Beck, 1956) and are playing an increasingly broader role in the assessment of attentional processes. Unfortunately, research on the ability of CPTs such as the GDS to discriminate children with ADHD from their normal counterparts or to detect stimulant drug effects is limited. Research is hampered, in part, by nuances in subject selection criteria for ADHD/Hyperactivity. “Studies employ different rating scales, laboratory measures, observational techniques and interviews in addition to varying cutoff scores and exclusionary criteria” (p. 539, Gordon, Di Niro, & Mettelman, 1988). There is a need for a generally accepted set of research criteria in defining a sample with ADD/Hyperactivity of the sort suggested by Barkley, Fischer, Newby, and Breen (1988). 
Furthermore, like most other CPTs the GDS relies on visually presented stimulus materials, despite the fact that there is recent research (Baker, Taylor, & Leyva, 1995) that indicates that auditory presentations of stimuli can increase the difficulty of tasks and should be considered in the evaluation of vigilance and impulse control. Nevertheless, instruments such as the GDS may in the future provide clinically useful, objective, convenient, and relatively inexpensive measures of sustained attention and impulse control for ADHD children, adolescents, and adults. As Gordon concedes, no score on the GDS should be the sole determinant of a diagnosis of ADHD. In fact, Rasile, Burg, Burright, and Donovick (1995) found that in a sample of college students the GDS is not a substitute for other commonly used tests of visual and auditory attention, including the Digit Span, Digit Symbol, and Arithmetic subtests of the WAIS-R, Kagan’s Matching Familiar Figures Test, the Visual Span Subtest of the Wechsler Memory Scale–Revised, and the Stroop. The GDS should be viewed as providing only one source of information to be integrated with other sources in reaching a final diagnostic decision about the presence or absence of attentional problems. When compared to other automated CPTs on the market such as the Test of Variables of Attention (T.O.V.A., 13:336; Greenberg & Waldman, 1991) the GDS would appear to have a much greater amount of published research support and a longer history supporting its use in clinical settings. In conclusion, Barkley (1991) has argued that a thorough assessment of ADD/Hyperactivity should include direct observations of ADHD symptoms in their natural settings, but concedes that analogue observations such as the GDS may be more feasible. Despite this concession, Barkley (1991) has warned against claiming a lab measure as the single standard for the diagnosis of ADHD. 
Given this caveat, clinicians should consider the GDS to be one of the several useful tools they might employ in the determination of attentional problems of children, adolescents, and adults and researchers should find the GDS a rich source of data in studying this complex variable called attention.

REVIEWER’S REFERENCES
Rosvold, H. E., Mirsky, A. F., Sarason, I., Bronsone, E. D., Jr., & Beck, L. H. (1956). A continuous performance test of brain damage. Journal of Consulting Psychology, 20, 343-350.
Gordon, M. (1979). The assessment of impulsivity and mediating behaviors in hyperactive and nonhyperactive boys. Journal of Abnormal Child Psychology, 7, 317-326.
Ross, D. M., & Ross, S. A. (1982). Hyperactivity: Current issues, research, and theory (2nd ed.). New York: Wiley.
Gordon, M., & McClure, F. D. (1983). The assessment of ADD/Hyperactivity in a public school population. Unpublished raw data.
McClure, F. D., & Gordon, M. (1984). The performance of disturbed hyperactive and nonhyperactive children on an objective measure of hyperactivity. Journal of Abnormal Child Psychology, 12, 561-572.
Barkley, R. A., & Edelbrock, C. (1986, August). Attention Deficit Disorder with and without hyperactivity: Empirical corroboration of subtypes. Paper presented at the 94th annual convention of the American Psychological Association, Washington, DC.
Bauermeister, J. J. (1986, August). ADD and hyperactivity in Puerto Rican children: Norms for relevant assessment procedures. Paper presented at the meeting of the American Psychological Association, Washington, DC.
Gordon, M. (1986). Microprocessor-based assessment of Attention Deficit Disorder. Psychopharmacology Bulletin, 22, 288-290.
Gordon, M. (1987). How is a computerized attention test used in the diagnosis of Attention Deficit Disorder? In J. Loney (Ed.), The young hyperactive child: Answers to questions about diagnosis, prognosis and treatment (pp. 53-64). New York: Haworth Press.
Gordon, M., Beeber, A., & Mettelman, B. (1987). Primary degenerative dementia and continuous performance tasks. Unpublished manuscript. SUNY Health Science Center, Syracuse, NY.
Barkley, R. A., Fisher, M., Newby, R. F., & Breen, M. J. (1988). Development of a multimethod clinical protocol for assessing stimulant drug response in children with attention deficit disorder. Journal of Clinical Child Psychology, 17, 14-24.
Brown, R. T., & Sexton, S. B. (1988). A controlled trial of methylphenidate in black adolescents: Attention, behavioral and psychological effects. Clinical Pediatrics, 27, 74-81.
Gordon, M., Di Niro, D., & Mettelman, B. B. (1988). Effect upon outcome of nuances in selection criteria for ADHD/Hyperactivity. Psychological Reports, 62, 539-544.
Gordon, M., & Mettelman, B. B. (1988). The assessment of attention: I. Standardization and reliability of a behavior-based measure. Journal of Clinical Psychology, 44, 682-690.
Hagerman, R. J., Murphy, M. A., & Wittenberger, M. D. (1988). A controlled trial of stimulant medication in children with the Fragile X syndrome. American Journal of Medical Genetics, 30, 377-392.
Trommer, B. L., Hoeppner, J. B., Lorber, R., & Armstrong, K. (1988). Pitfalls in the use of a continuous performance test as a diagnostic tool in attention deficit disorder. Developmental and Behavioral Pediatrics, 9, 339-345.
Gordon, M., Mettelman, B. B., & Di Niro, D. (1989). Are continuous performance tests valid in the diagnosis of ADHD/hyperactivity? Paper presented at the 97th annual convention of the American Psychological Association, New Orleans.
Bauermeister, J. J., Berrios, V., Jimenez, A. L., Acevedo, L., & Gordon, M. (1990). Some issues and instruments for the assessment of Attention-Deficit Hyperactivity Disorder in Puerto Rican children. Journal of Clinical Child Psychology, 19, 9-16.
Gordon, M., Mettelman, B., Smith, D., & Irwin, M. (1990). ADHD profiles based upon a cluster analysis of clinic measures. Unpublished manuscript.
Grant, M. I., Ilai, D., Nussbaum, N. L., & Bigler, E. D. (1990). The relationship between the continuous performance tasks and neuropsychological tests in children with attention-deficit hyperactivity disorder. Perceptual and Motor Skills, 70, 435-445.
Post, E. M., Burko, M. S., & Gordon, M. (1990). Single-component microcomputer driven assessment of attention. Behavior Research Methods, Instruments, and Computers, 22, 297-330.
Risser, A. H., & Hamsher, DeS. (1990, February). Vigilance and distractibility on a continuous performance task by severely head-injured adults. Paper presented at the 18th Annual Meeting of the International Neuropsychological Society, Orlando, FL.
Saykin, A., Janssen, R., Cannon, L., Moreno, I., Spehn, G., O’Connor, B., Watson, S., & Allen, R. (1990, February). Neurobehavioral patterns in HIV-I infection: Relation of cognitive and affective changes and activities of daily living. Symposium presented at the 18th Annual Meeting of the International Neuropsychological Society, Orlando, FL.
Barkley, R. A. (1991). The ecological validity of the laboratory and analogue assessment methods of ADHD symptoms. Journal of Abnormal Child Psychology, 19, 149-178.
Greenberg, L. M., & Waldman, I. D. (1991). Developmental normative data on the Test of Variables of Attention (T.O.V.A.). Unpublished manuscript.
DuPaul, G. J., Anastopoulos, A. D., Shelton, T. L., Guevremont, D. C., & Metevia, L. (1992). Multimethod assessment of Attention Deficit-Hyperactivity Disorder: The diagnostic utility of clinic-based tests. Journal of Clinical Child Psychology, 21, 394-402.
Wherry, J. N., Paal, N., Jolly, J. B., Adam, B., Holloway, C., Everett, B., & Vaught, L. (1993). Concurrent and discriminant validity of the Gordon Diagnostic System: A preliminary study. Psychology in the Schools, 30, 29-36.
Gordon, M. (1994). Clinical and research applications of the Gordon Diagnostic System: A survey of users. DeWitt, NY: Gordon Systems, Inc.
Baker, D. B., Taylor, C. J., & Leyva, C. (1995). Continuous performance tests: A comparison of modalities. Journal of Clinical Psychology, 51, 548-551.
Rasile, D. A., Burg, J. S., Burright, R. G., & Donovick, P. J. (1995). The relationship between performance on the Gordon Diagnostic System and other measures of attention. International Journal of Psychology, 30, 35-45.

Review of the Gordon Diagnostic System by JUDY J. OEHLER-STINNETT, Associate Professor of Applied Behavioral Studies, Oklahoma State University, Stillwater, OK: INTRODUCTION. The Gordon Diagnostic System (GDS) is a computerized instrument designed to measure sustained attention, sustained attention under a distractibility condition, and impulse control through delayed responding. The electronic device is portable, child-proof, and requires no external computer. The GDS directly measures children’s ability to perform three tasks, which primarily require attention through correct responding and inhibition of incorrect responding (limited norms for adults are also available). The test is intended for use as one component of a comprehensive assessment for children experiencing difficulties in the areas of attention, impulsivity, and hyperactivity (the authors are to be commended for stressing the importance of multifactored assessment). It is also intended to aid in the diagnosis of Attention Deficit Hyperactivity Disorder (ADHD), the determination of whether stimulant medication is indicated, and the monitoring of medication response. Because of the seriousness of the diagnosis and the frequent resulting medication of children, any instrument contributing to such a diagnosis must meet the highest standard of validity including convergent, discriminant, and predictive validity. 
The impetus for developing direct measures related to attention stems in part from the relative absence of such tasks in traditional psychoeducational batteries and in part from problems noted in using indirect measures such as behavior rating scales (e.g., respondent bias and lack of discriminant validity of the subtests purported to measure attention; see Oehler-Stinnett & Stinnett, 1995, for a comparative review of rating scales). Although many subtests of cognitive measures require considerable attention on the part of the examinee, these subtests are designed to assess such skills as perceptual organization, processing speed, and memory. The tasks also are typically power tests in which items become increasingly difficult. Thus, although the child's attention level during tasks can be observed through behavior and used as part of an overall assessment of attentional difficulties, the scores represent more than a measure of attention. Also, most of these tasks are continuous for only about 2 minutes, so attention to a discrete task over time is not measured. One type of measure developed to assess attention directly is the continuous performance test (CPT), which has a rich history in the research and clinical literature. Most CPTs require basic visual perception and discrimination of numbers (sometimes alternative stimuli such as pictures or auditory stimuli are used), a motoric response to the target stimuli (through pushing a button, clicking keyboard keys or a mouse, or using paper and pencil), and inhibition of responding to irrelevant stimuli. Task difficulty may be kept constant, or may be increased by speeding up the presentation or raising the frequency of the target stimuli. Difficulty is also increased if additional distracters are included. However, higher-order cognitive processing difficulty is minimized and remains constant.
Although these steps increase the likelihood of measuring attention per se, they may also decrease the relevance of the task to examinees. Therefore, motivational issues must be considered. The length of the task in its entirety will also affect performance, with a task lasting 30 minutes more likely to show decrements across time blocks than a task lasting 5 minutes (Das, Naglieri, & Kirby, 1994). Two of the three subtests on the GDS can be described as CPTs. The Vigilance (or sustained attention) task is a straightforward CPT that requires respondents to press a button when the target visual stimuli (a pair of numbers in the correct order) appear and to refrain from responding to any other stimuli (manual). The child version of the task lasts for 6 minutes for 4- and 5-year-olds and 9 minutes for the other age groups. This task has a standard 1/9 stimulus format and an alternate form in which the target stimulus is 3/5. The Distractibility task, which lasts 9 minutes and has norms for ages 6-16, is a version of the CPT in which distracter numbers appear in columns on either side of the column in which the target stimuli appear. These tasks require visual discrimination of numbers and detection of physical identity of the stimuli (as opposed to having to remember categorical data), sustained attention, memory for digits (minimal in that the target stimuli are only two digits), accurate responding, responding fast enough to avoid a commission error on the next stimulus, and learning to inhibit responding to non-target stimuli (such as only one of the numbers in the pair). The Distractibility task increases the difficulty of inhibiting responding to irrelevant stimuli and may pick up impulsive responding and difficulties in selective attention when the Vigilance task does not. For both tasks, the Total Correct score is considered primarily a measure of attention, whereas the Total Commission score is considered a measure of attention and impulse control.
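The scoring logic just described can be sketched in a few lines. This is an illustration only: the stimulus stream and response log below are invented, and only the 1/9 target pair comes from the review, not the actual GDS presentation parameters.

```python
# Minimal sketch of CPT-style scoring as described above. The stimulus
# stream and button-press log are invented for illustration.
TARGET = ("1", "9")  # standard Vigilance target: a 1 followed by a 9

stimuli   = [("1", "9"), ("3", "5"), ("1", "9"), ("7", "2"), ("1", "9")]
responded = [True,        True,       False,      False,      True]

# Total Correct: presses to the target pair (primarily attention).
total_correct = sum(s == TARGET and r for s, r in zip(stimuli, responded))
# Total Commission: presses to non-target stimuli (attention + impulse control).
commissions = sum(s != TARGET and r for s, r in zip(stimuli, responded))
# Omissions: target pairs that drew no press.
omissions = sum(s == TARGET and not r for s, r in zip(stimuli, responded))

print(total_correct, commissions, omissions)  # → 2 1 1
```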
No feedback is given to the child during the procedures on either the Vigilance or Distractibility subtests; therefore, children must sustain motivation throughout the task. Users must exercise good judgment as to whether motivation significantly impacts scores and may want to assess this area further if warranted, as noted in the manual. Behaviors indicative of hyper- or hypoarousal, anxiety, frustration, or anger should also be further investigated for underlying factors and/or comorbidity. A third task, the Delay task, is designed as a more direct measure of impulsivity or ability to delay responding, and was derived through the author's doctoral work. It requires respondents to press a button after a certain time interval (at least 6 seconds for ages 6-16), which is not revealed in the instructions. However, feedback is given to the child to aid in determining the correct time interval and to sustain motivation. Thus, the child must develop a strategy to determine the correct time length, respond appropriately, and inhibit responding before the time is up. Attention during the time interval must be directed primarily at estimating elapsed time rather than at responding to visual stimuli as in the other tasks. The strategy requirements also necessitate higher-level learning, planning, and problem solving relative to the other two tasks. Efficiency Ratio (ER) or percentage of correct responses, Total Correct Responses (not usually used), and Total Number of Responses (which might indicate hypo- or hyperarousal) scores are provided. The author states that this task may be related to the impulsivity component of ADHD, such as lack of inhibition and delay of gratification, awareness of cause and effect, and deficits in time estimation and management. Given the paucity of direct measures of impulsivity, this task could add important information to the overall diagnostic picture.

NORMATIVE INFORMATION.
The normative data for the GDS were derived from scores of regular education children in the Syracuse, NY, and Charlottesville, VA, areas, collected in the mid-1980s (technical guide). For the Delay Task there were 1,183 4-16-year-olds, for the Vigilance Task there were 1,019 4-16-year-olds, and for the Distractibility Task there were 362 6-16-year-olds. The racial and ethnic makeup of the normative sample is not described. Although these data represent an improvement over previously available CPT norms and have the advantage of being based on a nonclinical sample, the lack of a well-described national sample and the age of the norms are weaknesses. Norms for minority children, such as Hispanics, are in development. As other CPTs become available, the GDS will need scores based upon updated national norms to remain competitive. It would also be useful to have norms based on relevant, well-defined subgroups within the clinical population (e.g., ADD versus ADHD groups). Although there were statistically significant gender differences on all three subtests, these were found to account for a clinically insignificant amount of variance; therefore, norms are collapsed across gender. The advantage of collapsed norms is that boys, who tend to be overrepresented in ADHD diagnoses, will not be missed, and girls, who tend to be underrepresented, will not be overidentified. However, the interpretative manual does state that girls will tend to score lower. Socioeconomic status was likewise considered impractical to take into account in developing the normative tables. Age differences corresponding to developmental increases in task performance were taken into account, and percentile tables are available for six different age groups (technical guide). This developmental increase in GDS performance lends support to the instrument as a measure of age-appropriate attention level.
That is, a critical feature of attention/impulsivity difficulties is a perception by adults that a child exhibits behaviors that may be appropriate for younger children but are below expected levels for the child's age. The test does become less sensitive in the older age groups, and results within normal limits are interpreted more cautiously for examinees over age 14. Two neuropsychological studies conducted by Dr. Saykin and colleagues (Saykin et al., 1995) utilized 80 and 43 normal adults, respectively. Norms based on these small samples are also made available in the interpretative manual. Until norms based on a larger, more representative sample of adults are made available, interpretation of adult scores should be conducted with extreme caution, if at all. However, the GDS does show promise in the area of neuropsychological assessment with adults, and more research is encouraged.

RELIABILITY. Because the tasks are always administered on the same device, variance that might have been due to differences in computer keyboards and monitors across administrations, as is typical with software-based CPTs, is controlled. Major strengths of the GDS are the circumvention of interscorer error through the computerized device and the automated printout of results. The direct measure of children's performance also avoids the interrater error that hounds behavior rating scales. Test-retest reliability was examined using samples of 32 to 90 children (technical guide). For the Delay Task, the retest interval was 30-45 days; for the Vigilance and Distractibility Tasks, 2-22 days. The average reliability coefficient was .76, with correlations ranging from a high of .85 for the Distractibility Commission score to a low of .60 for the Delay Task ER. A one-year retest interval yielded correlations between .52 and .94. The highest reliability overall is for the commission scores.
Although these coefficients are acceptable given the relatively small samples used in some analyses and the retest intervals of others, they do not meet the highest standards for test-retest reliability. The authors should stress this limitation more adamantly in the manual. For example, the standard error of measurement should be calculated and utilized in the scoring and interpretation sections of the manual. Particularly in borderline cases, or those involving results that are discrepant from other findings, a second administration should be recommended. For the Vigilance Task alternate forms, a t-test found no significant differences between groups administered one version or the other. The Vigilance Task form used was also found to have no effect on later performance on the Distractibility Task. No correlation of the alternate forms utilizing the same group of children was reported. Practice effects for the Vigilance/Distractibility Tasks have not been reported. Because the standardization was performed with the Vigilance Task first, this is the recommended administration order. There is no Total Score yielded for the GDS, primarily because the author views the tasks as measuring different components of the attention/impulsivity constructs. It would be useful to have internal consistency data in the form of alpha coefficients reported in order to examine this issue more closely, particularly in light of the moderate intertask correlations discussed below.

VALIDITY. Although alpha coefficients are not reported in the manual, the task intercorrelations reported by Gordon and Mettelman (1988) yield some evidence of internal consistency and construct validity. In general, the highest correlations are found between the Vigilance and Distractibility commission scores and between the two Total Correct scores.
Correlations between the Vigilance Total Correct and Total Commission scores and the Distractibility Total Correct and Total Commission scores are also significant. However, the correlations are moderate, ranging from -.41 (negative due to scoring direction) to .66. This overall pattern lends support to the author’s contention that the tasks and scores measure in common some components of attention/inhibition but also contribute uniquely to the profile of the child being assessed. In fact, the interpretation section reviews each task separately. Given the relationship of the scores to each other, interpretative suggestions and research studies that focus on such scores in common (i.e., profile analysis) would be useful. The Delay Total Correct score is moderately related to the other correct scores, suggesting that perhaps it, too, should be interpreted with similar scores. The ER score from the Delay Task bears little relationship to the other measures of the GDS, even the commission scores, which supposedly tap into impulsivity, suggesting that the ER measures a different subconstruct of the attention/impulsivity construct. Given currently available results, it is important to administer and interpret all tasks to best determine the functioning of children being assessed. It would be useful to have factor analytic data to better determine the task relationships. In terms of concurrent validity, there is a large body of research relating the GDS to other measures of attention/impulsivity and to traditional behavior rating scales and psychoeducational instruments. Correlations with other direct measures of attention, such as the Intermediate Visual and Auditory Continuous Performance Test (IVA; 22), Test of Variables of Attention (T.O.V.A.; 336), or Conners Continuous Performance Test (CCPT) are not reported, making it difficult to select among these instruments or to bootstrap their construct validity. 
Correlations with other direct measures of children's functioning have yielded interesting results. Overall, the GDS is not highly correlated with measures of intelligence, and it is safe to say that the GDS measures something other than overall intelligence (Gordon & Mettelman, 1988). However, the GDS is more related to intelligence among children with learning problems than among normal children (Carter, 1993; Rush, 1995). According to the PASS theory of intelligence (Das, Naglieri, & Kirby, 1994), arousal and attention contribute to higher-order cognitive processes, so we would expect some relationships between measures. Where relationships have been found, they are in general consistent with expected results. The Total Correct scores are typically more related than the other scores to intellectual measures (Grant, Ilai, Nussbaum, & Bigler, 1990). The GDS has been shown to be related to numerical fluency and memory through its correlations with measures such as the Visual-Aural Digit Span, the Freedom from Distractibility factor of the WISC-R and WISC-III, the Matching Familiar Figures Test, and other visually or numerically loaded tests (e.g., Grant et al., 1990). A review of available studies indicates that the GDS adds information to a comprehensive assessment over and above that found in more traditional batteries, for reasons discussed in the introduction to this review. The traditional GDS is less related to verbal-type tasks, except for correct scores. An auditory version is becoming available, and preliminary results indicate that the visual and auditory tasks discriminate different subgroups of children with attentional difficulties. These results indicate that the visual GDS alone may not identify all children with attentional difficulties.
The GDS has also been extensively compared to behavior rating scales traditionally used to identify children as ADD or ADHD (Bauermeister, Berrios, Jimenez, Acevedo, & Gordon, 1990; Kinstlinger, 1988; Mitchell & Quittner, 1996; Wherry et al., 1993; Williams-White, 1993). Unfortunately, many of the rating scales used suffer from discriminant validity problems. They may identify children as ADHD who exhibit conduct problems, aggression, and oppositional defiant disorder, and miss ADD children who do not display overt hyperactivity and impulsivity. Thus, children described in the literature as "GDS false negatives" may in fact be rating scale "false positives." Even so, the GDS has been shown to be related to the appropriate subtests of the Conners Rating Scale, the ADD-H: Comprehensive Teacher's Rating Scale (ACTeRS), the Burks Behavior Rating Scale, the Achenbach Child Behavior Checklist and Teacher Report Form, and the Iowa Conners. Relating the GDS more carefully to the rating subscales that differentiate attention, hyperactivity, impulsivity, and conduct problems will likely lead to stronger results (for example, utilizing the ACTeRS or the Behavior Assessment System for Children [BASC], whose attention scales measure only attention, and avoiding hyperactivity subtests that confound motor activity with conduct problems). Unfortunately, all the scales confound impulsivity with other constructs such as attention, hyperactivity, and conduct problems, making it difficult to use them in determining the GDS's ability to measure impulsivity. Given currently available information, the Delay Task would be a useful adjunct to behavioral rating scales in determining whether impulsivity under GDS conditions exists. It would then be up to the examiner to use the GDS results along with other data, such as observation and interview, to determine whether impulsive responding in the environment appears to be under the control of the child (conduct problems) or not (impulsivity).
Intentionality is a critical component in the differential diagnosis of ADHD versus conduct problems, and the GDS may aid in this decision. It should be noted that the GDS is not a measure of hyperactivity per se, but rather of attention and impulse control in a defined situation. This fact should be considered more closely in research studies, in clinical practice, and in the interpretation section of the manual. The closest the GDS comes to measuring motor activity is in the commission score: a child who has an extremely high score may have motoric inhibition difficulties related to fine-motor hyperactivity. A child with gross motor overactivity may not be able to sit through the test, thus invalidating it, or may have overriding activity that obviously prevents attention to the tasks. This kind of problem would most likely be evident no matter what psychological test was being attempted. In fact, it may be missed during the GDS administration if the tasks are perceived as interesting enough, and exacerbated by tests of the child's known weaknesses such as academics (see Roberts & Landau, 1995, for a review of curriculum-based assessment with attention problems). The manual does state that behavioral observations of overactivity during testing should be taken into account. Otherwise, the GDS scores should not be expected to diagnose "hyperactivity" but a particular type of impulsivity. Impulsive responding on the GDS may generalize to a general pattern of impulsivity, but it more likely relates most strongly to impulsive responding on tasks with similar behavioral demands, such as academic tasks. Attention, hyperactivity, and impulsivity, and the subconstructs of each, should also be differentiated more carefully in the literature and in the manual in order to avoid misdiagnoses. The GDS has also been used extensively to determine group differences (Kinstlinger, 1987; McClure & Gordon, 1984; Oppenheimer, 1988; Tucker, 1991).
The strongest results come from its ability to differentiate clinical from nonclinical groups. However, considerable data suggest that the GDS may not be able to discriminate subgroups of clinical problems, such as ADHD from developmental difficulties or conduct problems. Although some have stated that the equivocal group-difference results limit the utility of the GDS, it is more likely that the instruments and procedures used to form the groups in the first place (e.g., DSM diagnoses, rating scales that lack discriminant validity) led to difficulties in the GDS discriminating groups, and that the GDS is able to ferret out the children who do have an attention problem! There is also evidence that children with attentional difficulties due to depression or other internalizing disorders will show attentional deficits on the GDS, suggesting that children with ADD only or ADD comorbid with internalizing disorders will be identified on the GDS. GDS scores that are significantly different from average will identify children with attention problems and children whose attention problems are comorbid with or exacerbated by other conditions (high positive predictive ability). Despite the scale's ability to detect problems, the GDS is likely to produce some false negatives due to the nature of the examination process. Children who exhibit attentional/impulsivity problems in school may not display them in a clinical, one-on-one setting during a relatively entertaining task (as those who give traditional batteries often note as well). Children with auditory attentional problems may also be missed by the visual GDS, and the GDS does not measure the full range of attentional skills. It does predict generalization to the academic setting in that it is related to achievement scores, classroom behavior observations, and ratings of children's behavior in school.
Despite these limitations, the utility of the GDS has been demonstrated in medication trials (Anderson, 1990; Brown & Sexson, 1988; Fischer & Newby, 1991). Children who show mild difficulties on the GDS are more likely to be given no or low-dose medication and/or to be medication nonresponders. This is the strongest evidence that children showing difficulties on the GDS are "true" ADHD cases for whom medication is appropriate. The GDS also is sensitive to dosage levels and can be used in conjunction with other measures to monitor treatment regimens. It has also distinguished children in behavioral and cognitive treatment programs from controls (Poley, 1996).

SCORING AND INTERPRETATION. Scoring is relatively straightforward. The GDS yields percentile ranks that are interpreted through cutoff points determined through "statistical and clinical convention." Scores below the 25th percentile are described as borderline, whereas scores below the 5th percentile are considered abnormal. It would be useful to have in the manual a description of the effects of these cutpoints on identification rates within referred populations. Use of these cutpoints means that children scoring within one standard deviation of the mean (i.e., at or above the 16th percentile), typically considered within normal limits, may be identified as "borderline." Examination of the percentile rank tables indicates that the conversion from raw to percentile scores results in the typical exaggeration of score differences in the middle of the distribution. For example, a one-raw-score-point difference in many instances results in a large jump in the percentile rank, and unfortunately, many of these jumps occur right around the borderline cutoff range of the 25th percentile. Use of standard scores might obviate this difficulty and allow more ready interpretative use of the standard error of measurement.
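The percentile-rank compression and the standard-score suggestion can be made concrete. A hedged sketch, assuming a normally distributed raw-score scale (the mean of 50 and SD of 10 are hypothetical, not the actual GDS scale) and borrowing the average retest reliability of .76 reported in the technical guide:

```python
from statistics import NormalDist
import math

# Hypothetical raw-score distribution for illustration: mean 50, SD 10.
raw = NormalDist(50, 10)

def pct(x):
    """Percentile rank of raw score x under the assumed distribution."""
    return 100 * raw.cdf(x)

# One raw-score point near the median produces a much larger percentile
# jump than the same one-point difference out in the tail.
print(round(pct(50) - pct(49), 1))  # → 4.0
print(round(pct(30) - pct(29), 1))  # → 0.5

# The 25th-percentile cutoff expressed as a standard score (mean 100, SD 15).
z25 = NormalDist().inv_cdf(0.25)
print(round(100 + 15 * z25))        # → 90

# Standard error of measurement, SEM = SD * sqrt(1 - r), with r = .76,
# gives a 95% band around an obtained standard score of 100.
sem = 15 * math.sqrt(1 - 0.76)
print(round(100 - 1.96 * sem, 1), round(100 + 1.96 * sem, 1))  # → 85.6 114.4
```

On a standard-score metric the one-point differences above stay uniform, and the SEM band makes plain how wide a confidence interval a .76 reliability implies.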
In the meantime, practitioners are cautioned to be flexible in their use of the prescribed cutpoints and to interpret scores in light of all available data. The bottom line is that although the GDS cannot, by itself, discriminate children with ADD or ADHD from children with other difficulties, it can add to the diagnostic picture by helping to determine whether attentional difficulties are present. Identification of GDS deficits does not rule out comorbidity with, or causative factors of, other conditions, such as developmental delay, depression, or conduct problems. Also, a normal GDS score alone cannot rule out attention problems. As noted by the author, the GDS should never be used alone as a diagnostic instrument, and should not even be used as the definitive instrument in a broad-based assessment. Unfortunately, despite this caution throughout the author's writings, the case examples in the manual encourage practitioners to interpret the GDS alone as diagnostic of ADHD (e.g., if the score is abnormal, then we would likely utilize an ADHD diagnosis) and pay only lip service to other instruments (e.g., if all other data . . .). The author does describe several cases with comorbid conditions or underlying emotional reasons for attentional problems, and it must be noted that even the WISC computerized reports are guilty of the same single-instrument interpretation. The author's handbook on "How to Operate an ADHD Clinic or Subspecialty Practice" is a useful adjunct to the GDS manual. Although the title of this resource suggests we are looking for one type of pathology, the introduction provides a nice overview of childhood pathology that guides practice in a general direction. The GDS should be used much like a social skills scale: just because a child has a poor social skills score, we do not label him or her as having a "social skills disorder." We assume the social skills problem is contributing to or resulting from other adjustment problems, and we look for them.
Given the limited discriminant validity, if the GDS shows an attention/impulsivity problem, we should look to other factors as well before assuming the child has "attention deficit disorder" only. School psychologists are encouraged to expand their repertoire in ADHD assessment through the use of the GDS and other direct measures of attention in conjunction with traditional psychoeducational measures. It is hoped the price of these instruments will come down as have computerized tools in other areas. Psychologists practicing in clinics are encouraged to work with school-based practitioners in the areas of assessment and intervention rather than making a diagnosis based primarily on clinic measures without consulting school-based personnel in a meaningful way. Specifically, clinicians need to understand the limited utility of DSM classifications in the schools and utilize the expertise of school psychologists in making school-based diagnoses that meet the guidelines of the Individuals with Disabilities Education Act (IDEA) and Section 504 of the Rehabilitation Act in the school setting. One of the primary criteria is that the decision must be made by a school-based multidisciplinary team rather than a sole professional. If the GDS or clinic manual included these recommendations and examples, it would be of benefit to all who are working to help children fulfill their potential.

REVIEWER'S REFERENCES

McClure, F. D., & Gordon, M. (1984). Performance of disturbed hyperactive and nonhyperactive children on an objective measure of hyperactivity. Journal of Abnormal Child Psychology, 12, 561-572.
Brown, R. T., & Sexson, S. B. (1988). A controlled trial of methylphenidate in black adolescents: Attention, behavioral, and physiological effects. Clinical Pediatrics, 27, 74-81.
Gordon, M., & Mettelman, B. B. (1988). The assessment of attention: I. Standardization and reliability of a behavior-based measure. Journal of Clinical Psychology, 44, 682-690.
Kinstlinger, G. (1988). The use of the Gordon Diagnostic System for assessing attention deficit disorders as compared with traditional adaptive behavior measures in the public schools. Dissertation Abstracts International, 49(1), 72-A.
Oppenheimer, P. M. (1988). A comparison of methods for assessing attention deficit disorders with hyperactivity in emotionally disturbed, learning-disabled and normal children. Dissertation Abstracts International, 48, 2610-A.
Anderson, K. C. (1990). Assessment and treatment outcomes of medicated and unmedicated groups of children with attention deficit hyperactivity disorder. Dissertation Abstracts International, 51, 1483-B.
Bauermeister, J. J., Berrios, V., Jimenez, A. L., Acevedo, L., & Gordon, M. (1990). Some issues and instruments for the assessment of attention-deficit hyperactivity disorder in Puerto Rican children. Journal of Clinical Child Psychology, 19, 9-16.
Grant, M. L., Ilai, D., Nussbaum, N. L., & Bigler, E. D. (1990). The relationship between continuous performance tasks and neuropsychological tests in children with attention-deficit hyperactivity disorder. Perceptual and Motor Skills, 70, 435-445.
Fischer, M., & Newby, R. F. (1991). Assessment of stimulant response in ADHD children using a refined multimethod clinical protocol. Journal of Clinical Child Psychology, 20, 232-244.
Tucker, R. L. (1991). The ability of the Gordon Diagnostic System to differentiate between attention deficit hyperactivity disorder and specific developmental disorders in children. Dissertation Abstracts International, 51, 4072-A.
Carter, J. D. (1993). The relationship between intelligence and attention in kindergarten children. Dissertation Abstracts International, 54(2), 460-A.
Wherry, J. N., Paal, N., Jolly, J. B., Adam, B., Holloway, C., Everett, B., & Vaught, L. (1993). Concurrent and discriminant validity of the Gordon Diagnostic System: A preliminary study. Psychology in the Schools, 30, 29-36.
Williams-White, S. C. (1993). The association of attention processes and internalizing and externalizing symptoms among inpatient boys. Dissertation Abstracts International, 53, 3801-B.
Das, J. P., Naglieri, J. A., & Kirby, J. R. (1994). Assessment of cognitive processes: The PASS theory of intelligence. Boston: Allyn & Bacon.
Oehler-Stinnett, J. J., & Stinnett, T. A. (1995). Teacher rating scales for Attention Deficit-Hyperactivity: A comparative review. Journal of Psychoeducational Assessment, Monograph Series Advances in Psychoeducational Assessment: Assessment of Attention-Deficit Hyperactivity Disorders, 88-105.
Roberts, M. L., & Landau, S. (1995). Using curriculum-based data for assessing children with attention deficits. Journal of Psychoeducational Assessment, Monograph Series Advances in Psychoeducational Assessment: Assessment of Attention-Deficit Hyperactivity Disorders, 74-87.
Rush, H. C. (1995). The relationship between children's attention and WISC-III IQ scores: Implications for psychoeducational assessment. Dissertation Abstracts International, 55, 3144.
Saykin, A. J., Gur, R. E., Shtasel, D. L., Flannery, K. A., Mozley, L. H., Malamut, B. L., Watson, B., & Mozley, P. D. (1995). Normative neuropsychological test performance: Effects of age, education, gender, and ethnicity. Applied Neuropsychology, 2, 79-88.
Mitchell, T. V., & Quittner, A. L. (1996). Multimethod study of attention and behavior problems in hearing-impaired children. Journal of Clinical Child Psychology, 25, 83-96.
Poley, J. A. (1996). Effects of classroom cognitive behavioral training with elementary school ADHD students: A pilot study. Dissertation Abstracts International, 56, 2616-A.
Copyright © 2011. The Board of Regents of the University of Nebraska and the Buros Center for Testing. All rights reserved. Any unauthorized use is strictly prohibited. Buros Center for Testing, Buros Institute, Mental Measurements Yearbook, and Tests in Print are all trademarks of the Board of Regents of the University of Nebraska and may not be used without express written consent.