Objective: The neuropsychological literature reports widely varying prevalence rates of cognitive impairment within patient populations, despite assessment with standardized neuropsychological tests. Within the domain of oncology, the International Cognition and Cancer Task Force (ICCTF) proposed standard cutoff points to harmonize the operationalization of cognitive impairment. We evaluated how this binary classification affects agreement between two highly comparable test batteries.
Method: Two hundred non-central nervous system (non-CNS) cancer patients who had completed treatment (56% female; median age 53 years) completed traditional tests and their online equivalents in a counterbalanced design. Following ICCTF standards, impairment was defined as scores ≥ 1.5 standard deviations (SDs) below the normative mean on at least two tests and/or a score ≥ 2 SDs below the normative mean on at least one test. Classification agreement between traditional and online assessment was evaluated using Cohen's κ. Additional Monte Carlo simulations were conducted to demonstrate how different cutoff points and test characteristics affect agreement.
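The ICCTF criterion described above can be written as a simple predicate over a patient's per-test z-scores (test score minus normative mean, divided by normative SD). This is an illustrative sketch, not code from the study; the function name and the three-test example vectors are our own:

```python
import numpy as np

def icctf_impaired(z_scores):
    """ICCTF binary classification from a vector of per-test z-scores.

    Impaired if >= 2 tests fall at or below -1.5 SD, and/or
    >= 1 test falls at or below -2 SD of the normative mean.
    """
    z = np.asarray(z_scores, dtype=float)
    two_at_1_5 = np.sum(z <= -1.5) >= 2  # >= 1.5 SDs below norms on two tests
    one_at_2_0 = np.sum(z <= -2.0) >= 1  # >= 2 SDs below norms on one test
    return bool(two_at_1_5 or one_at_2_0)

print(icctf_impaired([-1.6, -1.7, 0.3]))  # two tests past -1.5 SD -> True
print(icctf_impaired([-2.1, 0.0, 0.5]))   # one test past -2 SD -> True
print(icctf_impaired([-1.6, 0.0, 0.5]))   # neither rule met -> False
```

Because the cutoffs are stated as "≥ 1.5 SDs below," scores exactly at the threshold (z = −1.5 or z = −2) count as meeting the criterion.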
Results: The correlation between total scores of traditional and online assessment was .78. Proportions of impaired patients did not differ between assessment methods: 40% using traditional tests and 38% using online equivalents, χ²(1) = .17, p = .68. Nevertheless, within-person agreement in impairment classification between traditional and online assessment was merely fair (κ = .35). Monte Carlo simulations showed similarly low agreement scores (κ = .41 for the 1.5 SD criterion; κ = .33 for the 2 SD criterion).
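A Monte Carlo simulation of this kind can be sketched in a few lines: generate two batteries whose tests correlate at the reported between-battery level, apply the ICCTF rule to each, and compute Cohen's κ. Only the correlation (.78) and the cutoffs come from the abstract; the simulated sample size, number of tests, and independence of tests are our assumptions, so the resulting κ need not reproduce the study's exact values:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

N_PATIENTS = 10_000  # simulated sample size (assumption; the study had n = 200)
N_TESTS = 6          # tests per battery (assumption)
R = 0.78             # between-battery correlation, matching the reported r

def icctf_impaired(z):
    """ICCTF rule applied per row: >= 2 tests at/below -1.5 SD
    and/or >= 1 test at/below -2 SD."""
    return (np.sum(z <= -1.5, axis=1) >= 2) | (np.sum(z <= -2.0, axis=1) >= 1)

def cohens_kappa(a, b):
    """Cohen's kappa for two boolean classification vectors."""
    po = np.mean(a == b)                                      # observed agreement
    pe = np.mean(a) * np.mean(b) + np.mean(~a) * np.mean(~b)  # chance agreement
    return (po - pe) / (1 - pe)

# Battery B shares a common component with battery A so that each
# pair of corresponding tests correlates at R; tests are mutually independent.
a = rng.standard_normal((N_PATIENTS, N_TESTS))
b = R * a + np.sqrt(1 - R**2) * rng.standard_normal((N_PATIENTS, N_TESTS))

kappa = cohens_kappa(icctf_impaired(a), icctf_impaired(b))
print(f"agreement despite r = {R}: kappa = {kappa:.2f}")
```

Even with a test-level correlation of .78, dichotomizing at a tail cutoff discards most of the shared information, which is why κ lands far below the correlation itself.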
Conclusions: Our results show that binary classification can lead to a situation where two highly similar batteries fail to identify the same individuals as impaired. Additional simulations suggest that within-person agreement between assessment methods using binary classification is inherently low. Modern statistical tools may help improve the validity of impairment detection. (PsycInfo Database Record (c) 2023 APA, all rights reserved).