Performance Validity Testing and Accuracy in Assessment Practices
The necessity of including performance validity measures in neuropsychological evaluations has been understood for some time (Chafetz et al., 2015; Greher & Wodushek, 2017). One cannot draw inferences about brain-behavior relationships unless one is confident the examinee performed to the best of their ability on all assessments. Understanding performance validity is important not only in forensic cases but any time maximum performance tests are used (e.g., tests of intelligence, aptitude, achievement, and/or neuropsychological performance). Knowing whether the examinee gave their best effort during assessment is critical to the accuracy of test score interpretation. This need has long been recognized in adults; pediatric validity testing, by contrast, has been strongly encouraged only in the last decade but is now considered a standard of care in pediatric assessment (Guilmette et al., 2020). And it is clear that the field of neuropsychology is not alone in needing this safeguard.
We sat down with Dr. Cecil Reynolds, co-author of MHS’ Pediatric Performance Validity Test Suite™ (PdPVTS™), to discuss the importance of creating a standard around performance validity testing within clinical settings.
The interview below has been edited for length and clarity.
What was the motivation that began your work on the Pediatric Performance Validity Test Suite?
The vast majority of practitioners who use performance validity tests (PVTs) have been using PVTs designed for adults with children and youth. There was but one PVT on the market designed for children, and it was a single task. We know that best practice requires the use of multiple PVTs over the course of a comprehensive assessment. So, we set out to develop a battery of five co-normed PVTs explicitly designed for use with children and youth, with age-appropriate stimuli and age-corrected cutoffs. This also enables a major advance, one not available even with PVTs for adults that have been around far longer: we provide base rates of pass/fail for every combination of two to five of the tests that make up the PdPVTS, and we provide these base rates for a non-clinical population as well as nine different clinical populations, something no one else has been able to provide for any age. Base rates of pass/fail across multiple PVTs are critical to correct interpretation of the results.
The PdPVTS is the only assessment on the market that provides base rates within the testing suite. How does this impact the understanding of the assessment itself?
If you’re going to use a PVT, you need to know the base rates of failure for the combinations of tests that you’re giving. The PdPVTS is the only set of tests for children or adults that provides base rates across more than one test. Once you complete the PdPVTS, the base rates of pass/fail are part of the report generated at the end of the assessment. As we know, other tests on the market not made for youth are being used as PVTs for youth. A practitioner may use one of these tests either independently or in combination with another PVT assessment. What happens when they pass one of these tests but fail the other? What is the base rate of that pattern within the population? No one knows. We don’t have that information.
Now suppose you give any two tests within the PdPVTS—let’s say the youth passes one and fails one. Whichever one is passed and whichever one is failed, you as a practitioner are provided the base rates of that pattern in the non-clinical population as well as nine separate clinical populations. Over 600 youth diagnosed with ADHD (attention deficit hyperactivity disorder), anxiety, depression, disruptive disorders (including Oppositional Defiant Disorder, Conduct Disorder, and Intermittent Explosive Disorder), Intellectual Disability, language disorders, learning disabilities, and traumatic brain injuries (mild, moderate, or severe) are part of that clinical population. If you don’t know the base rate of the combination of tests you’re giving, you can’t interpret your data correctly.
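A quick back-of-the-envelope count shows why pre-computed base rates matter here. With five co-normed tests and a practitioner free to administer any two to five of them, the number of test combinations, and of distinct pass/fail patterns needing their own base rate, grows quickly. The sketch below is purely illustrative arithmetic (the placeholder count of five tests comes from the interview; no actual PdPVTS base rates are shown):

```python
from math import comb

N_TESTS = 5  # the PdPVTS comprises five co-normed PVTs

# Distinct combinations of two to five tests a practitioner might give.
n_combinations = sum(comb(N_TESTS, k) for k in range(2, 6))

# Each combination of k tests yields 2**k possible pass/fail patterns,
# and correct interpretation requires a base rate for each pattern.
n_patterns = sum(comb(N_TESTS, k) * 2**k for k in range(2, 6))

print(n_combinations)  # 26 possible test combinations
print(n_patterns)      # 232 distinct pass/fail patterns overall
```

Twenty-six combinations and over two hundred pass/fail patterns are far more than any clinician could intuit base rates for, which is the point Dr. Reynolds makes about needing them built into the reporting.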
Are PVTs just for neuropsychologists? If not, why is it important that all psychologists within clinical settings incorporate these tests within their assessments?
Statistically speaking, the rate of non-credible effort within the clinical population is relatively high, around 20-25%. In a non-clinical population, the rate of non-credible effort is quite a bit lower. Most of the time, failing to give best effort during assessment is not purposeful; it can stem from many factors, such as boredom or tiredness. If you’re going to interpret results from whatever you may be testing for, you need to make sure you’re receiving best effort the whole way through. If you do not get maximum effort from the examinee, you cannot draw inferences about brain-behavior relationships. How do you measure this?
Most examining clinicians think it is obvious to them when children are not giving their best effort or are engaged in malingering or other forms of dissimulation. The research on detecting poor effort and deception indicates this is not true. In fact, we are only slightly better than chance at detecting such issues clinically, in the absence of objective, quantitative data. Our professional organizations have recognized this, and current consensus documents clearly state that using objective PVTs is best practice.
Part of the best practice recommendations for PVTs is that they should be given throughout the course of assessment. How does this practice improve the overall assessment of the examinee?
With children, we know that the level of effort will change during the course of their exam. As practitioners, we think we are really good at what we do—and as stated above, we often think we know exactly the moment when effort starts to wane or when a child starts to engage in dissimulation. Again, the research shows we are wrong about this. We’re not as good as we think we are at detecting this. I’ve examined thousands of children throughout my career since I first began in 1975. I thought, like many others, “I’m really good at this. I’ll recognize, and I’ll know when a child is not giving me their best effort, and I’ll bring them back to best effort.” We’ve all done a lot of exams, and that’s why we think we can trust our gut from a clinical standpoint. However, we know that the research proves this wrong. We must follow the research and engage in evidence-based practice and admit that we’re not as good at detecting this as we think we are. It’s a hard thing to accept. It feels like you’re questioning your clinical judgment.
Once you accept this, using PVTs throughout assessment, particularly with children, will make you a better examiner. It will make your exam results better, more interpretable, and more relatable to whatever diagnoses and intervention strategies you’re aiming for.
Learn more about MHS’ Pediatric Performance Validity Test Suite.
Listen to Dr. Reynolds’ podcast episode on PVTs with Dr. Jeremy Sharp.