Within the world of clinical research, clinical outcome assessments (COAs) measure patients’ health status and define endpoints that can be interpreted to indicate the benefits of a medical intervention on how patients feel, function, or survive.1
COAs can take many forms—clinician-rated, patient-rated, interview-based, functional performance, observer-rated, and still others—and individual COAs can be used in different ways in different settings. This article focuses on clinician-rated outcome assessments (ClinROs), some of which may be novel; others may be well established and familiar to individuals practicing in clinics and research settings alike. When the right ClinRO is selected and patient data are collected in a consistent manner over time, the assessment can enable key insights into a patient’s state of mind, behavior, cognition, visual acuity, limb movement, gait, and much more, providing clinical context for measures reflecting pharmacological attributes of a novel therapy.2
Yet variations in the types of ClinROs, as well as in rater experience, knowledge, and background, can create multiple challenges in the context of a clinical study.
The importance of establishing and maintaining inter-rater reliability cannot be overstated. Variability arising from differences in rater experience, sophistication, and training can complicate a study and potentially undermine the credibility of its findings. While regulators sometimes request evidence of inter-rater reliability before a clinical trial commences, in many cases they do not; even then, questions can arise later if regulators become concerned about the consistency of the evidence presented. For this reason, strategies to proactively promote good inter-rater reliability should be established at the outset of a study.