In general, rising industry costs are prompting companies to reflect on current practices and determine where effort can be reduced in order to save money. Many such approaches have been investigated and implemented successfully without compromising data integrity. Risk-based monitoring is one of them, and it has drastically reduced the cost of clinical monitoring.
From the various regulatory releases over the last five years, it is clear that regulatory authorities support risk-based approaches to clinical trial management. This extends to data management, and to a different approach to data cleaning than the traditional blanket approach (clean as much as possible, as thoroughly as possible). Through their recent releases, the authorities are telling us that they do not expect all data to be clean; they simply expect “the absence of errors that matter”. This confirms the notion that not all data should be cleaned equally.
To investigate this, a group of companies (who presented at this year’s SCDM2020 European conference) conducted a meta-analysis of 20 phase III trials to determine whether the effort going into data cleaning activities provides an appropriate Return On Investment (ROI). They calculated the average cost of a query to be anywhere between $28 and $225, depending on the type of check, and then determined the actual query rates and the results of those queries (data changes). The study, which included just under 50 000 000 data points and roughly 2 000 000 queries, showed an astounding result: only 1.7% of data points were actually changed as a result of the queries raised. The cost of these very few updates was anywhere between $2 700 000 and $22 000 000. The conclusion is simple: the ROI does not justify the effort being invested.
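The arithmetic behind this conclusion can be sketched with the headline figures quoted above. The study’s exact denominators are not fully specified here, so this is an illustrative calculation that assumes the 1.7% change rate applies to the roughly 2 000 000 queries raised; the dollar range reflects the $28–$225 per-query cost by check type.

```python
# Illustrative ROI arithmetic using the headline figures quoted above.
# Assumption (not stated explicitly in the study summary): the 1.7%
# change rate applies to the ~2,000,000 queries raised.

N_QUERIES = 2_000_000
COST_PER_QUERY = (28, 225)   # low/high estimate in USD, by check type
CHANGE_RATE = 0.017          # fraction of queries that led to a data change

changes = int(N_QUERIES * CHANGE_RATE)
total_cost = tuple(c * N_QUERIES for c in COST_PER_QUERY)
cost_per_change = tuple(t / changes for t in total_cost)

print(f"queries raised:         {N_QUERIES:,}")
print(f"resulting data changes: {changes:,}")
print(f"total query cost:       ${total_cost[0]:,} - ${total_cost[1]:,}")
print(f"cost per data change:   ${cost_per_change[0]:,.0f} - ${cost_per_change[1]:,.0f}")
```

However the denominators are sliced, the pattern is the same: the vast majority of query spend produces no change to the data, which is the ROI gap the presenters highlighted.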
This has prompted many companies to rethink their traditional data cleaning practices by investigating focused data management reviews, or risk-based data management. The thought of this terrifies most data managers, who immediately start worrying about the things that could get lost or overlooked. However, it does not have to be as scary as it seems. In fact, it can be done in a safe and controlled manner, still resulting in high-quality data.
This principle rests on two facts: first, that not all data is critically important to the analysis, and second, that based on a few variables (like CRF design) some fields do not need to be thoroughly reviewed because the risk of “dirty” data is low. If you think about it, this principle is not actually new to us. We, as data managers, have been applying it for many years: we have long been reviewing CRF designs, choosing edit checks and focusing manual review efforts on certain areas based on the primary and secondary endpoint analyses.
Risk-based data management is not “looking past” data points, or ignoring certain data points or data sets; it is planning a review focused on certain areas and performing high-level reviews on the rest. It is, however, important for a company to have a proper strategy before implementing such an approach. Companies that are not certain where or how exactly to start should consider partnering with experts, if only to get started.
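The planning idea described above can be made concrete with a small, hypothetical sketch: every field still gets some level of review, but only fields that are both critical to the endpoint analysis and at elevated risk from the CRF design receive a targeted field-level review. The field names, risk labels, and tier rules below are illustrative assumptions, not part of any published methodology.

```python
# Hypothetical sketch of a risk-based review plan: every field is
# reviewed at some level, but only high-risk, analysis-critical fields
# get a targeted (field-level) review; the rest get a lighter pass.
# Field names and tier rules are illustrative, not from the article.

CRF_FIELDS = [
    # (field, critical to endpoint analysis?, error risk from CRF design)
    ("primary_endpoint_score", True,  "high"),    # free-text entry
    ("adverse_event_term",     True,  "medium"),
    ("visit_date",             False, "low"),     # auto-validated widget
    ("concomitant_med_name",   False, "high"),
]

def review_level(critical: bool, error_risk: str) -> str:
    """Assign a review tier; note that no field is ignored outright."""
    if critical and error_risk in ("medium", "high"):
        return "targeted review"
    if critical or error_risk == "high":
        return "standard edit checks"
    return "high-level review"

plan = {field: review_level(crit, risk) for field, crit, risk in CRF_FIELDS}
for field, level in plan.items():
    print(f"{field:24s} -> {level}")
```

The point of the sketch is the shape of the output, not the specific rules: effort concentrates where a query is most likely to produce a data change that matters to the analysis.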