From The Editor | August 3, 2015

Source Data Verification: A Quality Control Measure in Clinical Trials

By Ed Miseta, Chief Editor, Clinical Leader

In an industry that seems to be focused on cutting the cost of clinical trials, it’s no surprise that reducing the amount of source data verification (SDV) performed in studies—the process of cross-referencing data recorded in a case-report form to the original source information—is an integral part of risk-based monitoring (RBM) strategies. Eliminating source data checks that do not add value to the study is certainly a breakthrough for trials where we have historically performed 100 percent data verification. After all, why verify data that we already know to be correct and that poses little risk to the study?

“Data has shown us that 97 percent of data entered into any EDC system is accurate,” says Kyle Given, Principal, Consulting Services at Medidata Solutions. “The data does not change. Since a very high percentage of that data is already in good shape, what we need to do is focus on how we can perform clinical monitoring in a much smarter way.”

In 2013, TransCelerate put out its first position paper on the subject, which provided the group’s initial guidance on SDV. The paper also solicited feedback from readers, in the hope that additional proof points would support the notion that SDV has little impact on overall clinical data quality.

The SDV working group put together an RFP seeking a company that could perform additional analyses using a larger data set than what existed at any of the member companies. Medidata was selected to perform the tests because of its access to robust operational data—and one of the largest EDC data sets in the industry—aggregated in its cloud platform.

TransCelerate approached Medidata with some specific ideas, and Given notes the Medidata team helped reshape the analysis parameters based on what might be most interesting to the study. A number of hypotheses were tested, and the results were analyzed to see what kind of conclusions could be derived from them.

Is SDV A Problem?

“We looked at two specific things,” states Given. “First, we looked at the specific impact of SDV on the total number of changes made to data entered into the case-report form. We already knew that only around three percent of all data was changed. However, we knew that not all of those changes were directly related to SDV. There were other data cleaning activities, such as data management and medical review of data, which can generate queries resulting in data changes.”

The additional analysis drilled into that three percent figure to determine how many of the changes were directly attributable to SDV. Not surprisingly, the number was very small: the study determined that only about 1.1 percent of all data changes were related to SDV.
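
To make the attribution concrete, here is a minimal sketch, in Python, of how data changes might be bucketed by the review activity that triggered them. The record layout and source labels are invented for illustration; this is not Medidata’s actual schema or analysis.

    from collections import Counter

    # Each record: (field_id, query_source) for a data point whose value
    # changed after initial entry into the EDC. Labels are hypothetical.
    changes = [
        ("vitals.sbp", "sdv"),
        ("labs.alt", "data_management"),
        ("ae.onset_date", "medical_review"),
        ("demog.dob", "data_management"),
        ("vitals.hr", "site_correction"),
    ]

    # Count how many changes each review activity generated, then report
    # each activity's share of all changes.
    by_source = Counter(source for _, source in changes)
    for source, n in by_source.most_common():
        print(f"{source}: {n}/{len(changes)} changes ({n / len(changes):.0%})")

Run at scale over a full EDC data set, the SDV share of such a breakdown is the 1.1 percent figure the study reported.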

Once the team gathered information on data that was changed after it went into the EDC system, it set about the more interesting part of the analysis: determining how much data had never been entered into the EDC system at all, and therefore could not be caught by SDV (if the data isn’t there, it can’t be cross-checked and verified). The focus of this part of the study was adverse events detected through a more holistic review process known as source data review (SDR), which checks the quality of the source data, reviews protocol compliance, and ensures source documentation and critical processes are adequate.

“Adverse events, as they pertain to clinical data sets, are not as predictable,” notes Given. “There are multiple sources of data, such as lab science and clinical science systems, which can create an adverse event. We were hoping to determine the percentage of adverse events entered into an EDC system after SDR occurred. Basically, the site should have entered it, they didn’t enter it, SDR occurred, and then the adverse event data was entered after SDR occurred. We wanted to determine whether there is a higher percentage of data that’s missing but gets entered through SDR.”

The team did find a higher percentage for adverse events: seven to eight percent of new adverse events were entered within one day of SDR, rising to around 11 percent within seven days.
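
As a rough illustration of this second analysis, the sketch below (with hypothetical subjects, dates, and windows) flags adverse events whose EDC entry followed an SDR visit and buckets them by entry lag, mirroring the one-day and seven-day windows above. It is a toy reconstruction of the idea, not the actual study code.

    from datetime import date

    # Each AE record: (subject_id, date the AE was entered into the EDC).
    ae_entries = [
        ("S001", date(2015, 3, 2)),
        ("S001", date(2015, 3, 9)),
        ("S002", date(2015, 3, 15)),
    ]
    # Most recent SDR visit covering each subject (hypothetical dates).
    sdr_visits = {"S001": date(2015, 3, 1), "S002": date(2015, 2, 1)}

    # Count AEs entered within 1 day and within 7 days of the SDR visit.
    entered_within = {1: 0, 7: 0}
    for subject, entered_on in ae_entries:
        lag_days = (entered_on - sdr_visits[subject]).days
        for window in entered_within:
            if 0 <= lag_days <= window:
                entered_within[window] += 1

    for window, n in entered_within.items():
        print(f"AEs entered within {window} day(s) of SDR: {n / len(ae_entries):.0%}")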

According to Given, the finding definitely made the team take a closer look at that safety data. Adverse event data is a bit more difficult to manage, and therefore there may be more value associated with SDR than with the basic data transcription checks performed by SDV.

Not All Source Data Checks Are Equal

The forthcoming guidance from TransCelerate and Medidata should help shape industry practice around SDV and SDR. Given notes there are different types of data checks, and companies should acknowledge that they are not equal.

“When you think about strategies on how to look at incorporating SDV and SDR in the future, there’s probably more of a bias towards looking for adverse event detection versus cleaning data that is already clean,” he says. “We need to reinforce the idea of using effective risk-based monitoring strategies. That means looking for risk indicators, since that is the more appropriate way of targeting your clinical monitoring activity to the places that need it. This will be far more effective than verifying 100 percent of everything.”
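
To illustrate what targeting monitoring activity by risk indicator might look like in practice, here is a deliberately simplified sketch; the indicators, weights, and site data are invented for illustration, not a prescribed methodology.

    # Score sites on a few simple risk indicators and direct monitoring
    # effort to the highest scorers, rather than verifying 100 percent
    # of data everywhere. All names and numbers here are hypothetical.
    sites = {
        "Site A": {"query_rate": 0.02, "ae_underreporting": 0.10, "overdue_pages": 3},
        "Site B": {"query_rate": 0.08, "ae_underreporting": 0.40, "overdue_pages": 12},
        "Site C": {"query_rate": 0.03, "ae_underreporting": 0.05, "overdue_pages": 1},
    }
    weights = {"query_rate": 5.0, "ae_underreporting": 3.0, "overdue_pages": 0.1}

    def risk_score(indicators):
        # Weighted sum of the site's indicator values.
        return sum(weights[name] * value for name, value in indicators.items())

    # List sites from riskiest to least risky to prioritize monitoring.
    for site, indicators in sorted(sites.items(), key=lambda kv: -risk_score(kv[1])):
        print(f"{site}: risk score {risk_score(indicators):.2f}")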

While Given was not surprised by any of the results, he believes they provide proof points, or validation, for the notion that 100 percent SDV does not have substantial value in and of itself. All clinical monitoring should be packaged with risk management tools to make it targeted and much more impactful in the clinical trial process. This gives companies a more holistic RBM strategy that allows them to see their study risks and know where to target their efforts.

Going forward, Medidata will continue to analyze the data. According to Given, tests like this often raise as many new questions as they answer. “One of the areas of further exploration will be to identify if there are any patterns within SDR that might help clinical teams further hone their targeted monitoring activities. For instance, are self-reported adverse events from the subjects themselves easier for sites to capture accurately versus diagnostic events?”

RBM: Not As Difficult As It Seems

For sponsors, the results should spur some internal discussions, especially around the topic of RBM. Given notes many companies think implementing an RBM program may be more difficult than it actually is. He hopes the results of this analysis might help to change that.

“Clearly, the more data and analysis we have, the more guidance we can provide to companies attempting to undertake a new RBM strategy. Information like this, at the very least, will give them a lot more comfort in getting this effort off the ground. The guidance in this paper, which we have provided in conjunction with TransCelerate, will be a good starting point for many companies. Sponsors finally realize it is time to act, and are talking about their successes in public forums. RBM is definitely going to become the foundational way of performing clinical trials in the future.”