Conducting clinical trials is a high-stakes game, and disturbingly, the odds of selecting nonperforming or underperforming investigative sites are worse than those at the gambling table. Data analytics, combined with workflows and visualization tools, can go a long way toward redefining the site selection process, removing it from the realm of a frustrating gamble.
Underperforming investigative sites have long been a puzzling issue for clinical trial stakeholders. Sponsors' overreliance on paper-based or simple spreadsheet methods results in a lack of real-time transparency and an inability to mitigate risks that could stall clinical trial conduct.
Key to reining in the budget overruns and delays that often fuel the growing rescue-study industry is the selection of high-performing sites ideally suited to running the study under investigation. The selection process, however, is often manual, cumbersome, and error-prone. So, what criteria can be used to optimize site selection?
Sites are dynamic environments, and new technologies and industry initiatives are an important complement to the critical work of building and maintaining relationships. But are they actually improving site and sponsor-CRO collaboration and creating a competitive edge through better clinical trial performance?
A panel of site-centric industry organizations will present and discuss the findings from a new study aimed at addressing these questions.
With a proliferation of cloud-based technologies already improving clinical trial performance, it is surprising that Excel spreadsheets remain a predominant force. Research dating back to the late 1990s and early 2000s documents that Excel was not designed to collect and analyze clinical trial data and that it lacks project management capabilities, yet its extensive use persists.