White Paper

Deep Knowledge Of Clinical Study Startup Points Data In Right Direction

Purpose-built SSU solutions track clinical trial operations using much needed standardized performance metrics

"Data, data everywhere, and not a byte to help us think" is a playful yet truthful adaptation of a line from Samuel Taylor Coleridge’s famous poem, The Rime of the Ancient Mariner.1 It reflects the current state of metrics: massive volumes of data are generated during clinical trials, yet they are woefully inadequate at helping stakeholders spot the risk factors and bottlenecks that can disrupt cycle times and budgets. This is due to the inefficient ways in which operational data are captured and analyzed, often relying on outdated methods such as paper, shared file drives, and Excel spreadsheets, which lack much-needed project- and risk-management functionality.

"massive volumes of data are generated during clinical trials, but they are woefully inadequate at helping stakeholders spot risk factors and bottlenecks that can disrupt cycle times and budgets"

These shortcomings are particularly acute during study startup (SSU), a phase that is widely regarded as complicated, slow, and in need of better operational tools.2 It is also pivotal to successful clinical trial operations. Specifically, the Tufts Center for the Study of Drug Development (CSDD) has reported that SSU is a major cause of long cycle times, which have stagnated for two decades,3 and that moving from pre-visit to site initiation takes an average of eight months.4 Anxious to improve SSU operations, stakeholders are embracing solutions with automated workflows that guide team members through the many steps involved and provide alerts for tasks needing attention. These tools are purpose-built, reflect a deep understanding of SSU, and enable users to comply with regulatory requirements at the country and site levels. They also allow users to develop performance metrics, which are the basis for predictive analytics and are critical to building a culture of continuous improvement.

This white paper describes how industry-proven tools for SSU are key to better operations in the increasingly global realm of clinical trials. The tools offer focused, end-to-end coverage, starting with site identification, moving to feasibility assessment, and ending with site activation. Case studies are presented describing how these tools had a major impact on SSU cycle time, reducing it by more than 30%. The white paper also makes the case for standardized performance metrics that identify bottlenecks at each step of SSU. For example, knowing that a series of tasks now takes ten weeks instead of twelve may be encouraging, but not knowing how long each individual element takes allows bottlenecks to persist. Moving toward standardized performance metrics via purpose-built solutions lets stakeholders measure what is happening in real time, so they can identify risk proactively and take corrective action. This is a big step forward for the industry.

Focused Offerings and Standardized Metrics
SSU is a complex business, composed of country selection, pre-study visits, site selection and initiation, regulatory document submission, budget and contract negotiations, patient recruitment initiatives, and enrolling the first patient.5 Each of these steps involves circulating a litany of documents among stakeholders within the clinical team, as well as with institutional review boards (IRBs) or ethics committees and regulatory agencies. Because of the volume of documents involved and the number of people engaged in communication, electronic systems are standard practice for capturing and handling the flow of information related to clinical trial operations. These include the clinical trial management system (CTMS) and the electronic trial master file (eTMF), which stores documents required for regulatory submission and, ultimately, for archiving. But neither of these eClinical tools was designed for SSU, and therefore neither can generate the metrics needed to identify risk associated with that portion of the trial.

Fortunately, there are cloud-based solutions focused specifically on SSU. goBalto’s Select and Activate are workflow-driven tools, with Select used for intelligent site profiling and Activate for expediting document completion and management. Analyze is a data analytics platform that allows the clinical team to aggregate data and customize graphs, dashboards, and other visualizations of study status. Together, these tools optimize the SSU steps and generate data that can be used for metrics providing real-time insight into study status. By drilling down into the specifics of each workflow, teams can get real-time answers to questions such as: What percentage of sites are activated? When was the clinical trial agreement template approved? Which sites and which countries are behind in receiving approval from ethics committees?

Answering these questions is useful for any study, but the real value lies in comparing those results to an internal benchmark built from numerous studies. Datasets from past performance are rich in detail, and through functionality in Select and Activate, disparate datasets can be consolidated and used to develop standardized metrics that evaluate the status of each step along the SSU continuum, making it possible to spot bottlenecks. Importantly, these metrics can be reused in future studies.
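
To make the benchmarking idea concrete, the brief Python sketch below shows one way consolidated step durations from past studies could serve as an internal benchmark against which a current study’s progress is flagged. The data layout, field names, and 20% tolerance are illustrative assumptions, not goBalto’s actual implementation.

from statistics import median

# Hypothetical records of past SSU step durations (in days), consolidated
# from prior studies; the fields shown are illustrative, not goBalto's schema.
historical = [
    {"step": "ethics_approval", "days": 62},
    {"step": "ethics_approval", "days": 75},
    {"step": "contract_execution", "days": 91},
    {"step": "contract_execution", "days": 104},
]

def benchmark(records, step):
    """Median historical duration, in days, for a given SSU step."""
    durations = [r["days"] for r in records if r["step"] == step]
    return median(durations) if durations else None

def flag_if_behind(step, elapsed_days, records, tolerance=1.2):
    """Flag a current-study step that exceeds its benchmark by more than 20%."""
    baseline = benchmark(records, step)
    if baseline is None:
        return "no benchmark available"
    return "behind benchmark" if elapsed_days > baseline * tolerance else "on track"

print(flag_if_behind("contract_execution", 120, historical))  # behind benchmark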

"Activate provides realtime information and uses standardized performance metrics to create its reports based on real-time analyses of which countries are on-track or are falling behind, and which are performing against metrics for SSU document submission."

Status of Metrics
Currently, metrics tend to be company-specific, as the development of industry standards for performance metrics is still in its early stages. Going forward, addressing this shortfall on a broad scale will be a breakthrough moment for the industry, as companies will be able to determine whether their processes conform to industry standards. The Metrics Champion Consortium (MCC), an industry association, is dedicated to the development of standardized performance metrics to improve clinical trials.6 To date, in collaboration with biopharmaceutical and device sponsors, service providers, and sites, MCC has released several dozen metrics to measure clinical trial performance, some of which are SSU-specific (Chart 1).

Complementing MCC’s groundbreaking work, goBalto has identified additional parts of the SSU process that need performance metrics. These include:

  • Country status
  • Compliance monitoring
  • Cycle times
  • Milestones tracking against plan
  • SSU documents and submission status
  • Site status

Older spreadsheet-based tools may compile this information and build reports, but they are outdated and not based on performance metrics that would allow stakeholders to compare the information against standards built from the results of previous studies. In contrast, Activate provides real-time information and uses standardized performance metrics to create its reports, based on real-time analyses of which countries are on track or falling behind and how each is performing against metrics for SSU document submission. Without this information, pinpointing the risk of bottlenecks is a serious challenge, given the level of complexity involved and the global nature of clinical trials.
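
As a purely illustrative sketch of tracking countries against plan, the Python example below classifies each country’s milestone status from planned and actual dates. The country names, milestone labels, and classification rules are assumptions for illustration and do not represent Activate’s actual logic.

from datetime import date

# Hypothetical per-country milestone plan; countries, milestone names, and
# dates are illustrative only.
countries = [
    {"country": "Germany", "milestone": "ethics_submission",
     "planned": date(2016, 9, 1), "actual": date(2016, 8, 28)},
    {"country": "Brazil", "milestone": "ethics_submission",
     "planned": date(2016, 9, 1), "actual": None},  # not yet submitted
]

def status(entry, today=date(2016, 9, 15)):
    """Classify a milestone as on track, pending, falling behind, or completed late."""
    if entry["actual"] is not None:
        return "completed late" if entry["actual"] > entry["planned"] else "on track"
    return "falling behind" if today > entry["planned"] else "pending"

for c in countries:
    print(c["country"], status(c))  # Germany on track / Brazil falling behind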

A standardized approach to metrics is discussed in an article by Rick Piazza, who states that the clinical trial sector is sharply focused on improving productivity while reining in costs.7,8 Achieving this goal requires detailed analytics that can be leveraged to support key organizational goals, namely efficiency, cost reduction, and overall process improvement. But, most importantly, analytics need to be actionable. To make this happen, performance metrics must be data-driven, standardized across studies, indications, and therapeutic areas, and timely. Piazza explains further that status reports derived from standard technologies are informative but rarely provide actionable information.

"Analytics need to be actionable. To make this happen, performance metrics must be data-driven, standardized across studies, indication, and therapeutic areas, and timely."

This is where purpose-built SSU technologies fit in. Performance metrics become actionable when they are built from data coming from multiple sources. To understand why, consider how clinical decisions are made. A single blood chemistry lab value, for instance, would be of little value if it were not presented in the context of other data about a patient or as part of a database of results from numerous patients. Applying this to a notoriously difficult part of SSU, site selection: evaluating a single site’s performance is not useful unless it is compared with that of multiple sites, and then only if the comparison is made using standardized metrics derived from various data sources.

goBalto’s purpose-built solutions—Select, Activate, and Analyze—use data from multiple sources and are designed to allow users to create metrics based on workflows specific to each step in each SSU task. For example, if the pre-visit-to-contract-execution task in Phase II and III studies typically takes 6.2 months, as suggested by CSDD research,3 it is critical to track the sub-steps involved in that task and who is responsible for each. Without this information, only the overall time for the task is known, and it cannot be determined where bottlenecks may be occurring.
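
The following sketch illustrates, under hypothetical assumptions, how timestamped sub-steps of the pre-visit-to-contract-execution task could be converted into durations so that the longest gap surfaces as the likely bottleneck. The stage names and dates are invented for illustration and do not reflect an actual goBalto workflow.

from datetime import date

# Hypothetical timestamped sub-steps of the pre-visit-to-contract-execution
# task for one site; the stage names and dates are invented for illustration.
substeps = [
    ("pre_study_visit_completed", date(2016, 1, 10)),
    ("budget_draft_sent",         date(2016, 1, 24)),
    ("budget_negotiated",         date(2016, 3, 30)),
    ("contract_draft_sent",       date(2016, 4, 6)),
    ("contract_executed",         date(2016, 6, 20)),
]

# Each sub-step's duration is the gap since the previous timestamp; the
# longest gap points to the likely bottleneck.
durations = [
    (name, (completed - substeps[i - 1][1]).days)
    for i, (name, completed) in enumerate(substeps) if i > 0
]
bottleneck = max(durations, key=lambda d: d[1])
print(durations)
print("Likely bottleneck:", bottleneck[0], f"({bottleneck[1]} days)")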

Contract execution provides a good example of a process with many sub-steps (Figure 1), and of why each must be tracked to avoid bottlenecks. For example, All Contracts Executed, normally available in an eTMF as a summary of artifacts, does not provide any metrics on the sub-steps that precede it. Without those metrics, stakeholders would be unaware of any issues until the planned date for All Contracts Executed was reached.

Recent research from KMR Group on global site contract cycle times highlights why tracking these sub-steps is critical.9 The study, which evaluated 20,000 recently executed contracts for Phase II and III trials from leading biopharmaceutical companies, found that overall contract cycle times have doubled, from an industry median of 1.5 months in 2009-2011 to more than 3 months in 2014-2015. Even contracts executed in North America, traditionally a top performer, increased from 1.3 months in 2010-2011 to 2.4 months in 2014-2015. Cycle times in emerging markets were even longer.

Why Purpose-Built?
The purpose-built approach is preferable to efforts by some eClinical players looking to enter the SSU space with repurposed technologies not built for SSU. Those efforts tend to be cloud-based plug-and-play approaches, suggesting that software modules will work easily when first connected, without reconfiguration or modification. Unfortunately, a single platform meant to integrate applications—a hallmark of plug-and-play—is unlikely to work, because SSU is complex, with country-specific requirements and workflows.

There are other factors that should raise concerns about a single platform approach. On the surface, the notion of a “one-stop shop” is attractive. It sounds simple and inviting to have all of the clinical trial functions in one place, but in reality, if all of the functions are supplied by one vendor, competition is limited, and maintaining a high level of customer service for a multitude of products becomes overwhelming.

The applications may also suffer from internal competition: if one application earns a substantially larger user base than the others, it will likely garner more resources to secure its dominant position and satisfy demand, at the expense of less popular applications, which may ultimately deteriorate.

"When a company has SSU as its sole mission, it develops an intimate understanding of that critical function and is able to deliver best-in-class solutions that streamline SSU activities."

So, overall, what happens to the single platform approach if some applications are better supported than others, especially if large, influential customers are placing heavy demands on certain applications? And what happens if enhancements to applications are unable to keep pace with the rapid changes in technology or if a provider has greater depth of knowledge in one application than another?

One way to think about this scenario is to invoke a “jack of all trades, master of none” analogy. When building a house, for example, a contractor hires a wide assortment of talent, from plumbers to electricians to painters to carpenters to roofers. It would be almost impossible for one craftsman to know everything about building the house, which is why the plumber probably wouldn’t be hired to put on the roof, or the electrician to hang drywall. Similarly, a single vendor is unlikely to be able to provide the best solutions for all aspects of the clinical trial process.

Why Industry-Proven?
When a company has SSU as its sole mission, it develops an intimate understanding of that critical function and is able to deliver best-in-class solutions that streamline SSU activities. Moreover, the data from those SSU solutions can flow across the continuum to other best-in-class eClinical solutions, such as CTMS and eTMF, using an application programming interface (API).
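
As a rough sketch of what such an API hand-off could look like, the Python example below prepares a hypothetical SSU milestone record for posting to a placeholder CTMS endpoint. The URL, payload fields, and authorization header are assumptions for illustration and do not represent goBalto’s or any vendor’s actual interface.

import json
import urllib.request

# Minimal sketch of handing off an SSU milestone to a downstream eClinical
# system (e.g., a CTMS) over a REST-style API. The endpoint, fields, and
# authorization header are hypothetical placeholders, not any vendor's
# actual interface.
payload = {
    "study_id": "ABC-123",
    "site_id": "US-0421",
    "milestone": "site_activated",
    "completed_on": "2016-10-03",
}

request = urllib.request.Request(
    url="https://ctms.example.com/api/v1/milestones",   # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},        # placeholder credential
    method="POST",
)

# In practice the request would be sent with urllib.request.urlopen(request);
# here we simply show the prepared hand-off.
print(request.full_url, request.get_method(), request.data.decode("utf-8"))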

This approach is in keeping with regulatory efforts to improve clinical trials. The Food and Drug Administration (FDA) and the European Medicines Agency (EMA) released documents in 2013 encouraging greater acceptance of technology early in the clinical trials process, with an emphasis on risk-based monitoring.10,11 More recently, a major update is expected from the International Conference on Harmonisation’s Good Clinical Practice guideline (ICH-GCP), known as E6(R2).12 This new guideline, designed to update the industry-standard E6(R1) guideline, draws attention to the increasing complexity of clinical trials and to how more efficient quality management and risk assessment are possible with the ongoing evolution of technology, starting with SSU.

A recent study by Novartis illustrates how a purpose-built SSU solution led to a dramatic improvement in cycle time. The company was seeking a better process for sending documents linked to 32 oncology trials. To determine whether goBalto’s Activate could make a difference, the company compared various startup times using this clinical tool versus the company’s legacy systems. The study involved 163 sites, all in the United States.

Eight months post-implementation, the time needed to complete SSU tasks was cut by more than 30%. In particular, receiving essential documents from sites took 18 weeks with Activate vs. 30 weeks using legacy systems, a 40% improvement. Likewise, packages sent for the site initiation visit required 21 weeks with Activate as compared to 33 weeks using legacy methods, a 36% improvement. The sidebar describes a second case study.

"Real-time data that feed into standardized performance metrics will allow stakeholders to conduct better risk analysis by spotting where processes are breaking down and where the various players are lagging."

Define the Steps
SSU remains one of the thorniest aspects of clinical trials, defined by a multitude of steps and processes. As the industry evolves past paper and spreadsheet-based methods for tracking the many steps of SSU toward use of cloud-based purpose-built solutions, critical improvements are being achieved. Real-time data that feed into standardized performance metrics will allow stakeholders to conduct better risk analysis by spotting where processes are breaking down and where the various players are lagging. Using this approach has led to 30%+ reductions in cycle time.

Forward-thinking stakeholders looking to improve SSU are reaching out to providers who are well-versed in the intricacies of SSU, focus their efforts on it exclusively, and have a proven track record. This emphasis allows for the development of best-in-class solutions that not only include extensive country-specific workflows as part of the offering but also integrate with other eClinical solutions. It also allows for ongoing updates to SSU solutions designed to optimize site selection and document completion and management. Overall, this aligns with the regulatory push toward greater use of technology to modernize SSU and the rest of the clinical trial continuum.

  1. Coleridge ST. The Rime of the Ancient Mariner. 1834. Available at: http://uwch-4.humanities.washington.edu/Tautegory/EBOOKS/COLERIDGE/Coleridge-Ancient%20Mariner.pdf. Accessed November 9, 2016.
  2. English RA, Lebovitz Y, Giffin RB. Transforming clinical research in the United States: Challenges and opportunities. Workshop Summary. Institute of Medicine. 2009.
  3. Getz K. Assessing and addressing site identification and activation inefficiencies. Tufts Center for the Study of Drug Development. March 2016.
  4. START Study Tufts CSDD-goBalto, 2012.
  5. Lamberti MJ, Brothers C, Manak D, Getz K. Benchmarking the study initiation process. Therapeutic Innovation & Regulatory Science. 2013;47(1):101-9.
  6. Metrics Champion Consortium. http://metricschampion.org/who-we-are/about-us/. Accessed November 17, 2016.
  7. Piazza R. Dequantify yourself. Are all those system metrics your friend or foe? Contract Pharma. November/December 2013. Available at: http://aws-mdsol-corporate-website-prod.s3.amazonaws.com/Contract-Pharma_201311.pdf. Accessed November 20, 2015.
  8. Sullivan LB. Standardized Metrics for Better Risk Management: The Right Data at the Right Time. Applied Clinical Trials. August 31, 2016. Available at: http://www.appliedclinicaltrialsonline.com/standardized-metrics-better-risk-management-right-data-righttime?pageID=1. Accessed November 11, 2016.
  9. McKay L. Site Contracts From Weeks To Months: Results From KMR Group’s Site Contracts Study. August 24, 2016. Available at: http://www.biospace.com/News/site-contracts-from-weeks-to-months-results-from/430240. Accessed November 28, 2016.
  10. European Medicines Agency. Reflection Paper on risk based quality management in clinical trials. November 2013. Available at: http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/11/WC500155491.pdf. Accessed November 13, 2016.
  11. Guidance for Industry: Oversight of clinical investigations — A risk-based approach to monitoring. Food and Drug Administration. August 2013. Available at: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM269919.pdf. Accessed November 13, 2016.
  12. Integrated addendum to ICH E6(R1): guideline for good clinical practice E6(R2). International Conference on Harmonisation harmonised guideline. June 11, 2015. Available at: http://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E6/E6_R2__Addendum_Step2.pdf. Accessed November 15, 2016.