Establishing robust metrics at the start of a study, and systems to monitor them in an ongoing manner, should be integral to all work that a Clinical Research Organisation (CRO) performs in any sponsor-vendor relationship. Clients expect clinical trial performance metrics to be used to monitor progress, or lack of progress, including deviations from the planned schedule. However, these performance metrics are sometimes not defined clearly at the outset and are often not incorporated into clinical trial contracts. Many performance metrics need to be “balanced” between what is controllable by the CRO and what is controllable by the sponsor, so the metrics that are incorporated into a contract need to be carefully chosen. Ideally, this should be a small, accurate set of performance metrics that meets the customer’s needs and enforces efficient information flow – it being a breach of contract not to provide and review the metrics.
When defining which metrics will be collected, it is critical to consider the reasons and benefits carefully, as collecting too many metrics usually proves disadvantageous. Metrics should enable a team to increase productivity, work smarter and make better decisions. They should also measure the success of the sponsor as well as the vendor, because delays or deviations from the original plan can be caused by both. Metrics should allow the user to manage and improve performance through the setting of realistic and achievable expectations. They can also be used for vendor rating purposes as each study is completed, and such ratings could equally be applied to clinical sites and any freelancers used on the study. Setting and agreeing the performance metrics is an important part of the governance within a Functional Service Provider (FSP) model.
By considering the following, it is possible to avoid many of the pitfalls associated with metric development:
In order for metrics to be useful, they should be clearly defined at the outset. For ease of tracking they should be named and categorised according to the primary focus (time, cost, quality and/or satisfaction). Although almost anything can be measured, when defining metrics it is important to ensure that effort is spent measuring things that can be changed and which make a difference to the overall success of a project.
A metric should be used as an instrument to measure the effectiveness of a process. The words of Rowena Young of the Skoll Foundation should be heeded during metric development and review, “The risk with any metric is that people will come to see it as a description of reality, rather than a tool for a conversation about reality. One metric or another can function well only when managers know why they are measuring and for whom”.¹
Before any measurements begin, it is important to consider whether fewer metrics could be measured to achieve the same endpoint. Careful consideration of the data collected will not only allow for process change, but also the definition of more effective metrics in the future.
When defining metrics, the user should consider whether the indicator measured is ‘leading’ or ‘lagging’. For clinical trials, these terms can be considered to relate to whether the end user will use the metric to identify opportunities in the current trial or to identify opportunities in future trials, respectively.
Consideration should be given to exactly how the metric will be measured. Additional analysis may be performed on a ‘for cause’ basis, after considering whether and why a metric is important, to further define which processes should be examined if the metric falls out of range.
The reporting frequency should be clearly defined; it is important to gather sufficient data to allow accurate and meaningful analysis, but not to collect data too frequently. Even when analysing the minutiae of a process, the ‘big picture’ of the entire process should not be forgotten. The target, or range you are trying to achieve, should also feature in the design of the metric.
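To make these considerations concrete, below is a minimal sketch in Python of how a metric definition could be captured as a structured record, fixing the name, category, indicator, responsibility, reporting frequency and target range at the outset. The `MetricDefinition` class and its field names are hypothetical illustrations, not part of any established system.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    TIME = "time"
    COST = "cost"
    QUALITY = "quality"
    SATISFACTION = "satisfaction"

class Indicator(Enum):
    LEADING = "leading"    # identifies opportunities in the current trial
    LAGGING = "lagging"    # identifies opportunities in future trials

@dataclass
class MetricDefinition:
    """Hypothetical record fixing a metric's definition at study start."""
    name: str
    category: Category
    indicator: Indicator
    owner: str                # e.g. "vendor", "sponsor" or "site"
    reporting_frequency: str  # e.g. "quarterly" or "twice-monthly"
    green_max: float          # highest value still rated Green
    yellow_max: float         # highest value still rated Yellow; above is Red

# Example instance: the query-response cycle-time metric shown later in this article
query_cycle_time = MetricDefinition(
    name="Receipt of query response to database update time",
    category=Category.TIME,
    indicator=Indicator.LEADING,
    owner="vendor",
    reporting_frequency="quarterly",
    green_max=3,   # 2-3 days -> Green
    yellow_max=5,  # 4-5 days -> Yellow; >5 days -> Red
)
```

The table below lists example measures, their categories and indicators, and the party primarily responsible for each.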
| Measure | Category | Indicator | Vendor | Sponsor | Site |
|---|---|---|---|---|---|
| Final approved protocol to final approved Case Report Form (CRF) | Time | LAGGING | X | | |
| Protocol approval to first site activated | Time | LEADING | X | | |
| Final protocol approval to first patient first visit (FPFV; all sites) | Time | LEADING | X | | |
| Final CRF/electronic CRF (eCRF) to database (DB) live | Time | LAGGING | X | | |
| CRFs received to data entry complete (paper) | Time | LEADING | X | | |
| Patient visits complete to eCRF data entered (eCRF) | Time | LEADING | | | X |
| Number of queries per 100 CRF pages | Quality | LAGGING | X | | |
| Receipt of query response to DB update time | Time | LEADING | X | | |
| Visits according to clinical monitoring plan | Quality | LAGGING | X | | |
| Last patient last visit (LPLV) to DB lock | Time | LAGGING | X | | |
| DB lock to final Tables, Figures and Listings (TFLs) | Time | LAGGING | X | | |
| DB lock to final Clinical Study Report (CSR) | Time | LAGGING | X | | |
| Sponsor initiated scope changes | Cost | LAGGING | | X | |
| Clinical Research Organisation (CRO) initiated scope changes | Cost | LAGGING | X | | |
| Sponsor satisfaction - metrics generated per company procedures | Quality | LAGGING | X | | |
| Invoice payment timelines | Cost | LAGGING | | X | |
| Costs incurred relative to study progress (e.g. cash flow, performance) | Cost | LEADING | X | | |
LEADING: End user will use the metric to identify opportunities in the current trial
LAGGING: End user will use the metric to identify opportunities in future trials
The collection and presentation of metrics should be kept simple. Data should be collected at regular intervals and reviewed both individually and collectively. A traffic light approach on a dashboard is a useful way of monitoring performance against expectations and of tracking standards and targets for improvement. Below are examples of a small, accurate set of performance metrics designed to meet customer needs.
Specific Metrics Collected in Clinical Trials: Example 1
| Title | Category | Indicator |
|---|---|---|
| Final protocol approval to first patient first visit (all sites) | Time | LAGGING |

Definition: The total number of calendar days from the date of final approved protocol release to the CRO to the date of first patient first visit for all sites (i.e. all sites have screened a patient). Displayed as a range from lowest to highest.

| Additional analysis on a 'for cause' basis | Reporting frequency | Target |
|---|---|---|
| Analysis of the reasons for delay includes timelines for ethics committee approval, signed site agreements and Competent Authority approval. Monitoring resources, protocol amendments and site contract issues identifies sites that potentially may not be used for future studies | Twice-monthly during the site selection phase | Within 4 weeks across the study (Green); within 4-8 weeks (Yellow); >8 weeks (Red) |
LAGGING: End user will use the metric to identify opportunities in future trials
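To illustrate, here is a minimal sketch of how this metric could be computed from per-site dates. The site names and dates are invented for illustration, and the `rag_status` helper is a hypothetical mapping of the target bands above onto the traffic light colours.

```python
from datetime import date

# Hypothetical dates; real values would come from the trial management system
protocol_approval = date(2023, 1, 9)   # final approved protocol released to CRO
fpfv_by_site = {
    "Site 001": date(2023, 2, 1),
    "Site 002": date(2023, 2, 20),
    "Site 003": date(2023, 4, 3),
}

# Calendar days from final protocol approval to FPFV, per site
days = {site: (fpfv - protocol_approval).days for site, fpfv in fpfv_by_site.items()}

# Displayed as a range from lowest to highest, as the definition requires
print(f"Range: {min(days.values())}-{max(days.values())} days")

def rag_status(weeks: float) -> str:
    """Traffic-light rating against the Example 1 target bands."""
    if weeks <= 4:
        return "Green"
    if weeks <= 8:
        return "Yellow"
    return "Red"

for site, d in sorted(days.items()):
    print(f"{site}: {d} days -> {rag_status(d / 7)}")
```

Reported twice-monthly during the site selection phase, a Red rating would then trigger the ‘for cause’ analysis described in the table above.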
Specific Metrics Collected in Clinical Trials: Example 2
| Title | Category | Indicator |
|---|---|---|
| Receipt of query response to database update time | Time | LEADING |

Definition: The number of days from receipt of a query response to the corresponding database update. Displayed as a range from lowest to highest.

| Additional analysis on a 'for cause' basis | Reporting frequency | Target |
|---|---|---|
| Cycle times in excess of the target indicate less than optimal processes. Work that is not prioritised, or that is passed between too many staff members, can cause delays | Quarterly | 2-3 days (Green); 4-5 days (Yellow); >5 days (Red) |
LEADING: End user will use the metric to identify opportunities in the current trial
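Putting the two examples together, the sketch below shows one way the traffic light dashboard described earlier might be assembled. The observed values are invented, and the metric labels and threshold layout are hypothetical illustrations rather than a prescribed format.

```python
# Hypothetical thresholds taken from the two example targets above:
# (green_max, yellow_max), in each metric's own unit.
THRESHOLDS = {
    "Protocol approval to FPFV (weeks)": (4, 8),
    "Query response to DB update (days)": (3, 5),
}

# Invented current values, for illustration only
observed = {
    "Protocol approval to FPFV (weeks)": 6.0,
    "Query response to DB update (days)": 2.5,
}

def rag(value: float, green_max: float, yellow_max: float) -> str:
    """Map a metric value onto a Green/Yellow/Red status."""
    if value <= green_max:
        return "Green"
    if value <= yellow_max:
        return "Yellow"
    return "Red"

for metric, (green_max, yellow_max) in THRESHOLDS.items():
    value = observed[metric]
    print(f"{metric:40s} {value:6.1f}  {rag(value, green_max, yellow_max)}")
```

In practice the values would be refreshed at each metric’s agreed reporting frequency, with a Red status prompting the ‘for cause’ analysis defined for that metric.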
The collection of clinical trial performance metrics within clinical trial contracts between sponsors and vendors, and the process changes based upon the findings, are often viewed differently by individuals within an organisation depending upon their previous experience. With time and careful consideration invested at the outset, the right collection of performance metrics can identify weaknesses in a process and drive corrections that reduce the time taken and the associated costs, thereby improving quality and team/client satisfaction. Without sufficient consideration at the outset, metrics collection and any subsequent actions can be onerous and deliver, at best, little benefit relative to the time and cost invested. Too often, performance metrics are implemented for the sake of having metrics rather than to gather data that can be used in a beneficial way.
Defining the clinical trial performance metrics within a contractual agreement, e.g. as part of an individual work order or ‘statement of work’, gives great clarity about what progress reporting is required, and the resulting information will be invaluable in helping to keep the project on track. It will also be very useful during a post-project analysis, where lessons learned can be applied to the next project.
Quanticate has extensive experience across a range of outsourcing models, depending on the preferred partnership with a vendor. As a biometric CRO, our services include statistical programming, biostatistical consulting, clinical data management, medical writing and pharmacovigilance. Within all our contracts, and as part of our Coded to Care OATH, performance metrics are clearly defined and agreed with our clients to ensure high quality is delivered and client expectations are met. Please submit a request for information (RFI) if you would like to speak to a member of our team who can support your trial.