1. Approaches used by contributing Networks to identify and develop key metrics/indicators
Pediatric Improvement Collaborative for Clinical Research and Trials (PICTR®)
In 2018, PICTR worked closely with members of the site network to assess current paediatric clinical trial research operations. Sites completed surveys about their operations and met frequently to discuss gaps in their processes. Based on site feedback and input from subject matter experts (SMEs), a preliminary list of measurable goals and metrics was developed for improving the clinical trials process within sites.
To ensure the program’s goals and metrics aligned across the industry, PICTR hosted an SME meeting in Chicago in 2019, bringing together key stakeholders in the conduct of clinical trials, including pharmaceutical companies, federal agencies, academia, research sites, other global paediatric networks, and patients and families. The meeting produced a draft set of six metrics for identifying gaps in the clinical research operations process at site level.
Following the SME meeting, 14 sites participated in a pilot project collecting research operations metrics focused on the institutional review board and contracts processes. The pilot helped validate the program goals and identify additional metrics; ongoing collaboration with key stakeholders then resulted in a final set of 11 core research operations metrics (Appendix A). Quality improvement initiatives for sites were based on these metrics.
conect4children (c4c) - Collaborative Network for European Clinical Trials for Children
c4c collects metrics to measure the quality and performance of its processes and network. Implementing a performance measurement system has a positive organisational effect: it improves results over the long term, drives organisational strategy, supports planning and decision-making, and acts as an effective tool for communicating achieved results to stakeholders [17].
Within c4c, a methodological model was developed to identify a list of metrics and underlying data points to be suggested for adoption by c4c. The model considered metrics-specific issues, including:
- Common practice and use of metrics - collected from examples of national networks and sponsors.
- Lean Management approach in clinical research (e.g. “time” as one of the key performance measures).
- Goal-Question-Metric Paradigm (defining goals behind the processes to be measured and using these to decide precisely what to measure).
- Multi-Criteria Decision Analysis (to aggregate several simple metrics into one meaningful combined metric; see the sketch after this list).
- Target setting.
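To make the last two elements concrete, below is a minimal sketch of how a Multi-Criteria Decision Analysis step might aggregate several simple metrics, each normalised against its target, into one combined score. The metric names, targets, weights, and normalisation rule are illustrative assumptions, not c4c's actual model.

```python
from typing import NamedTuple

class SimpleMetric(NamedTuple):
    name: str
    value: float            # observed value
    target: float           # target value (see "Target setting" above)
    weight: float           # relative importance; weights sum to 1
    lower_is_better: bool   # e.g. "time" metrics improve as they decrease

# Hypothetical metrics for illustration only.
metrics = [
    SimpleMetric("days_to_contract_signature", 95.0, 60.0, 0.4, True),
    SimpleMetric("days_to_first_patient_in", 120.0, 90.0, 0.4, True),
    SimpleMetric("sites_activated_on_time_pct", 70.0, 80.0, 0.2, False),
]

def score(m: SimpleMetric) -> float:
    """Normalise performance against target on a 0..1 scale (1 = on target)."""
    ratio = m.target / m.value if m.lower_is_better else m.value / m.target
    return min(ratio, 1.0)

# Weighted aggregation of simple metrics into one combined metric.
composite = sum(m.weight * score(m) for m in metrics)
print(f"Combined performance score: {composite:.2f}")  # 0.73 for these values
```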
A cross-work-stream collaboration between c4c partners led to the selection of an initial core set of 13 metrics (Appendix B) from a list of 126 proposed metrics. The core set, prioritised by function and business case, is used to measure the performance of the studies that define and test the viability of the network’s processes (so-called proof-of-viability studies), thereby testing the usefulness and actionability of the core set itself. Each metric has a target (value or range) and several defined attributes, including Name and Code, Process (mapped to Network or Clinical Trial processes), Definition, Data Points, and prioritisation for collection. The subset was chosen after a three-month consultation process across all c4c National Hubs and Industry partners of the consortium. The c4c Network Committee approved the metrics after a pilot phase in which they were used with academic proof-of-viability studies. These metrics are critical to the c4c network and trial performance management framework and are continuously reviewed and evaluated.
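The attribute set described above maps naturally onto a structured record. The sketch below shows one possible representation; the field values are invented for illustration, and the actual c4c metric definitions are those in Appendix B.

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    """One possible record shape for a metric and its attributes,
    mirroring the attribute names given in the text."""
    name: str
    code: str
    process: str                  # mapped to a Network or Clinical Trial process
    definition: str
    data_points: list[str]        # underlying data points to be collected
    target: tuple[float, float]   # target as a value or (min, max) range
    collection_priority: int      # prioritisation for collection

# Hypothetical example; not an actual c4c metric definition.
example = MetricDefinition(
    name="Time to site activation",
    code="CT-01",
    process="Clinical Trial / Start-up",
    definition="Calendar days from site selection to site activation",
    data_points=["site_selected_date", "site_activated_date"],
    target=(0.0, 90.0),
    collection_priority=1,
)
```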
MICYRN - Maternal Infant Child and Youth Research Network
In early 2019, MICYRN collaborated with I-ACT to learn about the PICTR initiative and the metrics collected in the United States. Following the discussions with I-ACT, MICYRN engaged with its clinical trials consortium (CTC), comprising scientific and operational representatives from 16 clinical trial units at MICYRN’s member research organisations, to discuss the QI and Performance Metrics initiative. The CTC endorsed the initiative as important to the maternal-child health research community in Canada. The MICYRN leadership team conducted individual teleconferences with CTC sites to identify a list of meaningful indicators across the three domains of quality, efficiency, and timeliness; 11 interviews were completed. Using the interview data, an electronic survey containing the compiled list of 14 indicators was created and disseminated to the 16 consortium sites for completion. Sites were asked to rank each indicator in order of importance to their site (1-14). Eleven of the 16 CTC sites completed the survey. The survey results were analysed, reducing the list to the top 6 indicators identified by the CTC sites. The MICYRN leadership team reviewed the 6 indicators in terms of tangible action items that MICYRN could support and facilitate. The MICYRN Annual General Assembly brought together the CTC to collectively generate common data elements and definitions, inclusion/exclusion criteria, timeframes, methods of data collection, frequency of reporting, and unit of analysis, further reducing the indicators to 5 (Appendix C). The CTC and MICYRN leadership team are currently working on metrics collection and action items for each of the 5 defined indicators.
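The text does not specify how the survey rankings were aggregated; the sketch below shows one plausible analysis, ordering indicators by mean rank across responding sites (lower mean rank = more important) and keeping the top of the list. The indicator names and rankings are invented for illustration.

```python
from statistics import mean

# Hypothetical rankings: each responding site ranks every indicator from
# 1 (most important) to 14 (least important). Only three indicators and
# five sites are shown; the real survey had 14 indicators and 11 responses.
rankings = {
    "time_to_ethics_approval":    [1, 2, 1, 3, 2],
    "time_to_contract_execution": [2, 1, 3, 1, 4],
    "recruitment_vs_target":      [3, 4, 2, 2, 1],
}

# Order indicators by mean rank and keep the highest-priority ones
# (MICYRN kept the top 6 of 14; here, the top 2 of the 3 shown).
ordered = sorted(rankings, key=lambda name: mean(rankings[name]))
for name in ordered[:2]:
    print(f"{name}: mean rank {mean(rankings[name]):.1f}")
```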
In summary, metric selection was driven by site quality improvement in one network (11 metrics), by network performance in one network (13 metrics), and by both in one network (5 metrics).
2. Commonalities and challenges in identifying network metrics
Appendices A-D describe the metrics provided by the participating networks. The metrics developed sit broadly at trial, site, and/or country level, and relate to individual services developed and/or to the network/infrastructure.
Figure 1 summarises the commonalities in the approach to identifying and developing these metrics across the three networks. All networks used a staged approach grounded in existing evidence and in wide internal stakeholder consultation and co-creation, keeping in mind the expected implementation of metrics across sites and organisations.
Appendix D summarises the metrics related to each phase across each contributing network. The network driven by site quality improvement did not have indicators for capacity/capability or identification/feasibility (Table 1). Fifteen metrics for trial start-up and conduct were identified. Metrics related to approvals were found in all three networks. Topics relating to protocol review were only included by the network driven by site quality improvement. Topics relating to the numbers of paediatric interventional clinical trials, and of investigators participating in these, at country level were only included by the network focussing on a country-wide approach. Site identification/feasibility indicators were only included by the network driven solely by network management.
The challenges the three networks reported when reviewing and identifying common metrics were:
Technical differences: c4c, I-ACT and MICYRN use (and source data from organisations that may use) different technical standards and systems, making it difficult to exchange data and information.
Measurement and semantic differences: All three networks use different terminology, different definitions for each data point and metric, and different coding systems, making it difficult to compare data across organisations. Each of the three networks used slightly different reference points and definitions to capture similar metrics. For example, the specific definitions used for site “initiation”, “activation” and “ready for enrolment” timelines differed between networks, affecting how the dates for these steps were captured; the same was noted for recruitment dates related to patient screening, consent, or enrolment (see the sketch after this list). The source of information also varies: c4c collects detailed information from sponsors, whereas I-ACT and MICYRN collect it from sites.
Organizational policies: Parent and partner organisations have different policies and regulations regarding data sharing and use; these need to be addressed to establish common guidelines for data exchange. These differences often arise because of the characteristics of health systems.
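To make the semantic-alignment problem concrete, the sketch below maps network-specific milestone labels onto a single common data element. The labels and anchor definitions are invented for illustration and do not reproduce the networks' actual definitions.

```python
# Hypothetical mapping of network-specific milestone labels onto one
# common data element; every label and anchor below is illustrative.
COMMON_ELEMENT = "site_ready_for_enrolment_date"

network_definitions = {
    "network_A": {"label": "site activation",
                  "anchor": "date the sponsor issues the activation letter"},
    "network_B": {"label": "site initiation",
                  "anchor": "date of the site initiation visit"},
    "network_C": {"label": "ready for enrolment",
                  "anchor": "date all approvals and contracts are in place"},
}

# A shared metric can only be compared once each local anchor is mapped
# to the common element and systematic offsets between anchors are noted.
for network, d in network_definitions.items():
    print(f"{network}: '{d['label']}' ({d['anchor']}) -> {COMMON_ELEMENT}")
```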
3. Working towards a common interoperable set of metrics
By comparing the identified metrics across the networks, we found specific shared metrics measured by all three networks that can form the basis of comparators for the service/support that the networks provide across the trial lifecycle. Shared metrics could measure the effectiveness of interoperable networks. An example of a shared metric is shown in Table 2, illustrating the challenges of aligning terminologies and data points/measures.