Aim and Design
The aim of this work was to systematically develop a National Evaluation Framework to: (a) facilitate national standardization of diabetes programs and evaluation throughout Australia, (b) ensure consistent program quality and reporting by NDSS service providers, and (c) facilitate consistent evaluation processes and reporting of overall program outcomes. A participatory approach was adopted to achieve these aims, with two key areas of focus. This approach is depicted in Figure 1 and described in further detail below.
Participatory Approach
A National Evaluation Team (NET) was established to lead the development and ongoing implementation of the National Evaluation Framework. The team comprised research and evaluation experts employed by Diabetes WA, with funding support from the NDSS. The team met regularly with representatives from Diabetes Australia, who provided project governance and oversight.
An Expert Reference Group (ERG) was established to guide the design, development, and implementation of the Framework. The group included representatives from Diabetes Australia, the Australian Centre for Behavioural Research in Diabetes, the Australian Diabetes Educators Association, Deakin University, Charles Darwin University, a consumer representative, and Agents.
The NET and ERG consulted widely with the NDSS National Services Group, a diverse group of health professionals working with people with diabetes (e.g., Dietitians, Diabetes Educators, and Health Service Managers) representing Agents. The range of health professionals providing diabetes services in Australia is diverse, both organizationally and geographically. Therefore, understanding the viewpoints of a broad range of experts was integral in shaping the design and implementation of the Framework.
Nationally Standardized Outcomes and Indicators, Program Categories, Objectives and Measurement Tools, and Evaluation Processes
Research evidence guided the identification of outcomes, indicators, and objectives for diabetes programs and services that were most likely to lead to favorable outcomes. To support nationally consistent program delivery and evaluation, and to provide the best opportunity to improve the outcomes of people with diabetes, outcomes and indicators were adopted from the national consensus position statement previously developed on behalf of Diabetes Australia (15). Informed by an international evidence base and extensive consultation, the position statement outlines three key goals for diabetes education: (1) optimal adjustment to living with diabetes, (2) optimal health outcomes, and (3) optimal cost effectiveness. Goals 2 and 3 recognize the potential impact of diabetes education on physical health outcomes and cost effectiveness. However, the statement acknowledges that it is challenging to attribute these directly to diabetes education, and thus, attention should primarily focus on Goal 1. Components or ‘indicators’ of diabetes education identified as relating to the goal of optimal adjustment to living with diabetes include: (a) knowledge and understanding, (b) self-management (i.e., diabetes self-care skills and behaviors), (c) self-determination, and (d) psychological adjustment.
The identification of outcomes and indicators for DSMES allowed for the categorization of existing diabetes programs based on the indicators targeted within those programs. This required the identification of programs currently delivered throughout Australia, and the collection of information related to those programs, such as the target population (e.g., people with type 1 diabetes), targeted outcomes, and program duration. Agents reported this information to the NET, and the information was then collated to form a database of programs. The targeted outcomes of each program were compared against the outcomes and indicators specified in the National Evaluation Framework, and programs were then grouped into categories based on the outcomes they targeted.
The categorization of programs enabled the specification of a tiered system of evaluation. Each category of programs was ascribed to an individual tier of evaluation, with specific evaluation outputs allocated to the programs within each tier. More comprehensive evaluation processes were ascribed to resource-intensive, higher-cost programs predicted to demonstrate the greatest behavioral impact. Program objectives were defined, and appropriate measurement tools selected to assess outcomes against those objectives. Potential participant burden was weighed against the benefits of intensive program evaluation. Moreover, pragmatic assessment of program objectives had to be applied across the broad range of programs being delivered.
The selection of measurement instruments was guided by a recent review of psychometric tools for diabetes education services (19). The review investigated instruments to measure commonly targeted outcomes of diabetes education. Instruments were assessed on suitability, validity, reliability, feasibility, and sensitivity. Just three of the 37 tools evaluated met all five criteria. Several other instruments were deemed suitable as they met all but one of the criteria. Based on these findings, potential instruments to measure targeted objectives were selected for consideration by the ERG. Specific instruments were then nominated for inclusion in the National Evaluation Framework.
Quality Standards and Assessment
A set of quality standards was developed to ensure that NDSS programs were of high quality and contained components to elicit key outcomes associated with optimal diabetes self-management. DSMES can have a significant positive effect on health (16). For example, DSMES is effective in reducing blood glucose levels and improving psychosocial outcomes in people with type 2 diabetes (17-19) and type 1 diabetes (17). Such programs have also been assessed as cost effective in empowering people with diabetes to self-manage their condition and mitigate the risk of complications, with an incremental cost-effectiveness ratio of US$5,047 per additional quality-adjusted life year compared to usual care (20). Therefore, it was important that the quality standards supported the delivery of programs that were structured and tailored to support the needs of the individual.
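For readers unfamiliar with the metric, the incremental cost-effectiveness ratio (ICER) cited above follows the standard health-economics definition: the difference in costs between the intervention and usual care, divided by the difference in quality-adjusted life years (QALYs) gained. This is the conventional formula rather than one stated in (20):

```latex
\mathrm{ICER} \;=\; \frac{C_{\mathrm{DSMES}} - C_{\mathrm{usual\ care}}}{\mathrm{QALY}_{\mathrm{DSMES}} - \mathrm{QALY}_{\mathrm{usual\ care}}}
```

Thus, an ICER of US$5,047 indicates that each additional QALY gained through DSMES, relative to usual care, cost an estimated US$5,047.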
The content of the quality standards was informed by existing national and international standards and guidelines (21-24), state and federal government policy for primary care, chronic conditions, and diabetes (25, 26), and the National Safety and Quality Health Service Standards (NSQHS; 27). Collectively, these standards and guidelines recommend that DSMES should be person-centered (i.e., responsive to the unique needs of the individual), and provided at the time of diagnosis and throughout the person’s journey with diabetes. It is also recommended that DSMES should be accessible, culturally appropriate, and provide appropriate information and education for all people with diabetes, their families, and carers. The importance of strategies promoting active learning, goal setting, and supported decision making is also recognized. Programs should have a written curriculum, standardized facilitator training, and a quality development pathway to ensure fidelity and facilitate quality assurance.
Existing topic-specific and comprehensive NDSS DSMES were assessed against the newly developed standards to ensure consistent quality across all states and territories. Behavior change outcomes were not expected from tier 1 basic education programs, so no formal review of these programs was undertaken; however, the standards provide a general guide for the provision of basic education programs through the NDSS.
In September 2016, representatives of each Agent assessed their existing DSMES using a user-friendly self-assessment tool, developed to guide the application of the quality standards in practice. The tool is presented in Appendix A. To ensure internal validity, the evaluations of DSMES conducted by each of the Agents were then independently reviewed by two members of the NET. The reviewers then met to discuss the outcomes of the independent assessments and whether individual programs met the quality standards. Discrepancies between reviewers were identified and resolved through mutual agreement, and feedback, including areas for quality improvement, was provided to the NDSS Agents.