A rubric is an assessment tool that lays out expectations in a matrix [1]. Rubrics provide timely feedback, prepare learners to use detailed feedback, encourage critical thinking, and facilitate communication with others [1]. Because rubrics can be used to assess performance objectively, they are employed to evaluate competencies in outcome-based medical education (OBE) [2, 3], where it is common to develop rubrics for milestones [4–6].
Countries with a long history of OBE, such as the U.S. and Canada, implement milestone rubrics in their complex medical education systems: competencies are defined through their descriptions [4, 7], and speciality-specific rubrics for milestones are developed based on these competencies [8]. Sometimes, generic rubrics for milestones are provided to guide the development of speciality-specific rubrics [9], and multiple assessment tools are mapped to determine the scores for the rubrics [8]. Another way to assess milestones is via entrustable professional activities (EPAs) [6], which assess multiple competencies through authentic clinical practice [6]. EPAs allow physicians to assess competencies in a way that better expresses their views on learners’ performance in clinical practice [6]. Learners are assessed through direct observation, chart reviews, or multisource feedback [10], and these sophisticated assessment systems have been developed over years through the collaboration of multiple stakeholders.
In March 2019, Japan’s Ministry of Health, Labour and Welfare introduced, in the Guidelines for Medical Residency, nine competencies and generic rubrics for assessing medical residents. These competencies are professionalism (including medical ethics), medical knowledge and problem-solving ability, practical skills and patient care, communication skills, practice of team-based healthcare, management of the quality of care and patient safety, medical practice in society, scientific inquiry, and attitudes for life-long and collaborative learning [11, 12]. The generic rubrics indicate the criteria for scoring the competencies on a scale of one to four, with three to four subscales (see Appendix 1). In Japan, residents rotate through multiple departments during their two-year residencies, spending one to three months in each, and the rubrics are applied equally across all institutions and departments. Unlike in the U.S. and Canada, most Japanese speciality training programmes do not use speciality-specific milestones, EPAs connected to milestones, or guidelines mapping assessment tools to required competencies. Since Japan’s adoption of the guidelines, we have been urged to implement the new competency-based system. Many institutions, therefore, began to use the generic rubrics as an assessment tool for supervising doctors in all departments [13].
However, using generic rubrics as an assessment tool in the clinical environment poses potential problems. The items and descriptions are usually abstract and vague to ease applicability across a variety of contexts, but this means they cannot account for the local context of each clinical environment. Generic rubrics are therefore difficult to use as an assessment tool [14], since learners struggle to understand where they can improve from the abstract descriptions provided. The inability to account for local context decreases ecological validity [15], rendering the data acquired through direct use of the generic rubrics practically useless for making summative decisions. Thus, adaptation, not merely adoption, of generic rubrics is the key to conducting meaningful assessments.
Although EPAs, speciality-specific milestones, and guidelines mapping assessment tools and timings are already available in some countries as OBE methods, their connection to each country’s context prevents them from being simply imported. For example, the Competence by Design Competence Continuum [16], which is used to define milestones in Canada, does not fit the Japanese medical education system, and the competencies defined in the U.S. and Canada also differ from ours. Given the mandate to implement OBE, a rapid and simple implementation process is needed.
The Association of American Colleges and Universities provides Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics to help assess essential learning outcomes [17]. These offer a generic evaluation framework that institutions localise to their own contexts [17] by modifying the sentences and elements within them. The process helps faculty and learners understand the stated criteria, and as a result, the localised variants reflect actual learning in that context [17]. However, localising generic rubrics for application in medical environments differs in some respects from localisation in liberal arts education. Adapting generic competency rubrics to the clinical environment requires more than modifying sentences. The mental model clinicians use to assess learners’ performance sometimes does not fit an assessment fragmented into separate competencies [18]; instead, they assess learners with holistic models spanning multiple domains of competency [18]. Comprehensive modifications, including the integration of competencies and the modification of items, are thus required to adapt competencies to each local medical environment. Despite the promise of applying OBE by creating a generic rubric and localising it, there is a lack of research on localising a generic rubric in a medical setting, and on the differences such localisation may produce.
Implementing this innovative process involves some degree of uncertainty [19] and therefore calls for continuous improvement via user feedback. In this case, ‘users’ means supervising doctors, learners, and managers. Among these users, we focused on supervising doctors, since they are the primary users of assessment tools and are directly affected by them. Analysis of their experiences may also help implementation in other locations. In this respect, we proposed the following research questions: ‘How can we locally adapt generic rubrics for OBE?’ and ‘What is the effect of such local adaptation on supervising doctors as assessors?’ A rapid process for localising generic rubrics will be useful for countries intent on applying OBE, as well as for specialities that are new to such implementation.