A whole-of-system approach to evaluating clinical education quality is one aspect of the wider quality assurance program in any health professions education course. One challenge in implementing this approach is the lack of a gold-standard measure of clinical teaching quality. Consequently, clinical educators should be encouraged to engage with multiple sources of feedback to benchmark their current performance [4,6] and to identify opportunities to improve their performance. For that reason, this study explored the intersection between clinical educators’ self-evaluation of clinical teaching quality and self-efficacy, and student perceptions of clinical teaching quality. The current study also extends the work of Stalmeijer and Dolmans on clinical educator self-assessment through the inclusion of self-efficacy, given its relationship to measures of teaching effectiveness.
Self- and student evaluation
In the current study, three distinct groups of clinical educators were identified:
Group 1. Those whose self-evaluations were higher than the student evaluations;
Group 2. Those whose self-evaluations were lower than the student evaluations; and,
Group 3. Those whose self-evaluations were consistent with the student evaluations.
In relation to clinical educators’ own views of their performance, the disconnect between self- and external evaluation is not new [1,3,6], and this trend appears to hold in the current clinical educator cohort. The trivial to small item-level relationships between the student and clinical educator OCTQ evaluations suggest that the educators may interpret the items differently from the students, may hold differing conceptions of clinical teaching quality, or that the OCTQ is not a suitable self-evaluation measure.
Over- and under-estimation of clinical teaching performance in the current work was similar to that reported by Boerebach et al. These authors concluded that there were groups who over- and under-estimated their teaching performance, and that these differences were ameliorated in subsequent evaluation rounds. As these authors highlighted, whether this was due to educators enacting feedback received in prior rounds, or to matching their self-evaluations to previous resident (student) evaluations, is debatable. The results of Boerebach et al. also support the collection of longitudinal teaching quality data, affording the educator an opportunity to enact strategies to improve their teaching in response to previous feedback.
Whilst some of the clinical educators in the present study had received ad-hoc formal or informal feedback on their performance, this did not occur on a consistent basis over the study period. The current study was also the first time these clinical educators had been asked to formally self-evaluate their clinical teaching. Without feedback, it can be challenging for clinical educators to accurately gauge the effectiveness of their clinical teaching performance [1,48], and this appears to be borne out in the findings of the current study. How clinical educators use this self- and student-derived performance information may be mediated by their clinical teaching self-efficacy.
Clinical educators in group 1 (self-evaluation scores higher than student evaluations) demonstrated significantly higher self-efficacy across all three SECT domains. This group self-reported that they were able to successfully manage the varying demands of clinical supervision and education in the student-led clinical learning environment. This result may also reflect a level of self-confidence in their own performance as clinical educators. Less experienced clinical educators, in both a clinical and an educational sense, have been shown to have less confidence in their performance as clinical educators. However, experience as an osteopathy clinical educator did not appear to be related to higher self-efficacy in the current work. Self-efficacy is both context- and task-specific; where it is related to self-confidence, a subset of clinical educators in a clinical teaching context may be more likely to display this confidence through their perceived self-efficacy. However, some students in the current study rated clinical educators with low self-efficacy more highly than those educators rated themselves (group 2), potentially suggesting that this group of clinical educators may be less confident in their performance in this educational context.
Within Bandura’s framework, mastery experiences are likely to drive confidence with a task (through success or failure) and therefore higher self-efficacy. The clinical educators who demonstrated high self-efficacy may have had more perceived successes, and may place increased demands on students beyond the students’ zone of proximal development. This may have resulted in lower student evaluation scores - an assertion that requires further investigation. Self-efficacy across the three SECT domains was also moderately positively associated with overall self-evaluated teaching effectiveness, further supporting the self-confidence assertion described previously. Self-efficacy accounted for between 21% and 42% of the overall variance in self-evaluated global teaching effectiveness, suggesting that self-efficacy plays a role in self-evaluation. The significant variation in self-efficacy in our clinical educator cohort suggests that self-efficacy could be developed in some educators and tempered in others, potentially through professional/faculty development. Thus, the current study provides an argument for the use of clinical teaching self-efficacy evaluation as a basis for developing faculty/professional development programs.
Arah et al. demonstrated that educators who attend training programs are likely to obtain higher student ratings than those who do not; however, participation in formal education programs was not associated with higher ratings in the current study. A generalist postgraduate university teaching qualification may not be the most suitable program for those wanting to undertake more formal education in the clinical education context. This qualification did not appear to be associated with the OCTQ scores completed by either the students or the clinical educators, nor with the SECT. Conversely, the study identified that the one educator who was completing a formal qualification in clinical education demonstrated a self-evaluation score consistent with the students’ ratings, although they were not the highest-rated educator in the current population. Whether this clinical educator was more accurate at self-assessing because of their clinical education qualification would require additional exploration. It is also important to note that, historically, very little clinical education-specific professional development (beyond workplace orientation) has been made available to the educators in the current work.
It is important to be cognizant of the limitations of the current work and of the ability to generalize the results to other osteopathy teaching programs, student-led clinics, and clinical education more broadly. Defining the construct of ‘clinical teaching quality’ has been reported to be challenging, and although a definition is provided in the context of the current work, there is no agreed definition in the literature, and the OCTQ may in fact measure ‘satisfaction’. This may further limit the generalizability of the study. There are a number of limitations associated with the cross-sectional design of the study, including the data being wholly self-report and potential response biases on the part of both students and educators. The student responses were anonymous and therefore less susceptible to social desirability bias; however, clinical educator responses were identifiable, and the high self-efficacy and self-evaluation scores may reflect this bias.
Additional limitations of the work include the study taking place at a single educational institution, the absence of a demographic question exploring participation in non-award faculty development in clinical education, and the assumption that the SECT captures the breadth of self-efficacy for clinical teaching in the university clinical learning environment. The SECT has previously been published only within a PhD thesis, and the current study is the first to publish data on its use in the peer-reviewed literature. Additional testing of the SECT will strengthen the argument for its use as a measure of self-efficacy for clinical teachers.
The low number of ratings received by some clinical educators may also bias the results, in that the available student responses may have clustered towards one end of the scale, providing a skewed picture of performance. That said, a single clinical educator receiving a low number of ratings reflects the reality of a learning environment where the educator-student ratio may be small. Statistically, this appeared to have minimal impact, but larger numbers would be preferable to provide stronger support for the assertions in this work. The difference between self- and student evaluations could also be associated with differing interpretations of the meaning of the OCTQ items; understanding how the different stakeholders interpret individual items provides an interesting avenue for further work. The small number of educators participating in the study limited the use of regression models that may have shed light on the influence of demographic variables, particularly gender, on over- or under-estimation of performance.