Our review identified several elements and challenges of effective physician implicit bias curricula. Below we highlight a spectrum of educational approaches to these curricula, as well as areas for improvement in implementation and outcome assessment.
Educational models
The 4 educational models (Table 2) identified in our analysis each present strengths and weaknesses. Competence Models have been critiqued for presenting implicit bias as a problem to be understood and resolved at the level of the individual,15–17 often by increasing learners’ awareness of their bias. Although evidence does not support the premise that increased awareness alone will allow clinicians to manage their own implicit bias,18,19 self-reflection may trigger cognitive dissonance and increase learner motivation to change. In our review, 20% of interventions identified self-reflection on personal bias as a strength. On the other hand, when Competence Models are used to improve learners’ understanding of cultural groups by focusing on categorical traits rather than individuation, they may have the counterproductive effect of increasing reliance on stereotypes.20–22 It is critical that interventions demonstrate heterogeneity rather than homogeneity within stereotyped groups, a strength recognized in 20% of curricula included in this review.
Skills-Based Models draw upon evidence-based strategies in Social Cognitive Psychology that have been shown to reduce stereotyping outside of healthcare settings.18,23−25 These skills may include “perspective-taking,” which fosters empathy by asking learners to imagine themselves in a patient’s position. Another practice, called individuation, consciously focuses on “specific information about an individual,”18 which may “increase [learners’] capacity to see others as members of a common ingroup” instead of an outgroup.23 Such models may also use mindfulness, which encourages “attention to one’s own thought processes…and how they affect decisions so that one pays attention to the details of clinical care rather than falling back on habits…such as stereotypes.”20
Social Contact Models facilitate direct interaction with diverse patients to foster empathy and enhance learners’ comfort, confidence, and positive emotions in interactions with people they perceive to be outgroup members.23,24,26,27 Evidence suggests that social contact only leads to these positive outcomes in specific conditions, namely, the presence of shared goals and equal status between both parties.20,27 Otherwise, such interactions have the potential to strengthen previously held stereotypes.20,27 To address this risk, novel approaches incorporate standardized patient encounters with debriefing.20 One downside to Social Contact Models is that lessons learned with specific populations may not be easily applied to other contexts, in contrast to Skills-Based Models, which provide tools meant to be universally applicable.
Critical Models seek to profoundly transform the paradigms through which learners think about equity and justice in the medical system. In contrast to other models, which seek to avoid provoking discomfort or defensiveness among learners,16,20 Critical Models intentionally present learners with experiences designed to arouse emotions, destabilize assumptions, and trigger cognitive dissonance. According to transformative learning,19,28,29 an educational theory of adult learning, such exposure to a “disorienting dilemma”30 prompts learners to “engage in a process of self-examination,” leading to paradigm shift and skill acquisition.31
Curriculum implementation
Each educational model encountered challenges in its implementation. Our review revealed barriers related to institutional investment and culture, availability of experienced facilitators, and learner-related factors.
Institutional attitudes can support or impede learning by impacting the time and funding available for implicit bias programs.29 Given the multiple competing demands on medical staff time,32 it is unsurprising that over half of interventions held only a single session, despite concern that “the lessons of a onetime workshop…tend to fade as the volume of work increases, and old practices reassert themselves.”33 When institutional investment is lacking, the burden is carried by a handful of sometimes overtaxed individuals, as one author recalls: “we had momentum. What we didn’t have was money…which was a recipe for a lot of talk and no action…it seemed pretty clear I was going to have to find the funding for it myself.”34 We also observed an uneven distribution of implicit bias programs across specialties, illustrating how departmental subcultures may affect the accessibility of such trainings.
Another barrier identified was the availability of facilitators who were comfortable with and well-versed in the subject matter.20,26,29,33,35,36 Only half of the interventions discussed the training of facilitators. A deficiency of experienced facilitators could detract from curriculum feasibility and quality while compounding variability in learner experiences. Facilitators may be wary of teaching implicit bias because of the sensitivity of the subject matter, inadequate preparation and training, or institutional cultures of silence surrounding bias.29 Some facilitators questioned the evidence behind implicit bias, or felt antagonized when confronted with inequities in their own institutions.34 In response, several articles investigated best practices for facilitator training and identified this as a crucial area for future research.15,29,37
Implicit bias programs were also impacted by factors related to learners. Multiple studies relayed concerns that the voluntary nature of these curricula meant that attendees were “self-selected,”38 such that the program may have been “preaching to the choir.” Interventions can reach a broader array of learners if their institutions value implicit bias training and support learners in making time for it.26 Changing institutional culture may also address another learner-related factor: the defensiveness and feelings of shame, fear,29 or denial39 that may be experienced when confronting one’s own bias. Although such discomfort can be part of the process, as in the case of Critical Models,30,31 too much discomfort can be counterproductive. Educators should provide a supportive environment that intentionally channels learner discomfort into behavioral change.20,31
Environments which support vulnerability and are free of criticism enable learners to experience transformative change.16 One study suggested that “self-reflection, self-awareness, discovering…of often shameful past experiences of bias—could only be accomplished through…a non-judgmental environment in which everyone feels comfortable expressing their views with little fear of mockery or embarrassment.”16 It is also crucial to avoid taxing learners who are underrepresented minorities by treating them as token representatives of their group or expecting them to educate other learners.40 Educators must strive to “create a learning environment that fosters safety, trust, and respect,” “vet speakers, content, and materials carefully,” and “employ andragogical versus pedagogical methods of learning” which treat learners as active agents in their own learning.41 Striking this balance may be especially difficult when power differentials exist between facilitators and trainees, which reinforces the need for robust faculty development.29
Outcomes reported
Program evaluation is an essential component of curriculum development.42,43 Seventeen percent of studies in this review identified the evidence supporting their interventions as a strength. This suggests that educators are seeking data to guide curricula, yet 20% of interventions did not report results. Faculty development initiatives should explicitly encourage educators to create a prospective evaluation plan to measure and disseminate outcomes, so that others may benefit from the lessons learned.
Kirkpatrick’s model for program evaluation (Fig. 2) is a well-known paradigm for categorizing approaches to outcome measurement. The reported outcomes of included publications most commonly mapped to Level 2: Learning, which relates to learners’ knowledge, attitudes, or skills, as well as confidence or commitment to change.13 Noteworthy shortcomings exist within this subset of data. While optimal measurement at Level 2 would involve an external evaluator,42 many studies reported outcomes via self-assessments, raising concerns about their validity.44 As an alternative, several authors measured IAT scores, often in a pre/post-intervention format. The advantages of such an approach include the rigor with which IAT instruments are developed and evidence that the IAT has greater predictive validity than other self-report measures,45 though some publications question the validity and precision of IAT-based data.46–48
Few included studies attempted to measure outcomes at Kirkpatrick Levels 3–4. Level 3 assesses the degree to which learners apply what they learned, and Level 4 assesses targeted outcomes and organizational benefits.13 Although measurement at these higher levels is challenging due to the time, money, and methodologic expertise required,49,50 investing in such outcome evaluation would provide the most direct evidence of interventions’ effects on physician implicit bias and patient care.13