Medical homes are emerging as the standard for quality, comprehensive primary health care services across the country. As of 2018, three quarters of HRSA's CHCs were designated medical home practices, making HRSA one of the country's foremost adopters of PCMH-modeled care. Although PCMH transformation has come to represent a "whole-person approach" to primary care delivery [30, 31], with numerous examples of improved primary care delivery and lower overall health care costs [32–36], it remains an open question whether it can resolve differences in primary care experiences across racial, socioeconomic, and geographic contexts [37]. Amplifying this limitation are data restrictions that prevent risk-adjusted evaluations of medical home initiatives. These constraints similarly complicate parallel efforts to understand the effectiveness of other initiatives to improve access to and quality of primary care [38].
A major strength of this study was the ability to use data linkage techniques to revisit questions raised in previous investigations as to whether PCMH recognition itself is a reliable measure of quality. To date, studies have found inconsistent evidence of improved processes and outcomes of care since HRSA began transitioning its CHCs into medical homes. One plausible explanation for these mixed findings is that it has been assumed, but not measured, that every CHC delivery site has obtained PCMH designation. The Commonwealth Fund Survey, though robust in content, perpetuates this assumption by dichotomizing PCMH status: a CHC is counted as a medical home if it oversees at least one designated site. Such designs increase the potential for misrepresenting the effects of program transformation.
Given that current data limitations often impede efforts to monitor whether CHCs are improving experiences and outcomes of care among some of the country's most vulnerable patient groups, we sought to assess whether site-level designation has a positive impact on quality given the social, clinical, and geographic contexts in which care is provided. We designed this investigation to help overcome well-known limitations in how medical homes have come to be defined and the lack of consistent evidence that these new standards of care improve processes and outcomes. Geocoding and approximate string-matching techniques provide a unique opportunity to identify, with a high degree of certainty, whether site-level designation affects the interpretation of CHC care quality. In the absence of publicly available site-level clinical performance data, this analytical approach makes it easier to monitor the overall impact of PCMH transformation on quality targets.
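To make the linkage strategy concrete, the sketch below illustrates one way geocoding and approximate string matching can be combined to decide whether a delivery site appears in a PCMH recognition file. The field names, similarity cutoff, and distance threshold are illustrative assumptions, not the exact parameters used in this study.

```python
import math
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Approximate string match on normalized site names (0 to 1)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two geocoded points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_match(site: dict, pcmh_record: dict,
             name_cutoff: float = 0.90, max_km: float = 0.5) -> bool:
    """Conservative linkage rule: both the site name and the geocoded
    location must agree before a site is counted as designated."""
    return (
        name_similarity(site["name"], pcmh_record["name"]) >= name_cutoff
        and haversine_km(site["lat"], site["lon"],
                         pcmh_record["lat"], pcmh_record["lon"]) <= max_km
    )
```

Requiring agreement on both the name and the location is what keeps false positives low, at the cost of missing some true matches.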
Because we relied on cross-sectional data to examine the effect of PCMH transformation on clinical quality indicators, our findings are provisional. Even so, because our unique linkages and comparisons allowed us to attribute differences in clinical performance to differences in site-level designation, endpoints that have thus far received little attention, we consider three important implications of our findings that are relevant to future PCMH quality improvement initiatives. First, our findings help validate the long-standing concern of ecological fallacy in HRSA PCMH evaluations. That we consistently found designation to be associated with improved quality when PCMH status was defined as a binary indicator of having at least one recognized delivery site, but not when it was defined as an increasing proportion of designated sites, implies a lack of sensitivity between PCMH designation and quality as it is currently reported.
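The distinction between these two operationalizations can be stated compactly. The following sketch, using hypothetical site-level flags, shows how a CHC can qualify as a medical home under the binary definition while remaining largely untransformed under the proportional one.

```python
def pcmh_measures(designated_flags: list[bool]) -> dict:
    """Contrast the two ways PCMH status can be operationalized
    for a CHC that oversees several delivery sites."""
    n = len(designated_flags)
    return {
        # Binary indicator: "medical home" if any one site is recognized
        "any_site_designated": any(designated_flags),
        # Proportion: share of delivery sites actually recognized
        "share_designated": sum(designated_flags) / n if n else 0.0,
    }

# A CHC with 1 of 10 sites designated counts as a medical home under
# the binary definition but is only 10% transformed proportionally.
print(pcmh_measures([True] + [False] * 9))
```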
Second, for numerous indicators we found that CHCs that had designated at least 90% of their delivery sites consistently outperformed non-designated CHCs. These findings suggest that it is the CHCs that have fully embraced PCMH-modeled care that are driving performance reports. They help confirm that transformation does in fact matter, but its effect is likely far more nuanced than what has previously been reported. These findings are especially important given the inherent challenges of the full overhaul of CHC practice culture that transformation requires [39]. Reinforcing these observations was our finding that designation rates of less than 10% resulted in quality scores identical to those of non-designated CHCs. Taken together, the results of these tests provide important context for why other investigations often fail to consistently observe differences in care between designated and non-designated centers.
Third, our findings offer initial support for additional risk-adjustment criteria to identify the intended effect of practice transformation on improvements in care quality. Complementing our finding of no consistent relationship between PCMH designation and clinical performance was our observation that designated CHCs typically exceeded a clinical target by only a marginal amount. If these rates indicate that delivery sites are already operating at near-optimum performance (i.e., a clinic's adherence rate cannot get any higher), then it may become necessary to add risk-adjustment criteria to annual performance reports, much as the Centers for Medicare and Medicaid Services does for its Hospital Readmissions Reduction Program (HRRP). At the same time, the lack of variation in outcomes may also reflect the fact that CHCs have historically emphasized cultural competence, teamwork, and patient-centrism, all of which may confound evaluations of PCMH transformation [40]. The varying effect and differences in magnitude of PCMH designation highlight a need to continue examining its effect more closely, which may require additional site- or patient-level data as well as the ability to risk-adjust performance targets. However, in the absence of publicly available site-level clinical performance data, geocoding and data linkage algorithms allow for a more nuanced approach to evaluating the impact of PCMH designation on care quality.
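As a point of reference, HRRP compares observed with expected outcomes before assessing hospital performance. The sketch below illustrates that observed-to-expected logic in a deliberately simplified form; the figures are hypothetical, and CMS derives its expected counts from hierarchical regression models rather than the simple patient-mix expectation assumed here.

```python
def risk_adjusted_rate(observed: float, expected: float,
                       reference_rate: float) -> float:
    """Indirectly standardized rate: (observed / expected) * reference.
    The same observed-to-expected ratio underlies HRRP's excess
    readmission ratio, though CMS estimates `expected` with
    hierarchical regression rather than the crude expectation used here."""
    return (observed / expected) * reference_rate

# Hypothetical CHC: 180 patients met an immunization target where the
# patient mix predicted only 150; national reference adherence is 75%.
print(risk_adjusted_rate(observed=180, expected=150, reference_rate=0.75))
# -> 0.90, i.e., better than the reference once case mix is considered
```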
We did observe one instance in which PCMH designation was inversely associated with quality relative to non-PCMH CHCs. Whether PCMH recognition is important for increasing adherence to childhood immunization remains a question that is not well answered [41]. However, these trends could also reflect the 2017 changes to HRSA's immunization indicator, which now includes all patients who have not seen their provider before turning age 2 and has expanded the numerator to include hepatitis A, rotavirus, and influenza vaccines. While these changes could explain the decrease in adherence rates from previous years, they do not explain why adherence rates for immunization targets remained significantly lower compared with other measures.
Our findings should be interpreted within the context of their limitations. First, this study represents a point-in-time, cross-sectional evaluation of clinical performance derived from CHC performance reports. Although assessments based on cross-sectional data remain limited, attributing differences in quality to differences in the proportion of recognized delivery sites provides a mechanism to address a long-standing limitation of HRSA's PCMH evaluations. A related limitation is the lack of longitudinal data to measure improvement over time, as well as the lack of data that can be linked back to the delivery site. Second, prior to excluding false-positive/negative matches, we were able to confirm 64% of the CHCs that had at least one designated delivery site, which is less than the 77% HRSA published for the same year for the lower 48 states and the District of Columbia [29]. Three factors could account for the observed difference. One is that we excluded delivery sites whose grantee was accredited through the AAAHC. In 2018, 48 CHCs that oversaw 294 delivery sites had ambulatory health care accreditation [42]. We excluded the AAAHC database because it uses a network accreditation process, whereas the NCQA and JC require site-level certification. Another is that our data linkage algorithm may have been too restrictive owing to our emphasis on avoiding false positives. A third is our lack of access to archival data, which required that we link data using different time stamps. Finally, our use of non-PCMH centers as controls required that we exclude time since recognition as a potential determinant of care quality, a known factor in PCMH performance.