No psychometrically sound instruments exist to evaluate the usability of CHIs. Our study of the IUS revealed a factor structure that was nearly identical to a SUS study from 2009[17], with the exception of item 6 (“I think there is too much inconsistency in [Intervention]”), which we removed. However, our results differed from subsequent studies[21–26]. The moderate correlation between the subscales indicates that the measure can be used as a total scale score, as well as decomposed into Usable and Learnable subscales.
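To illustrate the subscale decomposition, the sketch below scores a hypothetical 9-item IUS response (item 6 removed) under standard SUS scoring conventions and the Lewis and Sauro item split, in which items 4 and 10 form the Learnable subscale. The item assignments, respondent data, and function names are illustrative assumptions, not the study's actual scoring procedure.

```python
# Sketch of IUS subscale scoring, assuming standard SUS conventions
# (odd items contribute r - 1, even items contribute 5 - r) and a
# Learnable subscale composed of items 4 and 10.
from statistics import mean

def score_items(responses):
    """Map raw 1-5 responses (keyed by original SUS item number)
    to 0-4 contributions: odd items score r - 1, even items 5 - r."""
    return {i: (r - 1 if i % 2 == 1 else 5 - r) for i, r in responses.items()}

def subscale_means(responses, learnable_items=(4, 10)):
    """Return (Usable, Learnable) mean item scores on a 0-4 metric."""
    scored = score_items(responses)
    learnable = mean(scored[i] for i in learnable_items)
    usable = mean(v for i, v in scored.items() if i not in learnable_items)
    return usable, learnable

# One hypothetical respondent's 9-item IUS (item 6 removed from the set):
resp = {1: 4, 2: 2, 3: 5, 4: 2, 5: 4, 7: 2, 8: 1, 9: 5, 10: 3}
usable, learnable = subscale_means(resp)
```

Across a sample of respondents, a moderate Pearson correlation between the resulting Usable and Learnable scores is what supports reporting both a total score and the two subscales.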
Comparisons of IUS scores across interventions and providers yielded notable differences; in particular, “Other” interventions were rated as more usable than MI or CBT. This may be because some of the interventions combined into the “Other” category (such as BA) are simpler than MI and CBT (e.g., omitting a focus on cognitions), but further research should apply more extensive usability assessment methods[14] to better understand the EBPI qualities that result in higher and lower scores. Future research may also assess the extent to which differences in implementation supports (e.g., training, consultation) affect experiences of EBPI usability. Additionally, behavioral health providers rated all interventions as more usable than did other provider types. This is consistent both with SUS findings that greater experience with products results in higher scores[34,35] and with a prior application of the IUS in which greater expertise in a clinical service domain was associated with higher scores[14]. However, there was no difference by provider type in intervention learnability, with all disciplines rating the learnability of EBPIs slightly below the SUS cutoff of 70 for “passable”[20]. It may be that while behavioral health providers found the interventions more usable over time, given their expertise in the relevant domains and potentially greater supports for practice, they still found them initially complex to learn. It is also possible that learnability did not show significant differences because the subscale contained only two items and had lower reliability.
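The reliability point can be made concrete with the Spearman-Brown prophecy formula, which predicts how a scale's reliability changes as items are added or removed. The 0.60 starting value below is an illustrative assumption, not the study's observed reliability.

```python
# Spearman-Brown prophecy formula: predicted reliability of a scale whose
# length is changed by a factor k, given its current reliability.
def spearman_brown(reliability: float, k: float) -> float:
    return k * reliability / (1 + (k - 1) * reliability)

# Illustrative only: a two-item subscale with reliability 0.60, doubled to
# four comparable items, would be predicted to reach about 0.75.
predicted = spearman_brown(0.60, 2)  # ~0.75
```

This is consistent with the suggestion in the conclusions that additional items could yield a more robust Learnable subscale.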
Limitations
This study had several limitations. First, it was conducted only in primary care, so its findings may not generalize to other contexts. Second, due to a survey programming error, we did not obtain data about the amount of training respondents had received for the intervention they indicated delivering most frequently. Finally, as indicated above, we did not collect any information in the survey regarding which EBPI qualities drove ratings.
Conclusions and Next Steps
Overall, the adapted IUS demonstrated good psychometric quality and a structure consistent with some prior research. This consistency may bode well for growing collaborations between implementation and HCD researchers[6]. Given that the Learnable subscale comprised only two items and did not demonstrate differences by provider type, additional items may be warranted to create a more robust subscale. More broadly, intervention usability has been conceptualized as a key determinant of both perceptual (e.g., appropriateness, feasibility) and behavioral (e.g., adoption, fidelity, reach) implementation outcomes, as well as patient outcomes[36]. Application of the IUS to a broader range of EBPIs, settings, and professional roles would allow this proposition to be tested explicitly.