Implementing Implementation Science: A Bibliometric Study of Qualitative Research in Clinical Artificial Intelligence

DOI: https://doi.org/10.21203/rs.3.rs-1082323/v1

Abstract

Background

Implementation science is a pragmatic and multidisciplinary field, centred on the application of a wide range of theoretical approaches to close ‘know-do’ gaps in healthcare. The implementation science community is made up of individuals on a continuum between academia and practice, but it is unclear to what extent the theoretical deliberations of implementation academics are translated into the work of implementation practitioners and on to patient benefit. This bibliometric study uses the field of clinical artificial intelligence (AI) implementation to sample the prevalence and character of theoretically informed implementation practices.

Methods

Qualitative research of key stakeholder perspectives on clinical AI published between 2014 and 2021 was systematically identified. Following title, abstract and full-text screening, eligible articles were characterised in terms of their publication, the AI tool and context studied, the theoretical approach employed (if any) and the research methods and quality. Descriptive, comparative and regression statistics were applied.

Results

One hundred and eleven studies met the eligibility criteria, with the monthly rate of eligible publications increasing from 0.7 to 4.0 between 2014 and 2021. Eligible studies represented 23 different nations and 25 different clinical specialities. A theoretical approach was explicitly employed in 39(35.1%) studies, though six(15.4%) of these described novel theoretical approaches, and the most frequently used theoretical approach was applied only three times. There was no statistically significant trend in the prevalence of theoretically informed research within the study period. Of the 25 theoretically informed studies conducted in Europe or North America, 19(76%) used theories that originated in the same continent.

Conclusions

The theoretical approaches which characterise implementation science are not being put to use as often as they could be, and the means by which they are selected also appears suboptimal. The field may foster greater synergy between theory and practice if its focus shifts from unifying implementation theories to unifying and expanding the implementation community. By making more theoretical approaches accessible to practitioners and supporting their selection and application, theory could be more effectively harnessed to close healthcare’s ‘know-do’ gaps.

Protocol registration

PROSPERO ID 248025

Introduction

Implementation science is a relatively young field drawing on diverse epistemological approaches.(1) Its pragmatic goal of bridging ‘know-do’ gaps to improve real-world healthcare necessitates this multidisciplinary approach. However, the challenge of aligning qualitative and quantitative methodologies has been a persistent source of agitation within the field.(2) Implementation science centres on the application of theoretical approaches to inform or evaluate implementation in a particular healthcare context. Consequently, defining and standardising the terminology and structure of these theoretical approaches has received considerable attention.(3-8) Evidence synthesis,(4) consensus methods(9) and passive taxonomies(10) have been used on many occasions in attempts to unify the field, with no evidence of growing or imminent agreement. This dogmatic discourse serves the interests of academics, but it risks alienating the practitioners on whom the real-world impact of the field depends.(11)

To objectively gauge the disconnect between the two ends of implementation science’s academia-practice continuum, the present study examines the prevalence of theoretically informed work within a representative field. Defining exact boundaries for this multidisciplinary field is challenging and controversial, a problem which can be avoided by sampling research that indisputably focuses on the process of implementation. This is why qualitative methods were made an inclusion criterion for the present study: they hold the sensitivity necessary to investigate both the various contextual factors shaping implementation and the complexity inherent to it. The second motive for a focus on qualitative methods is their foundational relationship with theoretical approaches. An additional analytical layer of theory may be undesirable in quantitative research. In qualitative research, however, as set out by Collins and Stockton, theory provides focus and organisation to the study, exposes and obscures meaning, connects the study to existing scholarship and terms, and identifies strengths and weaknesses.(12)

The relevance of the present study to contemporary implementation science also depends on sampling a prominent and active field of health research. Clinical artificial intelligence (AI) forms such a sample. Computer-based AI was conceived more than 50 years ago and has been incorporated into clinical practice through computerised decision support tools for several decades.(13, 14) However, advancing computational capacity and the feasibility and potential of deep learning methods have galvanised public and professional enthusiasm for all applications of AI, including healthcare.(15) The acknowledgment of this potential is formalised by the embedding of clinical AI into national healthcare strategic plans and by the recent surge of regulatory approvals issued for ‘software as a medical device’.(16-19) Scientific publication on clinical AI tools has increased dramatically in recent years, though the proportion of these using qualitative research methods is yet to be described.(20) Despite this, clinical AI remains an area with little demonstrable benefit to real-world patient care, and so the well-cited ‘AI chasm’ provides an exemplary ‘implementation gap’.(21, 22)

This bibliometric study aims to describe the prevalence of theoretical approaches in contemporary health systems research by sampling the field of clinical AI implementation. Each eligible study, and any theoretical approach it employs, will be characterised to provide insights into the frequency and nature of qualitative methods used in clinical AI research.

Methods

This bibliometric study is part of a qualitative evidence synthesis which followed the PRISMA 2020 reporting guidelines (Supplementary materials 1) and was executed, without amendment, as published in an a priori protocol.(23, 24) A search string combining concepts related to AI, healthcare and qualitative methods was designed and tuned for sensitivity and specificity in MEDLINE (Ovid) (Figure 1).(25)

This search string was translated to Scopus, CINAHL (EBSCO), ACM Digital Library and Science Citation Index (Web of Science) to cover the computer science, allied health, medical and grey literature (Supplementary materials 2). The search was executed in April 2021 and covered literature from 2014 onwards, the year which marked the first market authorisations for software as a medical device granted in Europe and the USA.(16) Only English-language indexing was required; there were no exclusion criteria relating to full-text language. The initial results were de-duplicated using EndNote X9.3.3 (Clarivate Analytics, PA, USA) and two independent reviewers (JH, MA) performed full title and abstract screening against the criteria (Figure 1) using Rayyan.(26) The process was overseen by an information specialist (FB) and screening disagreements were arbitrated by a separate topic expert (GM). Eligible review and protocol manuscripts were included for reference hand-searching only. Full-text review was performed in duplicate by the same two reviewers (JH, MA), with the same topic expert (GM) arbitrating. Corresponding authors were contacted up to three times when additional eligible data or reports were suspected (Figure 2).

Two reviewers (JH, MA) jointly extracted characteristics from 10% of articles before completing extraction independently. These characteristics included a quality score using the Joanna Briggs Institute (JBI) 10-point Checklist for Qualitative Research.(27) Data concerning the year and type of publication, source title and field, source impact factor, implementation context, theoretical approach use, study methods and study participants were also collected. This included the nature of the clinical AI tool studied, categorised as rule-based or not depending on whether it executed a pre-existent, human-dictated decision tree or guideline, or drew on a non-human means of decision making, e.g. regression modelling or machine learning. For each theoretical approach identified, the index article was sourced and underwent full-text review by a single reviewer (JH). Nilsen’s taxonomy of theoretical approaches was used to organise the wide range of approaches employed to study the implementation of clinical AI tools and applications (Figure 3).(10)
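As an illustration of this extraction schema, a minimal sketch of a per-study record is given below. The field names and types are assumptions made for the sketch, mirroring the characteristics listed above rather than reproducing the actual extraction form.

```python
# Illustrative per-study extraction record; field names are assumptions
# mirroring the characteristics described in the Methods, not the real form.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractedStudy:
    year: int
    publication_type: str           # e.g. "full report", "thesis", "abstract"
    source_field: str               # medical / nursing or midwifery / informatics
    impact_factor: Optional[float]  # None for non-journal reports
    jbi_score: int                  # 0-10, JBI Checklist for Qualitative Research
    rule_based: Optional[bool]      # None when the tool type was not specified
    theoretical_approach: Optional[str]
    nilsen_category: Optional[str]  # per Nilsen's taxonomy(10)
```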

Results

Of the 111 eligible reports, 104(93.7%) were full reports in peer-reviewed journals, with three(2.7%) short reports, three(2.7%) theses and one(0.9%) conference abstract. Citations of all eligible papers are available in Supplementary materials 3. Only two articles were not in the English language (one Italian, one German). Among journal publications, the median impact factor was 3.1 (IQR 2.1-4.5) and 56(58.3%), 6(6.3%) and 34(35.4%) were published in medical, nursing or midwifery, and informatics journals respectively. The median quality score using the 10-point JBI Checklist for Qualitative Research was 8 (IQR 7-8). There was a significant increase in the rate of all formats of eligible publication over time (Figure 4)(Kendall’s tau r=0.691, p=0.018). Access to potentially eligible reports from two published protocols was not possible despite attempted correspondence.(28, 29)
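For readers wishing to reproduce this kind of trend test, a minimal sketch is given below. The yearly counts are hypothetical placeholders, not the study’s data; the study’s own counts underlie the reported tau of 0.691.

```python
# Minimal sketch of a publication-rate trend test using Kendall's tau.
# The counts below are hypothetical placeholders for illustration only.
from scipy.stats import kendalltau

years = list(range(2014, 2022))          # 2014-2021 inclusive
counts = [8, 6, 9, 11, 14, 17, 25, 21]   # hypothetical eligible publications per year

tau, p = kendalltau(years, counts)
print(f"Kendall's tau = {tau:.3f}, p = {p:.3f}")
```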

Research context

Although there was some representation from developing nations, 88.1% of the 101 reports focusing on a single nation were set in countries meeting the United Nations Development Programme’s definition of ‘high human development’ (Figure 5).(30) The median human development index of the host nations for these 101 reports was 0.929 (IQR 0.926-0.944).

In terms of the clinical AI application studied, 31(27.9%), 24(21.6%) and 56(50.5%) studies considered hypothetical, simulated and clinical applications respectively. The nature of the AI tool under investigation was rule-based in 66(59.5%) of reports, non-rule-based in 41(36.9%) and not specified in 4(3.6%). The tools studied were aimed directly at the public in 5(4.5%) studies, at primary care in 45(40.5%), secondary care in 43(38.7%), mixed settings in 3(2.7%) and unspecified settings in 15(13.5%). Application was studied across a broad range of clinical specialties, though primary care dominated (28.8%, Figure 6). The tools used scalar and categorical clinical data in 83(74.8%) of studies, imaging data in 9(8.1%) and mixed inputs in 1(0.9%), and performed triage, diagnostic, prognostic, management or unspecified tasks in 17(15.3%), 15(13.5%), 8(7.2%), 47(42.3%) and 23(20.7%) studies respectively.

Research method characteristics

Clinicians, patients, managers, industry representatives, academics and carers were participants in 95(85.6%), 27(24.3%), 25(22.5%), 15(13.5%), 9(8.1%) and 6(5.4%) studies respectively. Interviews, focus groups, surveys, think-aloud exercises, observation and mixed data collection methods were used in 54(48.6%), 19(17.1%), 12(10.8%), 1(0.9%), 1(0.9%) and 24(21.6%) studies respectively. Thematic analysis, framework analysis, content analysis, constant comparative analysis, descriptive analysis, grounded theory approaches, other specified methods and unspecified data analysis methods were used in 39(35.1%), 11(9.9%), 13(11.7%), 4(3.6%), 6(5.4%), 8(7.2%), 3(2.7%) and 27(24.3%) studies respectively.

In total, 39(35.1%) studies stated some form of application of a theoretical approach. In six(15.4%) cases this was a novel implementation theory, and the most frequently used theoretical approaches had just three applications each (Table 1). Only one study explicitly drew on two separate theoretical approaches. There was no statistically significant change in the frequency of theoretical approach use over time, comparing the 43 studies published before the median year of publication, 2019 (39.5%), with the 68 published thereafter (30.9%) (χ2, p=0.349). Of the 15 European studies that used pre-existent theoretical approaches, 10(66.7%) used theoretical approaches originating in Europe, with the remainder originating in the USA. Of the 10 studies from North America, nine(90.0%) used theoretical approaches originating in the USA, with a single Canadian study using Methontology from Spain. In univariate linear regression with source impact factor as the dependent variable, neither the JBI qualitative research checklist score (unstandardised B = 0.12, 95% confidence interval -0.09, 0.32) nor use of a theoretical approach (unstandardised B = 0.55, 95% CI -0.23, 0.94) had significant predictive value.
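A minimal sketch of these two inferential analyses follows. The contingency counts are reconstructed from the percentages reported above (17/43 = 39.5%, 21/68 = 30.9%); the regression inputs are hypothetical placeholders, not study data.

```python
# Sketch of the comparative and regression analyses reported above.
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Theory use before vs after the median publication year (2019).
# Counts reconstructed from the reported 39.5% of 43 and 30.9% of 68 studies.
table = np.array([[17, 21],    # theoretical approach used
                  [26, 47]])   # no theoretical approach
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-squared = {chi2:.3f}, p = {p:.3f}")  # reproduces p = 0.349

# Univariate linear regression: source impact factor ~ JBI checklist score.
# The per-study values below are hypothetical placeholders.
jbi = np.array([7, 8, 8, 6, 9, 7, 8, 10, 7, 8], dtype=float)
impact = np.array([2.1, 3.0, 4.5, 1.8, 5.2, 2.9, 3.3, 6.0, 2.4, 3.8])
fit = sm.OLS(impact, sm.add_constant(jbi)).fit()
print(fit.params)      # intercept and unstandardised B
print(fit.conf_int())  # 95% confidence intervals
```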

Table 1

Characteristics of all 23 pre-existent theoretical approaches used by eligible clinical artificial intelligence implementation studies. USA = United States of America, UK = United Kingdom

| Theoretical approach | Year | Author’s nationality | Nilsen’s taxonomy(10) | Frequency |
|---|---|---|---|---|
| Awareness-to-Adherence Model(37) | 1996 | USA | process model | 1 |
| Behaviour Change Theory(38) | 1977 | USA | classic theory | 1 |
| Behaviour Change Wheel(39) | 2011 | UK | implementation theory | 1 |
| Biography of Artefact(40) | 2010 | UK | classic theory | 1 |
| Consolidated Framework for Implementation Research(4) | 2009 | USA | determinant framework | 3 |
| Clinical Performance Feedback Intervention Theory(6) | 2019 | UK | implementation theory | 1 |
| Disruptive Innovation Theory(41) | 1995 | USA | classic theory | 1 |
| Fit Between Individuals, Task and Technology(42) | 2006 | Germany | evaluation framework | 1 |
| Flottorp Framework(43) | 2013 | Norway | determinant framework | 1 |
| Heuristic Evaluation(44) | 1990 | Denmark | determinant framework | 1 |
| Methontology(45) | 1997 | Spain | process model | 1 |
| Normalisation Process Model(46) | 2007 | UK | process model | 1 |
| Normalisation Process Theory(47) | 2009 | UK | implementation theory | 2 |
| Process Evaluation Framework(48) | 2013 | UK | evaluation framework | 1 |
| Programme Sustainability Assessment Tool(49) | 2014 | USA | determinant framework | 1 |
| Rapid Assessment Process(50) | 2001 | USA | process model | 3 |
| Rogers’ Theory of Diffusion(51) | 1962 | USA | classic theory | 1 |
| Sittig and Singh Framework(52) | 2010 | USA | process model | 2 |
| Strong Structuration Theory(53) | 2007 | UK | process model | 2 |
| Technology Acceptance Model(54) | 1989 | USA | determinant framework | 3 |
| Theoretical Domains Framework(7) | 2005 | UK | determinant framework | 1 |
| Theoretical Framing Theory(55) | 1999 | USA | classic theory | 1 |
| Unified Theory of Acceptance and Use of Technology(56) | 2003 | USA | determinant framework | 2 |

Discussion

Considering the use of theoretical approaches within eligible studies, it is striking that only one third explicitly used a theoretical approach. In contrast to a prior scoping review in the context of guideline implementation, these data did not suggest that this low rate of theory application is increasing.(31) The heterogeneity was also notable, with no single theoretical approach used more than three times and with highly varied modes of use (Table 1). This is not surprising given the plethora of theoretical approaches already documented within the field of implementation science and our own observation that 15% of the studies applied a novel theoretical approach.(32) However, it does raise the issue of how authors come to select a theoretical approach and whether this process can be optimised. Given the apparent lack of association between the use of theoretical approaches, or research quality, and the impact factor of the publishing journal, there are no clear external incentives for researchers to rigorously construct their approach. If the judicious selection and application of implementation theory is not acknowledged by this crude but widely used marker of esteem, then other motives may influence research practice. Only 14(43.8%) of the studies applying a pre-existent theoretical approach provided any kind of rationale for their selection, and these explanations tended to be vague, e.g. simply stating that the method was commonly used. Taken alongside the observed tendency for authors to apply theoretical approaches that originated within their own country or continent, it seems that researchers’ choices are based, to some extent at least, on convenience and familiarity. This status quo seems unlikely to harness the full potential of theoretical approaches. A more methodical pairing of each implementation case’s needs with each theoretical approach’s value proposition could substantially improve the quality of implementation practices.(33)

These observations support calls to move the focus within implementation science away from the theoretical approaches themselves and on to increasing the accessibility and efficacy of their use.(34) The alternatives, reverential commitment to a handful of dominant theories or continued attempts to create a unifying theory of implementation, appear untenable in the face of an ever-expanding catalogue of theoretical approaches. Embracing the full extent and value of available theory will, however, require a kind of higher-order consolidation, as implementation science cannot expect to achieve its goals if its advocates must absorb all published approaches. Such a requirement would limit meaningful movement of individuals and expertise between the worlds of research and practice. The use of theoretical approaches must be supported with an interface that is accessible to individuals at any point on the continuum between implementation academic and practitioner.(11, 33) To be sustainable, this interface will require a persistent incentive for an independent entity to maintain and update a library of theoretical approaches, searchable by the various attributes that describe their suitability for different implementation needs. Such tools, agnostic of any particular theoretical approach, have been applied very successfully in other qualitative methodologies and are becoming established within implementation science.(33, 35) Such a platform has the potential to minimise the theoretical knowledge demanded of users and to welcome an even broader audience into the implementation science community.
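To make the proposal concrete, a minimal sketch of such a searchable library is given below. The entries, attribute names and suitability labels are illustrative assumptions for the sketch, not an existing registry or its schema.

```python
# Minimal sketch of a theory-agnostic library of theoretical approaches,
# searchable by attributes describing suitability to implementation needs.
# Entries and the "phase" labels are illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TheoreticalApproach:
    name: str
    category: str   # per Nilsen's taxonomy, e.g. "determinant framework"
    phase: str      # implementation phase it best supports (assumed label)
    origin: str

LIBRARY = [
    TheoreticalApproach("Consolidated Framework for Implementation Research",
                        "determinant framework", "planning", "USA"),
    TheoreticalApproach("Normalisation Process Theory",
                        "implementation theory", "evaluation", "UK"),
]

def search(category: Optional[str] = None, phase: Optional[str] = None):
    """Return approaches matching a practitioner's implementation needs."""
    return [a for a in LIBRARY
            if (category is None or a.category == category)
            and (phase is None or a.phase == phase)]

print([a.name for a in search(category="implementation theory")])
```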

The present data also illustrate the paucity of qualitative enquiry within the field of clinical AI implementation. A recent bibliometric study, which similarly aimed to categorise published work on clinical decision support but without any methodological constraints, identified 88 times as many publications over the same period.(36) Such imbalances limit understanding of the context into which tools are to be placed. This under-appreciation and under-investigation of context is a major contributor to the so-called ‘know-do’ gaps in healthcare which prompted the establishment of the field of implementation science.(1, 56) If research into the context in which efficacious tools are to be implemented does not become more prominent, the translational ‘AI chasm’ may persist regardless of the surge in policy and industry support.(16, 17, 21) This is because the complexity inherent to implementation cannot be meaningfully characterised, understood and managed without the flexibility afforded by qualitative research methods.(8) Such explorations of broader contextual factors support the identification of mechanisms of successful implementation and facilitate the scale-up, sustainability and spread of innovations.(5) Encouragingly, there appears to be a notable rise in the rate of qualitative work (Figure 4) which, in relative terms, outstrips the contemporary proliferation of all-methods research in this area.(57) The need for the pragmatic brand of social science encapsulated by implementation science therefore remains clear, but the data are less encouraging about how well it has itself been implemented into research and practice.

While the present study addresses the paucity of qualitative assessment of real-world theoretical approach use in implementation science, it has some clear limitations. Implementation science is a multidisciplinary field with poorly described boundaries.(2) A core assumption of the present study is that research into stakeholder perspectives on clinical AI tools, using at least some qualitative methods, would be broadly acknowledged within the field. The study’s value also depends on the assumption that the approaches within the present sample are, to some extent, generalisable to implementation science practice. Analysing the motives for theory selection also proved challenging, as authors rarely justify their choice and, when they do, it is often easy to align the same arguments with a different theoretical approach. Further exploration of authors’ individual collaborations and epistemological commitments, or prospective inquiry, may yield more informative insight into these motives.

Conclusions

A real-world sample of implementation science research in the contemporary and prominent field of clinical AI finds that only a minority of research is theoretically informed. The type and mode of theoretical application is highly heterogeneous and is, at least in part, led by convenience rather than the alignment of theory with implementation need. To engage a greater range of stakeholders and harness the full range and value of theoretical approaches, we recommend a shift in focus from unifying implementation theories to expanding and unifying the implementation community. To achieve this, we propose increasing the prominence of a theory-agnostic interface as a gateway between implementation science research and practice.

Abbreviations

AI: Artificial Intelligence

ACM DL: Association for Computing Machinery Digital Library

CI: Confidence Interval

CINAHL: Cumulative Index to Nursing and Allied Health Literature

EBSCO: Elton Bryson Stephens Sr. Company

ENT: Ear, Nose and Throat

IQR: Interquartile Range

JBI: Joanna Briggs Institute

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

UK: United Kingdom

USA: United States of America

Declarations

Ethics approval and consent to participate

As this is a bibliometric study using exclusively publicly available data, ethical approval was not sought.

Consent for publication

Not applicable

Availability of data and materials

The detailed search strategies for all five databases used are included within the supplementary materials (Supplementary materials 2). A full list of the 111 articles eligible for analysis is included within the supplementary materials (Supplementary materials 3).

Competing interests

Two of the authors (JH and PK) collaborate with DeepMind, a company involved in the development and application of clinical AI tools. 

Funding

This study is funded by the National Institute for Health Research (NIHR) through the academic foundation programme for the second author (MA) and through a doctoral fellowship (NIHR301467) for the first author (JH). The funder had no role in the design or delivery of this study.

Authors’ contributions

JH contributed to the conception and design of the work, the acquisition, analysis and interpretation of the data and drafted the manuscript. MA contributed to the acquisition and analysis of the data. PK contributed to the design and conception of the work. FB contributed to the design of the work, the acquisition, analysis and interpretation of data and revised the manuscript. GM contributed to the conception and design of the work, the data acquisition, analysis and interpretation of data and revised the manuscript. All authors approved the submitted version and all authors agree to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work are appropriately investigated, resolved and the resolution documented in the literature.

Acknowledgements

We would like to acknowledge the support of the lay members of the reference groups supporting the design and dissemination of this and other work relating to the same NIHR doctoral fellowship (NIHR301467); Rashmi Kumar, Janet Lunn, Trevor Lunn, Rosemary Nicholls, Angela Quilley and Christine Sinnett.

References

  1. Eccles MP, Mittman BS. Welcome to implementation science. Implementation Science. 2006;1(1).
  2. Boulton R, Sandall J, Sevdalis N. The Cultural Politics of ‘Implementation Science’. Journal of Medical Humanities. 2020;41(3):379-94.
  3. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly. 2004;82(4):581-629.
  4. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science. 2009;4(1).
  5. Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A'Court C, et al. Beyond adoption: A new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. Journal of Medical Internet Research. 2017;19(11).
  6. Brown B, Gude WT, Blakeman T, Van Der Veer SN, Ivers N, Francis JJ, et al. Clinical Performance Feedback Intervention Theory (CP-FIT): A new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implementation Science. 2019;14(1).
  7. Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A. Making psychological theory useful for implementing evidence based practice: A consensus approach. Quality and Safety in Health Care. 2005;14(1):26-33.
  8. Maniatopoulos G, Procter R, Llewellyn S, Harvey G, Boyd A. Moving beyond local practice: Reconfiguring the adoption of a breast cancer diagnostic technology. Social Science and Medicine. 2015;131:98-106.
  9. Colquhoun H, Leeman J, Michie S, Lokker C, Bragge P, Hempel S, et al. Towards a common terminology: A simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies. Implementation Science. 2014;9(1).
  10. Nilsen P. Making sense of implementation theories, models and frameworks. Implementation Science. 2015;10(1).
  11. Rapport F, Smith J, Hutchinson K, Clay-Williams R, Churruca K, Bierbaum M, et al. Too much theory and not enough practice? The challenge of implementation science application in healthcare practice. Journal of Evaluation in Clinical Practice. 2021.
  12. Collins CS, Stockton CM. The Central Role of Theory in Qualitative Research. International Journal of Qualitative Methods. 2018;17(1):1609406918797475.
  13. Miller A, Moon B, Anders S, Walden R, Brown S, Montella D. Integrating computerized clinical decision support systems into clinical work: A meta-synthesis of qualitative research. International Journal of Medical Informatics. 2015;84(12):1009-18.
  14. Turing AM. Computing machinery and intelligence. Mind. 1950;59:433-60.
  15. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems. 2012;2:1097-105.
  16. Muehlematter UJ, Daniore P, Vokinger KN. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis. The Lancet Digital Health. 2021;3(3):e195-e203.
  17. Topol E. The Topol review: preparing the healthcare workforce to deliver the digital future. Health Education England, National Health Service; 2019.
  18. Royal College of Physicians and Surgeons of Canada. Artificial Intelligence (AI) and emerging digital technologies. 2020. https://www.royalcollege.ca/rcsite/health-policy/initiatives/ai-task-force-e
  19. Australian Government. Australia's AI Action Plan. 2021. https://www.industry.gov.au/sites/default/files/June%202021/document/australias-ai-action-plan.pdf
  20. Guo Y, Hao Z, Zhao S, Gong J, Yang F. Artificial intelligence in health care: Bibliometric analysis. Journal of Medical Internet Research. 2020;22(7).
  21. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine. 2019;25(1):44-56.
  22. Shojania KG, Grimshaw JM. Evidence-based quality improvement: The state of the science. Health Affairs. 2005;24(1):138-50.
  23. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. The BMJ. 2021;372.
  24. Hogg J, Al-Zubaidy M, Maniatopoulos G, Talks J, Teare D, Keane P, et al. A framework synthesis of stakeholder perspectives on computerised clinical decision support tools to inform clinical artificial intelligence implementation. PROSPERO ID CRD42021256005. PROSPERO [Internet]. 2021. Available from: https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=256005.
  25. DeJean D, Giacomini M, Simeonov D, Smith A. Finding Qualitative Research Evidence for Health Technology Assessment. Qual Health Res. 2016;26(10):1307-17.
  26. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Systematic Reviews. 2016;5(1):210.
  27. Joanna Briggs’ Institute. Checklist for Qualitative Research. 2020. https://jbi.global/critical-appraisal-tools
  28. Bates LA, Hicks JP, Walley J, Robinson E. Evaluating the impact of Marie Stopes International's digital family planning counselling application on the uptake of long-acting and permanent methods of contraception in Vietnam and Ethiopia: a study protocol for a multi-country cluster randomised controlled trial. Trials. 2018;19(1):420.
  29. Camacho J, Medina Ch AM, Landis-Lewis Z, Douglas G, Boyce R. Comparing a Mobile Decision Support System Versus the Use of Printed Materials for the Implementation of an Evidence-Based Recommendation: Protocol for a Qualitative Evaluation. JMIR Res Protoc. 2018;7(4):e105.
  30. United Nations Development Programme. Human Development Report 2020. 2020.
  31. Pathman DE, Konrad TR, Freed GL, Freeman VA, Koch GG. The awareness-to-adherence model of the steps to clinical guideline compliance. The case of pediatric vaccine recommendations. Med Care. 1996;34(9):873-89.
  32. Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev. 1977;84(2):191-215.
  33. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6:42.
  34. Pollock N, Williams R. e-Infrastructures: How Do We Know and Understand Them? Strategic Ethnography and the Biography of Artefacts. Computer Supported Cooperative Work (CSCW). 2010;19(6):521-56.
  35. Bower JL, Christensen CM. Disruptive Technologies: Catching the Wave. Harvard Business Review. 1995(Jan-Feb):11.
  36. Ammenwerth E, Iller C, Mahler C. IT-adoption and the interaction of task, technology and individuals: a fit framework and a case study. BMC Med Inform Decis Mak. 2006;6:3.
  37. Flottorp SA, Oxman AD, Krause J, Musila NR, Wensing M, Godycki-Cwirko M, et al. A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implement Sci. 2013;8:35.
  38. Nielsen J, Molich R. Heuristic evaluation of user interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; Seattle, Washington, USA: Association for Computing Machinery; 1990. p. 249–56.
  39. Fernández-López M, Gomez-Perez A, Juristo N. METHONTOLOGY: from ontological art towards ontological engineering. Engineering Workshop on Ontological Engineering (AAAI97). 1997.
  40. May C, Finch T, Mair F, Ballini L, Dowrick C, Eccles M, et al. Understanding the implementation of complex interventions in health care: the normalization process model. BMC Health Serv Res. 2007;7:148.
  41. May CR, Mair F, Finch T, MacFarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: Normalization Process Theory. Implementation Science. 2009;4(1):29.
  42. Grant A, Treweek S, Dreischulte T, Foy R, Guthrie B. Process evaluations for cluster-randomised trials of complex interventions: a proposed framework for design and reporting. Trials. 2013;14(1):15.
  43. Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The Program Sustainability Assessment Tool: a new instrument for public health programs. Prev Chronic Dis. 2014;11:130184.
  44. Beebe J. Rapid Assessment Process: An Introduction: AltaMira Press; 2001.
  45. Rogers EM. Diffusion of innovations. New York: Free Press of Glencoe; 1962.
  46. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care. 2010;19 Suppl 3(Suppl 3):i68-i74.
  47. Jack L, Kholeif A. Introducing strong structuration theory for informing qualitative case studies in organization, management and accounting research. Qualitative Research in Organizations and Management: An International Journal. 2007;2(3):208-25.
  48. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319–40.
  49. Klein HK, Myers MD. A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Q. 1999;23(1):67–93.
  50. Venkatesh V, Morris MG, Davis GB, Davis FD. User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly. 2003;27(3):425-78.
  51. Liang L, Bernhardsson S, Vernooij RWM, Armstrong MJ, Bussières A, Brouwers MC, et al. Use of theory to plan or evaluate guideline implementation among physicians: a scoping review. Implementation Science. 2017;12(1):26.
  52. Strifler L, Cardoso R, McGowan J, Cogo E, Nincic V, Khan PA, et al. Scoping review identifies significant number of knowledge translation theories, models, and frameworks with limited use. Journal of Clinical Epidemiology. 2018;100:92-102.
  53. Rabin B, Glasgow R, Ford B, Huebschmann A, Marsh R, Tabak R, et al. Dissemination and Implementation Models in Health Research and Practice. 2021. Available from: https://dissemination-implementation.org/.
  54. Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implementation Science. 2019;14(1):103.
  55. Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt GJ, et al. Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches. J Clin Epidemiol. 2018;99:41-52.
  56. Maniatopoulos G, Hunter DJ, Erskine J, Hudson B. Implementing the New Care Models in the NHS: Reconfiguring the Multilevel Nature of Context to Make It Happen. In: Transitions and Boundaries in the Coordination and Reform of Health Services. Cham: Palgrave Macmillan; 2020. p. 3-27.
  57. Aktürk C. Bibliometric analysis of clinical decision support systems. Acta Informatica Pragensia. 2021;10(1):61-74.