This study is a cross-sectional assessment of decision-making practices within state health departments (SHDs). We surveyed current SHD employees across the U.S. to gather quantitative data on the perceived frequency and correlates of mis-implementation within SHD chronic disease units. Human subjects approval was obtained from the Washington University in St. Louis Institutional Review Board (#201812062).
To develop a survey informed by the literature and to address gaps in existing measures, we undertook an extensive literature review. Survey development was also guided by the study team’s previously described conceptual framework to ensure the measures covered EBDM skills, organizational capacity for EBDM, and external influences such as funding and policy climate.
A literature review of several databases (PubMed, SCOPUS, Web of Science) was conducted to identify existing survey instruments on organizational decision-making. Identified measures were summarized according to setting, audience, psychometric properties, and survey question themes. From our review of 63 surveys, we selected items from 23 measures that mapped onto our conceptual framework [8-10, 18, 26-40]. Questions on political influence and mis-implementation decision-making were iteratively developed and refined, as little published literature was available at the time to inform them. Draft questions in each domain (individual skills, organizational/agency capacity, mis-implementation decision-making, external influences) underwent three separate rounds of review by the study team and a group of practitioner experts to produce a final draft of the study instrument. Because the survey had not been previously validated, the final draft underwent cognitive response testing with 11 former SHD chronic disease directors. Test-retest reliability assessment of the revised draft with 39 current SHD chronic disease unit staff showed consistent scores, and only minor changes to the survey were needed.
Survey measures addressed the following topics: participant demographic characteristics, EBDM skills, perceived frequency of mis-implementation, reasons for mis-implementation, perceived organizational supports for EBDM, and external influences. External influences included perceived governor’s office and state legislative support for evidence-based interventions (EBIs) and the perceived importance of multi-sector partnering. Exact item wording is provided in the national survey in Appendix 1. Survey questions for EBDM skills, organizational supports, and external influences used 5-point Likert scale responses, ranging from “Strongly disagree” to “Strongly agree” or from “Not at all” to “Very great extent”.
Perceived frequency of mis-implementation was assessed with two questions: “How often do effective programs, overseen by your work unit, end when they should have continued?” and “How often do ineffective programs, overseen by your work unit, continue when they should have ended?” The response options were never, rarely, sometimes, often, and always. These variables are subsequently referred to as inappropriate termination and inappropriate continuation, respectively.
Participants were recruited from the National Association of Chronic Disease Directors (NACDD) membership list, which consists of SHD employees working in their respective chronic disease units. Participants were randomly selected from the membership roll after individuals from territories and those in non-qualifying positions (administrative support and financial personnel) were removed. In June 2018, emails were sent inviting a randomly selected sample of 1239 members to participate in a Qualtrics online survey. Participants were offered a $20 Amazon gift card or a donation made on their behalf to a public health charity of their choosing. A follow-up email was sent two weeks after the initial email, with a reminder phone call a week later. Non-respondents received up to three reminder emails and either two reminder voicemails or a single phone conversation to address questions. Respondents and non-respondents could not be directly compared because the initial sampling list lacked key characteristics (e.g., role in the agency, years working in the agency). The online survey closed at the end of August 2018.
Data Cleaning and Analysis
Respondents who answered any questions beyond the demographic items were included in the sample. State-level variables, such as population size, funding from the Centers for Disease Control and Prevention (CDC) (the major funding source for state chronic disease control), and state governance type, were added to the data set from publicly available sources, including the CDC grant funding profile, the Association of State and Territorial Health Officials (ASTHO) State Profiles, and Public Health Accreditation Board data [23, 25, 41, 42]. Dichotomized versions of the Likert scale variables were created given the limited distribution of responses across the original scale and to facilitate interpretation: responses of Agree or Strongly Agree were coded as 1, and all other responses were coded as 0.
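The dichotomization step can be sketched as follows. This is an illustrative Python/pandas sketch (the study’s analyses were run in SPSS), and the variable names are hypothetical, not the actual survey items.

```python
import pandas as pd

# Illustrative Likert responses; column names are hypothetical, not the actual survey items.
df = pd.DataFrame({
    "leadership_supports_ebdm": ["Agree", "Neither agree nor disagree",
                                 "Strongly Agree", "Disagree"],
    "skill_economic_evaluation": ["Strongly Agree", "Agree",
                                  "Strongly Disagree", "Disagree"],
})

AGREE = {"Agree", "Strongly Agree"}

def dichotomize(series: pd.Series) -> pd.Series:
    """Code Agree/Strongly Agree as 1 and all other responses as 0."""
    return series.isin(AGREE).astype(int)

for col in list(df.columns):
    df[col + "_bin"] = dichotomize(df[col])

print(df.filter(like="_bin"))
```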
Descriptive statistics were calculated for all variables in SPSS version 26. To assess associations, Spearman’s correlations were calculated between each non-dichotomized mis-implementation variable and the individual demographic characteristics, individual skills, organizational capacity for EBIs, and external factors. Multinomial logistic regression was then used to assess how these variables predicted the mis-implementation outcomes. The dependent variables (inappropriate termination and inappropriate continuation) were re-categorized as 1) often/always, 2) sometimes, and 3) never/rarely (reference category). Multinomial regression was used because the proportional odds assumption of ordinal regression was violated. The independent variables were dichotomized as described above. Two separate models were fit: the first assessing inappropriate termination among programs overseen by SHDs, and the second assessing inappropriate continuation. Two separate models were appropriate because inappropriate termination and inappropriate continuation are distinct phenomena within the overall concept of mis-implementation. For each of the two dependent variables, an initial model was run for each domain with all of that domain’s variables included. Variables significant in these initial models were then entered into a final model for each outcome (inappropriate termination and inappropriate continuation).