Research context
As part of a larger cross-cultural investigation of first-episode psychosis(28, 29), this study was carried out with patients, families and clinicians at two early intervention programs for youth with psychosis in Montreal, Canada and Chennai, India. Participants in Chennai were recruited from the first-episode psychosis clinic of the Schizophrenia Research Foundation (SCARF), a not-for-profit, non-governmental mental health organisation. In Montreal, participants were recruited from two nodes of McGill University’s publicly funded Prevention and Early Intervention Program for Psychosis (PEPP)(30). The Chennai and Montreal programs are open to referrals from a wide variety of sources and offer a similar protocol of intensive, phase-specific, two-year psychosocial (assertive case management, family interventions, family psychoeducation, therapy, etc.) and medical (flexible use of low-dose antipsychotic medication) treatments provided by a multidisciplinary team of clinicians (case managers and psychiatrists) to individuals with first-episode psychosis(29). Persons who have been treated with antipsychotic medication for more than 30 days, or who have an IQ below 70, a solely substance-induced or organic psychosis, a pervasive developmental disorder, or epilepsy, are excluded from both programs. Concurrent substance use is not an exclusion criterion.
The study was approved by the relevant institutional ethics boards in Montreal and Chennai, and all participants gave informed consent. In the case of individuals younger than 18 years, participants provided assent and their parents/guardians provided consent.
Participants
The standardisation sample comprised 57 patients, 60 family members, and 27 clinicians across study sites (Table 1). Patients were included if they had a first episode of either affective or non-affective psychosis as determined by the Structured Clinical Interview for DSM IV-TR for Axis I Disorders (SCID-IV;(31)), were between 16 and 35 years old, and were fluent in English or French in Montreal and English or Tamil in Chennai. The exclusion criteria were the same as those for the PEPP and SCARF programs. Family members were parents, spouses/partners, or siblings of patients with first-episode psychosis, and clinicians were case managers, psychiatrists, and other allied healthcare professionals (e.g., employment support specialists, psychologists, etc.) providing treatment at PEPP and SCARF.
Whose Responsibility Scale: Development and Description
The WRS was developed with multiple rounds of iterative discussion, feedback and modifications, involving clinician-scientists and mental health professionals at both sites with extensive experience in providing mental health care, including for persons with psychosis.
The WRS (Additional File 1) was modelled after an item that has been part of various rounds of the World Values Survey(32). The item is rated on a 10-point scale, where 1 indicates complete agreement with the statement on the left pole of the scale (“The government should take more responsibility to ensure that everyone is provided for”), 10 indicates complete agreement with the statement on the right pole of the scale (“People should take more responsibility to provide for themselves”), and selection of any number in between these poles reflects one’s relative weighting of the responsibility of each party. We included this item in the WRS, followed by a set of 21 similarly worded and structured items organized around seven needs of individuals with mental health problems: (1) general financial support, (2) housing support, (3) help with return to school/work, (4) help covering the costs of mental health services, (5) medication, (6) substance use treatment programs, and (7) mental health awareness-building and stigma reduction. These seven needs were selected based on previous literature (1, 33-39); our clinical and program development experience; and focus group discussions on responsibilities of various stakeholders for supporting people with mental health problems, conducted with patients, families, and clinicians at PEPP and SCARF(40-42). An initial version of the scale was also shared with three patients and two family members, and their feedback was used to refine the scale and its instructions.
For each of these seven needs, a set of three items, each rated on a 10-point scale, is presented: the first requires respondents to contrast the role of the government with the role of persons with mental health problems, the second contrasts the role of the government with the role of families of persons with mental health problems, and the third contrasts the role of families with the role of persons with mental health problems (see Additional File 1). Respondents are asked to consider the attribution of responsibilities for supporting persons with mental health problems generally, and not with respect to their own specific case. In total, 22 items were included in the WRS.
The WRS was translated from English into Tamil and French and back-translated using standardized procedures recommended by the WHO (43). The WRS can be filled out as a self-report or can be interviewer-administered.
WRS Scoring
The first item of the WRS (taken directly from the World Values Survey(32)) is a stand-alone item and does not contribute to the scoring of the WRS. The remaining 21 items can be scored to arrive at three composite scores, reflecting the extent of responsibility attributed to (1) governments vis-à-vis persons with mental health problems, (2) governments vis-à-vis families, and (3) persons with mental health problems vis-à-vis families, for addressing all needs taken together.
To arrive at these three composite scores, all items comparing government and patient responsibilities are summed to derive the first composite score; all items comparing government and family responsibilities are summed to derive the second composite score; and all items comparing patient and family responsibilities are summed to derive the third composite score. Alternatively, each composite can be divided by the number of its constituent items to arrive at more easily interpretable average scores between 1 and 10.
Depending on the research or policy objectives, one may in some cases need to estimate attributions of responsibility regarding specific needs. If so, similar composite scores can also be derived for each of the seven needs separately (e.g., if one is particularly interested in perceptions of responsibility for housing support, the three items corresponding to this need can be summed and averaged in the same way across participants).
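The scoring procedure above can be sketched in a few lines of Python. This is an illustrative sketch only (the study itself used SPSS); the need and comparison labels are hypothetical shorthand for the scale's seven needs and three pairwise contrasts.

```python
# Hypothetical labels for the seven needs and three pairwise contrasts
# described in the text; these names are not part of the published scale.
NEEDS = [
    "financial", "housing", "school_work", "service_costs",
    "medication", "substance_use", "awareness_stigma",
]
COMPARISONS = ["gov_vs_patient", "gov_vs_family", "patient_vs_family"]

def composite_scores(responses, average=True):
    """responses: dict mapping (need, comparison) -> item rating (1-10).
    Returns one composite per comparison, summed over the seven needs,
    optionally averaged back to the interpretable 1-10 range."""
    scores = {}
    for comp in COMPARISONS:
        total = sum(responses[(need, comp)] for need in NEEDS)
        scores[comp] = total / len(NEEDS) if average else total
    return scores

def need_score(responses, need, average=True):
    """Composite for a single need (e.g., housing) across its three items."""
    total = sum(responses[(need, comp)] for comp in COMPARISONS)
    return total / len(COMPARISONS) if average else total
```

A respondent who rated every item 5 would receive an averaged composite of 5.0 (or 35 summed) on each of the three contrasts.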
Procedure
To establish its psychometric properties, the WRS was administered as a self-report to all participants (patients, families, and clinicians), in English or Tamil in Chennai and in English or French in Montreal, depending on participants’ linguistic preference. To establish test-retest reliability, the measure was completed at two time points approximately two weeks apart. Trained research staff were available to answer questions if needed and administered the feedback questionnaire to a subset of participants. We examined the internal consistency of the WRS using data obtained from the first administration of the measure. Internal consistency estimates were calculated for each of the three sets of seven items that were summed to derive the responsibilities attributed to government vis-à-vis persons with mental health problems, government vis-à-vis families, and families vis-à-vis persons with mental health problems.
We also examined the frequency distributions of WRS item scores to determine whether respondents used all or only part of the possible range (1-10), by checking whether each item received scores in the 1-3, 4-6, and 7-10 bands.
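The range-usage check just described can be expressed as a small Python sketch. The band names and function are hypothetical illustrations of the described analysis, not code from the study.

```python
# Score bands as described in the text: 1-3 (low), 4-6 (mid), 7-10 (high).
BANDS = {"low": range(1, 4), "mid": range(4, 7), "high": range(7, 11)}

def range_usage(item_scores):
    """item_scores: iterable of integer 1-10 ratings for one WRS item.
    Returns, for each band, whether any respondent used that band."""
    return {name: any(score in band for score in item_scores)
            for name, band in BANDS.items()}
```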
To assess the questionnaire’s acceptability, we created a brief, easy-to-understand feedback questionnaire. This scale includes two items that ask participants to rate the ease of comprehension and completion of the WRS on a 10-point Likert-type scale (1 = very difficult to 10 = very easy), and a third categorically rated item (Was the WRS measure difficult to answer? – Yes/Somewhat/No). A subset of patients was approached to complete the feedback questionnaire (4-5 patients per language: English and French in Montreal, English and Tamil in Chennai).
Data analyses
Data were analysed using IBM SPSS version 22. Descriptive data regarding the sample characteristics were represented as means, standard deviations (SDs) and percentages. The samples from the two sites were compared using independent samples t-tests or chi-square tests, using .05 as the significance level. Test-retest reliability was computed separately for the patient, family and clinician groups using the intra-class correlation coefficient (ICC), with a two-way random effects model, absolute agreement between scores at the two time points (ICC(2,1)), and single-measure estimates(44), with 95% confidence intervals. The ICCs were interpreted as recommended by Cicchetti(45): “poor” (<0.40), “fair” (0.40–0.59), “good” (0.60–0.74) and “excellent” (≥0.75). Disaggregated and combined reliability estimates were computed for Montreal and Chennai for the patient and family samples. Due to the smaller size of the clinician sample at each site, only combined reliability estimates were computed. Finally, test-retest reliabilities were assessed separately for the English, French, and Tamil versions of the WRS based on combined patient and family samples.
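For readers unfamiliar with the ICC(2,1) variant named above, the computation can be sketched from the standard two-way ANOVA decomposition. This is a generic illustration of the formula, not the study's SPSS procedure; the function name and data layout are assumptions.

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects model, absolute agreement,
    single-measure estimate.
    data: one row per respondent, one column per time point/rater."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    # Sums of squares for rows (subjects), columns (time points), total.
    ssr = k * sum((m - grand) ** 2 for m in row_means)
    ssc = n * sum((m - grand) ** 2 for m in col_means)
    sst = sum((x - grand) ** 2 for row in data for x in row)
    sse = sst - ssr - ssc  # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) penalises systematic shifts between time points, a constant offset of one point on every retest lowers the coefficient even when rank ordering is perfect, which is why absolute agreement (rather than consistency) is the stricter choice for test-retest work.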
Similarly, internal consistency was computed for patient, family, and clinician samples, aggregated and disaggregated across sites. Internal consistency reliabilities were also assessed separately for the English, French, and Tamil versions of the WRS based on combined patient and family samples. Reliability estimates based on internal consistency (Cronbach’s alpha) are often interpreted as acceptable if >0.70, good if >0.80, and excellent if >0.90 (46). It has been argued that values below 0.60 are acceptable for exploratory research (47, 48) and that the interpretation of such estimates should be guided by how the scale was conceptualized rather than by strict cut-offs (49, 50). In our study, it was decided a priori that the measure would be considered internally consistent if the Cronbach's alpha estimates were at least 0.60. Higher internal consistency estimates were not necessarily expected, as it was theorized that ratings (e.g., for government vis-à-vis persons with mental health problems) could reflect not only latent values about stakeholder responsibilities (e.g., a general belief that governments should take substantial responsibility for addressing key needs of persons with mental health problems), but also be influenced by the specific need area being rated (e.g., governments could be seen as more responsible for covering the costs of mental health services than the costs of substance use services).
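The alpha coefficients discussed above follow the standard Cronbach's alpha formula, sketched here in plain Python for one set of seven WRS items. This is a generic textbook computation, not code from the study; the data layout (one list of scores per item, aligned across respondents) is an assumption.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one item set.
    items: list of score lists, one inner list per item, with positions
    aligned across respondents."""
    k = len(items)          # number of items (seven per WRS composite)
    n = len(items[0])       # number of respondents

    def variance(xs):       # population variance, as in the usual formula
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_variance_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_variance_sum / variance(totals))
```

Perfectly duplicated items yield an alpha of 1.0, while items whose totals vary little relative to the individual items pull alpha down, consistent with the expectation above that need-specific variation could keep alphas modest.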