Job satisfaction has attracted considerable interest, has been widely debated, and has remained a topical issue. It is of particular importance for midwifery, a profession directly involved in protecting community health. Research on job satisfaction in midwifery is limited, and the existing studies do not employ a scale developed specifically for this profession; instead, the Minnesota Job Satisfaction Scale has been used in several studies to measure midwives' job satisfaction (Yalnız & Karaca-Saydam, 2010; Hadizadeh-Talasaz et al., 2014; Bilgin et al., 2017). Therefore, this study was conducted to develop a valid and reliable scale for measuring job satisfaction in midwifery, using an exploratory mixed-methods design.
The JSMS was developed to measure job satisfaction in midwifery. The scale comprises six sub-dimensions: individual characteristics, management and salary, personal development and promotion, working environment conditions, communication, and general features of the profession. It consists of 40 items, six of which are negatively worded. The items of the scale are similar to those of the widely used Minnesota Job Satisfaction Scale. The short form of that scale consists of 20 items grouped into an internal sub-factor (e.g., success in the work environment, recognition, appreciation, the job itself, job responsibility, job change due to promotion, and promotion) and an external sub-factor (corporate policy and management, type of supervision, managers, and decision-making), and it is used to measure satisfaction factors such as relationships with colleagues and subordinates, working conditions, and wages (Weiss et al., 1967).
The success of a newly developed scale depends on two factors: its validity and its reliability. Validity refers to the extent to which an instrument measures what it is intended to measure, without confounding it with other constructs. Reliability, on the other hand, expresses the consistency of the items with one another and the extent to which the scale reflects the problem it addresses (Şencan, 2005). In this study, content validity and construct validity were used to test the validity of the scale, and Cronbach's alpha internal consistency coefficient, item-total and item-remainder correlation analyses, item discrimination analyses, and test-retest analysis were used to test its reliability.
EFA is a method used in scale development studies to understand and clarify the structural properties of a newly prepared scale. In EFA, the higher the explained total variance, the stronger the factor structure. In addition, the minimum acceptable factor loading is 0.30, and loadings of 0.45 or above indicate a good measurement tool (Şencan, 2005; Çokluk et al., 2014). The EFAs performed separately for each sub-dimension of the newly developed JSMS showed that all factor loadings were greater than 0.30 and that the explained total variances ranged between 36.24% and 72.24%.
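As an illustration, the loading and explained-variance checks described above can be reproduced with the open-source factor_analyzer package. The sketch below is not the analysis pipeline used in this study; the data frame and file name are hypothetical stand-ins for the item responses of one sub-dimension, and a single factor is extracted to mirror the sub-dimension-wise analyses.

```python
# Minimal EFA sketch (assumption: `items` holds the Likert-type responses
# to the items of one JSMS sub-dimension; file name is hypothetical).
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("jsms_subdimension_items.csv")

fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)

loadings = fa.loadings_                        # factor loading of each item
_, _, cumulative = fa.get_factor_variance()    # cumulative explained variance

print("Items with loadings below 0.30:",
      [col for col, l in zip(items.columns, loadings[:, 0]) if abs(l) < 0.30])
print(f"Explained total variance: {cumulative[-1]:.2%}")
```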
In scale validity and reliability studies, CFA is performed to test whether the structure identified by EFA is confirmed and to evaluate data-model fit. CFA is important because it reveals this pattern of relationships and examines whether the obtained findings are consistent with the theoretical structure (Erkorkmaz et al., 2013). In CFA, fit indices are examined to evaluate the validity of the model. Commonly accepted cut-offs are as follows: RMSEA ≤ 0.050 indicates good fit and 0.050 < RMSEA ≤ 0.080 acceptable fit; CFI ≥ 0.970 indicates good fit and 0.950 ≤ CFI < 0.970 acceptable fit (Eray et al., 2016). Accordingly, the developed scale was found to fit the model well and was accepted as valid.
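For reference, the two indices cited above follow the standard definitions based on the chi-square statistics of the tested model ($\chi^2_M$, $df_M$), the baseline (independence) model ($\chi^2_0$, $df_0$), and the sample size $N$:

$$\mathrm{RMSEA}=\sqrt{\frac{\max\!\left(\chi^2_M-df_M,\,0\right)}{df_M\,(N-1)}},\qquad
\mathrm{CFI}=1-\frac{\max\!\left(\chi^2_M-df_M,\,0\right)}{\max\!\left(\chi^2_0-df_0,\ \chi^2_M-df_M,\ 0\right)}$$

Both indices therefore penalize the discrepancy between the model-implied and observed covariance structures, with RMSEA adjusting for model complexity and sample size and CFI comparing the model against the baseline of uncorrelated items.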
A scale must be reliable as well as valid. Reliability can be defined as the consistency of the responses that individuals give to the items of a scale, and it relates to the degree of accuracy with which the targeted constructs are measured. Cronbach's alpha coefficient is a method used to determine reliability and measure internal consistency; a high alpha value indicates that the items are consistent with one another (Erbil & Bakır, 2009). A scale is considered unreliable when the alpha coefficient is below 0.40, of low reliability between 0.40 and 0.59, reliable between 0.60 and 0.79, and highly reliable between 0.80 and 1.00 (Kula-Kartal & Mor-Dirlik, 2016). Cronbach's alpha coefficient of the JSMS was 0.94 overall and ranged between 0.83 and 0.96 for the sub-dimensions, indicating that the scale has high reliability.
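As a sketch of the internal consistency criterion, Cronbach's alpha can be computed directly from its definition, the ratio of summed item variances to the variance of the total score. The array below is randomly generated for illustration only and does not represent the JSMS data.

```python
# Minimal Cronbach's alpha sketch (assumption: `responses` is an
# n_respondents x n_items array of item scores, e.g. one JSMS sub-dimension).
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    k = responses.shape[1]                          # number of items
    item_vars = responses.var(axis=0, ddof=1)       # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative random Likert-type data (not study data).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 40))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```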
Stability, an essential feature of a reliable measurement tool, refers to the instrument's ability to give consistent results across repeated administrations, that is, to show invariance over time (Esin, 2014). Accordingly, the final form of the scale was administered to 30 midwives at a 14-day interval, and the Pearson correlation analysis carried out to examine test-retest reliability showed that the coefficients for the sub-dimensions ranged between 0.81 and 0.90. Since all coefficients were 0.70 or above, the scale demonstrated test-retest reliability.
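The test-retest check amounts to correlating the scores of the same respondents from the two administrations. The sketch below illustrates this with simulated scores; the variables `test` and `retest` are hypothetical stand-ins for a sub-dimension score measured 14 days apart.

```python
# Test-retest sketch (assumption: `test` and `retest` hold the sub-dimension
# scores of the same 30 midwives from two administrations, 14 days apart).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
test = rng.normal(70, 10, size=30)          # illustrative first-administration scores
retest = test + rng.normal(0, 4, size=30)   # illustrative second-administration scores

r, p = pearsonr(test, retest)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")  # r >= 0.70 is taken as adequate stability
```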