Examining validity and reliability of Objective Structured Clinical Examination for evaluation of clinical skills of midwifery undergraduate students: a descriptive study

DOI: https://doi.org/10.21203/rs.2.10419/v1

Abstract

BACKGROUND: Clinical evaluation is one of the main pillars of medical education, and the Objective Structured Clinical Examination (OSCE) is one of the most commonly adopted practical tools for evaluating the clinical and practical skills of medical students. This study examined the validity and reliability of the OSCE for evaluating the clinical skills of undergraduate midwifery students at Kermanshah School of Nursing and Midwifery. METHODS: In this descriptive correlational study, 32 midwifery students performed seven clinical skills at seven stations, each monitored by an observer using a performance checklist. Criterion-related validity was assessed through the correlation between clinical course scores and the total OSCE score. The collected data were analyzed in SPSS (v.20) using correlation tests. RESULTS: The OSCE score was significantly correlated with the mean scores of the clinical courses "Normal and Abnormal Delivery I" (r=0.399, p=0.024) and "Gynecology" (r=0.419, p=0.017), but not with the mean score of the clinical course "Normal Pregnancy I" (r=0.319, p=0.075) or the total average score (r=0.23, p=0.200). Examination of the correlations between the total score and the students' mean scores at the individual stations showed that, of the seven stations, only stations three (communication and medical history taking) and four (childbirth) did not correlate significantly. CONCLUSION: Although the OSCE appears to be an effective and efficient way to evaluate students' clinical competencies and practical skills, the tool could not evaluate all aspects.

Background

The midwifery profession plays a critical role in providing care to mothers and infants before and after childbirth, and midwives contribute greatly to maternal and infant health (1). A midwife should be equipped with critical thinking skills, and fostering these skills requires a transformation of education and evaluation systems. As students' critical thinking improves, it becomes easier to recognize their strengths and weaknesses (2). About 50% of the midwifery education program is devoted to clinical education (3), which is implemented through apprenticeship and internship courses. These courses play a key role in developing the major professional skills of midwifery students (2, 3). One of the main objectives of clinical education is to improve students' clinical skills (3). Evaluation is therefore highly important, both as a way to determine students' level of skill and capability (4) and as a reliable tool to monitor the learning process of apprentices (5).

Evaluation of clinical competency in midwifery students is one of the toughest responsibilities of faculty board members and instructors of health programs (6, 7). In addition, evaluation of clinical skills and competency in midwifery students is a challenge for education programs (8).

Different evaluation methods are needed to evaluate the performance of nurses and midwives (9). One of the most reliable and trusted methods for evaluating medical skills is the Objective Structured Clinical Examination (OSCE), which is designed primarily to evaluate clinical competency (10).

The OSCE is an optimal method for evaluating the clinical competency of individuals (11). Traditional written and oral tests measure only clinical knowledge, whereas the OSCE evaluates both knowledge and skill (12).

In addition, the available clinical evaluation methods are not free of validity and reliability limitations. Structured practical methods can address some of these limitations, and the OSCE is one of the most common structured practical tools (13).

The OSCE is accepted as a reliable tool all around the world (14), and students, instructors, and many others have found it satisfactory (15). It was first introduced by Harden and Gleeson (1979) in Scotland to evaluate the scientific knowledge of students and has since been used in many countries (8, 16). The tool evaluates clinical and practical skills (17) along with decision-making strategies, problem-solving skills, and clinical reasoning (18).

An important consideration in implementing the OSCE in midwifery education before the internship is that this stage is pivotal in the practical training of midwives: at this point, students are expected to have achieved the academic objectives of their program (19, 20).

As a reliable evaluation method, the OSCE produces valuable information that is not obtainable through traditional evaluation methods (21). The test provides identical conditions for all examinees and minimizes sources of interference in the results. In addition, it features accurate scoring and the other elements of a fair test, namely validity, reliability, objectivity, and practicality (22). Throughout the test, participants must demonstrate specific clinical skills at different stations. All students perform the same task at each station, and a standard scoring system is used to evaluate them. Participants move from one station to another, dealing with a specific clinical scenario at each station (23).

The OSCE properly evaluates comprehensive capabilities and competencies because participants are given the chance to demonstrate different skills in different situations (13, 24).

Several studies have examined the validity and reliability of the OSCE. Moattari et al. (2009) examined its criterion-related validity and reported correlations between the OSCE score and the mean theoretical course score (r=0.38, p=0.031) and the mean clinical course score (r=0.52, p=0.005). The highest and lowest correlation coefficients were 0.95 and 0.83, respectively. A split-half test yielded a correlation of 0.61 between the two halves of the OSCE (p≤0.05), and the authors concluded that the OSCE was a valid and reliable tool to evaluate nursing skills (9).

The OSCE is currently used to evaluate clinical skills and has been adopted by Kermanshah School of Nursing and Midwifery since 2015: all midwifery students are evaluated with the OSCE for clinical competency before entering the internship. Given the importance of using valid tests to evaluate midwifery students, the present study aimed to examine the validity and reliability of the OSCE for evaluating the clinical skills of students at Kermanshah School of Nursing and Midwifery before entering the internship. The results can be a step toward better evaluation methods in the education process.

 

Methods

Study setting and design:

The participants were evaluated before beginning the internship course. After the issue was discussed at the department of midwifery, the faculty board members undertook to find the women's health and midwifery references needed to design the OSCE stations; the deadline for bringing in the references was one week. At the same time, the head of department and a faculty board member visited the school's clinical skills training facility and examined the physical space and the available equipment.

The head of department coordinated the rest of the faculty board members to prepare, within one month, a list of the professional behaviors and clinical performances that the OSCE would evaluate in the learners. The head of department and the other faculty board members then determined the evaluation measures of the OSCE, and these measures were discussed at the department to determine the key stations of the test by consensus. Seven stations were agreed upon: one station of basic skills, five stations of specialized midwifery skills, and one station of visual questions. The time allotted to each station was five minutes.

The learners (32 students) were grouped into four groups of seven members and one group of four members, and the evaluation was carried out over two consecutive days. The head of department sent the list of the 32 participants to the head of the education affairs department to determine and announce the time and place of the test. The education department issued examination entrance cards and delivered them to the students. In addition, the list of equipment needed for each station was given to the clinical skills education department. Five stations used moulage, one used patient role-play, and one used a computer; the patient role was played by one of the experienced faculty board members. A briefing session was held for the students by the head of department at the faculty one week before the exam. The scenario required for each station was written by the faculty board members and affixed to the entrance of each partition. In addition, 80 visual question slides and 20 midwifery cases were prepared to examine the scientific level of the students; two slides (displayed on a computer) and one written midwifery case were prepared for each student.

 

Sampling techniques:

This descriptive study was carried out on 32 undergraduate midwifery students who had finished the sixth semester and had not yet begun the midwifery internship at Kermanshah School of Nursing and Midwifery in the spring semester of the 2017-2018 academic year.

 

Data collection:

Checklists were prepared to evaluate the students at each station and were provided to experts for revision and comments. The checklists covered the procedures step by step, and each step had a specific score. Each performance consisted of several practical skills, and the checklist items were scored as failed (0), low (1), moderate (2), good (3), or excellent (4); checklist items concerning diagnosis and knowledge were scored as failed/successful diagnosis or correct/wrong answer. The score for each station was calculated by dividing the total obtained score by the number of items in the related checklist, and the total score was the sum of the scores of all seven stations. The checklists and questionnaire were revised several times by experienced instructors. The skills required at each station are listed in Table 1. The required scenarios and skills were posted at the entrance of each station, and the students were given time to read them before entering. The examiner at each station announced when a student's time at the station was over and the student should move to the next station. The students' cellphones were collected before the start of the test; the students were then allowed to enter the stations. The performance of each participant at each station was assessed using the checklist for that station. An examiner was in charge of controlling the students entering and leaving each station, so that the students had no contact with each other.
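As a minimal illustration of this scoring rule (not the authors' code; the checklist ratings below are hypothetical examples), the per-station and total scores can be computed as follows:

```python
# Sketch of the scoring scheme described above: each station score is the
# total obtained score divided by the number of checklist items, and the
# total OSCE score is the sum of the seven station scores.

def station_score(item_ratings):
    """Station score = sum of item ratings / number of items."""
    return sum(item_ratings) / len(item_ratings)

def total_osce_score(stations):
    """Total score = sum of the scores of all seven stations."""
    return sum(station_score(items) for items in stations)

# Hypothetical ratings for one student. Stations 1-6 use the 0 (failed)
# to 4 (excellent) scale; station 7 (visual questions) is scored
# correct (1) / wrong (0).
ratings = [
    [3, 4, 2, 3],     # station 1: open vein, serum therapy, blood pressure
    [2, 3, 3],        # station 2: resuscitation equipment
    [4, 3, 3, 4],     # station 3: communication and medical history
    [3, 2, 3, 3, 4],  # station 4: childbirth
    [4, 4, 3],        # station 5: episiotomy
    [3, 3, 4],        # station 6: speculum and bimanual exam
    [1, 0, 1, 1],     # station 7: visual questions
]
print(round(total_osce_score(ratings), 2))  # prints 19.92
```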

 

Data analysis:

Content validity and criterion-related validity were used to determine the validity of the test. To obtain the latter, the mean scores of the apprenticeship courses recorded in the students' files were used, and the correlation between these scores and the total OSCE score was calculated. To measure content validity, the checklists and questionnaire of the test, together with a report of the study objectives, were provided to four experts, and their feedback was used in the design of the final version of the test. To determine internal consistency, the correlation between the total test score and the score of each station was calculated. Finally, to determine the reliability of the test, Cronbach's alpha was calculated with emphasis on internal consistency.
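The study ran these analyses in SPSS (v.20); the sketch below is an illustrative NumPy/SciPy version of the same computations (function names and the score-matrix layout are assumptions, not the authors' procedure):

```python
# Illustrative sketch of the reliability and internal-consistency analyses:
# Cronbach's alpha over station scores, and station-vs-total correlations.
import numpy as np
from scipy import stats

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_students, n_stations) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def station_total_correlations(scores):
    """Correlation of each station's scores with the total OSCE score."""
    scores = np.asarray(scores, dtype=float)
    total = scores.sum(axis=1)
    return [stats.pearsonr(scores[:, j], total)
            for j in range(scores.shape[1])]

# Criterion-related validity would follow the same pattern, e.g.:
# r, p = stats.pearsonr(course_mean_scores, scores.sum(axis=1))
```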

 

Results

Data analysis showed that the mean age of the students was 22.78±1.40 years (range 21-26 years), and 22.2 percent of the participants were unmarried. The total average score of the students was 15.98±1.07, ranging from 13.38 to 18.09. The skills evaluated at the different stations are listed in Table 1.

The correlations of the OSCE score with the mean scores of the clinical courses "Normal Pregnancy I", "Normal and Abnormal Delivery I", and "Gynecology" and with the total average score were 0.319 (p=0.075), 0.399 (p=0.024), 0.419 (p=0.017), and 0.23 (p=0.200), respectively (Table 2).

The Kolmogorov-Smirnov (KS) test was used to examine the normality of the data. The data of the clinical courses "Normal Pregnancy I" and "Normal and Abnormal Delivery I" were not normally distributed (p<0.05), whereas those of the clinical course "Gynecology" and the total average score were normally distributed (p>0.05).

The Spearman test showed a positive but non-significant correlation between the clinical course "Normal Pregnancy I" and the OSCE score (r=0.319, p=0.075) and a significant correlation between the clinical course "Normal and Abnormal Delivery I" and the OSCE score (r=0.399, p=0.024).

The Pearson test showed a significant correlation between the clinical course "Gynecology" and the OSCE score (r=0.419, p=0.017) and no significant correlation between the total average score and the OSCE score (r=0.23, p=0.200).

The results of the KS test showed that the score distributions of stations 1, 2, 5, 6, and 7 were normal (p>0.05) and those of stations 3 and 4 were not (p<0.05). To examine internal consistency, the correlation between students' total OSCE scores and their mean scores at each station was examined. The Pearson correlation test showed a significant correlation between the total OSCE score and the scores of stations 1, 2, 5, 6, and 7 (p<0.05).

In addition, the Spearman correlation test showed no significant correlation between the total OSCE score and the scores of stations 3 and 4 (p>0.05) (Table 3).
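The choice between the Pearson and Spearman tests above follows the KS normality check. A compact sketch of this selection logic (an illustration of the stated procedure, not the authors' SPSS syntax; the function name is hypothetical):

```python
# Test-selection logic: KS normality check on both variables, then
# Pearson for normally distributed data and Spearman otherwise.
import numpy as np
from scipy import stats

def correlate(x, y, alpha=0.05):
    x, y = np.asarray(x, float), np.asarray(y, float)

    def is_normal(v):
        # KS test against N(0, 1) after standardizing the sample.
        z = (v - v.mean()) / v.std(ddof=1)
        return stats.kstest(z, "norm").pvalue > alpha

    if is_normal(x) and is_normal(y):
        return ("pearson", *stats.pearsonr(x, y))
    return ("spearman", *stats.spearmanr(x, y))
```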

 

Discussion

The validity and reliability of the OSCE for midwifery students were supported. This finding is consistent with other studies in the medical field (9, 17, 25, 26).

Villegas et al. (2016) showed that the OSCE was a reliable method to evaluate clinical competency and recommended using it as part of the educational curriculum (27).

Mitchell et al. (2014) used the OSCE to prepare undergraduate midwifery students in Australia, with stations such as examining infants born by cesarean section and post-delivery monitoring. Based on the midwifery students' comments, the OSCE had a positive effect on their self-confidence and performance (28).

In a review study, Smith et al. (2012) showed that the clinical competency of nursing and midwifery students improved when the OSCE was used for skills such as midwifery emergencies, pharmacology, drug prescription, breastfeeding, and supplementary nutrition in infants. They argued that the test was a valuable strategy for evaluating clinical competency and improving students' knowledge, and that it was a motivating factor that created diversity in midwifery education (29). Omu (2016) stated that the OSCE was a gold-standard evaluation method and recommended it for the evaluation of clinical competency and psychomotor skills (30).

Despite differences among the mentioned studies in the skills under study, their results regarding strengths and weaknesses are consistent.

The criterion-related (concurrent) validity results showed a positive and significant relationship between the total OSCE score and the clinical courses "Normal and Abnormal Delivery I" and "Gynecology". Farajzadeh et al. (2012) used stepwise multivariate regression to determine the predictability of the OSCE score from clinical and theoretical course scores; clinical course scores predicted the OSCE score, while theoretical course scores showed no relationship with it (4). Moattari et al. (2009) showed a significant positive relationship between OSCE scores and nursing clinical course scores and a weak positive relationship between OSCE scores and nursing theoretical course scores (9). Hosseini et al. (2013) found a direct but non-significant relationship between the total OSCE score and the biochemistry written exam score (26). Raj et al. (2007) reported no significant relationship between the rheumatology written exam score (hand section) and the total OSCE score (25). Gerrow et al. (2003) examined the criterion-related validity of the OSCE by comparing the dentistry practical skills of 2,317 senior students with their written board exam scores over five years and showed a positive and significant relationship between the written exam scores and the OSCE results (31). Factors such as the number of subjects, the term of study, and the number and type of questions may explain the differences between the present study and Gerrow et al. (2003) (31). The gap between theoretical and practical courses has always been an educational challenge, and the low correlation between the OSCE and theoretical courses supports the existence of this gap (9). Allami and Saffari (2014) proposed not designing OSCE stations around written scenarios and recommended stations based on patient role-play, moulage, and mannequins, because written scenarios concentrate the test on the sections already measured by the pre-apprenticeship written exam (32). Nasri et al. (2010) held that the OSCE outperformed other standard clinical evaluation methods (33).

To examine internal consistency, the correlation between the total OSCE score and each student's mean score at each station was computed. Of the seven stations, the correlation coefficient was not significant for two (stations 3 and 4). The significant correlations between the majority of the stations and the total OSCE score indicate that the participants had gained the required skills during the pre-internship courses. Hosseini et al. (2013) reported that, of ten stations, the correlation coefficients of two (8 and 10) were not significant (26). Taghva et al. (2007) studied psychiatry skills and reported that, of nine stations, the correlation coefficient of only one was non-significant (17). Moreover, Moattari et al. (2009) evaluated nursing skills and reported that, of ten stations, only two had low internal consistency (9). Differences among these studies can be explained by the design of the stations, the way the OSCE briefing sessions were held, and the complexity of the instructions. Here, the Cronbach's alpha of the OSCE was above 0.7, which is consistent with previous reports (26, 34). Bould et al. (2009) reported that a Cronbach's alpha between 0.8 and 1 indicates good reliability of the tool under study (35), and Metsamuronen et al. (2006) reported that the minimum acceptable Cronbach's alpha is 0.7 (36).

Notable limitations of the study include the tiredness of the evaluators and the students' concern that poor performance might leave a negative impression on their instructors. It was not possible to implement the OSCE in an actual clinical setting because the equipment needed to implement the stations was lacking; the test was therefore performed in the school's skills laboratory. In addition, the test-retest reliability of the tool was not determined, an issue that future studies should address by repeating the measurements immediately and one day, 72 hours, and two months after the test.

Conclusions

Although the OSCE appears to be an effective and efficient way to measure students' clinical competencies and practical skills, it could not evaluate all aspects. The effectiveness of the tool depends on factors such as experienced evaluators, access to references and equipment, adequate time to design and implement the test, accurate planning, availability of a proper space to hold the test, and suitable measurement tools.

Abbreviations

OSCE: Objective Structured Clinical Examination

Declarations

Ethics approval and consent to participate

Written consent was obtained from the study participants, and the study was approved by the Ethics Committee of Kermanshah University of Medical Sciences (approval number IR.KUMS.REC.1397.474).

Consent for publication

Not applicable.

Availability of data and material

The datasets used and/or analyzed during the current study are available from the first author on reasonable request.

Competing interests

The authors declare that they have no competing interests.

Funding

This study was funded by Kermanshah University of Medical Sciences. The funder had no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.

Acknowledgements

This article is a mentorship project of the Student Research Committee of Kermanshah University of Medical Sciences, under ethics code IR.KUMS.REC.1397.474. The authors wish to express their gratitude to the officials of the university and all the participants for their support.

References

1. Fleming V, Luyben A. Establishing a Master's for Europe: a transnational model for higher education. Midwifery 2016;33:52-4.

2. Carter AG, Creedy DK, Sidebotham M. Development and psychometric testing of the Carter Assessment of Critical Thinking in Midwifery. Midwifery 2016;34:141-9.

3. Malakooti N, Bahadoran P, Ehsanpoor S. Assessment of the midwifery students' clinical competency before internship program in the field based on the objective structured clinical examination. Iranian J Nursing Midwifery Res 2018;23:31-5.

4. Farajzadeh Z, Saadatjoo SA, Tabiee SH, Hosseini MR. Reliability and validity of OSCE in evaluating clinical skills of emergency medicine students of Birjand University of Medical Sciences. Journal of Birjand University of Medical Sciences 2012;18(4):312-9.

5. Moosavi P, Montazeri S. Check the rate of achievement to the minimum learning in clinical units and maternity department and barriers to its implementation from the perspective of the midwifery students. Journal of Educational Development Jundishapur 2012;3:71-80.

6. Sheykh Aboomasoodi R, Moghimian M, Hshemi M, Kashani F, Karimi T, Atashi V, et al. Comparison of the effects of Objective Structured Clinical Examination (OSCE) by direct and indirect supervision on nursing students' examination anxiety. Nurs Educ 2015;4:1-8.

7. Noohi E, Motsadi M, Haghdoost A. Clinical teachers' viewpoints towards objective structured clinical examination in Kerman University of Medical Sciences. Iran J Med Educ 2008;8(1):113-9.

8. Devine RD. Evaluation of undergraduate students using Objective Structured Clinical Evaluation. Research Briefs 2007;46:76.

9. Moattari M, Abdollah-zargar S, Mousavinasab M, Zare N, Beygi Marvdast P. Reliability and validity of OSCE in evaluating clinical skills of nursing students. J Med Educ 2009;13(3):79-85.

10. Saboori A, Dastjerdi V, Mahdian M, Farazi Fard M. Examining outlook of postgraduate students of Dental Faculty of Medical Sciences of Shahid Beheshti University about objective structured clinical examination (OSCE). Journal of Dental School, Shahid Beheshti University of Medical Sciences 2010;28:88-94.

11. Shayan Sh, Saboori M, Salehi A. Practices of assessment in clinical competencies about medical sciences. University of Medical Sciences 2007;1:25-106.

12. Bhatnagar KR, Saoji VA, Banerjee AA. Objective structured clinical examination for undergraduates: is it a feasible approach to standardized assessment in India? Indian J Ophthalmol 2011;59(3):211-4.

13. Bolhari J. [OSCE Instructional Guideline in Psychiatry]. 1st ed. Tehran: Arjomand Publication; 2011.

14. Park WB, Kang SH, Myung SJ, Lee YS. Does Objective Structured Clinical Examinations score reflect the clinical reasoning ability of medical students? Am J Med Sci 2015;350:64-7.

15. Eldarir S, Nagwa A, Hamid A. Objective Structured Clinical Evaluation (OSCE) versus traditional clinical students achievement at maternity nursing: a comparative approach. IOSR Journal of Dental and Medical Sciences 2013;4:63-8.

16. Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ 1979;13(1):41-54.

17. Taghva A, Rasoulian M, Panaghi L, et al. Validity and reliability of the first objective structured clinical examination (OSCE) in psychiatry in Iran. IJPCP 2007;13(1):17-24.

18. Silva CC, Lunardi AC, Mendes FA, Souza FF, Carvalho CR. Objective structured clinical evaluation as an assessment method for undergraduate chest physical therapy students: a cross-sectional study. Rev Bras Fisioter 2011;15(6):481-6.

19. Moosavi P, Montazeri S, Poorghayoomi S, Farahnaz CH. Determining minimum learning in neonatal units, maternal and child health, family planning, and its administrative obstacles from the perspective of midwifery students. Journal of Medical Education Development Center Jundishapur 2015;6:123-30.

20. Moosavi P, Montazeri S, Poorghayoomi S, Farahnaz CH. Determining minimum learning in the clinic and gynecology units and its administrative obstacles from the perspective of midwifery students. Journal of Medical Education Development Center Jundishapur 2013;3:71-80.

21. Erfanian F, Khadivzadeh T. Evaluation of midwifery students' competency in providing intrauterine device services using objective structured clinical examination. Iran J Nurs Midwifery Res 2011;16(3):191-6.

22. Razavi M. Organized clinical examinations. Tehran: Development Educational Center of Tehran University; 2001.

23. Eftekhar H, Labaf A, Anvari P, Jamali A, Sheybaee-Moghaddam F. Association of the pre-internship objective structured clinical examination in final year medical students with comprehensive written examinations. Med Educ Online 2012;17.

24. Rasoulian M, Taghva A, Panaghi L, Zahiroddin A, Salehi M, Ghalebandi M. Qualitative assessment of the first objective structured clinical examination (OSCE) in psychiatry in Iran. IJPCP 2007;13(1):12-6.

25. Raj N, Badcock LJ, Brown GA, Deighton CM, O'Reilly SC. Design and validation of 2 objective structured clinical examination stations to assess core undergraduate examination skills of the hand and knee. J Rheumatol 2007;34(2):421-4.

26. Hosseini S, Vartanoosian J, Hosseini F, Farzin Fard F. Validity and reliability of OSCE/OSPE in assessing biochemistry laboratory skills of freshman nursing students of Shahid Beheshti University of Medical Sciences. Journal of Nursing and Midwifery Faculty, Shahid Beheshti University of Medical Sciences & Health Services 2013;23(8):34-43.

27. Villegas N, Cianelli R, Fernandez M, et al. Assessment of breastfeeding clinical skills among nursing students using the Objective Structured Clinical Examination (OSCE). Investigación en Educación Médica 2016;5:244-52.

28. Mitchell ML, Jeffrey CA, Henderson A, et al. Using an Objective Structured Clinical Examination for Bachelor of Midwifery students' preparation for practice. Women Birth 2014;27:108-13.

29. Smith V, Muldoon K, Biesty L. The Objective Structured Clinical Examination (OSCE) as a strategy for assessing clinical competence in midwifery education in Ireland: a critical review. Nurse Educ Pract 2012;12:242-7.

30. Omu FE. Attitudes of nursing faculty members and graduates towards the Objective Structured Clinical Examination (OSCE). Open Journal of Nursing 2016;6:353-64.

31. Gerrow JD, Murphy HJ, Boyd MA, Scott DA. Concurrent validity of written and OSCE components of the Canadian dental certification examinations. J Dent Educ 2003;67(8):896-901.

32. Allami A, Saffari F. Correlation of OSCE and pre-internship examination results. JMED 2014;7(13):57-63.

33. Nasri Kh, Kahbazi M, Nasri Sh. Medical students' viewpoints toward basic sciences and pre-internship comprehensive exams in Arak University of Medical Sciences. Iran J Med Educ 2010;10(1):82-90.

34. Sahebalzamani M, Farahani H, Jahantigh M. Validity and reliability of direct observation of procedural skills in evaluating the clinical skills of nursing students of Zahedan Nursing and Midwifery School. Zahedan J Res Med Sci (ZJRMS) 2012;14(2):76-81.

35. Bould MD, Crabtee NA, Naik VN. Assessment of procedural skills in anesthesia. Br J Anaesth 2009;103(4):472-83.

36. Metsamuronen J, Tarjome-Kamkar SH, Asraie A. Mabaniye nazari azmoon va azmoonsazi [Theoretical foundations of testing and test construction]. Tehran: Behineh Press; 2006.

Tables

Table 1. Skills evaluated at the different stations of the OSCE

Station   Evaluated skill
1         Establishing an open vein, serum therapy, measuring blood pressure
2         Working with resuscitation equipment
3         Communication and medical history taking
4         Childbirth
5         Episiotomy
6         Speculum and bimanual examination
7         Visual questions

 

Table 2. Correlations of the clinical course scores ("Normal Pregnancy I", "Normal and Abnormal Delivery I", "Gynecology") and the total average score with the total OSCE score

Variable                                            Mean ± SD     r       p-value
Clinical course "Normal Pregnancy I"                16.46±0.26    0.319   0.075
Clinical course "Normal and Abnormal Delivery I"    16.13±0.60    0.399   0.024
Clinical course "Gynecology"                        17.86±0.33    0.419   0.017
Total average scores                                15.98±1.07    0.23    0.200
Total OSCE score                                    11.53±1.74    1       -

 

Table 3. Correlation coefficients between the scores of the different stations and the total OSCE score

Variable       Mean ± SD     r       p-value
Station 1      1.24±0.41     0.726   0.001
Station 2      1.29±0.34     0.394   0.026
Station 3      1.54±1.02     0.340   0.057
Station 4      1.47±0.26     0.291   0.106
Station 5      1.92±0.16     0.387   0.029
Station 6      1.97±0.26     0.501   0.003
Station 7      2.57±1.06     0.805   0.001
Total score    11.53±1.74    1       -