A variety of approaches are used for the MSK screening examination during the PPE. Most participants believe that the MSK exam should be performed and that it effectively screens for current injuries, but the majority of providers do not believe that the MSK exam adequately screens for future injury, an overriding goal of the PPE.6 These data confirmed our perception that the MSK screen does not identify future injury risk and that providers do not believe in the predictive ability of the screen. To that end, almost a third of respondents reported that they do not perform a physical exam at all. This perception also agrees with the literature investigating the effectiveness of the MSK PPE, which shows no convincing evidence that the MSK PPE accurately identifies at-risk athletes or prevents injury.4,9,10
Most participants were aware of the 4th Edition PPE Monograph and reported that they primarily use it as their guideline, contrary to our hypothesis. Even with participants reporting use of the monograph as their guideline for the MSK screen, there is still considerable variability in how the screen is performed: half of the participants use the 90-second MSK screening test, 14% use the Functional Movement Screen (FMS), and close to a third do not use a physical exam at all for the screen. Approximately 46% of participants reported lack of time as a barrier to performing some portions of the MSK PPE, which may partly explain this variability. Another explanation for the inconsistency could be educational differences among providers, as there was variability in where providers were taught the MSK exam. Fifty-one percent of respondents received training for the MSK PPE in residency and 62% in fellowship programs, which highlights that standardization of curricula could improve the quality of MSK screening.
Our findings suggest a lack of understanding of the PPE monograph, given the wide variability in MSK screening techniques used despite awareness of the PPE guidelines. This calls attention to the need for continued standardization of the MSK screening exam, as well as further research to validate objective screening exams for the prevention of MSK injury. Screening exams should be explored to examine the relationships between screening tests and risk factors in relevant populations in order to determine which tests are appropriate and effective in identifying high-risk populations.9 A recent study by Teyhen et al. showed in a military population that the sum of a number of risk factors produced a highly sensitive model for identifying those at risk for MSK injury.11 This highlights that future MSK screening should not rely on a single test; rather, a multivariate model incorporating multiple risk factors could successfully identify a high-risk population. Injury prevention programs have been shown to be effective in reducing injuries in athletes across a range of sports.12,13,14 Identifying the high-risk population would provide an opportunity to direct limited prevention resources to the most at-risk individuals, with the ultimate goal of reducing overall injury risk.
There are several limitations to our study. Our survey is the first we are aware of that sought provider insight into the MSK exam. As with any questionnaire-based research study, validation of the instrument used is of paramount concern. Validation of a questionnaire requires a process to determine construct, criterion, and content validity, among others. We addressed content validity through independent review of the questionnaire by three sports medicine physicians (two primary care and one orthopedic), two family medicine physicians, and one certified athletic trainer not involved in the creation of the questionnaire. Because there are no other instruments available to assess similar information, we were unable to assess criterion validity. Given the sample design of the questionnaire, we did not believe that it was necessary to assess construct validity.15 The study was also limited by a low response rate of 9%; however, other web-based survey studies targeting members of the AMSSM have had similar response rates.16,17,18 Additionally, it is important to note that the true response rate is slightly higher than 9% because of crossover of providers who are active members of both the AAFP and AMSSM. The low response rate from providers specializing in fields other than family medicine did not allow for further analysis across specialties.