In this cross-sectional study, we used Amazon Mechanical Turk as the platform to recruit our study population. Previous studies show that Amazon Mechanical Turk can yield high-quality data that is as representative of the U.S. population, in terms of gender, age, race, and education, as traditional subject pools (18,19). Furthermore, participants from this online platform have been shown to produce reliable results consistent with standard decision-making biases (20).
Our target sample size was 1,000 participants, so we posted two online surveys as tasks on Amazon Mechanical Turk, with a maximum of 500 respondents per survey. We randomized the opt-in process by posting both surveys on the same day with the same title, description, and economic reward per response. The two surveys therefore had almost identical positioning within Amazon Mechanical Turk, and participants had no means to differentiate one from the other before entering the survey. Regarding inclusion and exclusion criteria, we included participants who were U.S. residents and at least 18 years old, and we excluded respondents who tried to fill in the same survey more than once.
We used a vignette-based survey methodology to evaluate Americans' willingness to adopt wearables (WTAW) promoted by health insurance wellness programs. Given our research methods and data collection process, the institutional review board deemed this research exempt from ethics review.
The use-cases presented in the survey vignettes were all based on insights from scientific articles (2,4,13,14), reports about health insurance technology (3,7–9), and recent news about trends in health insurance innovation (10,21,22). The six use-cases were: health promotion (suggesting actions to improve health status through a health assessment based on data from wearable devices); early detection of diseases (using data from wearable devices to identify certain diseases or disorders at an early stage); prediction of future health risks (using data from wearable devices to infer the likelihood of developing a certain disease or disorder in the future); adherence tracking (using wearable devices to identify specific movements such as smoking, drinking, eating, or pill intake, and giving actionable insights based on those data); personalized products and services (offering exclusive products and services related to wellness, health, and insurance based on data from wearable devices); and automated underwriting (speeding up the process of applying for or renewing an insurance policy by prefilling some fields with data from wearable devices). For additional information about the vignettes and the questions used in the surveys, see APPENDIX 1 in the supplementary material.
Willingness to adopt wearables
Each survey had two main sections. The first consisted of demographic questions to characterize the sample in terms of age group, gender, income level, ethnicity, state of residence, marital status, type of health insurance, and employment status. The second contained a set of six hypothetical use-cases. Each use-case presented a scenario in which a fictional health insurance policyholder named “Peter” had to choose whether to “Accept” or “Do not accept” a new health insurance service that made it mandatory to use a wearable device and share its data with the health insurance company. In every scenario, respondents had to select what they would do if they were Peter. If they chose “Accept,” we considered that they had a high WTAW for that specific use-case, whereas if they chose “Do not accept,” we considered that they had no WTAW.
To evaluate the main barriers to accepting health insurance use-cases based on wearable devices, in addition to the dichotomous choice to accept or not accept, we offered an “Accept just if…” option where respondents could select the conditions under which they would be willing to accept. These conditions included a free service; a free smart band; a 100% accurate service; data not shared with third parties; and an “other” field where they could write their own requirements. If they chose “Accept just if…,” we considered that they were willing to accept that specific use-case, but only under certain conditions.
Influence of economic incentives
We created two surveys instead of one to test the influence of economic incentives on Americans' WTAW. The only difference between the surveys was that in one the hypothetical use-cases included an economic incentive in their description, while in the other they did not. By comparing the results of the survey without an economic incentive and the survey with an economic incentive, we could therefore analyze how economic incentives affect respondents' choice to “Accept” or “Do not accept” a specific use-case. The economic incentives were not the same across the six use-cases; each was adapted to the specific context of its use-case to make the scenario as realistic as possible. Furthermore, the amount of the economic incentives was not specified because, given the innovativeness of the use-cases, there was no clear benchmark in the health insurance market.
We tabulated participant characteristics for the total sample (n=1,000) and by survey (n=500 each). We visualized the main exposure of interest using three charts: two for the overall willingness to adopt (respondents who selected “Accept” or “Accept just if…”) or not to adopt (respondents who selected “Do not accept”) wearables, by type of use-case and by the absence or presence of an economic incentive; and one for the main barriers to adopting wearables (the conditions selected under “Accept just if…”).
Additionally, for the secondary outcome, we calculated the risk ratio of WTAW in each use-case in the presence versus absence of an additional economic incentive, and we considered a two-tailed P<0.05 to be statistically significant. We used Poisson regression with robust variance to estimate the relative risk of WTAW; we adjusted for the demographic variables age group, gender, education level, income level, ethnicity, health insurance, marital status, and employment status; and we clustered on state of residence to correct for correlated observations within each state. We chose log-Poisson regression with robust variances over logistic regression because robust Poisson models are more robust to outliers than log-binomial models when estimating relative risks or risk ratios for common binary outcomes, and because the odds ratio is a biased estimate of the relative risk when the outcome is not rare (i.e., >5% prevalence) (23,24). Analyses used Stata software, version 13.1 (StataCorp LP, College Station, TX).
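As a minimal illustration of the risk-ratio estimand described above (not the adjusted, clustered model itself, which was fit in Stata), the sketch below computes an unadjusted risk ratio from a 2×2 table with a log-scale Wald confidence interval and two-tailed p-value. The function name and all counts are hypothetical, not study data.

```python
import math

def risk_ratio(a, b, c, d):
    """Unadjusted risk ratio for a 2x2 table (hypothetical helper).

    With incentive:    a accepted, b did not accept
    Without incentive: c accepted, d did not accept

    Returns (RR, 95% CI lower, 95% CI upper, two-tailed p)
    using the standard log-scale Wald approximation.
    """
    p1 = a / (a + b)          # acceptance proportion with incentive
    p0 = c / (c + d)          # acceptance proportion without incentive
    rr = p1 / p0
    # Standard error of log(RR)
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    z = math.log(rr) / se
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    # Two-tailed p-value from the normal approximation
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return rr, lower, upper, p

# Hypothetical counts, for illustration only: 300/500 accepted with an
# incentive vs 250/500 without, giving RR = 0.6 / 0.5 = 1.2.
rr, lower, upper, p = risk_ratio(300, 200, 250, 250)
```

A confidence interval on the log scale is the conventional choice here because the sampling distribution of log(RR) is approximately normal, while that of the raw ratio is skewed.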