There have been few attempts to characterize dropouts in crowdsourcing markets, arguably due to disagreement over how to define a dropout. In this paper, we evaluate three different approaches to designating a crowd dropout and demonstrate that a prolonged interruption is enough to characterize one. Our analyses of the crowdsourcing platform Flightfox show that the persistence of crowd workers depends on the remuneration and the nature of the task, in addition to the previously reported success rate. Based on this, we show that, with an appropriate selection of features, dropouts in a crowdsourcing market can be predicted with high accuracy (more than 90%), as demonstrated on both cross-validation and independent test sets. A comprehensive analysis across different machine learning algorithms and feature selection approaches demonstrates that graph-based robust feature selection performs best when combined with a bootstrap aggregation (bagging) approach. It also highlights the importance of effective communication among crowd workers who are prone to dropping out, a hitherto unreported finding. The overall investigation not only provides new perspectives on designating a dropout but also enables their effective recognition. Moreover, it highlights the significance of incorporating feedback mechanisms in crowdsourcing environments.
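
The following is a minimal sketch of the prediction setup outlined above, assuming a scikit-learn pipeline. The feature names, the synthetic data, and the use of BaggingClassifier as the bootstrap-aggregation step are illustrative assumptions, not the paper's actual implementation; the graph-based robust feature selection is stood in for by a precomputed index list, since its details are not given here.

import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)

# Hypothetical worker features: remuneration, task type, success rate,
# interruption length, communication activity (all synthetic here).
X = rng.normal(size=(1000, 5))
# Synthetic dropout labels, driven mainly by interruption length (column 3).
y = (X[:, 3] + 0.3 * rng.normal(size=1000) > 0.5).astype(int)

# Placeholder for graph-based robust feature selection: the indices of
# the features retained by that step (assumed known here).
selected = [0, 2, 3, 4]
X_sel = X[:, selected]

X_train, X_test, y_train, y_test = train_test_split(
    X_sel, y, test_size=0.25, random_state=0, stratify=y
)

# Bootstrap aggregation (bagging) over decision-tree base learners.
clf = BaggingClassifier(
    estimator=DecisionTreeClassifier(max_depth=5),
    n_estimators=100,
    random_state=0,
)

# Evaluate on cross-validation and on a held-out test set, mirroring
# the two evaluation settings mentioned above.
cv_acc = cross_val_score(clf, X_train, y_train, cv=5).mean()
clf.fit(X_train, y_train)
test_acc = clf.score(X_test, y_test)
print(f"CV accuracy: {cv_acc:.3f}, test accuracy: {test_acc:.3f}")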