Despite 46 years of seizure prediction research, few devices or systems have undergone clinical trials or reached commercialisation, and the most recent state-of-the-art approaches are not used to their full potential. This suggests the existence of social barriers to new methodologies. Based on the literature, we conducted a qualitative study of the seizure prediction ecosystem to identify these barriers. Using Grounded Theory and Actor-Network Theory, we drew hypotheses from the data and considered how technology shapes social configurations and interests. We conclude that, for seizure prediction, as long as an algorithm proves useful to the patient, it may suffice to explain the model's decisions rather than to obtain intrinsically interpretable models. Accordingly, we argue that it is possible to develop robust prediction models, including black-box systems to some extent, while avoiding data bias, ensuring patient safety, and complying with legislation, provided they can deliver human-comprehensible explanations.