Identification of variables truly associated with an outcome, whilst minimising false positives,
is critical to any data analysis. This is particularly challenging in wide datasets, where variables may outnumber observations and the proportion
of false positives tends to increase with the number of variables. Whilst stability selection has
been demonstrated to improve model performance, arbitrary stability thresholds mean that performance
improvements are model- and dataset-dependent. We propose a novel permutation-based objective
stability threshold (POST) to enhance the stability selection procedure. The performance of POST
is evaluated through simulation analysis, and performance benefits of POST are demonstrated in
comparison with conventional approaches. Furthermore, the advantages of statistical triangulation
(the simultaneous use of multiple models) alongside POST are highlighted. To facilitate the implementation
of these methods, we present the R package stabiliser, which is designed to improve inferential
model performance regardless of model type or dataset structure.
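To make the idea concrete, the following is a minimal illustrative sketch of a permutation-based stability threshold, written in Python rather than R and using a deliberately simple selection rule (top-k absolute correlation on bootstrap resamples). It is not the stabiliser package's implementation; the function names, the selection rule, and all parameter choices here are illustrative assumptions. The key step it demonstrates is POST's logic: permute the outcome to destroy any true associations, rerun the stability procedure on the permuted data, and use a high quantile of the resulting null stabilities as an objective selection threshold.

```python
import numpy as np

def selection_proportions(X, y, n_boot=50, top_k=5, rng=None):
    """Stability of each variable: across bootstrap resamples, how often is it
    among the top_k variables most correlated (in absolute value) with y?
    (Illustrative selection rule, not the one used by stabiliser.)"""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)           # bootstrap resample
        Xb, yb = X[idx], y[idx]
        corr = np.abs([np.corrcoef(Xb[:, j], yb)[0, 1] for j in range(p)])
        counts[np.argsort(corr)[-top_k:]] += 1     # count selections
    return counts / n_boot

def post_threshold(X, y, n_perm=20, quantile=1.0, rng=None):
    """POST-style threshold: permute y to break real associations, rerun the
    stability procedure, and take a high quantile of the maximum null
    stability as the threshold real variables must exceed."""
    rng = rng or np.random.default_rng(1)
    null_max = [selection_proportions(X, rng.permutation(y), rng=rng).max()
                for _ in range(n_perm)]
    return np.quantile(null_max, quantile)

# Simulated example: 3 truly associated variables out of 20.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 20))
y = X[:, 0] + X[:, 1] + X[:, 2] + rng.normal(size=200)

stability = selection_proportions(X, y)
threshold = post_threshold(X, y)
selected = np.flatnonzero(stability >= threshold)
print(sorted(selected))
```

In this simulation, the truly associated variables are selected in nearly every resample, while permuting the outcome shows how stable a null variable can appear by chance alone; only variables exceeding that empirical null level are retained, which is the objective threshold the abstract contrasts with arbitrary fixed cut-offs.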