Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and a lack of robust methods raise concerns about the generalizability of results. As insights into minimally necessary dataset sizes are scarce, this study explores domain-specific learning curves for intervention dropout prediction. Prediction performance is analyzed as a function of dataset size (N=100-3,654), feature groups (F=2-129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N≤300) overestimate predictive power. For uninformative feature groups, prediction performance was negatively correlated with dataset size. Sophisticated models overfitted on small datasets but were crucial for maximizing test performance on larger datasets. While N=500 mitigated overfitting, performance did not converge until N=750-1,500. Consequently, we propose a minimum dataset size of N=500-1,000, depending on feature complexity and information value. Thus, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.
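
For illustration, the kind of learning-curve analysis described above could be sketched along the following lines. This is a minimal sketch using scikit-learn, not the study's actual pipeline: the synthetic stand-in data, the chosen subsample sizes, and the two example classifiers (Naive Bayes and a small neural network) are illustrative assumptions.

```python
# Sketch of a learning-curve analysis for dropout prediction.
# Assumes a tabular feature matrix X and binary dropout labels y;
# a synthetic dataset stands in for real intervention data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(42)
# Synthetic stand-in: 3,654 samples, 129 features, only a few informative.
X, y = make_classification(n_samples=3654, n_features=129, n_informative=10,
                           weights=[0.6, 0.4], random_state=42)

models = {
    "naive_bayes": GaussianNB(),
    "neural_net": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                      random_state=42),
    ),
}

# Subsample sizes spanning the range examined in the study.
subset_sizes = [100, 300, 500, 750, 1000, 1500, 3654]
for n in subset_sizes:
    idx = rng.choice(len(y), size=n, replace=False)
    for name, model in models.items():
        # 5-fold CV on the subsample; the train/test gap indicates overfitting.
        scores = cross_validate(model, X[idx], y[idx], cv=5,
                                scoring="roc_auc", return_train_score=True)
        print(f"N={n:5d} {name:12s} "
              f"train AUC={scores['train_score'].mean():.3f} "
              f"test AUC={scores['test_score'].mean():.3f}")
```

Plotting the test scores against N would yield the learning curves: in such a setup, the gap between train and test scores typically shrinks as N grows, which is the pattern the abstract describes for small versus larger datasets.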