Machine learning has reached a level of maturity at which improvements depend less on network architecture than on the availability of high-quality training data or of usable prior knowledge. Because the former are very expensive to obtain, a growing number of groups are trying to use prior knowledge, in the form of ontologies or other structured knowledge representation formats, to advance the boundaries of AI. Here, I review two approaches to endowing stochastic models with prior knowledge: via probabilistic networks and via deep learning with priors. I analyse the latter in detail using AlphaFold, a well-known deep neural network model designed to predict protein structures. While AlphaFold is an impressive human cultural achievement and demonstrates an elegant and effective use of prior knowledge in stochastic learning, its limitations reflect the upper bounds on the kinds of processes that can be automated by Turing machines. As the analysis confirms, such machines can only compute inferences within the boundaries of established knowledge; they cannot create genuinely new knowledge. AlphaFold can indeed predict many protein structures, but only for proteins that are homologous to proteins whose structures have already been determined experimentally. Other proteins will still have to be analysed experimentally in the classical, time- and resource-consuming fashion.
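To make the second approach concrete, the following minimal sketch (hypothetical, not taken from AlphaFold or any system reviewed here) shows the simplest way prior knowledge can enter a learning objective: as a penalty term pulling a parameter towards a value assumed to be known in advance. Mathematically, such a quadratic penalty corresponds to a Gaussian prior on the parameter, so minimising the penalised loss amounts to maximum a posteriori estimation. All names and numerical values below are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of "deep learning with priors": fit y = w * x to
# noisy data while encoding the prior belief that w lies near a known
# literature value w_prior. The term lam * (w - w_prior)**2 is (up to a
# constant) the negative log of a Gaussian prior on w, so minimising the
# penalised loss is MAP estimation rather than plain least squares.

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 50)
y = 2.3 * x + rng.normal(0.0, 0.1, 50)  # true slope 2.3, hidden from the model

w_prior = 2.0   # assumed prior knowledge (illustrative value)
lam = 1.0       # strength of the prior relative to the data
w, lr = 0.0, 0.1

for _ in range(500):
    residual = w * x - y
    # gradient of: mean((w*x - y)^2) + lam * (w - w_prior)^2
    grad = 2.0 * np.mean(residual * x) + 2.0 * lam * (w - w_prior)
    w -= lr * grad

print(f"penalised estimate of w: {w:.3f}")  # pulled from ~2.3 towards 2.0
```

The same division of labour scales up: in deep networks the scalar penalty is replaced by structured terms derived from ontologies or physical constraints, but the principle is unchanged, in that the data drive the fit while prior knowledge restricts the hypothesis space.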