Becoming literate in a domain is intricately linked to becoming immersed in a speech community and learning its language. Learning that language is in turn shaped by the specific properties of the language, e.g., which words appear frequently in which contexts. Hence, determining these linguistic properties with principled means is crucial for analyzing learning processes in a domain. This study explores linguistic properties of terms in the science domains (biology, chemistry, and physics) to texture our understanding of potential affordances and challenges in learning concepts in these domains. We used machine learning and natural language processing to analyze, in a principled way, German and English Wikipedia articles that were categorized as science-related; the two languages served as contrasting cases. Through a deep neural network approach, we sought to gain insights into the learnability of core science concepts. Our findings indicate that in both German and English Wikipedia, terms such as theory, time, energy, or system emerge as most central in the domain of physics. However, these central terms also tend to be more difficult to learn (for the deep neural network) in their respective contexts. Our findings indicate that mining Wikipedia has potential for analyzing learning-related processes in domains such as the sciences.
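To make the learnability measure concrete, the following is a minimal sketch of one way a neural language model could score how predictable a term is in context: the harder the model finds the term to predict, the harder it is, by this proxy, to learn. The model name, example sentence, and scoring function are illustrative assumptions, not the paper's actual architecture or metric.

```python
# Hypothetical sketch: estimate a term's predictability in context with a
# masked language model. Assumes the term maps to a single wordpiece token.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"  # assumption: any masked LM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

def term_predictability(sentence: str, term: str) -> float:
    """Probability the model assigns to `term` at its masked position.

    A lower probability is read here as the term being harder to learn
    from its context -- a proxy measure, not the paper's exact metric.
    """
    # Mask the first occurrence of the term in the sentence.
    masked = sentence.replace(term, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    # Locate the mask position in the tokenized input.
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    term_id = tokenizer.convert_tokens_to_ids(term)
    return probs[term_id].item()

# Central physics terms would, per the findings, tend to score lower here.
print(term_predictability("The total energy of the system is conserved.", "energy"))
```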