We estimate the n-gram entropies of English-language texts, using dictionaries and taking punctuation into account, and develop a heuristic method for estimating the marginal entropy. We propose a method for evaluating the coverage of empirically generated dictionaries and an approach to mitigating the disadvantage of low coverage. In addition, we estimate the probability of obtaining a meaningful text by directly iterating over all possible n-grams of the alphabet and conclude that this is feasible only for very short text segments.
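As a minimal sketch of the kind of estimate the abstract refers to (not the paper's exact procedure; the function names and the frequency-counting details are illustrative assumptions), the plug-in n-gram entropy of a text can be computed from empirical n-gram frequencies, and the difference of successive orders gives a common approximation to the per-symbol entropy rate:

```python
from collections import Counter
from math import log2

def ngram_entropy(text: str, n: int) -> float:
    # Plug-in (maximum-likelihood) estimate of the n-gram entropy
    # in bits per n-gram, from overlapping n-gram frequencies.
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    total = len(grams)
    counts = Counter(grams)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def entropy_rate_estimate(text: str, n: int) -> float:
    # H_n - H_{n-1}: a standard approximation to the conditional
    # (per-symbol) entropy at order n; equals H_1 when n == 1.
    if n == 1:
        return ngram_entropy(text, 1)
    return ngram_entropy(text, n) - ngram_entropy(text, n - 1)
```

Plug-in estimates of this kind are known to be biased low on small samples, which is one reason dictionary coverage matters in practice.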