One of the fundamental questions about human language is whether all languages are equally complex. To address this long-standing question, we conduct a large-scale quantitative cross-linguistic analysis of written language: we train a language model on more than 6,500 different documents drawn from 41 multilingual text collections, comprising ~3.5 billion words or ~9.0 billion characters and covering 2,069 different languages. These languages are spoken as a native language by more than 90% of the world population and represent ~46% of all languages that have a standardized written representation. Statistically inferring the entropy of each language model as an index of (un)predictability/complexity allows us to refute the equi-complexity hypothesis, and it also unveils a previously undocumented complexity-efficiency trade-off: high-entropy languages are information-theoretically more efficient because they tend to need fewer symbols to encode messages. Our findings additionally contribute to debates about language evolution and diversity by showing that this trade-off is partly shaped by the social environment in which languages are used.
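As a worked illustration (a sketch, not the study's exact inference procedure), the entropy of a language can be estimated from a trained language model as the average per-symbol cross-entropy on held-out text,

\hat{H}(L) = -\frac{1}{n} \sum_{i=1}^{n} \log_{2} p_{\theta}\!\left(x_i \mid x_1, \dots, x_{i-1}\right),

where p_{\theta} denotes the trained model's conditional probability of symbol x_i given its preceding context; a lower \hat{H}(L) corresponds to a more predictable, and in this sense less complex, written language.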