Traditional text coherence models cannot detect incoherence caused by word misuse in single-sentence documents, as they focus on sentence ordering and on the semantic similarity of neighboring sentences. This work investigates methods to classify and measure the semantic consistency of words in very short documents. First, we fine-tuned BERT for the tasks of detecting short documents containing an incoherent word and of distinguishing original documents from versions with a word automatically changed by the BERT Masked Language Model (MLM). We also used BERT embeddings to calculate coherence measures. We then prompted generative Large Language Models (LLMs) to classify and measure semantic coherence. The BERT-based classifiers achieved between \(80\%\) and \(87.50\%\) accuracy in classifying semantic coherence, depending on the language, and performed even better at distinguishing original documents from those with a changed word. However, coherence measures calculated from BERT embeddings did not discriminate well between coherent and incoherent documents, nor between original documents and their versions with an automatically changed word. On the other hand, LLaMA, GPT, and Gemini outperformed BERT in semantic coherence classification on our corpus of short questions about data structures, in Portuguese and in English. They also produced semantic coherence measures that discriminate coherent from incoherent documents better than the measures based on BERT embeddings.
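As an illustration of the corpus-corruption step mentioned above (replacing one word of a short document with a BERT MLM prediction), the following is a minimal sketch, not the authors' exact pipeline; the checkpoint name, the choice of which word to mask, and the example question are assumptions made for the example.

```python
# Sketch: mask one word of a short document and let a BERT Masked Language Model
# propose a replacement, yielding a version "with a word automatically changed".
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-multilingual-cased"  # assumed checkpoint; any BERT MLM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def replace_word_with_mlm(text: str, word_index: int) -> str:
    """Mask the word at `word_index` and fill it with BERT's top distinct prediction."""
    words = text.split()
    original = words[word_index]
    words[word_index] = tokenizer.mask_token
    masked_text = " ".join(words)

    inputs = tokenizer(masked_text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # Locate the [MASK] position and rank candidate tokens by score.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    top_ids = logits[0, mask_pos].topk(5).indices.tolist()

    # Keep the best-scoring candidate that differs from the original word,
    # so the output document actually contains a changed (possibly incoherent) word.
    for token_id in top_ids:
        candidate = tokenizer.decode([token_id]).strip()
        if candidate.lower() != original.lower():
            words[word_index] = candidate
            return " ".join(words)
    return text  # fall back to the original if no distinct replacement is found

# Hypothetical short question in the spirit of the data-structures corpus.
print(replace_word_with_mlm("Which data structure uses LIFO order for insertion and removal?", 3))
```

A classifier can then be fine-tuned to separate original questions from such automatically altered ones, which is the second BERT task described in the abstract.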