Aspect-level sentiment analysis is a crucial subtask of sentiment analysis that identifies the sentiment polarity of specific aspect terms in a context. Its main challenges are handling multifaceted sentences containing multiple aspect terms with different sentiment polarities and ensuring a model's robustness. These challenges lead to overlapping feature representations, which cause an aspect sentiment classification model to focus on irrelevant words and overlook critical sentiment expressions related to aspect terms. This paper proposes a capsule network with a position-biased mechanism (PBCapsNet) for aspect-level sentiment analysis. The model combines BERT representations, bidirectional gated recurrent units, a position-biased mechanism, a self-attention mechanism, and a capsule network. The position-biased mechanism, comprising position-biased weighting and position-biased dropout, lets the model focus on words close to aspect terms, thereby reducing the involvement of irrelevant words. The self-attention mechanism identifies keywords within aspect terms and generates context-specific semantic representations. Furthermore, the capsule network uses feature vectors instead of scalars to represent multi-attribute entity features clearly. We also improve the routing algorithm between capsule layers by introducing high-level capsule coefficients and sharing parameters globally during the iterative update, which mitigates feature overlap in single-sentence texts. Extensive experiments on the SemEval 2014-2016 laptop and restaurant datasets verify that PBCapsNet significantly outperforms existing baseline models.
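To make the idea of position-biased weighting concrete, the sketch below shows one common way such weights are computed: tokens inside the aspect-term span receive full weight, and weights decay linearly with distance from the span. The function name and the exact decay formula are illustrative assumptions, not the paper's definition.

```python
def position_biased_weights(num_tokens, aspect_start, aspect_end):
    """Illustrative position-biased weights (an assumption; the paper's
    exact formula is not given in the abstract).

    Tokens inside the aspect span [aspect_start, aspect_end] get weight
    1.0; other tokens decay linearly with their distance to the span, so
    far-away words contribute less to the aspect representation.
    """
    weights = []
    for i in range(num_tokens):
        if aspect_start <= i <= aspect_end:
            d = 0  # token is part of the aspect term
        elif i < aspect_start:
            d = aspect_start - i  # distance to the left edge of the span
        else:
            d = i - aspect_end  # distance to the right edge of the span
        weights.append(1.0 - d / num_tokens)
    return weights


# Example: 6 tokens, aspect term at position 2
# ("the", "battery", "life", "is", "really", "great")
print(position_biased_weights(6, 2, 2))
```

In a full model, these weights would typically be multiplied element-wise with the hidden states from the BiGRU layer before self-attention, down-weighting words far from the aspect term.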