Constructing and updating a knowledge graph requires acquiring large amounts of knowledge in a timely manner, and the text data generated continuously on the Internet is an important source of this knowledge. For knowledge graphs related to public opinion monitoring in particular, entities and relations must be extracted from text quickly. Although entity and relation extraction (ERE) methods based on pre-trained models achieve good performance, their training efficiency is low due to model complexity, and they tend to perform well only on large datasets. To reduce model complexity, better apply related models to small datasets, and enable the ERE task on edge devices, this work proposes a novel span-based joint ERE model with a light attention encoder. Specifically, the Value transformation matrix of the standard self-attention mechanism is discarded and replaced with an identity map, and the dot-product similarity is replaced with cosine similarity. Experiments on three public datasets verify that the proposed approach greatly reduces the training cost while maintaining good model performance.
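The attention modification described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function and weight names (`light_attention`, `Wq`, `Wk`) are illustrative, and the sketch only shows the two stated changes: the Value projection becomes the identity (the output mixes the raw inputs `X` directly), and the attention scores use cosine similarity rather than scaled dot product.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def light_attention(X, Wq, Wk, eps=1e-8):
    """Self-attention with identity Value map and cosine-similarity scores.

    X:  (n, d) token representations
    Wq: (d, d) query projection (assumed name, for illustration)
    Wk: (d, d) key projection (assumed name, for illustration)
    """
    Q = X @ Wq
    K = X @ Wk
    # Cosine similarity: normalize queries and keys to unit length,
    # so each score lies in [-1, 1] and no 1/sqrt(d) scaling is needed
    Qn = Q / (np.linalg.norm(Q, axis=-1, keepdims=True) + eps)
    Kn = K / (np.linalg.norm(K, axis=-1, keepdims=True) + eps)
    A = softmax(Qn @ Kn.T, axis=-1)   # (n, n) attention weights
    # Identity Value map: attend directly over X, no Wv parameters
    return A @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq = rng.normal(size=(8, 8))
Wk = rng.normal(size=(8, 8))
out = light_attention(X, Wq, Wk)
```

Dropping the Value matrix removes a d-by-d parameter block per attention layer, which is the source of the training-cost reduction claimed above.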