A brain-computer interface (BCI) is a communication pathway between humans and machines that commonly uses electroencephalography (EEG) to infer aspects of cognitive state, such as attention or emotion. Many types of acquisition devices have been developed for recording brainwaves, each designed for a different purpose. Wet systems record with conductive electrode gel and can obtain high-quality signals, whereas dry systems emphasize practicality and ease of use. In this paper, we conduct a comparative study of wet and dry systems using two cognitive tasks: attention and music-emotion. In the attention study, the 3-back task is used to assess attention and working memory. In the music-emotion experiments, emotions are predicted according to the subjects' questionnaire responses. Our analysis reveals similarities and differences between dry and wet electrodes in terms of statistical measures and frequency-band characteristics. We further examine their relative characteristics through classification experiments. We propose end-to-end EEG classification models constructed by combining EEG feature extractors with classification networks. A deep convolutional neural network (Deep ConvNet) and a shallow convolutional neural network (Shallow ConvNet) serve as feature extractors, applying temporal and spatial filtering to raw EEG signals. The extracted features are then passed to a long short-term memory (LSTM) network, which learns the temporal dependencies of the convolved features and classifies attention or emotional states. Additionally, transfer learning is used to improve the performance of the dry system by transferring knowledge learned from the wet system. We evaluate the model not only on our own dataset but also on an existing dataset to verify its performance against baseline techniques and state-of-the-art models. The results show that the proposed model achieves accuracies significantly above chance level in attention classification (92.0%, S.D. 6.8%) and in emotion classification on the SEED dataset (75.3%, S.D. 9.3%).
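
For concreteness, the sketch below illustrates one way the described pipeline could be assembled in PyTorch: a Shallow ConvNet-style block performing temporal and spatial filtering on raw EEG, followed by an LSTM over the resulting feature sequence and a linear classification head. The channel count, window length, filter sizes, and class count are illustrative assumptions, not the paper's exact configuration; for transfer learning, the convolutional weights fitted on wet-system data could be loaded into this model and fine-tuned on dry-system data.

```python
# Minimal sketch (PyTorch) of a ConvNet + LSTM EEG classifier as described above.
# Layer sizes, channel count, and window length are illustrative assumptions.
import torch
import torch.nn as nn


class ShallowConvLSTM(nn.Module):
    def __init__(self, n_channels=32, n_classes=2, hidden_size=64):
        super().__init__()
        # Shallow ConvNet-style block: temporal filtering, then spatial
        # filtering across EEG channels.
        self.temporal = nn.Conv2d(1, 40, kernel_size=(1, 25))
        self.spatial = nn.Conv2d(40, 40, kernel_size=(n_channels, 1))
        self.bn = nn.BatchNorm2d(40)
        self.pool = nn.AvgPool2d(kernel_size=(1, 15), stride=(1, 5))
        # LSTM learns dependencies along the remaining time axis.
        self.lstm = nn.LSTM(input_size=40, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, 1, channels, time) raw EEG window
        x = self.spatial(self.temporal(x))
        x = torch.square(self.bn(x))            # square/log non-linearity, Shallow ConvNet style
        x = torch.log(self.pool(x) + 1e-6)
        x = x.squeeze(2).permute(0, 2, 1)       # -> (batch, time_steps, features)
        _, (h, _) = self.lstm(x)
        return self.classifier(h[-1])           # logits over attention/emotion states


if __name__ == "__main__":
    model = ShallowConvLSTM(n_channels=32, n_classes=2)
    eeg = torch.randn(8, 1, 32, 500)  # 8 windows of 32-channel EEG, 500 samples each
    print(model(eeg).shape)           # torch.Size([8, 2])
```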