The coronavirus outbreak continues to spread around the world, and no one knows when it will stop. Therefore, from the first day the virus was identified in Wuhan, scientists have launched numerous research projects to understand the nature of the virus, detect it, and search for the right medicine to help and protect patients. A fast diagnostic and detection system is a priority and must be found to stop COVID-19 from spreading. Medical imaging techniques have been used for this purpose. Existing works use transfer learning by exploiting different backbones such as VGG, ResNet, and DenseNet, or by combining them, to detect COVID-19. With these backbones, many aspects of the images, such as their spatial and contextual information, cannot be analyzed, although this information can be useful for better detection performance. Therefore, in this paper, we use a 3D representation of the data (a video) as the input of a 3DCNN-based deep learning model. We apply the bidimensional empirical mode decomposition (BEMD) technique to decompose each original image into intrinsic mode functions (IMFs), then build a video from these IMF images. The formed video is used as the input of the 3DCNN model to classify and detect the COVID-19 virus. The 3DCNN model consists of a 3D VGG-16 backbone followed by context-aware attention (CAA) modules, then fully connected layers for classification. Each CAA module takes the feature maps of a different block of the backbone, which enables learning from multiple feature maps. In the experiments, we used 6484 X-ray images, of which 1802 were COVID-19 positive cases, 1910 normal cases, and 2772 pneumonia cases. The experimental results show that the proposed technique achieves the desired results on the selected dataset, and that using the 3DCNN model with contextual information processing through CAA networks helps to achieve better performance.
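The idea of decomposing an image into IMFs and stacking them into a pseudo-video for a 3D CNN can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `crude_bemd` function below is a hypothetical stand-in that extracts "IMF-like" detail layers by successive mean-filter smoothing, whereas real BEMD uses extrema-envelope sifting; the function names and parameters are assumptions for the sketch.

```python
import numpy as np

def crude_bemd(image, n_imfs=3, kernel=5):
    # Crude stand-in for BEMD: each "IMF" is the detail removed by one
    # round of box-filter smoothing (real BEMD sifts between upper and
    # lower extrema envelopes). Returns n_imfs detail layers + residue.
    imfs = []
    residue = image.astype(np.float64)
    pad = kernel // 2
    for _ in range(n_imfs):
        padded = np.pad(residue, pad, mode="edge")
        smooth = np.zeros_like(residue)
        for i in range(residue.shape[0]):
            for j in range(residue.shape[1]):
                smooth[i, j] = padded[i:i + kernel, j:j + kernel].mean()
        imfs.append(residue - smooth)   # high-frequency detail layer
        residue = smooth                # remaining low-frequency content
    imfs.append(residue)                # final residue (overall trend)
    return imfs

def imfs_to_video(image, n_imfs=3):
    # Stack the IMF images along a new depth axis to form a pseudo-video
    # of shape (D, H, W); a batch and channel axis would be prepended
    # before feeding it to a 3D CNN such as a 3D VGG-16.
    frames = crude_bemd(image, n_imfs)
    return np.stack(frames, axis=0)

img = np.random.rand(32, 32)            # placeholder for an X-ray image
video = imfs_to_video(img, n_imfs=3)
print(video.shape)                      # (4, 32, 32): 3 IMFs + residue
```

Because each detail layer is exactly what the smoothing step removed, the frames sum back to the original image, which mirrors the reconstruction property of a true BEMD decomposition.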