Automated segmentation of biomedical images has been recognized as an important step in computer-aided diagnosis systems for the early detection of abnormalities. Despite its importance, segmentation remains an open challenge due to poorly distinguishable colors and textures, diverse shapes, and ill-defined boundaries. Semantic segmentation often requires deeper neural networks to achieve higher accuracy, making the segmentation model more complex and slower. Because large numbers of biomedical images must be processed, more efficient and computationally cheaper techniques for accurate segmentation are needed. In this article, we present a modified deep semantic segmentation model that uses an EfficientNetB3 backbone with a UNet architecture for reliable segmentation. To keep the network lightweight, we balance its depth and width, and we train on 10x magnification images from the Queensland dataset. The model is trained to segment each image into 12 classes. Our method outperforms the existing literature, increasing average class accuracy from 79% to 83%. Our approach also improves overall accuracy from 85% to 94%.
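To illustrate the kind of architecture described above, the following is a minimal sketch (not the authors' code) of an EfficientNetB3-backed UNet for 12-class semantic segmentation, assuming the open-source `segmentation_models` library with a `tf.keras` backend; the loss, metric, and pre-trained weights shown here are illustrative assumptions, not details reported in the abstract.

```python
# Minimal sketch: UNet decoder on an EfficientNetB3 encoder, 12 output classes.
# Assumes the "segmentation_models" library (qubvel) with TensorFlow/Keras.
import segmentation_models as sm

sm.set_framework("tf.keras")

# Encoder: EfficientNetB3 backbone; decoder: UNet-style upsampling path.
model = sm.Unet(
    backbone_name="efficientnetb3",
    encoder_weights="imagenet",   # assumption: ImageNet pre-training
    classes=12,                   # 12 segmentation classes, as in the abstract
    activation="softmax",
)

# Assumed training setup for illustration only.
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=[sm.metrics.iou_score],
)
```

The balanced depth/width scaling mentioned in the abstract is the design principle behind the EfficientNet family, which is why a mid-sized variant such as B3 can serve as a comparatively lightweight encoder.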