The application of Deep Neural Networks (DNNs) to Edge Computing has emerged from the need for real-time, distributed responses from different devices in a large number of scenarios. To this end, compressing these original network structures is essential, given the high number of parameters needed to represent them. Consequently, the most representative components of the different layers are retained in order to keep the network's accuracy as close as possible to that of the full network. To do so, two different approaches have been developed in this work. First, the Sparse Low Rank (SLR) method has been applied independently to two different Fully Connected (FC) layers to observe its effect on the final response; second, the method has been applied to only one of these layers. In this latter case, by contrast, the relevance of the previous FC layer's components was determined by taking into account the connections to each component from the other FC layer, thereby considering the relationship of relevances across layers. Experiments have been carried out on well-known architectures to determine whether cross-layer relevances have less effect on the final response of the network than independent intra-layer relevances.
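The abstract does not specify the exact SLR formulation, but the general idea of combining a low-rank factorization of an FC weight matrix with sparsification of its most representative components can be sketched as follows. This is a minimal illustration, assuming a truncated SVD as the low-rank step and simple magnitude-based pruning of the factor matrices; the function name, the `rank` and `keep_ratio` parameters, and the pruning rule are all hypothetical choices, not the authors' method.

```python
import numpy as np

def sparse_low_rank_approx(W, rank, keep_ratio):
    """Illustrative sparse low-rank compression of an FC weight matrix W.

    Step 1: truncated SVD keeps the `rank` most representative components.
    Step 2: within each factor matrix, only the largest-magnitude fraction
    `keep_ratio` of entries is retained (the rest are zeroed out).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
    V_r = Vt[:rank, :]

    def sparsify(M):
        # keep only the k largest-magnitude entries of M
        k = max(1, int(keep_ratio * M.size))
        thresh = np.sort(np.abs(M), axis=None)[-k]
        return np.where(np.abs(M) >= thresh, M, 0.0)

    return sparsify(U_r), sparsify(V_r)

# Example: compress a random 256x128 FC weight matrix
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))
U_s, V_s = sparse_low_rank_approx(W, rank=16, keep_ratio=0.5)
W_hat = U_s @ V_s  # compressed reconstruction, rank at most 16
```

Storing the two sparse factors `U_s` and `V_s` instead of `W` reduces the parameter count, at the cost of a reconstruction error that the retained components are chosen to minimize.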