The deployment of artificial neural network (NN)-based optical channel equalizers on edge-computing devices is critically important for the next generation of optical communication systems. However, this is a highly challenging problem, mainly because of the computational complexity of the NNs required for the efficient equalization of nonlinear optical channels with large memory. To implement an NN-based optical channel equalizer in hardware, a substantial complexity reduction is needed while keeping an acceptable performance level. In this work, we address this problem by applying pruning and quantization techniques to an NN-based optical channel equalizer. We use an exemplary NN architecture, the multi-layer perceptron (MLP), and reduce its complexity for 30 GBd transmission over 1000 km of standard single-mode fiber. We demonstrate that the equalizer’s memory footprint can be reduced by up to 87.12%, and its computational complexity by up to 91.5%, without noticeable performance degradation. In addition, we accurately define the computational complexity of a compressed NN-based equalizer in the digital signal processing (DSP) sense and examine the impact of different CPU and GPU settings on the power consumption and latency of the compressed equalizer. Finally, we verify the developed technique experimentally on two standard edge-computing hardware units: the Raspberry Pi 4 and the Nvidia Jetson Nano.
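To make the two compression steps named above concrete, here is a minimal, self-contained sketch of magnitude-based weight pruning and uniform quantization applied to one MLP weight matrix. This is an illustration of the generic techniques only, not the authors' exact pipeline; the function names `prune_magnitude` and `quantize_uniform`, the example matrix, and the chosen sparsity and bit-width are all hypothetical.

```python
def prune_magnitude(weights, sparsity):
    """Zero out (approximately) the smallest-magnitude fraction
    `sparsity` of the entries in a weight matrix (list of rows).
    Ties at the threshold may prune slightly more entries."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else -1.0
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

def quantize_uniform(weights, n_bits):
    """Map each weight to the nearest of 2**n_bits uniformly spaced
    levels spanning the matrix's own [min, max] range."""
    lo = min(w for row in weights for w in row)
    hi = max(w for row in weights for w in row)
    levels = 2 ** n_bits - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    return [[lo + round((w - lo) / step) * step for w in row]
            for row in weights]

# Hypothetical 2x3 weight matrix for a tiny MLP layer.
W = [[0.9, -0.05, 0.4],
     [-0.7, 0.02, 0.3]]
W_pruned = prune_magnitude(W, sparsity=0.5)      # half the weights set to 0
W_quant = quantize_uniform(W, n_bits=2)          # 4 representable levels
```

In practice the two steps are combined (prune, fine-tune, then quantize the surviving weights), since a sparse, low-bit-width weight matrix is what yields the memory and multiplier savings reported in the paper.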