3.1 Characteristics of the PNP algorithm
3.1 Characteristics of the PNP algorithm
On Earth, water is always in motion; the continuous movement of water on, above, and below the Earth's surface is called the water cycle. Typically, water evaporates from the sea and forms clouds, which return the water to land as rain. Once on land, the water flows over the surface toward lower ground, forming streams and rivers, and the river water flows downstream into the ocean, completing the water circulatory system. The time that water remains within this circulatory system is called the residence time. In Korea, the average residence time is 1.5 weeks in the atmosphere and two weeks in rivers. Because rivers keep flowing even in the absence of precipitation, predicting river flow requires a deep-learning algorithm that can capture the characteristics of the water circulation system. We therefore present the PNP algorithm, a new deep-learning algorithm that accounts for the water circulation system and the residence time. A deep-learning model consists of input, hidden, and output layers: the input data are fed into the input layer, the hidden layers are made up of neurons implementing deep-learning algorithms, and the output layer produces the model value. Like MLP, LSTM, and other deep-learning algorithms, the PNP algorithm can serve as the neurons of such a model. Figure 2 shows the layers and neurons of the deep-learning model.
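To make this layer structure concrete, the following is a minimal sketch (not the authors' implementation) of a forward pass through input, hidden, and output layers, in which each hidden-layer neuron is an arbitrary function; an MLP unit, an LSTM cell, or a PNP node could each fill that role. All names and the scalar-per-neuron interface are illustrative assumptions.

```python
import numpy as np

def forward(model_input, hidden_layers, output_weights):
    """Sketch of a deep-learning forward pass: input -> hidden -> output.

    hidden_layers: a list of layers, each layer a list of neuron
    functions mapping a vector to a scalar (an MLP unit, LSTM cell,
    or PNP node could all play this role).
    """
    x = np.asarray(model_input, dtype=float)           # input layer
    for layer in hidden_layers:
        x = np.array([neuron(x) for neuron in layer])  # hidden layers
    return output_weights @ x                          # output layer: model value
```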
The PNP algorithm we present focuses on river flow, which it divides into water entering the stream (positive water) and water exiting to the sea (negative water). When positive water exceeds negative water, the river overfills and flooding occurs; when negative water exceeds positive water, drought follows. In the PNP algorithm, positive and negative water correspond to positive and negative neurons, respectively, and features derived from past precipitation data are included to account for the residence time.
The PNP algorithm is organized as a hidden deep-learning layer, analogous to an LSTM or RNN layer, whose nodes consist of positive and negative neurons. Figure 3 shows the PNP algorithm concept. Both neurons receive precipitation as input: positive neurons analyze the factors that increase the water flow in the river, while negative neurons analyze the factors that decrease it. A connection between successive positive neurons, and likewise between successive negative neurons, called the conveyor belt, delivers each neuron's result to the next node and thereby provides past precipitation information. Through this process, the model can reflect the amount of river discharge during the residence time. The conveyor belt is similar to the cell state in an LSTM; however, whereas the cell state can completely discard its information, the conveyor belt always maintains the information and transfers it to the next neuron. Finally, when the last positive and negative neurons finish calculating, one node of the PNP hidden layer produces two results. Figure 4 shows the configuration diagram of the positive and negative neurons.
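As an illustration of the conveyor-belt mechanism, the sketch below scans a precipitation sequence through one node of a PNP hidden layer. It assumes a helper `pnp_step` implementing the four-step neuron computation of Equations (1)–(8), sketched after the symbol definitions below; the function and parameter names, and the zero initialization of the belts, are our own assumptions rather than details from the source.

```python
def pnp_node(precip_seq, params):
    """Run one PNP hidden-layer node over a precipitation sequence.

    PCb and NCb are the positive and negative conveyor belts. Unlike
    an LSTM cell state, they are never cleared, so past precipitation
    keeps influencing later steps (the residence-time effect).
    """
    PCb, NCb = 0.0, 0.0                  # belts start empty (assumed)
    for I in precip_seq:                 # one time step per observation
        Ls_p, Ls_n = pnp_step(I, PCb_prev=PCb, NCb_prev=NCb, **params)
        PCb, NCb = Ls_p, Ls_n            # results ride the belts forward
    return PCb, NCb                      # two outputs from one node
```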
The positive and negative neurons are calculated in four steps: input, judgment, application, and transfer. In the input step of the positive neuron, the input is given to the neuron and multiplied by the weight \({w}_{p}\). The second, judgment step applies the \(sigmoid\) function, which yields a value close to one when the input is needed and close to zero when it is not. The third, application step adds the value from the judgment step, the absolute value of the positive bias term, \(\left|{b}_{p}\right|\), and the value from the previous positive conveyor belt. Finally, in the fourth, transfer step, negative elements are removed by applying a rectified linear unit (ReLU) function to the value calculated in the third step; the resulting value is transferred to the conveyor belt and to the next neuron. The negative neuron follows the same process through the second step, but in the third step the negative bias term, \(-\left|{b}_{n}\right|\), is added instead of the positive bias term, and the fourth step removes positive elements so that only non-positive values are passed on. Equations (1)–(4) show the calculation of the positive neuron, while Equations (5)–(8) show the calculation of the negative neuron.
$${fs}_{p} = I\times {w}_{p} \tag{1}$$
$${ss}_{p}=sigmoid\left({a}_{p}\right)\times {fs}_{p} \tag{2}$$
$${ts}_{p}={ss}_{p}+\left|{b}_{p}\right|+{PCb}_{(t-1)} \tag{3}$$
$${Ls}_{p}=ReLU\left({ts}_{p}\right) \tag{4}$$
$${fs}_{n} = I\times {w}_{n} \tag{5}$$
$${ss}_{n}=sigmoid\left({a}_{n}\right)\times {fs}_{n} \tag{6}$$
$${ts}_{n}={ss}_{n}-\left|{b}_{n}\right|+{NCb}_{(t-1)} \tag{7}$$
$${Ls}_{n}=-ReLU\left(-{ts}_{n}\right) \tag{8}$$
In Equations (1)–(8), \({fs}_{p}\) and \({fs}_{n}\) are the first step in the PNP algorithm computational process, \({ss}_{p}\) and \({ss}_{n}\) are the second step, \({ts}_{p}\) and \({ts}_{n}\) are the third step, and \({Ls}_{p}\) and \({Ls}_{n}\) are the final step. \({w}_{p}\) and \({w}_{n}\) are the positive and negative weights, respectively. \({a}_{p}\) and \({a}_{n}\) are the arguments of the positive and negative sigmoid functions, respectively. \({b}_{p}\) and \({b}_{n}\) are the positive and negative bias terms, respectively. \(I\) is the input data, \({PCb}_{(t-1)}\) represents the previous positive conveyor belt value, and \({NCb}_{(t-1)}\) represents the previous negative conveyor belt value.
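Under these symbol definitions, Equations (1)–(8) can be written as a short NumPy sketch; treating the input and all parameters as scalars is our assumption for illustration, and `pnp_step` is the hypothetical helper referenced in the node sketch above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

def pnp_step(I, w_p, w_n, a_p, a_n, b_p, b_n, PCb_prev, NCb_prev):
    """One positive/negative neuron pair, following Equations (1)-(8)."""
    # Positive neuron
    fs_p = I * w_p                      # Eq. (1): input step
    ss_p = sigmoid(a_p) * fs_p          # Eq. (2): judgment step
    ts_p = ss_p + abs(b_p) + PCb_prev   # Eq. (3): application step
    Ls_p = relu(ts_p)                   # Eq. (4): transfer step, drops negatives
    # Negative neuron: same steps, negative bias, keeps only non-positives
    fs_n = I * w_n                      # Eq. (5)
    ss_n = sigmoid(a_n) * fs_n          # Eq. (6)
    ts_n = ss_n - abs(b_n) + NCb_prev   # Eq. (7)
    Ls_n = -relu(-ts_n)                 # Eq. (8)
    return Ls_p, Ls_n
```

By construction, \({Ls}_{p}\) is always non-negative and \({Ls}_{n}\) always non-positive, matching the decomposition into positive and negative water described above.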