Tuning the weight values of an artificial neural network so that it computes a required function is essential to artificial intelligence. This letter proposes a tuning method based on a mathematical model that conceptually extends the artificial neural network by allowing an imbalance between the sum of the values entering a node and the values leaving it. Unlike gradient-descent-based tuning, which applies repeated updates to minimise the difference between the required and computed output values of the network, the proposed tuning resolves the imbalance at each node. The proposed tuning achieves performance similar to that of existing stochastic-gradient-descent-based tuning, yet it does not need to search for optimal tuning parameters in order to obtain the optimal weight values. These benefits of the proposed tuning method could accelerate the advancement of artificial intelligence.
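As a minimal sketch of the contrast drawn above (a hypothetical toy example, not the letter's actual algorithm, whose details are not given here), consider a single node with one weight w computing y = w * x. A one-shot closed-form solve balances the required and computed outputs without any tuning parameters, whereas stochastic gradient descent needs a hand-chosen learning rate and many update passes:

```python
import random

# Hypothetical toy: one node with a single weight w, computing y = w * x.
# "Resolving the imbalance" is illustrated here as solving directly for
# the w that balances required and computed outputs, in contrast to
# iterating with stochastic gradient descent.

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(50)]
w_true = 1.7
ts = [w_true * x for x in xs]          # required output values

# One-shot "balance" solve (closed-form least squares) --
# no learning rate or epoch count to choose.
w_balance = sum(t * x for t, x in zip(ts, xs)) / sum(x * x for x in xs)

# Stochastic-gradient-descent baseline -- requires picking a learning
# rate and running multiple passes over the data.
w_sgd, lr = 0.0, 0.1
for _ in range(100):
    for x, t in zip(xs, ts):
        w_sgd += lr * (t - w_sgd * x) * x

print(w_balance, w_sgd)  # both approach the true weight 1.7
```

Both approaches recover the weight on this noise-free toy problem; the point of the sketch is only that the direct solve has no learning-rate or iteration-count parameters to explore.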