Enhancing Heliostat Calibration on Low Data by Fusing Robotic Rigid Body Kinematics with Neural Networks

Solar tower power plants rely on precise calibration of their heliostats for efficient operation. Open-loop calibration procedures are the most common type due to their cost-effectiveness. Two main approaches to these algorithms exist: geometry-based robotic kinematics and neural network-based models. While the former is reliable and requires little data, it only yields moderate accuracy. The latter promises higher accuracy but is data-hungry and unreliable. In this study, we propose a 2-layer coarse-to-fine hybrid model that combines the strengths of both approaches. Our model uses a rigid-body model for pre-alignment, then phases in a neural network disturbance model through a regularization sweep. This approach ensures that the prediction accuracy is, in the worst case, equivalent to that of the rigid-body model. Moreover, it helps to identify deficiencies that may have been overlooked by the physical approach. In particular, it is capable of computing deviations from the geometry model's averaged optimum. For testing, we used real measurement data from daily heliostat calibration at the solar tower in Jülich. We also employed a training/validation data split for evaluation, which allows for a conservative performance assumption over the entire year. Our results demonstrate that the hybrid model outperforms rigid-body models starting from the first measurement, achieving a top performance below 0.7 milliradians. In conclusion, our proposed hybrid model provides a cost-effective in-situ solution for heliostat calibration with the highest accuracies on low data in solar tower power plants, applicable to all open-loop calibration methods.


Introduction
The use of solar tower power plants has been gaining momentum as a promising source of clean and renewable energy as well as solar fuels. These plants use an array of mirrors, known as heliostats, to reflect and concentrate sunlight onto a central receiver, where the energy is then converted into electricity. The sun-tracking accuracy of these heliostats is essential for maximizing the energy output and overall efficiency of the power plant. To keep the accuracy high, the heliostats have to be calibrated regularly. However, due to cost constraints affecting especially the material and component quality of the heliostats, the required accuracy is difficult to achieve.
The Camera-Target Method (Stone, 1986) is the standard method for heliostat calibration due to its high accuracy, reliability, and relatively low-cost implementation. However, since the data set delivered by the Camera-Target Method is rather small per heliostat, the use of neural networks is very limited. Furthermore, even if the data set is sufficient, neural networks are not as reliable as their physical counterpart.

We here present a 2-layer coarse-to-fine hybrid model, which uses a proven-to-be-reliable geometric model as a pre-alignment. The more general approach, visualized in Figure 2, is using a neural network to predict the alignment. This approach can learn to model a heliostat's complete alignment behavior, including dynamic impacts such as wind. A neural network configuration of 14 hidden layers with two neurons per layer was found to yield good results. However, we did not employ any parameter exploration or optimization techniques; instead, we utilized the initial architecture that proved to be effective. As (normalized) inputs we use the heliostat's orientation, as well as the month and time of the measurement. Inputs like environmental conditions are not further discussed, as no data was available. Due to the periodic behavior of time and date, this input holds two different types of information. The first type is the normalized duration ∆t_n from a given date t_0, which can indicate the amount of wear on a system. The normalized duration can be derived by interpolating the current date t between an earliest date t_0 and a latest date t_1 (see Equation 1). The normalized duration is then encoded into a sine and a cosine function.
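As a sketch of this encoding (the function name and the concrete dates are illustrative, not taken from the paper's code), the normalized duration from Equation 1 and its sine/cosine encoding can be written as:

```python
import math
from datetime import datetime

def encode_time(t: datetime, t0: datetime, t1: datetime):
    """Encode a timestamp via its normalized duration.

    dt_n interpolates t between the earliest date t0 and the latest
    date t1 (Equation 1: dt_n = (t - t0) / (t1 - t0)); the periodic
    sine/cosine encoding then maps it onto the unit circle.
    """
    dt_n = (t - t0).total_seconds() / (t1 - t0).total_seconds()
    return math.sin(2 * math.pi * dt_n), math.cos(2 * math.pi * dt_n)

# Hypothetical measurement-campaign bounds for illustration.
t0 = datetime(2021, 5, 1)
t1 = datetime(2022, 10, 31)
s, c = encode_time(datetime(2022, 2, 1), t0, t1)
```

Both outputs are bounded to [-1, 1], which keeps this input on a scale comparable to the other normalized network inputs.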

Related Work
The second type of information is the season.

As expected, errors tend to be higher at the edges of the data distribution. This is seen in both the validation and test datasets, where all data points within the center of the distribution are predicted better than those at its edges. Therefore, the average error calculated over the year will likely be smaller than the reported average performance.
Additionally, Fig. 9b demonstrates that the algorithm is capable of extrapolating over more than 30 degrees. In summary, even if the test data set performance is only as good as the rigid-body model's, the overall performance over the year can be judged as superior to the rigid-body model, since the model performance only declines at the edges.
The accuracy is consistently higher for the inner part of the distribution and thus for the majority of the year.

Discussion
Our study demonstrates that our new model outperforms all comparison models. Despite using neural networks, our model provides reliable and traceable behavior of heliostats over the year due to its hybridization with the rigid-body model. However, it is important to consider the data distribution more carefully than with the common approach. The new model has a tendency to produce higher errors in edge regions due to the significantly increased number of free parameters in those regions. While this may not significantly impact the annual average, it can result in higher errors during early mornings.

During calibration, each heliostat is moved individually from the receiver to a white calibration target near the receiver. A camera is positioned to have a clear view of the target and captures focal spot images from various sun positions throughout the day. These images are used to determine the heliostats' deviations in position and orientation via a rigid-body regression model. Traditionally, physical-mathematical models have been used to determine the optimal positioning. New approaches are being developed to improve the tracking accuracy of heliostats through the use of neural networks. These approaches require large datasets for training but deliver far better accuracies compared to common physical models (Sarr et al. (2023); Pargmann et al. (2021)).

The hybrid model then phases in a neural network disturbance model over a regularization sweep. This ensures that the model performs, in the worst case, as well as the geometric model, but can profit from small dynamic variations provided by the neural network. For testing, we use real measurement data from the daily heliostat calibration carried out at the solar tower in Jülich. We use a k-Nearest Neighbor (k-NN) training/validation data split to give a conservative performance estimation of our models over the year.
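To illustrate the worst-case guarantee behind the regularization sweep, here is a minimal toy sketch. A fixed "coarse" trend stands in for the rigid-body model, and a ridge-regularized residual fit stands in for the neural network; the data, features, and closed-form ridge solution are our own simplifications, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a coarse geometric trend plus a small smooth disturbance.
x = np.linspace(0.0, 1.0, 60)
y = 2.0 * x + 0.1 * np.sin(6 * x) + rng.normal(0, 0.01, x.size)

def coarse_model(x):
    return 2.0 * x  # pretend this rigid-body fit is already trained

# Fine model: ridge regression on sinusoidal features, predicting only
# the residual (disturbance) left by the coarse model.
features = np.column_stack([np.sin(k * x) for k in range(1, 8)])
residual = y - coarse_model(x)

def fit_fine(weight_decay):
    # Closed-form ridge solution: a high weight decay drives the
    # weights towards 0, i.e. the hybrid collapses to the coarse model.
    a = features.T @ features + weight_decay * np.eye(features.shape[1])
    return np.linalg.solve(a, features.T @ residual)

best = None
for wd in [10.0, 1.0, 0.1, 0.01, 0.001]:  # regularization sweep
    w = fit_fine(wd)
    err = np.mean((coarse_model(x) + features @ w - y) ** 2)
    if best is None or err < best[1]:
        best = (wd, err)

coarse_err = np.mean((coarse_model(x) - y) ** 2)
```

Because the regularized fit can always fall back to zero residual correction, the hybrid's error on this sweep never exceeds that of the coarse model alone, mirroring the worst-case argument above.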

Figure 2: Sketch of a neural network for alignment prediction.

Figure 4: The layers of our proposed heliostat model. In the first layer, the heliostat is described by an ideal rigid-body model. In the second layer, the rigid-body model is trained using 28 optimizable parameters. The trained parameters are then saved and used as reference points for the last layer, which adds dynamically calculated perturbations.

3.2. kNN Training-Evaluation Splitting

In a previous publication (Pargmann et al. (2023)), we employed a training/evaluation/test split for heliostat calibration datasets based on the distance between a measurement point and its k nearest neighbors (kNN with k = 1, 2, 3) with respect to the sun position in Euler angles (azimuth, elevation). To create these splits, the full dataset was sorted by the kNN distance. The test set was populated with 30 data points, using measurements with the greatest distance to their neighbors. The validation dataset used a further 20 data points, still sorted by the kNN distance. All remaining data was used for training. If the dataset size was restricted, measurements with the smallest kNN distance were excluded from training. All accuracies presented later refer to the test dataset and are strictly correlated with their kNN distance to the training dataset. This type of training-test splitting removes valuable information from training. Training with a different split, such as a randomly ordered split, will likely result in significantly better accuracies due to the smaller distance to the validation and test data sets. However, as noted in our earlier publication, this accuracy can be considered a conservative performance assumption, and it is not expected to fall short in daily operation.
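A minimal sketch of such a split, assuming the mean distance over the k = 3 nearest neighbors in (azimuth, elevation) space; the function name and the uniform toy sun positions are our own, not from the paper's code:

```python
import numpy as np

def knn_distance_split(sun_positions, k=3, n_test=30, n_val=20):
    """Split indices by mean distance to the k nearest neighbors.

    sun_positions: (N, 2) array of (azimuth, elevation) angles.
    The most isolated points (largest kNN distance) form the test set,
    the next most isolated the validation set; the rest train.
    """
    diff = sun_positions[:, None, :] - sun_positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)          # ignore self-distance
    knn = np.sort(dist, axis=1)[:, :k].mean(axis=1)
    order = np.argsort(knn)[::-1]           # most isolated first
    test = order[:n_test]
    val = order[n_test:n_test + n_val]
    train = order[n_test + n_val:]
    return train, val, test

rng = np.random.default_rng(1)
pts = rng.uniform(0, 90, size=(200, 2))     # toy sun positions
train, val, test = knn_distance_split(pts)
```

Restricting the training set, as described above, would then drop the indices at the end of `train` (smallest kNN distance) first.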
3.3. Neural Network Architecture

Our fine model is represented by a simple multi-layer perceptron (MLP). The general architecture corresponds to a self-normalizing neural network (SNN). The basic principle behind SNNs is that, by construction of the network, the prediction strives for a mean value around zero with a standard deviation of one. While it is possible for an SNN to learn to exceed this prediction range, all exceeding values experience a pull towards the targeted prediction range. Thus, predictions can be bounded to a certain range without losing the gradients. SNNs can also be much deeper, and thus solve more complex tasks, than simple MLPs.
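The self-normalizing behavior can be illustrated with a small numpy sketch using the standard SELU constants; the layer sizes below are illustrative only (the paper's exact architecture is described in the inputs section), and the forward pass is our own minimal stand-in, not the authors' implementation.

```python
import numpy as np

# SELU constants from Klambauer et al.'s self-normalizing networks:
# with these values (and LeCun-normal weight init), activations are
# pulled towards zero mean and unit variance layer by layer.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

def snn_forward(x, weights):
    """Forward pass of a deep SELU MLP; weights is a list of matrices."""
    for w in weights[:-1]:
        x = selu(x @ w)
    return x @ weights[-1]  # linear output layer

rng = np.random.default_rng(0)
dims = [8] + [14] * 4 + [2]  # illustrative depth/width
# LeCun-normal initialization: std = 1 / sqrt(fan_in), as SNNs expect.
weights = [rng.normal(0, 1 / np.sqrt(m), size=(m, n))
           for m, n in zip(dims[:-1], dims[1:])]

batch = rng.normal(0, 1, size=(1000, 8))
out = snn_forward(batch, weights)

# Propagate through the hidden layers only, to inspect normalization.
hidden = batch
for w in weights[:-1]:
    hidden = selu(hidden @ w)
```

Even after several layers, the hidden activations remain roughly standardized, which is the "pull" towards the targeted prediction range described above.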

Figure 6: Comparison of different coarse models (static, with 6, 14, and 20 free parameters), the fine model (neural network), and the hybrid model using the fine model together with the coarse model as a pre-alignment.

Figure 8: Comparison of our model to the coarse model with 20 free parameters on different data sets. Legend: NN (val), Coarse Model (val), Pre-Aligned NN (test), Coarse Model (test). a) Regularization sweep results for the AM35 heliostat. b) Accuracies of the AM35 heliostat on the test and validation data sets using a weight decay factor of 0.0015, which achieved the best validation accuracy.

In future works, model architecture investigations could improve model prediction capabilities even further.

Table 1: Characterizing properties of the heliostat data sets.
The season can be represented by a normalized month m_n, constructed by dividing a data point's month m by twelve. This input can indicate seasonal biases, such as corrections to the solar algorithm or environmental impacts, whereby the latter should be used as individual inputs (e.g., temperature) if available. Daily cycles are already encoded within the solar position and are thus given through the heliostat's orientation.

The heliostat orientation can be extracted from the heliostat's actuator configuration. Therefore, a data point's actuator steps a_i are normalized to a_{i,n} by dividing a_i by its maximum allowed value a_{i,max} for each actuator i. In addition, the heliostat orientation is extended via Fourier feature mapping (Tancik et al., 2020). This approach was found to significantly improve a neural network's performance for low-dimensional regression tasks, where high-frequency features within the input data were not noticed by the network until explicitly introduced as additional inputs. These additional inputs are obtained by splitting each input parameter's value into multiple inputs, one for each frequency band.

The amount of regularization applied is controlled by a hyperparameter known as the weight decay coefficient. A higher weight decay coefficient renders the neural network parameter space less complex. In our setting, the decay pulls the output towards well-working parameters instead of zero, thus binding the network to previously obtained knowledge. The fine-model training is performed iteratively, beginning with high regularization to force the network to find solutions close to the coarse model. We then gradually decrease the weight decay until it reaches zero and compare the test results to the coarse model results using the same test data. As we will see later, regularization is only required very infrequently, as the two-layer system performs significantly better in most cases, even without regularization.

5. Datasets

In total, we gathered four datasets from four different heliostats at the solar tower in Jülich between May 2021 and October 2022. The datasets contain between 191 and 477 data points. The datasets were collected in daily operation, so no extra measurement campaign took place for this work. We cannot exclude that points were recorded outside the automated calibration routine; especially for AJ23, this is very likely.
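Returning to the input encoding above, the actuator normalization and Fourier feature mapping can be sketched as follows. This uses a deterministic, axis-aligned variant with dyadic frequencies (Tancik et al. also propose random Gaussian frequencies); the actuator limit is hypothetical.

```python
import numpy as np

def fourier_features(x, n_bands=4):
    """Map each normalized input to sin/cos pairs at several bands.

    x: (N, D) array with entries roughly in [0, 1], e.g. actuator
    steps a_i normalized by their maximum a_{i,max}.
    Returns (N, D * 2 * n_bands): for each input dimension and each
    band j, the pair (sin(2^j * pi * x), cos(2^j * pi * x)).
    """
    feats = []
    for j in range(n_bands):
        freq = (2.0 ** j) * np.pi
        feats.append(np.sin(freq * x))
        feats.append(np.cos(freq * x))
    return np.concatenate(feats, axis=1)

steps = np.array([[120_000.0], [240_000.0]])  # toy actuator steps a_i
max_steps = 480_000.0                         # hypothetical a_{i,max}
a_n = steps / max_steps                       # a_{i,n} = a_i / a_{i,max}
mapped = fourier_features(a_n)                # one input -> 8 features
```

Each original input is thus split into multiple inputs, one pair per frequency band, making high-frequency structure in the low-dimensional input directly visible to the network.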