A New Generation of Earthquake Recurrence Models Based on The Extreme Value Theory and Impact on Probabilistic Seismic Hazard Assessments

Probabilistic Seismic Hazard Analysis (PSHA) procedures require that at least the mean activity rate be known, as well as the distribution of magnitudes. Under the Gutenberg-Richter assumption, that distribution is an Exponential distribution, truncated above at a maximum possible magnitude denoted m_max. For lack of a universal method, this parameter is often fixed by expert judgement based on tectonic considerations. In this paper, we propose two innovative alternatives to the Gutenberg-Richter model, based on Extreme Value Theory, that do not require fixing the value of m_max a priori: the first models the tail distribution of magnitudes with a Generalized Pareto Distribution; the second is a variation on the usual Gutenberg-Richter model in which m_max is a random variable following a distribution defined from an extreme value analysis. We use maximum likelihood estimators that take into account the magnitude-dependent observation spans, the incompleteness threshold of the catalog and the uncertainty in the magnitude values themselves. We apply these new recurrence models to the data observed in the Alps region, in the south of France, and we integrate them into a probabilistic seismic hazard calculation to evaluate their impact on the seismic hazard levels. The proposed recurrence models yield a reduction of the seismic hazard level compared to the Gutenberg-Richter model conventionally used in PSHA calculations. This decrease is significant for all frequencies below 10 Hz, mainly at the lowest frequencies and for very long return periods. To our knowledge, these two models have never been used in a probabilistic seismic hazard calculation and constitute a promising new generation of recurrence models.

One important task in PSHA consists of calculating the activity rate for any seismic source and the earthquake magnitude-frequency distribution. The most common model is based on the empirical Gutenberg-Richter law ([20], [21]), defined by:

log10 E[N_m] = a − b m, (1)

where E[N_m] is the mean number of earthquakes whose magnitude is larger than m. Under the assumption that earthquakes are generated by a stationary Poisson point process, this relation implies that magnitudes are Exponentially distributed. As the total energy released by earthquakes is finite, some deviation from the Gutenberg-Richter straight line is required for large magnitudes, and many authors have proposed improvements ([14], [22], [28], [31], [35]): some of them consist of truncating the Exponential distribution at a regional maximum possible magnitude m_max, the largest earthquake possible in the given seismic and tectonic setting. For stable continental regions with poor identification of seismogenic structures and low earthquake rates, the determination of m_max is a tricky matter, and there is currently no widely accepted method to estimate it. One possible way is based on empirical relationships between the magnitude and various tectonic and fault parameters, such as the fault length or the rupture dimension ([6], [36]). Another relies on records of the largest historical or paleo earthquakes, applied especially in areas of low seismicity where large events have long return periods ([26]). In the absence of any tectonic information, the maximum possible earthquake magnitude is assumed to be equal to the largest magnitude of the catalog, or that magnitude plus an increment defined by expert judgement ([37]). Kijko et al. have proposed statistical estimators of the maximum magnitude ([22], [24], [23]), but the author himself admits that these estimates prove to be very close to the maximum magnitude of the catalog.
The objective of this paper is to propose two innovative alternatives to the Gutenberg-Richter model, based on Extreme Value Theory, that do not require fixing the value of m_max a priori: the first model uses a Generalized Pareto Distribution (GPD) to model the tail distribution of magnitudes; the second is a variation on the usual Gutenberg-Richter model in which the Exponential distribution is truncated at a random maximum magnitude following a distribution resulting from an extreme value analysis.
These new recurrence models are implemented into PSHA in order to check their consistency with the common Gutenberg-Richter model, as well as to assess their impact on seismic hazard levels, mainly for the long return periods of interest for nuclear facilities.
The paper is organized as follows. First, after recalling the truncated Gutenberg-Richter model, we present both innovative recurrence models (see Sect. 2). Then, we estimate these three recurrence models on a particular seismic region (see Sect. 3). Finally, we integrate them into a complete probabilistic seismic hazard calculation to evaluate their impact on the seismic hazard levels (see Sect. 4). Both innovative models constitute a generation of recurrence models that, to our knowledge, has never been used in PSHA.

Earthquake Recurrence Models
We present in this section the usual Gutenberg-Richter model and our two innovative models, both based on Extreme Value Theory. We use the maximum likelihood procedure on data that show the following main features: 1. observation spans depend on magnitude; 2. magnitudes are imprecise: each magnitude is assumed to lie uniformly in an interval of length 0.5; 3. we discard magnitudes lower than m0 = 3.5 Mw, considered unreliable. This last feature implies that we estimate the restriction of the earthquake Poisson point process to the data range [m0, m_max].
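As a hedged sketch of feature 2, the likelihood contribution of an interval-valued magnitude is the probability mass of its interval under the truncated Exponential rather than a density value. The snippet below illustrates this on synthetic data; it is not the paper's estimator (which also handles magnitude-dependent observation spans), and all numerical values are assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def interval_nll(beta, intervals, m0, m_max):
    """Negative log-likelihood of Exp(beta) truncated to [m0, m_max],
    observed only through intervals [low, high] of length 0.5."""
    lo = np.clip(intervals[:, 0], m0, m_max) - m0
    hi = np.clip(intervals[:, 1], m0, m_max) - m0
    Z = 1.0 - np.exp(-beta * (m_max - m0))          # truncation constant
    p = (np.exp(-beta * lo) - np.exp(-beta * hi)) / Z
    return -np.sum(np.log(p))

# Synthetic check: simulate truncated-Exponential magnitudes, then censor
# each one into a 0.5-wide interval before refitting beta.
rng = np.random.default_rng(1)
beta_true, m0, m_max, n = 2.0, 3.5, 7.5, 5000
u01 = rng.uniform(size=n)
mags = m0 - np.log(1 - u01 * (1 - np.exp(-beta_true * (m_max - m0)))) / beta_true
intervals = np.column_stack([mags - 0.25, mags + 0.25])
beta_hat = minimize_scalar(lambda b: interval_nll(b, intervals, m0, m_max),
                           bounds=(1e-3, 10.0), method="bounded").x
```

With a sample of this size the interval-censored fit recovers a value close to the simulated beta.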

The Gutenberg-Richter model (GRt-imp)
This model relies on the Gutenberg-Richter law (1), where magnitudes are Exponentially distributed, with a maximum magnitude m_max fixed by expert judgement based on geophysical considerations. The restriction of the earthquake Poisson point process to the data range [m0, m_max] is then defined by: the magnitude distribution, Exponential(β) truncated to [m0, m_max]; the annual counting distribution, Poisson(µ0), where µ0 is the mean annual number of earthquakes with magnitude larger than m0. The Gutenberg-Richter parameters (a, b) are linked to (β, µ0) by b = β / ln 10 and a = log10 µ0 + b m0. The recurrence model is given for all m ∈ [m0, m_max] by:

E[N_m] = µ0 (e^(−β(m−m0)) − e^(−β(m_max−m0))) / (1 − e^(−β(m_max−m0))).

The maximum likelihood estimator of (β, µ0) is detailed in [18], as well as its asymptotic properties; the estimator of (a, b) follows from the relations above. Since the estimator of (β, µ0) is asymptotically Normal and the mapping (β, µ0) → (a, b) is regular, the estimator of (a, b) is also asymptotically Normal, which allows confidence intervals on a and b to be determined.
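The mapping (β, µ0) → (a, b) and the delta-method propagation of the asymptotic covariance can be sketched as follows; the Jacobian encodes our reading of the relations b = β/ln 10 and a = log10 µ0 + b m0, and the numerical inputs are illustrative, not the paper's estimates:

```python
import numpy as np

def to_gr_params(beta, mu0, m0):
    """(beta, mu0) -> (a, b): b = beta/ln(10), a = log10(mu0) + b*m0."""
    b = beta / np.log(10)
    a = np.log10(mu0) + b * m0
    return a, b

def gr_covariance(mu0, m0, cov_beta_mu0):
    """Delta method: covariance of (a, b) from the covariance of (beta, mu0)."""
    ln10 = np.log(10)
    J = np.array([[m0 / ln10, 1.0 / (mu0 * ln10)],   # row: gradient of a
                  [1.0 / ln10, 0.0]])                # row: gradient of b
    return J @ cov_beta_mu0 @ J.T

# Illustrative values: beta = ln(10) gives b = 1, and mu0 = 1 gives a = m0.
a, b = to_gr_params(beta=np.log(10), mu0=1.0, m0=3.5)
C = gr_covariance(1.0, 3.5, np.array([[0.01, 0.0], [0.0, 0.04]]))
```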

The Generalized Pareto Distribution based model (GPD-imp)
This model still relies on the Poisson point process modelling of earthquakes. It is an alternative to the Gutenberg-Richter law (1), as the tail distribution of magnitudes is based on Extreme Value Theory: the Exponential distribution is no longer assumed, and a Generalized Pareto Distribution (GPD) is used to model magnitudes larger than a threshold u, to be determined.
The restriction of the earthquake Poisson point process to magnitudes larger than u is defined by: the excesses of magnitudes above u are distributed according to a GPD(σ, ξ); the annual count of magnitudes larger than u follows a Poisson(µu) distribution, where µu is the mean annual number of earthquakes with magnitude larger than u.
A GPD(σ, ξ) is parameterized by a scale parameter σ and a shape parameter ξ. Given that the strain energy released in a region is necessarily bounded, the magnitude distribution must be bounded too. Thus, it belongs to the Weibull attraction domain, with ξ < 0. The cumulative distribution function (cdf) of a GPD with ξ < 0 is defined for all x ∈ [0, −σ/ξ] by:

H(x) = 1 − (1 + ξ x / σ)^(−1/ξ).

We denote by M0 = M | M ≥ m0 the magnitudes larger than m0 and by F0 its cdf. If u is large enough ([30]), then the tail distribution of M0 is written for all m ≥ u as:

F̄0(m) = ζu (1 + ξ (m − u) / σ)^(−1/ξ), with ζu = F̄0(u). (5)

The recurrence model is defined for all m ≥ u by:

E[N_m] = µu (1 + ξ (m − u) / σ)^(−1/ξ).

The expression of the maximum likelihood estimator of (σ, ξ, µu) is detailed in [16] and [17]. In particular, the estimator is asymptotically Normal, which allows confidence intervals on (σ, ξ, µu) to be determined.
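As a sketch of this estimation step, a bounded-tail GPD can be fitted with scipy on synthetic excesses (not the Alps catalog, and ignoring the interval-censoring of magnitudes); scipy's `genpareto` shape argument `c` plays the role of ξ:

```python
import numpy as np
from scipy.stats import genpareto

u = 4.0                                   # threshold (assumed, as in Sect. 3)
# synthetic magnitude excesses above u from a bounded GPD (xi < 0)
excesses = genpareto.rvs(c=-0.3, scale=0.6, size=2000,
                         random_state=np.random.default_rng(0))

xi, _, sigma = genpareto.fit(excesses, floc=0.0)   # fit the excesses directly
m_upper = u - sigma / xi                  # finite upper bound since xi < 0
```

Fixing `floc=0` fits the distribution of excesses above u, matching the GPD(σ, ξ) formulation of the text.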

The Random Gutenberg-Richter model (GRt-al)
The Random Gutenberg-Richter model is an innovative variation on the common truncated Gutenberg-Richter model: magnitudes are assumed to follow an Exponential distribution truncated at a random upper bound magnitude M_max, which follows a distribution L_Mmax resulting from an extreme value analysis; b is the usual Gutenberg-Richter parameter, estimated as in the GRt-imp section. It is important to note that the estimation of b is independent of the upper bound of the Exponential, which allows it to be estimated separately. The distribution L_Mmax is the result of an extreme value analysis of magnitudes: we consider the asymptotic distribution of the estimator of a well-chosen large-order quantile of M0. The quantile q_p of order p is defined by F0(q_p) = p, and from (5), with ζu = F̄0(u) and ξ < 0, we deduce that:

q_p = u + (σ/ξ) [((1 − p)/ζu)^(−ξ) − 1]. (7)

We denote by g_p the function such that q_p = g_p(σ, ξ) and consider the estimator of q_p obtained by applying g_p to the estimator of (σ, ξ). We know that the estimator of (σ, ξ) is asymptotically Normal, with a covariance matrix denoted Σ. Under regularity considerations, the estimator of q_p is also a maximum likelihood estimator, asymptotically Normal, with covariance matrix Γ = ∇g_p Σ ∇g_p^T, where ∇g_p = (∂g_p/∂σ, ∂g_p/∂ξ) evaluated at the estimated (σ, ξ). Differentiating relation (7) gives these partial derivatives, which enables Γ and finally the asymptotic distribution of the quantile estimator to be determined. Furthermore, it is essential to take into account the maximum magnitude m_max,cat of the catalog: it would make no sense for M_max to be smaller than m_max,cat. Thus, L_Mmax has to be truncated below at m_max,cat, ensuring that M_max is larger than m_max,cat with probability 1.
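A minimal numerical sketch of this quantile construction, assuming the closed form q_p = u + (σ/ξ)[((1 − p)/ζu)^(−ξ) − 1] and replacing the analytical derivatives by central finite differences; all parameter values are illustrative:

```python
import numpy as np

def gpd_quantile(p, u, sigma, xi, zeta_u):
    """q_p = u + (sigma/xi) * (((1 - p) / zeta_u) ** (-xi) - 1), for xi < 0."""
    return u + (sigma / xi) * (((1.0 - p) / zeta_u) ** (-xi) - 1.0)

def quantile_variance(p, u, sigma, xi, zeta_u, Sigma, eps=1e-6):
    """Delta method: Var(q_p) = grad(g_p) @ Sigma @ grad(g_p)^T,
    with the gradient w.r.t. (sigma, xi) taken by central differences."""
    grad = np.array([
        (gpd_quantile(p, u, sigma + eps, xi, zeta_u)
         - gpd_quantile(p, u, sigma - eps, xi, zeta_u)) / (2 * eps),
        (gpd_quantile(p, u, sigma, xi + eps, zeta_u)
         - gpd_quantile(p, u, sigma, xi - eps, zeta_u)) / (2 * eps),
    ])
    return grad @ Sigma @ grad

q = gpd_quantile(p=1 - 1e-4, u=4.0, sigma=0.6, xi=-0.3, zeta_u=0.2)
v = quantile_variance(1 - 1e-4, 4.0, 0.6, -0.3, 0.2, np.eye(2) * 1e-3)
```

With ξ < 0 the quantile always stays below the GPD upper bound u − σ/ξ, which is the stability property the text exploits.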
It is worth noting that the cdf F of magnitudes is related to F0 through the relation F̄(m) = ζu F̄0(m) for all m ≥ u. Then F(q_p) ≥ F0(q_p), which implies that q_p is in fact an F-quantile of some order p̃ ≥ p. In other words, q_p is a more extreme value with respect to the distribution F than with respect to F0.
We want to emphasize that assigning to M_max the distribution of the GPD(σ, ξ) upper bound u − σ/ξ, deduced from the Normal asymptotic distribution of the (σ, ξ) estimator, would not be a good idea, because the function (σ, ξ) → u − σ/ξ is irregular at any point of the form (σ, 0): a small variation in ξ induces a large variation in the upper bound. Thus, uncertainty on ξ induces a huge variance in the distribution of u − σ/ξ. On the contrary, the estimation of any large-order quantile is much more stable, as the function g_p is infinitely regular at any point of the form (σ, 0) (for example, differentiability at (σ, 0) is ensured by the equivalent ∂g_p/∂ξ(σ, ξ) ∼ (1/2) σ log² α with α = ζu (1 − p)). More generally, the upper bound of a distribution gives poor information about its extreme values; the decay speed of its tail is more informative. Consider, for example, the Normal and LogNormal distributions: both have an infinite upper bound, whereas the probability of being farther than 4 standard deviations from the mean is about 10^-5 for the Normal distribution and 10^-2 for the LogNormal one. The study of large-order quantiles is more significant: the quantile of order 99% is 2.33 standard deviations from the mean for the Normal distribution, while it is 4.9 standard deviations for the LogNormal one.
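The Normal/LogNormal comparison can be reproduced numerically. The LogNormal figures depend on its shape parameter, which the text does not state; the shape s = 1 below is an assumption, so the snippet illustrates the ordering of the tails rather than the exact 4.9 figure:

```python
import numpy as np
from scipy.stats import norm, lognorm

ln = lognorm(s=1.0)                      # assumed shape parameter

# 99% quantile, expressed in standard deviations above the mean
z_norm = norm.ppf(0.99)                              # ~2.33
z_logn = (ln.ppf(0.99) - ln.mean()) / ln.std()       # ~4 for s = 1

# probability of exceeding mean + 4 standard deviations
p_norm = norm.sf(4.0)                                # ~3e-5
p_logn = ln.sf(ln.mean() + 4.0 * ln.std())           # ~1e-2
```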

Models comparison
To aid comparison, we detail in this section the pros and cons of each model.
The truncated Gutenberg-Richter model is an improvement with respect to the Gutenberg-Richter model (1), as the frequency-magnitude curve falls off from the straight line when the magnitude increases. The difficulty remains in the choice of m_max, on which the recurrence model strongly depends and which raises debate within the seismological community.
The GPD-based model is an improvement with respect to the truncated Gutenberg-Richter model (3), as the frequency-magnitude curve falls off from the straight line when the magnitude increases without any additional hypothesis on the maximum possible magnitude: its upper bound results from the GPD estimation, equal to m_max = u − σ/ξ when ξ < 0.
However, for areas with low seismicity, the estimation of the GPD-based model might be tricky, due to the possibly low number of earthquakes in stable continental regions, particularly earthquakes of large magnitude.
Compared to the previous models, the Random Gutenberg-Richter model presents the following advantages: 1. It requires less data to be estimated: the GPD-based model needs enough data to assume that the empirical tail distribution has converged towards an extreme value distribution, whereas the Random Gutenberg-Richter model estimates the Exponential parameters using all the available data (and not only the largest values); 2. It is able to model the uncertainty on the upper truncation thanks to an extreme value analysis. In domains where the seismicity is low, that feature proves to be essential.
To conclude, it is worth considering the following good practices: for domains of high seismicity, if no value of m_max achieves universal agreement, the GPD-based model is likely to help model the tail distribution without any additional hypothesis. For domains of low seismicity, the Random Gutenberg-Richter model should be used after the random modelling of m_max has been carried out on a larger domain thanks to the GPD-based model. In general, the simultaneous use of all the models helps to better understand their respective limitations and their impact on hazard assessments.

The Alps data
In this section, we apply the whole approach to the Alps mountain region, in the south of France, one of the most active regions in France. A homogeneous seismotectonic zone was delimited based on geological, structural, geophysical, neotectonic and seismological data. We use the FCAT17 catalog ([25]) (see Data and Resources Section), which gathers recent instrumental magnitudes measured by seismometers ([10], [11]) and magnitudes attributed to historical events ([1]). Magnitudes were derived from historical documents ([34], [33]) in a 2-step process: 1. the historical information was translated into intensities, and 2. magnitudes were derived from intensity maps. All magnitudes are given in Mw. Figure 1 presents the location of earthquakes inside the delimited seismotectonic domain.
A declustering method was applied to separate the time-independent part of seismicity from the seismic activity related to aftershocks, foreshocks or clusters of earthquakes.
Complete periods associated with each class of magnitudes are defined in Table 1. Furthermore, we set aside all magnitudes lower than m0 = 3.5 Mw, considered unreliable. We assume that the restricted catalog constitutes a complete statistical sample. Our database finally contains N = 583 earthquake events, from year 1524 until year 2016, whose magnitudes range from 3.5 to 6.69. The database is large enough to achieve robust results with all the models presented. Furthermore, the accuracy of the magnitude data can sometimes be no better than 0.5 units, due to imprecise measurement methods or to biases introduced when converting from various local magnitude scales. In this case, we cannot use the precise value of the magnitude to fit our models: we only know that it belongs to an interval C_i of length 0.5.
We used the open source software OpenTURNS [7], which provides advanced probabilistic modelling and statistical functionalities, to perform the probabilistic computations.

The Gutenberg-Richter model (GRt-imp)
The maximum magnitude selected for the region is m_max = 7.5 Mw, based on tectonic information and expert opinion. Using the maximum likelihood estimators, we obtained the estimates of (b, a). The asymptotic distribution of the estimator of (b, a) is Normal, with mean and covariance matrix given in (10). Using the margins of (10), we can define confidence intervals of level 90% for a and b. Generating a sample of (b, a) from (10) allows all the corresponding recurrence models to be drawn, and thus the impact of the estimation uncertainty of (b, a) on the recurrence model to be visualized. Figure 4 shows 100 generated recurrence models and Figure 5 shows the 5% and 95% quantile curves, built from 10^6 models. Table 2 details some values of the quantile curves. We note that the mean annual number of earthquakes whose magnitude is larger than 7 is less than 1.11 × 10^-3 with 95% probability.
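The Monte Carlo propagation step can be sketched as follows; the mean and covariance below are placeholders standing in for the estimates of equation (10), so only the mechanics, not the numbers, carry over:

```python
import numpy as np

rng = np.random.default_rng(42)
mean_ba = np.array([1.0, 3.5])                        # (b, a), illustrative
cov_ba = np.array([[2.5e-3, 8.0e-3],
                   [8.0e-3, 3.0e-2]])                 # placeholder covariance
samples = rng.multivariate_normal(mean_ba, cov_ba, size=100_000)

# annual rate of magnitudes >= 7 for each sampled recurrence model
rates_m7 = 10.0 ** (samples[:, 1] - samples[:, 0] * 7.0)
q05, q95 = np.quantile(rates_m7, [0.05, 0.95])        # 90% band at m = 7
```

Evaluating the sampled curves on a magnitude grid rather than at a single point yields the quantile curves of Figure 5.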
Among several tests to validate the fitted Poisson point process, we focus here on the goodness of fit of the estimated magnitude distribution to the data: it consists of drawing, in the same plot, the estimated distribution of M | M ∈ C_k, where C_k is a class of magnitudes, and the empirical cumulative rates. Figure 6 shows that the fit is correct.

The Generalized Pareto Distribution based model (GPD-imp)
We first determined the threshold u using different modellings of the distribution of M0 = M | M ≥ m0: non-parametric fits with normal or uniform kernels. We fixed u = 4.0, which is a quantile of order larger than 80%. Using the maximum likelihood estimators, we obtained the estimates of (σ, ξ, µu). The asymptotic distribution of the estimator of (σ, ξ, µu) is Normal, with mean and covariance matrix given in (13). Using the margins of (13), we can define confidence intervals of level 90% for σ, ξ and µu. Generating a sample of (σ, ξ, µu) from (13) allows all the corresponding recurrence models to be drawn and thus the impact of the estimation uncertainty of (σ, ξ, µu) to be visualized. Figure 9 shows 100 generated recurrence models and Figure 10 shows the quantile curves of level 5% and 95%, built from 10^6 models. Table 3 details some values of the quantile curves. We note that the mean annual number of earthquakes whose magnitude is larger than 7 is less than 3.37 × 10^-4 with a probability of 95%. Among several tests to validate the fitted model, we focus here on the fit of the annual maximum magnitude distribution, whose cdf G is defined for m ≥ u by G(m) = exp(−µu (1 + ξ (m − u)/σ)^(−1/ξ)). This cdf, computed from the estimated GPD model, is compared with a non-parametric kernel estimate with normal or uniform kernels: Figure 11 shows that the fit is correct. Besides, Figure 12 shows the estimated annual counting distribution of earthquakes, Poisson(µu), with magnitudes larger than u = 4, together with the empirical one.
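Our reading of the annual-maximum cdf under the fitted Poisson-GPD model can be sketched as follows, with assumed parameter values rather than the estimates of (13):

```python
import numpy as np
from scipy.stats import genpareto

def annual_max_cdf(m, u, sigma, xi, mu_u):
    """G(m) = exp(-mu_u * P(M > m | M > u)) for m >= u: the probability
    that no earthquake above u during one Poisson(mu_u) year exceeds m."""
    tail = genpareto.sf(np.asarray(m) - u, c=xi, scale=sigma)
    return np.exp(-mu_u * tail)

# assumed values for illustration (upper bound is u - sigma/xi = 6.0 here)
u, sigma, xi, mu_u = 4.0, 0.6, -0.3, 2.0
G = annual_max_cdf(np.array([4.0, 5.0, 6.0]), u, sigma, xi, mu_u)
```

At m = u the cdf equals exp(−µu), and it reaches 1 at the GPD upper bound since ξ < 0.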

The Random Gutenberg-Richter model (GRt-al)
To elaborate the Random Gutenberg-Richter model, we considered the distribution of the quantile of order 1 − 10^-4 of the GPD-imp model to randomize the upper bound of the Exponential distribution. From (8), its asymptotic distribution is Normal, with mean 6.75 and variance 7.63 × 10^-2 calculated from (9), which we truncated below at the maximum magnitude contained in the catalog, m_max,cat = 6.69 Mw. Figure 13 shows the pdf of the resulting quantile distribution. Figure 2 shows the final distribution of magnitudes and Figure 3 shows the associated recurrence model compared to the Gutenberg-Richter straight line.
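The truncated Normal distribution of M_max (mean 6.75, variance 7.63 × 10^-2, truncated below at 6.69, as stated above) can be built directly with scipy's `truncnorm`; this is a sketch of the construction, not the paper's implementation:

```python
import numpy as np
from scipy.stats import truncnorm

mu, var, m_cat = 6.75, 7.63e-2, 6.69
sd = np.sqrt(var)
a = (m_cat - mu) / sd                  # standardized lower bound
M_max = truncnorm(a=a, b=np.inf, loc=mu, scale=sd)

samples = M_max.rvs(size=5, random_state=0)   # all >= 6.69 by construction
```

Because the lower truncation point lies below the Normal mean, the truncated mean is only slightly above 6.75.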

Results comparison
In this section, we compare the previous models: Figure 14 shows the pdfs of the conditional magnitude distributions, and Table 4 details some values of the recurrence models (mean annual number of earthquakes whose magnitude is larger than m). We note that the truncated Gutenberg-Richter model has the heaviest tail distribution, while the GPD-based model has the lightest one (the pdf of GRt-imp lies above the pdf of GPD-imp). This means that large magnitudes have a higher probability of occurring with the truncated Gutenberg-Richter model than with the GPD-based model, even though the upper bound of the GPD-based model is larger than the m_max = 7.5 fixed for the truncated Gutenberg-Richter model. Once more, the Random Gutenberg-Richter model appears as a compromise: it is equivalent to the truncated Gutenberg-Richter model up to the magnitude m_max,cat = 6.69, and equivalent to the GPD-based model for larger magnitudes.

Probabilistic Seismic Hazard Assessment
In this section, we integrate our innovative recurrence models into a probabilistic seismic hazard calculation in order to check their consistency with the truncated Gutenberg-Richter model commonly used in PSHA, as well as to assess their impact on probabilistic seismic hazard assessments.

Methodology
The PSHA computational model used in this study corresponds to Cornell's approach ([13]). This approach calculates the probability of exceeding a target value of ground motion during a fixed period of time. A probabilistic seismic hazard calculation takes into account the influence of all seismic sources able to contribute to the hazard, as well as the spatial and temporal distribution of the seismicity of each of these sources. Four main steps are necessary to perform a PSHA calculation: 1. Identification of the area seismic sources that might affect the site of interest. Area source models are often used when one cannot identify a specific fault. Usually, in PSHA, a uniform seismicity distribution is assigned to each area source, implying that earthquakes are equally likely to occur at any point within the source zone.
2. Specification of temporal and magnitude seismicity distributions for each source. Cornell's approach assumes that earthquakes follow a Poisson point process. The most commonly used frequency-magnitude model is the Gutenberg-Richter relationship (1). 3. Calculation of ground motions and their uncertainty.
Ground Motion Prediction Equations (GMPEs) are used to predict the ground motion at the site itself. The parameters of interest include different intensity measures (peak ground acceleration, peak ground velocity, peak ground displacement, spectral acceleration, intensity, strong ground motion duration). Most GMPEs available today are empirical and depend on the earthquake magnitude, source-to-site distance, type of faulting and local site conditions. 4. Integration of uncertainties on earthquake location, magnitude and GMPE.
The ultimate result of a PSHA calculation is a seismic hazard curve giving the annual probability of exceeding a specified ground motion parameter. Conceptually, the computation of the seismic hazard is relatively simple, as the calculation is carried out with the following equation:

λ(IM > x) = λ(M > m_min) ∫∫ P[IM > x | (m, r, θ)] f_{M|M>m_min}(m) f_R(r) dm dr, (18)

where IM is the Intensity Measure of the ground motion parameter; x is the target intensity; λ(IM > x) is the annual rate of earthquakes leading to an exceedance of the target; λ(M > m_min) is the annual rate of earthquakes of magnitude M larger than the minimum magnitude m_min selected for the source zone; f_{M|M>m_min} and f_R are the pdfs of the magnitude and of the distance from the source zone; P[IM > x | (m, r, θ)] is the probability that an earthquake of magnitude m at distance r from the site generates an intensity larger than x; θ collects all the other fixed parameters regarding the site, the type of faults, etc. This probability is determined using a GMPE model that takes into account the model errors due to amplifications and incomplete understanding of geological phenomena.
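A discretized sketch of equation (18), with uniform magnitude and distance pdfs and a toy log-linear GMPE (the coefficients are invented for illustration and are not one of the four GMPEs used in the study):

```python
import numpy as np
from scipy.stats import norm

def hazard_rate(x, lam_min, m_grid, f_m, r_grid, f_r, gmpe_mean, gmpe_sigma):
    """Discretized Cornell integral:
    lambda(IM > x) = lam_min * sum_m sum_r P[IM > x | m, r] f_M(m) f_R(r) dm dr."""
    dm, dr = m_grid[1] - m_grid[0], r_grid[1] - r_grid[0]
    # lognormal GMPE: ln(IM) | (m, r) ~ Normal(gmpe_mean(m, r), gmpe_sigma)
    p_exc = norm.sf(np.log(x),
                    loc=gmpe_mean(m_grid[:, None], r_grid[None, :]),
                    scale=gmpe_sigma)
    return lam_min * np.sum(p_exc * f_m[:, None] * f_r[None, :]) * dm * dr

# toy setup: uniform pdfs on [4, 7] (magnitude) and [5, 50] km (distance)
m_grid = np.linspace(4.0, 7.0, 61)
r_grid = np.linspace(5.0, 50.0, 91)
f_m = np.full_like(m_grid, 1.0 / 3.0)
f_r = np.full_like(r_grid, 1.0 / 45.0)
gmpe = lambda m, r: -2.0 + 1.0 * m - 1.3 * np.log(r)   # toy mean of ln(IM)
lam = hazard_rate(0.2, 0.1, m_grid, f_m, r_grid, f_r, gmpe, 0.6)
```

In a real calculation f_M comes from the recurrence model under study, which is exactly where the three models of this paper differ.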
The structure of a PSHA calculation is summarized in Figure 16. In this study, the PSHA model is a simplified but realistic model composed of a single area source. It uses real data (see Data and Resources Section) and a conceptual calculation model respecting international practice. The tool used for the calculations is OpenQuake [27].
The PSHA calculations take into account the following epistemic uncertainties:
- Ground motion model uncertainty: we selected four Ground Motion Prediction Equations (GMPEs). These independent GMPEs are representative of the published databases for shallow continental active regions ([4], [5], [9], [12]).
- Maximum magnitude uncertainty: both innovative models, GPD-imp and GRt-al, can model this uncertainty, indirectly through the asymptotic distribution of the parameter estimators for GPD-imp, or directly through the distribution of the random upper bound for GRt-al.
- Recurrence parameter uncertainty: this epistemic uncertainty is taken into account thanks to the asymptotic distribution of the parameter estimators, which enables several possible sets of parameters to be generated: (a, b) for the Gutenberg-Richter based models and (σ, ξ, µu) for the GPD-imp model. Each set of parameters constitutes an alternative branch of the logic tree.
The final logic tree of the PSHA calculations is composed of 400 branches, summed up in Figure 17, each one associated with specific probabilistic hypotheses. The weights of the logic-tree branches are defined according to the degree of confidence given to each of the assumptions and interpretations. In our study, all branches have the same weight.
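Our reading of the 400-branch layout is 4 GMPEs × 100 sampled parameter sets per recurrence model (an assumption consistent with the Results section); enumerating equally weighted branches is then straightforward:

```python
from itertools import product

gmpes = [f"gmpe_{i}" for i in range(4)]            # placeholder names
param_sets = [f"params_{j}" for j in range(100)]   # sampled (a, b) or (sigma, xi, mu_u)

branches = list(product(gmpes, param_sets))        # 4 x 100 = 400 branches
weights = {branch: 1.0 / len(branches) for branch in branches}
```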

Results and Discussion
Impact of the new generation of recurrence models on the probabilistic seismic hazard

We calculate the seismic hazard curves and Uniform Hazard Spectra (UHS) using the previous recurrence models. Figure 18 shows all the hazard curves; for all spectral accelerations, the curves look similar, with a decrease in the hazard calculated with the new recurrence models (GRt-al and GPD-imp) compared to the common truncated Gutenberg-Richter model (GRt-imp). This difference is higher for low frequencies, as clearly observed for the spectral frequency of 2 Hz. Figure 19 illustrates the impact of the new recurrence models on the UHS calculated for the return periods 475, 10 000, 20 000 and 100 000 years. This impact is particularly important for low frequencies (below around 10 Hz), which is consistent with the impact observed on the seismic hazard curves. The decrease is accentuated for larger return periods, clearly visible for the 100 000 year return period. Figure 20 quantifies the relative differences of the seismic hazard generated by the new recurrence models with respect to the common truncated Gutenberg-Richter model. The curves show a clear transition at about 10 Hz. Below 10 Hz, there is a marked decrease in the seismic hazard with both new recurrence models: for return periods larger than 5 000 years, the decrease in the seismic hazard of the Generalized Pareto Distribution based model GPD-imp is estimated at around 50, 30, 25 and 10% for frequencies of 0.5, 1, 2 and 10 Hz respectively, with respect to the common truncated Gutenberg-Richter model GRt-imp; for the same return periods, the decrease in the seismic hazard of the Random Gutenberg-Richter model GRt-al is estimated at about 25, 18, 12 and 5% for frequencies of 0.5, 1, 2 and 10 Hz respectively, with respect to GRt-imp. For frequencies above 10 Hz and up to 100 Hz, the reduction in seismic hazard of GPD-imp and GRt-al is less than 10% with respect to GRt-imp.
We explain this significant decrease by the different modelling of the extreme magnitudes: the tail distribution of magnitudes of both GPD-imp and GRt-al is much lighter than the tail of GRt-imp, as drawn in Figure 14. The pdfs of the innovative models drop towards 0 faster than the pdf of the truncated Gutenberg-Richter model. In particular, the probability of exceeding a magnitude of 7 Mw calculated from GPD-imp and GRt-al is 8 times smaller than the probability calculated from GRt-imp. That feature is noticeable in the values of the recurrence models, through the mean annual number of earthquakes with magnitude larger than 7 Mw. We insist on the fact that even if the maximum possible magnitude estimated by the GPD-based model, equal to 8.06 Mw, is larger than the m_max = 7.5 fixed by experts for the truncated Gutenberg-Richter model, the GPD-based model gives less weight to large magnitudes than the truncated Gutenberg-Richter model. We have the same feature for the GRt-al model, whose maximum possible magnitude is theoretically infinite, but whose probability of exceeding the magnitude 7 Mw is 1.47 × 10^-8. Thus, these new recurrence models reduce the population of large-magnitude earthquakes included in the PSHA calculation, which directly induces a reduction in the seismic hazard, precisely for the low frequency components and for very long return periods (e.g. 100 000 years).
Impact of parameter uncertainty on the probabilistic seismic hazard

In this section, we propagate the uncertainty on the estimation of the recurrence model parameters through the PSHA calculations. We generated 100 possible sets of parameters for each recurrence model, according to the asymptotic distribution of their estimators, which generated 400 branches in the PSHA logic tree. At each frequency and for each return period, we calculated the quantiles of level 5%, 50% (median value) and 95%. Figure 21 shows the quantile curves of the UHS for each recurrence model. To better understand the impact of the estimation uncertainty, we draw in Figure 22 the coefficient of variation of the sample values at each frequency, defined as the ratio between the standard deviation and the mean of the values: the larger the coefficient of variation, the larger the variability around the mean value. The results show that from 5 Hz upwards, GRt-imp has the largest coefficient of variation compared to GPD-imp and GRt-al: the uncertainty in the estimation of the truncated Gutenberg-Richter model induces the greatest uncertainty on the hazard levels. In other words, hazard levels calculated with the new recurrence models are more robust than those from the truncated Gutenberg-Richter model with respect to parameter estimation uncertainty. For lower frequencies, GPD-imp presents the greatest uncertainty, with a peak at 30% at the frequency 0.33 Hz. On the other hand, at that frequency, GRt-imp and GRt-al behave practically identically, largely below GPD-imp. This trend is the same for all return periods analysed in the present study.

Conclusions
In this paper, we presented a new generation of earthquake recurrence models and compared them to the truncated Gutenberg-Richter model that is commonly used in PSHA calculations. We evaluated their impact on probabilistic seismic hazard assessments using the Alps data. Both innovative models are based on an extreme value analysis, which avoids fixing the maximum possible magnitude m_max a priori: this value results from a statistical analysis adapted to extreme values. This feature is particularly interesting, as there is no widely accepted method to estimate m_max. The Random Gutenberg-Richter model combines several advantages: the Exponential distribution, widely accepted by the seismological community, is still used to model the magnitude distribution; the randomness of M_max allows the uncertainty on the maximum possible magnitude, whose determination is a tricky matter, to be taken into account; and the distribution of M_max relies on an extreme value analysis that aims at modelling the behavior of the extreme magnitudes.
This new generation of recurrence models introduces a reduction of the hazard compared to the truncated Gutenberg-Richter model commonly used in PSHA. The decrease is significant for all frequencies below 10 Hz, above all for very long return periods (e.g. 100 000 years), and is estimated at between 10% and 50% depending on the frequency. These new recurrence models are expected to result in a more realistic assessment of the probabilistic seismic hazard.

Declarations
Not applicable.