Recurrence Models based on Extreme Value Theory : Impacts on Probabilistic Seismic Hazard Assessments and Comparison with the Standard Approach

Abstract Probabilistic Seismic Hazard Analyses (PSHA) require that at least the mean activity rate be known, as well as the distribution of magnitudes. Under the Gutenberg-Richter assumption, the magnitudes follow an exponential distribution truncated above at a maximum possible magnitude denoted m max. This parameter is often fixed by expert judgement based on tectonic considerations, for lack of a universal method. In this paper, we present two alternatives to the exponential distribution of the magnitudes, based on extreme value theory, that do not require fixing the value of m max a priori: the first model uses a generalized Pareto distribution (GPD) to model the tail distribution of the magnitudes; the second model, the Randomized Gutenberg-Richter model, is a variation on the usual exponential distribution in which m max is randomized and follows a distribution defined from an extreme value analysis. We use maximum likelihood estimates that take into account the time-varying level of completeness of the catalog and the uncertainty in the magnitude values themselves. We apply both models to the Alps region. Then, we integrate the resulting magnitude-frequency relations into a probabilistic seismic hazard calculation to evaluate their impacts on the seismic hazard levels. These extreme-value-based recurrence models lead to a reduction of the seismic hazard level compared to the Gutenberg-Richter model conventionally used for PSHA calculations. This decrease is significant mainly for long spectral periods.

One important task in PSHA consists in calculating the activity rate of each seismic source and the earthquake magnitude-frequency distribution. The most common model is based on the empirical Gutenberg-Richter law ([20], [21]), defined by:

log10 E[N m ] = a - b m, (1)

where E[N m ] is the mean number of earthquakes with a magnitude larger than m. Under the assumption that earthquakes are generated by a stationary point Poisson process, this relation implies that magnitudes are exponentially distributed. As the total energy released by earthquakes is finite, some deviation from the Gutenberg-Richter straight line is required for large magnitudes, and many authors have proposed improvements ([15], [24], [30], [33], [40]): some of them consist of truncating the exponential distribution at a regional maximum possible magnitude m max, that is, the largest earthquake possible in the seismic and tectonic setting. For stable continental regions with poorly identified seismogenic structures and low earthquake rates, the determination of m max is a tricky matter. Currently, there is no widely accepted method to estimate m max, and the literature on estimating the maximum possible magnitude is vast. Some methods are based on empirical relationships between the magnitude and various tectonic and fault parameters, such as the fault length or the rupture dimension ([3], [43]). Other methods rely on records of the largest historic or paleo-earthquakes, applied especially in areas of low seismicity where large events have long return periods [28]. In the absence of any tectonic information, the maximum possible earthquake magnitude is assumed to be equal to the largest magnitude of the catalog, or the largest one plus an increment defined by expert judgement [44]. Many scientific papers propose statistical estimators of the maximum possible magnitude: for a complete review of methods applicable to the assessment of the maximum possible magnitude, see [8], [22], [23], [24], [25], [26] or [41].
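As an illustration, the Gutenberg-Richter law (1) can be evaluated directly; the values of a and b below are hypothetical, not estimates for any catalog:

```python
def gr_annual_rate(m, a, b):
    """Mean annual number of earthquakes with magnitude >= m under the
    (untruncated) Gutenberg-Richter law: log10 E[N_m] = a - b*m."""
    return 10.0 ** (a - b * m)

# Hypothetical parameters, for illustration only:
a, b = 3.5, 1.0
rate_m4 = gr_annual_rate(4.0, a, b)              # mean annual number of M >= 4 events
return_period_m6 = 1.0 / gr_annual_rate(6.0, a, b)   # mean return period of M >= 6
```

With b = 1, each unit of magnitude divides the mean annual rate by 10, which is the straight-line behavior in log scale that the truncated models below modify at large magnitudes.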
In this paper, we aim to provide two alternatives to the truncated Gutenberg-Richter model in which the maximum possible magnitude m max is fixed by expert judgement. These models are based on a generalized Pareto distribution (GPD) and require no assumption on m max: - the first model uses a GPD to model the tail of the magnitude distribution; - the second model is a variation on the usual truncated Gutenberg-Richter model: the magnitudes follow an exponential distribution whose upper bound is randomized. The randomized maximum magnitude follows a distribution that results from an extreme value analysis.
Furthermore, we go further than just estimating the magnitude-frequency relations associated with both models: we want to quantify their impacts on the seismic hazard levels, mainly for the long return periods of interest for nuclear facilities. For this reason, we implemented both of them in a complete probabilistic seismic hazard calculation.
The paper is organized as follows. In Section 2, we present both recurrence models, after recalling the commonly used truncated Gutenberg-Richter model. In Section 3, we detail the probabilistic seismic hazard assessment. Finally, in Section 4, we apply the models to the Alps region in France, and we integrate them into a complete probabilistic seismic hazard calculation to evaluate their impacts on the seismic hazard levels.

Earthquake Recurrence Models
We briefly present in this section the magnitude-frequency models that will be used afterwards.
We assume that the number of events in any time interval follows a Poisson distribution; consequently, the seismic events follow a point Poisson process.

The truncated Gutenberg-Richter model (GRt)
This model relies on the Gutenberg-Richter law (1), where magnitudes are exponentially distributed, with a maximum magnitude m max fixed by expert judgement based on geophysical considerations. In [17], the author details the model as well as its estimation from data with these features. The author also establishes the asymptotic normality of the joint estimator of (a, b).
The recurrence model is given for all m ≥ 0 by:

E[N m ] = 10^a (10^(-b m) - 10^(-b m max )) / (1 - 10^(-b m max )) for 0 ≤ m ≤ m max ,

and E[N m ] = 0 for m > m max .
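A sketch of this truncated recurrence law in code; a, b and m max below are illustrative values, not the estimates obtained later for the Alps:

```python
import numpy as np

def truncated_gr_rate(m, a, b, m_max):
    """Mean annual number of events with magnitude >= m under the truncated
    Gutenberg-Richter model: the exponential law is renormalized on
    [0, m_max] and the rate is zero beyond m_max."""
    m = np.asarray(m, dtype=float)
    num = 10.0 ** (-b * m) - 10.0 ** (-b * m_max)
    den = 1.0 - 10.0 ** (-b * m_max)
    return np.where(m <= m_max, 10.0 ** a * num / den, 0.0)

# Illustrative parameters only:
rates = truncated_gr_rate([4.0, 6.0, 7.4], a=3.5, b=1.0, m_max=7.5)
```

Near m max the rate drops steeply to zero, which is the sharp cutoff that the GPD and randomized models below are designed to soften.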

The Generalized Pareto Distribution based model (GPD)
This model is an alternative to the truncated Gutenberg-Richter law (1). We model extreme magnitudes using a generalized Pareto distribution rather than an exponential distribution. Indeed, under some general conditions, the tail of a distribution beyond a high threshold may be approximated ever more closely by a GPD. Extreme magnitudes are those that exceed a high threshold u to be determined. In [18], the author details the model as well as its estimation from data with these features. The author also establishes the asymptotic normality of the joint estimator of the parameters. A GPD(σ, ξ) is parameterized by a scale parameter σ and a shape parameter ξ. Given that the strain energy released in a region is necessarily bounded, the magnitude distribution must be bounded too. Thus, the magnitude distribution belongs to the Weibull domain of attraction, where ξ < 0. If the random variable X follows a GPD(σ, ξ) with ξ < 0, then its cumulative distribution function (cdf) is defined for all x ∈ [0, -σ/ξ] by:

F(x) = 1 - (1 + ξ x/σ)^(-1/ξ). (2)

Then, when ξ < 0, X is bounded above by -σ/ξ. We denote by M 0 = M |M ≥ m 0 the magnitudes larger than m 0 and by F 0 its cdf. If u is large enough, then, according to the Pickands-Balkema-de Haan theorem ([32]), the tail distribution of M 0 can be written for all m ≥ u as:

1 - F 0 (m) = ζ u (1 + ξ (m - u)/σ)^(-1/ξ), with ζ u = 1 - F 0 (u). (3)

The use of a GPD to model the tail distribution of the magnitudes is not new. In [7], [13], [34], [35] and [41], the authors propose maximum likelihood estimators of the GPD parameters (σ, ξ). But these estimators are suited to catalogs whose level of completeness is constant in time. A time-varying level of completeness entails a correlation between magnitude and time, and requires using the likelihood of the point Poisson process rather than the likelihood of the magnitude distribution only.
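A minimal fitting sketch with scipy. Note that this is plain maximum likelihood on the exceedances, which is precisely the simplification criticized above: it ignores the time-varying completeness and the magnitude uncertainty handled by the Poisson-process likelihood. The data here are synthetic:

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic exceedances of a threshold u, drawn from a GPD with xi < 0
# (bounded tail); all parameter values are illustrative.
u = 4.0
xi_true, sigma_true = -0.3, 0.5
rng = np.random.default_rng(0)
exceedances = genpareto.rvs(c=xi_true, scale=sigma_true, size=5000, random_state=rng)

# Plain MLE on the exceedances (loc fixed at 0 since these are excesses over u):
xi_hat, _, sigma_hat = genpareto.fit(exceedances, floc=0.0)
upper_bound = u - sigma_hat / xi_hat   # finite right endpoint since xi_hat < 0
```

scipy's `genpareto` uses the same sign convention as the text: a negative shape `c` gives a distribution bounded above at `-scale/c`.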
In [5] and [8], the authors propose to truncate the GPD tail distribution of the magnitudes at the endpoint T M : their numerical experiments led them to think that even with a negative extreme value index ξ, extreme value methods are not able to capture truncation at high levels. They propose an estimator of T M , using estimators of extreme quantiles, that is more stable than those proposed in [6].
A vast literature focuses on the Pareto distribution, because it considers the energy E released by earthquakes rather than the magnitude. Both variables are linked by the relation M = (1/(1.5 ln 10)) ln(E/2) + 1, where M is expressed on the Richter scale and E in megajoules. If the magnitudes follow an exponential distribution, then the energy E follows a Pareto distribution. The Pareto distribution belongs to the Fréchet domain of attraction, associated with ξ > 0. As the Pareto distribution is unbounded, a truncation is introduced: a sharp truncation leads to the so-called Truncated GR or Truncated Pareto model; a soft truncation, to the so-called Tapered GR or Tapered Pareto model ([22] or [8]).
In this paper, we focus on the magnitude distribution rather than on the energy distribution. The GPD is only used to model the tail of the magnitude distribution. Furthermore, we do not use any additional truncation, since the estimated GPD is already bounded above as ξ < 0. We check the goodness of fit of the GPD by comparing its right tail function with non-parametric right tail estimates (using histograms and kernel smoothing techniques). Finally, we have adapted the likelihood formulas of the point Poisson process to the time-varying level of completeness of the seismic catalog and to the uncertainty in the magnitude data.
The recurrence model is defined for all m ≥ u by:

E[N m ] = µ u (1 + ξ (m - u)/σ)^(-1/ξ),

where µ u is the mean annual number of earthquakes with magnitude larger than u.
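This GPD recurrence law can be sketched directly; the parameter values used below are illustrative, not the Alps estimates:

```python
import numpy as np

def gpd_annual_rate(m, mu_u, sigma, xi, u):
    """Mean annual number of events with magnitude >= m (for m >= u):
    mu_u * (1 + xi*(m - u)/sigma)^(-1/xi), and zero beyond the finite
    endpoint u - sigma/xi (reached when xi < 0)."""
    z = 1.0 + xi * (np.asarray(m, dtype=float) - u) / sigma
    # Clip at zero: beyond the endpoint the survival function is exactly 0.
    return mu_u * np.clip(z, 0.0, None) ** (-1.0 / xi)
```

Unlike the sharp cutoff of the truncated Gutenberg-Richter model, the rate here decays smoothly to zero as m approaches the endpoint.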

The Randomized Gutenberg-Richter model (GRt-rd)
The Randomized Gutenberg-Richter model is a new variation on the common truncated Gutenberg-Richter model: magnitudes are still supposed to follow a truncated exponential distribution, but the upper bound M max is randomized and follows a distribution denoted by L Mmax . The model is formally written as:

P [M ≤ m | M max = m max ] = (1 - 10^(-b m)) / (1 - 10^(-b m max )) for 0 ≤ m ≤ m max , with M max ∼ L Mmax ,

where b is the usual Gutenberg-Richter parameter, estimated in Section 2.1. It is important to note that the estimate of b is independent of the upper bound of the exponential distribution, which allows it to be estimated separately. This model is a way to avoid an abrupt truncation: Kagan in [23] has argued that a sharp upper threshold agrees neither with the known behavior of dissipative physical dynamic systems nor with fundamental seismological scaling principles. The sharp truncation is then replaced by a soft, normal-like roll-off. Such a taper makes it possible to take into account uncertainties on the right endpoint of the magnitude distribution. The resulting magnitude-frequency law is characterized by a gradual rather than steep decrease in frequency at the upper cutoff magnitude, and provides better agreement with both fundamental seismological principles and earthquake catalogs.
There exist different ways to fix the distribution L Mmax . For example, it may be the asymptotic distribution of one of the estimators of M max proposed in the literature. In this paper, we suggest that the distribution of the roll-off be fixed from an extreme value analysis that highlights possible values for the maximum possible magnitude: we therefore consider the asymptotic distribution of the estimator of a well-chosen quantile of large order of the GPD tail distribution of the magnitudes. Furthermore, as the upper bound is randomized, the model does not fix the maximum magnitude at one particular value but lets it take a range of possible values.
The quantile q p of order p is defined by F 0 (q p ) = p, and from (3), with ζ u = 1 - F 0 (u) and ξ < 0, we deduce that:

q p = u + (σ/ξ) [((1 - p)/ζ u )^(-ξ) - 1]. (6)

We denote by g p the function such that q p = g p (σ, ξ), and we consider the estimator q̂ p = g p (σ̂, ξ̂) of q p . We know that (σ̂, ξ̂) is asymptotically normal, with a covariance matrix denoted Σ. From the regularity properties of g p , we deduce that q̂ p is also a maximum likelihood estimator, asymptotically normal, with variance Γ written as:

Γ = ∇g p (σ, ξ)^T Σ ∇g p (σ, ξ). (7)

To fix the distribution of M max , we must take into account the maximum magnitude m max,cat of the catalog: it would make no sense for M max to be smaller than m max,cat . Thus, the distribution L Mmax has to be truncated below at m max,cat to ensure that M max is larger than m max,cat with probability 1.
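A numerical sketch of the quantile q p = g p (σ, ξ) and its delta-method variance Γ; the covariance matrix Σ used below is a made-up example, not an estimate:

```python
import numpy as np

def g_p(sigma, xi, u, zeta_u, p):
    """Quantile of order p of F0: inverts zeta_u*(1 + xi*(q - u)/sigma)^(-1/xi) = 1 - p."""
    return u + (sigma / xi) * (((1.0 - p) / zeta_u) ** (-xi) - 1.0)

def quantile_variance(sigma, xi, Sigma, u, zeta_u, p, h=1e-6):
    """Delta-method variance grad(g_p)^T . Sigma . grad(g_p), with a
    central-difference gradient in (sigma, xi)."""
    grad = np.array([
        (g_p(sigma + h, xi, u, zeta_u, p) - g_p(sigma - h, xi, u, zeta_u, p)) / (2 * h),
        (g_p(sigma, xi + h, u, zeta_u, p) - g_p(sigma, xi - h, u, zeta_u, p)) / (2 * h),
    ])
    return float(grad @ Sigma @ grad)

# Hypothetical values, for illustration only:
Sigma = np.array([[4e-3, -1e-3], [-1e-3, 2e-3]])
q_hat = g_p(0.5, -0.3, 4.0, 0.2, 1.0 - 1e-4)
var_q = quantile_variance(0.5, -0.3, Sigma, 4.0, 0.2, 1.0 - 1e-4)
```

The gradient is finite-differenced here for brevity; an analytic gradient of g p would serve equally well.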
It is worth noticing that the cdf F of the magnitudes is related to F 0 through the relation 1 - F(m) = ζ u (1 - F 0 (m)) for all m ≥ u. Then, we have F(q p ) ≥ F 0 (q p ), which entails that q p is in fact an F-quantile of order p̃ ≥ p. In other words, q p is a more extreme value with respect to the distribution F than with respect to the distribution F 0 .
We want to emphasize that assigning to M max the distribution of the GPD(σ, ξ) upper bound u - σ/ξ deduced from the asymptotic normal distribution of (σ̂, ξ̂) is not the best option, due to the irregularity of the function (σ, ξ) ↦ u - σ/ξ at any point of the form (σ, 0): any small variation of ξ around zero leads to a large variation of the upper bound. Thus, the estimation uncertainty on ξ induces a huge variance in the distribution of u - σ/ξ. This instability had already been noticed in [34]. On the contrary, the estimate of any quantile of large order is much more stable, as the function g p is infinitely differentiable at any point of the form (σ, 0) (for example, differentiability at (σ, 0) is ensured by the equivalent ∂g p /∂ξ(σ, ξ) ∼ (1/2) σ log² α with α = (1 - p)/ζ u ).
Generally speaking, the upper bound of a distribution gives poor information on its extreme values: the rate of decay of its tail is more informative. Consider for example the standard normal and log-normal distributions: both have an infinite upper bound, whereas the probability of being more than 4 standard deviations above the mean is about 10⁻⁵ for the normal distribution and 10⁻² for the log-normal one. Furthermore, the study of quantiles of large order is more meaningful: the quantile of order 99% is 2.33 standard deviations above the mean for the normal distribution, while it is 4.9 standard deviations for the log-normal one. In conclusion, even if both distributions have an infinite upper bound, their extreme behavior is completely different: the tail of the log-normal distribution is heavier than the tail of the normal distribution, so extreme values appear much more frequently than under the normal distribution.
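This tail contrast can be checked numerically with scipy. The exact figures depend on the convention chosen for the "standard" log-normal (here, log of the variable ∼ N(0, 1)), so only orders of magnitude are compared:

```python
from scipy.stats import norm, lognorm

# Standard normal vs. standard log-normal (shape s = 1, i.e. ln X ~ N(0, 1)):
ln_mean, ln_sd = lognorm.mean(s=1.0), lognorm.std(s=1.0)

p_norm = norm.sf(4.0)                                  # P(X > mean + 4 sd), normal
p_lognorm = lognorm.sf(ln_mean + 4.0 * ln_sd, s=1.0)   # same event, log-normal
heavier = p_lognorm / p_norm   # how much heavier the log-normal tail is
```

Both distributions are unbounded, yet the log-normal exceeds the 4-standard-deviation level orders of magnitude more often.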
The principle of the Randomized Gutenberg-Richter model is close to that of the Gamma distribution proposed by Kagan in [22]: here, a normal taper is applied to the density of the exponential distribution of the magnitudes, while Kagan applies an exponential taper to the Pareto density of the seismic moment. Indeed, the final distribution of the magnitudes is defined by its probability density function (pdf):

p M (m) = ∫ p M |Mmax=mmax (m) p Mmax (m max ) dm max , (8)

where M is the magnitude, p M |Mmax=mmax is the pdf of the conditional magnitude M |M max = m max and p Mmax is the pdf of M max . Determining this final distribution is not straightforward.
We used the open source software OpenTURNS [4], which provides advanced probabilistic modeling and statistical functionalities. The integral (8) is calculated using quadrature methods that prove very efficient in this case.
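A sketch of integral (8) using scipy quadrature in place of OpenTURNS. The M max distribution reuses the truncated normal N(6.75, 7.63 × 10⁻²) bounded below at 6.69 obtained in Section 4, while b is a hypothetical Gutenberg-Richter slope:

```python
import numpy as np
from scipy import integrate
from scipy.stats import truncnorm

b, m0 = 1.0, 3.5                        # b is a hypothetical GR slope
beta = b * np.log(10.0)
mu, sd, low = 6.75, np.sqrt(7.63e-2), 6.69
mmax_dist = truncnorm((low - mu) / sd, np.inf, loc=mu, scale=sd)

def cond_pdf(m, mmax):
    """pdf of M | M_max = mmax: exponential with rate beta, truncated to [m0, mmax]."""
    if m < m0 or m > mmax:
        return 0.0
    return beta * np.exp(-beta * (m - m0)) / (1.0 - np.exp(-beta * (mmax - m0)))

def marginal_pdf(m):
    """Integral (8): average the conditional pdf over the M_max distribution."""
    val, _ = integrate.quad(lambda mmax: cond_pdf(m, mmax) * mmax_dist.pdf(mmax),
                            low, mu + 8.0 * sd)
    return val
```

Adaptive quadrature is enough here because the integrand is smooth except for a single kink where m max crosses m.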

Recurrence models evaluation
This section evaluates each recurrence model.
The truncated Gutenberg-Richter model is an improvement over the Gutenberg-Richter model, as the frequency-magnitude curve falls away from the straight line when the magnitude increases. The difficulty remains in the choice of m max , on which the recurrence model strongly depends. This value raises debates among seismologists.
The GPD-based model is an improvement over the truncated Gutenberg-Richter model, as the frequency-magnitude curve falls away from the straight line when the magnitude increases without any additional hypothesis on the maximum possible magnitude: its upper bound results from the GPD estimate and is equal to u - σ̂/ξ̂. However, for areas with low seismicity, the estimation of the GPD-based model might be tricky, due to the possibly low number of earthquakes in stable continental regions, and particularly of earthquakes with large magnitude.
The Randomized Gutenberg-Richter model has the following advantages: 1. Its estimation is based on all the data and not only the largest observations; 2. It is able to model the uncertainty of the upper truncation thanks to an extreme value analysis. In domains where the seismicity is low, that feature proves essential.
To conclude, it is worth considering the following good practices: for domains of high seismicity, if no value of m max achieves universal agreement, the GPD-based model is likely to help model the tail distribution without any additional hypothesis. For domains of low seismicity, the Randomized Gutenberg-Richter model should be used, after the random model of m max has been built on a larger domain with the GPD-based model. In general, the simultaneous use of all the models helps to better understand their respective limitations and their impacts on hazard assessments.

Probabilistic Seismic Hazard Assessment
This section presents the integration of the three recurrence models previously defined into a probabilistic seismic hazard calculation. The aim is to check their consistency with the truncated Gutenberg-Richter model, commonly used in PSHA, as well as to assess their impacts on seismic hazard assessments.
The PSHA computational model used in this study corresponds to Cornell's approach [14]. This approach calculates the probability of exceeding a target value of ground motion during a fixed period of time. A probabilistic seismic hazard calculation takes into account the influence of all seismic sources able to contribute to the hazard, as well as the spatial and temporal distribution of the seismicity of each source. Four main steps are necessary to perform a PSHA calculation: 1. Identification of the seismic area sources that might affect the site of interest. Area source models are often used when one cannot identify a specific fault. Usually, in PSHA, a uniform seismicity distribution is assigned to each area source, implying that earthquakes are equally likely to occur at any point within the source zone.
2. Specification of temporal and magnitude seismicity distributions for each source. Cornell's approach assumes that earthquakes follow a Poisson point process. The most commonly used frequency-magnitude model is the Gutenberg-Richter relationship (1). 3. Calculation of ground motions and their uncertainty.
Ground Motion Prediction Equations (GMPE) are used to predict the ground motion at the site itself. The parameters of interest include different intensity measures (peak ground acceleration, peak ground velocity, peak ground displacement, spectral acceleration, intensity, strong ground motion duration). Most GMPE available today are empirical and depend on the earthquake magnitude, source-to-site distance, type of faulting and local site conditions. 4. Integration of uncertainties on earthquake location, magnitude and GMPE.
The ultimate result of a PSHA calculation is a seismic hazard curve giving the annual probability of exceeding a specified ground motion parameter. Conceptually, the computation of the seismic hazard is relatively simple, as the calculation is carried out with the following equation:

λ(IM > x) = λ(M > m min ) ∫∫ P [IM > x|(m, r, θ)] f M |M >mmin (m) f R (r) dm dr,

where IM is the Intensity Measure of the ground motion parameter; x is the target intensity; λ(IM > x) is the annual rate of earthquakes leading to an exceedance of the target; λ(M > m min ) is the annual rate of earthquakes of magnitude M larger than the minimum magnitude m min selected for the source zone; f M |M >mmin and f R are the pdfs of the magnitude and of the distance for the source zone; P [IM > x|(m, r, θ)] is the probability that an earthquake of magnitude m at distance r from the site generates an intensity larger than x; θ gathers all the other fixed parameters regarding the site, the type of faults, etc. This probability is determined using a GMPE model that takes into account the model errors due to amplifications and to an incomplete understanding of geological phenomena.
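The hazard integral can be discretized directly. The ground-motion model below is a made-up log-normal attenuation relation (not the GMPE used in this study), and all grids and numbers are illustrative:

```python
import numpy as np
from scipy.stats import norm

def hazard_rate(x, rate_mmin, mags, mag_pdf, dists, dist_pdf, dm, dr):
    """lambda(IM > x): discretized double integral of P[IM > x | m, r]
    against the magnitude and distance densities, scaled by lambda(M > m_min)."""
    total = 0.0
    for m, fm in zip(mags, mag_pdf):
        for r, fr in zip(dists, dist_pdf):
            ln_median = -1.0 + 1.2 * m - 1.3 * np.log(r + 10.0)   # toy GMPE
            p_exceed = norm.sf(np.log(x), loc=ln_median, scale=0.6)
            total += p_exceed * fm * fr * dm * dr
    return rate_mmin * total

# Illustrative grids: exponential-like magnitude density, uniform distance density.
mags = np.arange(4.0, 7.0, 0.1)
mag_pdf = np.exp(-2.3 * (mags - 4.0))
mag_pdf /= mag_pdf.sum() * 0.1
dists = np.arange(5.0, 105.0, 5.0)
dist_pdf = np.full_like(dists, 1.0 / 100.0)
```

Repeating this for a grid of target intensities x produces the hazard curve; swapping in the recurrence models of Section 2 changes f M and hence the curve.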
The PSHA model used in this study is a simplified but realistic model composed of a single area source. It uses real data and a conceptual calculation model complying with international practice. The tool used for the calculations is OpenQuake [29].
The PSHA calculations take into account the following uncertainties: - Maximum magnitude uncertainty: both the GPD and GRt-rd models can represent this epistemic uncertainty, indirectly through the asymptotic distribution of the parameter estimators for the GPD, or directly through the distribution of the random upper bound for the GRt-rd model. - Recurrence parameter uncertainty: this epistemic uncertainty is taken into account through the asymptotic distribution of the parameter estimators, which makes it possible to generate several possible sets of parameters: (a, b) for the Gutenberg-Richter based models and (σ, ξ, µ u ) for the GPD model. Each set of parameters thus constitutes an alternative branch of the logic tree.
The final logic tree of the PSHA calculations is composed of 100 branches, summarized in Figure 1 for the Gutenberg-Richter case, each associated with specific probabilistic hypotheses. In this study, all the branches have the same weight.
The GMPE selected for the calculations is based on records in the Pan-European region [10], with the following main characteristics: magnitude range [4.0, 7.6]; distances R jb = 0-300 km; spectral periods PGA and 0.01-4 s.

The Alps seismic catalog
In this section, we apply the whole approach to the Alps, a mountainous region that is one of the most seismically active in France. A homogeneous seismotectonic zone was delimited based on geological, structural, geophysical, neotectonic and seismological data [16].
We used the FCAT17 catalog [27], which gathers recent instrumental magnitudes measured by seismometers ([11], [12]) and magnitudes attributed to historical events [38]. Magnitudes were derived from historical documents ([39], [37]) in a 2-step process: 1. The historical information was translated into macroseismic intensities; 2. Magnitudes were derived from these macroseismic intensities. All the magnitudes are given in M w . Figure 2 presents the location of earthquakes inside the delimited seismotectonic domain.
A declustering method was applied to separate the time-independent part of the seismicity from the activity related to aftershocks, foreshocks or clusters of earthquakes [16].
The seismic events of this catalog exhibit the following main features. Given the limited sensitivity of seismographic networks, small earthquakes are not completely sampled in the earthquake catalog. This makes it necessary to introduce a general catalog completeness threshold equal to m 0 = 3.5. Furthermore, the catalog level of completeness is not constant in time: the further we go back in the past, the higher the threshold. The complete periods associated with each class of magnitudes are defined in Table 1. We assume that the resulting restricted catalog provides a complete statistical sample. Our database finally contains N = 583 earthquake events, from year 1524 to year 2016, whose magnitudes range from 3.5 to 6.69. The database is large enough to achieve robust results with all the models presented. Finally, the accuracy of the magnitudes can sometimes be no better than 0.5 unit: this can be due to imprecise measurement methods or to biases introduced when converting from various local magnitude scales. In this case, we cannot rely on the precise value of the magnitude to fit our models: we only consider that it belongs to a class of width 0.5. This way of taking magnitude uncertainties into account was first proposed by [42] and then generalized by [17], where the joint estimation of the parameters is introduced. In [18], the author extended it to the GPD-based model. For all these reasons, we cannot analyse separately the two distributions, i.e. the temporal sequence of earthquake numbers and the magnitudes. That is why the estimators are built from the likelihood of the point Poisson process rather than from the separate likelihoods (temporal sequence and magnitudes).
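A deliberately simplified sketch of the completeness-aware likelihood: each magnitude class k is observed only over its completeness period T_k, so the expected Poisson count is the class rate times T_k. This omits the magnitude-uncertainty treatment of the paper's joint likelihood, and the counts and durations below are illustrative, not Table 1:

```python
import numpy as np

def poisson_loglik(counts, durations, class_rates):
    """Log-likelihood (up to an additive constant) of per-class counts when
    class k is complete only over durations[k] years."""
    counts = np.asarray(counts, dtype=float)
    expected = np.asarray(class_rates, dtype=float) * np.asarray(durations, dtype=float)
    return float(np.sum(counts * np.log(expected) - expected))

# Illustrative magnitude classes: observed counts and completeness durations (years).
counts = np.array([300.0, 150.0, 40.0, 8.0])
durations = np.array([50.0, 120.0, 300.0, 490.0])
mle_rates = counts / durations   # per-class MLE: count / completeness period
```

In the actual models the class rates are not free parameters but functions of (a, b) or (σ, ξ, µ u ), so maximizing this likelihood estimates those parameters jointly.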

The Gutenberg-Richter model (GRt)
The maximum magnitude selected for the Alps region is m max = 7.5, based on tectonic information and expert opinion. The (a, b) estimate with its 90% confidence intervals is: Generating a sample of (a, b) from its asymptotic normal distribution makes it possible to draw all the corresponding recurrence models and thus visualize the impact of the estimation uncertainty of (a, b) on the recurrence model. Figure 3 shows 100 statistically possible models. Table 2 gives the 90% confidence intervals of E [N m ] for several magnitudes: we note that the mean annual number of earthquakes whose magnitude is larger than 7 is less than 1.11 × 10⁻³ with 95% probability.
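Generating statistically plausible recurrence curves from the asymptotic normality of (a, b) can be sketched as follows; the point estimate and covariance below are hypothetical placeholders, not the fitted Alps values:

```python
import numpy as np

rng = np.random.default_rng(1)
ab_hat = np.array([3.5, 1.0])                  # hypothetical (a, b) estimate
cov = np.array([[4e-2, 5e-3], [5e-3, 2e-3]])   # hypothetical covariance matrix

samples = rng.multivariate_normal(ab_hat, cov, size=100)   # 100 plausible (a, b)
# Spread of E[N_m] at m = 7 across the sampled models:
rates_m7 = 10.0 ** (samples[:, 0] - samples[:, 1] * 7.0)
```

Each sampled pair defines one curve of the kind shown in Figure 3, and one branch of the logic tree used later.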
Among several tests to validate the fitted point Poisson process, we focus here on the goodness of fit of the estimated magnitude distribution to the data: it consists of drawing, in the same plot, the estimated distribution of M |M ∈ C k , where C k is a class of magnitudes, and the empirical cumulative rates. Figure 4 shows that the fit is satisfactory.

The Generalized Pareto Distribution based model (GPD)
This model and its application to the Alps region have already been presented in [18]. In this section, we recall only the essential relations and focus on the additional developments. We first determined the threshold u from different models of the distribution of M 0 = M |M ≥ m 0 : non-parametric fits with normal or uniform kernels. We fixed u = 4.0, which is a quantile of order larger than 80%. The estimates with their 90% confidence intervals are: In [18], the reader can find 100 statistically possible models, due to uncertainties in the estimation of (σ, ξ, µ u ), and the 90% confidence interval of E [N m ]. We note that the mean annual number of earthquakes whose magnitude is larger than 7 is less than 3.37 × 10⁻⁴ with a probability of 95%.
In addition to the many validation tests already detailed in [18], we focus here on an additional test based on the annual maximum magnitude over the years in which at least one earthquake with magnitude larger than u happened. If M u = M |M ≥ u denotes the magnitudes greater than u, and N u the Poisson-distributed count, parameterized by µ u , of the annual number of such earthquakes, then the annual maximum is defined by:

M max,ann = max(M u,1 , . . . , M u,Nu ).

Its cdf can be written, conditionally on N u ≥ 1, as:

G u (m) = (exp(-µ u (1 - F u (m))) - exp(-µ u )) / (1 - exp(-µ u )),

where F u is the cdf of M u . The test consists in comparing G u , built with (σ̂, ξ̂, µ̂ u ), with non-parametric fits built on the empirical annual maximum magnitudes (a kernel smoothing with normal kernels, or a histogram). Figure 5 shows that the fit is satisfactory.
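The cdf G u of the annual maximum can be sketched numerically; under the Poisson-count and GPD-exceedance assumptions above, one consistent closed form is implemented below (the parameter values in the example are illustrative):

```python
import numpy as np
from scipy.stats import genpareto

def annual_max_cdf(m, mu_u, sigma, xi, u):
    """cdf of the annual maximum magnitude, conditional on at least one
    exceedance of u in the year (Poisson count with mean mu_u, GPD excesses)."""
    F_u = genpareto.cdf(np.asarray(m, dtype=float) - u, c=xi, scale=sigma)
    return (np.exp(-mu_u * (1.0 - F_u)) - np.exp(-mu_u)) / (1.0 - np.exp(-mu_u))
```

Comparing this curve against a histogram or kernel estimate of the observed annual maxima gives the visual check of Figure 5.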

The Randomized Gutenberg-Richter model (GRt-rd)
To build the Randomized Gutenberg-Richter model, we considered the distribution of the quantile of order (1 - 10⁻⁴) of the GPD model to randomize the upper bound of the exponential distribution. From (6) and (10), we obtained the estimate q̂ = 6.75. Its asymptotic distribution is normal, with mean 6.75 and variance 7.63 × 10⁻² calculated from (7). We truncated that normal distribution below at the maximum magnitude contained in the catalog, m max,cat = 6.69, which can be written as:

M max ∼ N (6.75, 7.63 × 10⁻²) truncated to [6.69, +∞).

Figure 6 shows the pdf of the distribution of q̂. Figure 7 shows 100 statistically possible models built from couples (a, b) generated by the asymptotic distribution of (â, b̂).
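The truncated normal distribution of M max can be built with scipy, as a sketch consistent with the numbers above:

```python
import numpy as np
from scipy.stats import truncnorm

mean, var, low = 6.75, 7.63e-2, 6.69   # estimate, variance, lower truncation
sd = np.sqrt(var)
m_max_dist = truncnorm((low - mean) / sd, np.inf, loc=mean, scale=sd)

draws = m_max_dist.rvs(size=1000, random_state=np.random.default_rng(2))
prob_above_7 = m_max_dist.sf(7.0)   # probability that M_max exceeds 7
```

Note that scipy's `truncnorm` takes its bounds in standardized units, hence the `(low - mean) / sd` argument.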

Recurrence models comparison
This section compares the results obtained in the Alps region for the three recurrence models. Figure 8 shows the pdf of the conditional magnitude M u = M |M ≥ u obtained in each case. We observe that the Randomized Gutenberg-Richter model appears as a compromise between the truncated Gutenberg-Richter model and the GPD-based model: for magnitudes below 6.69, the Randomized Gutenberg-Richter model behaves like the truncated Gutenberg-Richter model, as the blue and red curves overlap; while for magnitudes above 7, the Randomized Gutenberg-Richter model behaves like the GPD-based model, as the blue and violet curves overlap. Between magnitudes 6.69 and 7, the Randomized Gutenberg-Richter model gradually moves from the GRt model to the GPD-based model. We can visualize here the taper effect applied to the probability function of the magnitudes, which entails a soft, normal-like roll-off truncation.
Finally, we note that the truncated Gutenberg-Richter model has the heaviest tail distribution, while the GPD-based model has the lightest: the pdf of GRt (red line in Figure 8) lies well above the pdf of GPD (violet line). This means that large magnitudes have a higher probability of occurring under the truncated Gutenberg-Richter model than under the GPD-based model, even though the upper bound of the GPD-based model is larger than the m max = 7.5 fixed for the truncated Gutenberg-Richter model. Figure 9 shows all the recurrence models: note the gradual decrease of the GRt-rd recurrence model compared to the steep drop at the upper cutoff of the GRt model. Table 3 details values of the recurrence models at different magnitudes.

Impact of the recurrence models on the probabilistic seismic hazard

Seismic hazard curves and Uniform Hazard Spectra (UHS) were calculated using the previous recurrence models. The calculation was done at the center of the area source delimited in Figure 2. The logic tree used for the calculations is drawn in Figure 1, and Table 4 recalls the parameters selected for the PSHA calculations. Figure 10 shows the hazard curves for the spectral periods PGA and 1 s. For all spectral accelerations the curves are similar in shape: we note a decrease in the hazard calculated with the GRt-rd and GPD models compared to the common truncated Gutenberg-Richter model GRt. This decrease is amplified for long periods, as observed on the hazard curve at 1 s. Figure 11 illustrates the impact of the three recurrence models on the UHS calculated for return periods of 475, 5 000, 10 000 and 20 000 years. This impact is particularly important for spectral periods higher than 0.1 s. Figure 12 quantifies the relative differences in seismic hazard generated by the GPD and GRt-rd models with respect to the common truncated Gutenberg-Richter model. The curves show a clear break at about 0.1 s, with a marked decrease in the seismic hazard for both GPD and GRt-rd recurrence models, as follows:

- for return periods larger than 5 000 years, the decrease in the seismic hazard of the GPD-based model is estimated at around 30, 40 and 45% for the spectral periods 1, 2 and 3 s respectively, with respect to the common truncated Gutenberg-Richter model GRt;
- for return periods larger than 5 000 years, the decrease in the seismic hazard of the Randomized Gutenberg-Richter model GRt-rd is estimated at about 15, 18 and 20% for the spectral periods 1, 2 and 3 s respectively, with respect to GRt;
- for return periods larger than 5 000 years and spectral periods lower than around 0.1 s, the reduction in seismic hazard is around 20 and 10% for the GPD and GRt-rd models respectively, with respect to GRt;
- for a return period of 475 years, the decrease in the hazard at long spectral periods (1, 2 and 3 s) is almost the same as observed for higher return periods; however, for shorter spectral periods, lower than 0.1 s, the decrease in the hazard is around 10% for both GPD and GRt-rd models with respect to GRt.
We explain this significant decrease by the different modelizations of extreme magnitudes: the tail distribution of magnitudes of both GPD and GRt-rd is much lighter than the tail of GRt, as drawn in Figure 8. The pdfs of these models drop towards 0 faster than the pdf of the truncated Gutenberg-Richter model. In particular, the probability of exceeding a magnitude of 7 calculated from GPD and GRt-rd is about 8 times lower than with GRt. We insist on the fact that even if the maximum possible magnitude estimated by the GPD-based model, equal to 8.06, is larger than the m max = 7.5 fixed by experts for the truncated Gutenberg-Richter model, the GPD-based model gives less weight to large magnitudes than the truncated Gutenberg-Richter model. We have the same feature for the GRt-rd model, whose maximum possible magnitude is theoretically infinite, but whose probability of exceeding magnitude 7 is 1.47 × 10⁻⁸. Thus, these extreme-value-based recurrence models reduce the population of earthquakes with large magnitudes predicted for the hazard calculations. Such a smaller population of large magnitudes induces a reduction in the seismic hazard, especially at longer periods (e.g. 2 and 3 s in this study).
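The tail comparison above can be sketched numerically. The snippet below compares, conditional on M > u, the exceedance probability at magnitude 7 under a truncated Gutenberg-Richter model and under a GPD with a finite upper end point near 8.06. All parameter values (b, m_min, u, σ, ξ) are illustrative assumptions, not the paper's fitted values; they are only chosen so that the GPD upper end point matches the 8.06 quoted in the text.

```python
import math

# Illustrative parameters only -- NOT the paper's fitted values.
b = 1.0                        # Gutenberg-Richter b-value
beta = b * math.log(10.0)      # rate of the exponential magnitude distribution
m_min, m_max = 2.0, 7.5        # completeness magnitude and expert-fixed bound (GRt)
u, sigma, xi = 4.0, 0.582, -0.1433  # GPD threshold, scale, shape (chosen so that
                                    # the upper end point u - sigma/xi is near 8.06)

def sf_grt(m):
    """Survival function of the truncated Gutenberg-Richter (GRt) model."""
    if m >= m_max:
        return 0.0
    num = math.exp(-beta * (m - m_min)) - math.exp(-beta * (m_max - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return num / den

def sf_gpd(m):
    """Survival function of the GPD for exceedances of the threshold u."""
    upper = u - sigma / xi     # finite upper end point since xi < 0
    if m >= upper:
        return 0.0
    return (1.0 + xi * (m - u) / sigma) ** (-1.0 / xi)

# Compare the two tails at m = 7, conditional on M > u:
p_grt = sf_grt(7.0) / sf_grt(u)
p_gpd = sf_gpd(7.0)
print(f"P(M>7 | M>u): GRt = {p_grt:.2e}, GPD = {p_gpd:.2e}")
```

With these assumed parameters, the GPD assigns a markedly lower exceedance probability at magnitude 7 than the truncated GR model, even though its upper end point (about 8.06) is above m max = 7.5, which is exactly the qualitative effect discussed above.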

Impact of parameter uncertainty on the probabilistic seismic hazard
In this section, we propagate the uncertainty in the estimation of the recurrence-model parameters through the PSHA calculations. We generated 100 statistically plausible sets of parameters for each recurrence model, according to the asymptotic distribution of their estimators, which yields 100 branches in the PSHA logic tree (see Figure 1).
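A minimal sketch of this resampling step, assuming hypothetical (a, b) point estimates and an assumed asymptotic covariance matrix (the actual maximum likelihood output of the recurrence models is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimates and asymptotic covariance of the (a, b)
# Gutenberg-Richter parameters (inverse observed Fisher information);
# the values below are placeholders, not the paper's results.
theta_hat = np.array([4.2, 1.05])
cov_hat = np.array([[0.040, 0.012],
                    [0.012, 0.006]])

# Draw 100 statistically plausible parameter sets, one per logic-tree branch.
samples = rng.multivariate_normal(theta_hat, cov_hat, size=100)
# Each row (a_i, b_i) would feed one branch of the PSHA calculation.
```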
For each return period and each spectral period, Figure 13 shows the 5%, 50% (median) and 95% quantiles of the UHS. Figure 14 shows the width of the 90% confidence interval of the UHS curves, defined as the difference between the 95% and 5% quantiles. We notice that the Randomized Gutenberg-Richter model GRt-rd has the lowest uncertainties with respect to the GPD and GRt models at all periods. For short periods (below 0.1 s), uncertainties are higher than for longer periods for all models. The higher uncertainties of the GPD model are due to the fact that this model is estimated from fewer data than the other two: only magnitudes greater than u = 4 are used for the GPD model (around 200 seismic events), while all the magnitudes of the catalog are taken into account for the other two models (around 600 seismic events).
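The quantile and confidence-interval computation can be sketched as follows, with synthetic stand-in UHS curves replacing the real PSHA output (the lognormal draws below are placeholders, not hazard results):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 100 UHS curves (one per sampled parameter set),
# evaluated on a grid of spectral periods; real curves come from the PSHA code.
periods = np.array([0.01, 0.1, 0.5, 1.0, 2.0, 3.0])                  # seconds
uhs = rng.lognormal(mean=-1.0, sigma=0.3, size=(100, periods.size))  # accel. (g)

# Pointwise quantiles across the 100 branches, per spectral period.
q05, q50, q95 = np.percentile(uhs, [5, 50, 95], axis=0)
ci90_width = q95 - q05   # width of the 90% confidence interval (as in Figure 14)
```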

Conclusions
In this paper, we presented two extreme-value-based recurrence models and compared them to the truncated Gutenberg-Richter model that is commonly used in PSHA calculations. We evaluated their impacts on probabilistic seismic hazard assessments, using the Alps data. Both extreme-value-based models avoid fixing a priori the maximum possible magnitude m max : this parameter results from a statistical analysis adapted to extreme values. This feature is particularly valuable as there is no widely accepted method to estimate m max .
The Randomized Gutenberg-Richter model appears as a hybrid of the truncated Gutenberg-Richter model and the GPD-based model. It looks promising because:

- the exponential distribution, which has been widely accepted by the seismological community, is still used to model the magnitude distribution;
- the randomness of M max takes into account the uncertainty on the maximum possible magnitude, whose determination is a tricky matter;
- the distribution of M max relies on an extreme value analysis that aims at modeling the behavior of the extreme magnitudes;
- the truncation of the magnitude distribution is soft, which provides better agreement with both fundamental seismological principles and earthquake catalogs.
The new recurrence models introduce a reduction of the hazard compared to the truncated Gutenberg-Richter model commonly used in PSHA. This decrease is more significant for long periods (e.g. 1, 2 and 3 s): it is estimated at 15-20% for the GRt-rd model and at 25-40% for the GPD model.
The GPD-based recurrence model as well as the Randomized Gutenberg-Richter model are expected to result in a more realistic assessment of the probabilistic seismic hazard.
The use of these extreme-value-based models in regions of moderate or low seismicity remains difficult, mainly because of the small number of seismic events. However, the Randomized Gutenberg-Richter model is an encouraging alternative for hazard estimates and could help in such regions. As a perspective, we plan to apply the GPD model to a domain large enough and with sufficient seismicity (M > 4) to provide a robust estimate of the tail distribution of the magnitudes over the whole domain. The aim is then to use the Randomized Gutenberg-Richter model for each area source inside the tectonic domain: the maximum possible magnitude follows the tail distribution estimated over the whole domain, and both parameters (a, b) are estimated from the seismic events of each area source.
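The proposed combination can be sketched as a Monte Carlo average of the truncated Gutenberg-Richter exceedance probability over a random M max. The M max distribution used below is a hypothetical stand-in for the tail that would be estimated over the whole domain, and the (b, m_min) values are assumed, not the paper's fits.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Assumed Gutenberg-Richter parameters for one area source (illustrative).
b = 1.0
beta = b * math.log(10.0)
m_min = 2.0

# Hypothetical stand-in for the M_max distribution that would come from the
# domain-wide extreme value analysis (not the paper's fitted distribution).
m_max_draws = np.clip(rng.normal(7.3, 0.3, size=10_000), 6.5, None)

def sf_grt(m, m_max):
    """Exceedance probability of the truncated GR model for a given m_max."""
    if m >= m_max:
        return 0.0
    num = math.exp(-beta * (m - m_min)) - math.exp(-beta * (m_max - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return num / den

# Randomized GR: average the truncated-GR exceedance over the random M_max.
p_exceed_7 = float(np.mean([sf_grt(7.0, mm) for mm in m_max_draws]))
```

This marginalization is what produces the soft truncation noted above: draws of M max below the target magnitude contribute zero, while larger draws contribute a smoothly decaying exceedance probability.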

Declarations
Not applicable.