Bayesian updating of seismic fragility curves through experimental tests

Fragility curves, commonly derived using analytical methods, are important ingredients of seismic risk analysis of structures in the framework of performance-based earthquake engineering. Hence, the accurate estimation of realistic fragility functions is a decisive step in a reliable risk assessment. This paper proposes a Bayesian updating procedure applied to analytical fragility curves of reinforced concrete (RC) structures based on data from experimental tests, namely shaking table tests. The latter are commonly performed by progressively increasing the intensity of the input motion applied to the same test specimen. In this regard, the maximal benefit from the output of a shaking table test is sought here, aiming to convert n sequential stages of a shaking table test of a single virgin specimen into n equivalent shaking table tests performed on n virgin specimens. This is achieved by modifying the intensity of the input motion applied during the stage-wise testing based on a damage index coefficient. The parametric studies performed to validate this objective reveal that the approach is more suitable for simple structures than for large or complex structures. The ATC-58 and Markov Chain Monte Carlo (MCMC) approaches for Bayesian updating of fragility curves are also closely examined and compared. The proposed Bayesian updating is applied to an RC structure, where fragility curves derived from incremental dynamic analysis are updated using shaking table results. The updating is examined considering three damage state models, namely HAZUS, homogenized RC and strain-based damage states. This work also highlights the pitfalls of using a limited sample of experimental test data for updating less reliable priors. Besides, the MCMC-based approach is shown to be more robust than the ATC-58 approach in the presence of complex analytical fragilities.


Introduction
Humankind has witnessed numerous catastrophic earthquakes, which have destroyed cities and important infrastructure, causing loss of life and economic damage. Even though the probability of occurrence of such earthquakes is very small, the consequences can be devastating. Chan et al. (1998) estimated the maximum global economic loss over a period of 50 years, based on a 10% probability of exceedance, to be around 1000 billion USD. Furthermore, the enforcement of modern seismic codes, although different from region to region, started only in the mid-1980s in regions with significant seismic hazard. Thus, many existing buildings are not adequately designed to resist earthquake forces and do not comply with the performance-based earthquake engineering (PBEE) design philosophy (FEMA 2000). Fortunately, due to the over-strength imposed by the conservatism of design codes, a fraction of lateral forces may be resisted with an acceptable level of damage. However, the presence of such over-strength does not guarantee compliance with the PBEE philosophy, whereby seismic performance requirements are directly verified.
The evaluation of the seismic risk of structures is therefore justified, as it can bring about better engineering practices to reduce earthquake effects. Seismic risk is computed by convolving hazard, exposure and structural vulnerability functions in a probabilistic framework. Thus, the accuracy of fragility functions is important in this context. Fragility curves are a family of damage functions relating the probabilities of exceedance of damage states to intensity measure levels. Depending on how they are derived, they are categorized into empirical, analytical, judgmental or hybrid (for instance, combining empirical and analytical methods) fragility curves. The empirical method for generating fragility curves depends on statistical analysis of post-earthquake damage data. On the other hand, analytical fragility curves (Lallemant et al. 2015; D'Ayala et al. 2015) are derived from numerical analyses of structures. In the absence of information, judgmental (expert-opinion) techniques (Jaiswal et al. 2012) for deriving fragility curves can also be adopted. Owing to their flexibility, analytical fragility curves are commonly adopted in seismic risk analysis. Additionally, this approach enables modeling uncertainties in the characteristics of structural systems and the ground motion input. The hybrid approach for deriving fragility curves combines available damage statistics with damage statistics obtained from nonlinear analysis (Kappos et al. 2006; Kappos 2016).
Furthermore, fragility curves that are derived from less reliable data (prior belief) can be updated through more reliable or recent data. The approach proposed in this paper falls into this category, whereby experimental data is used in updating analytically derived fragility curves through the principles of Bayesian updating. Although Bayesian methods are better equipped to model data with small sample sizes, estimates can be sensitive to the prior distribution. Nonetheless, updating fragility curves in the context of structural engineering, or more specifically in earthquake engineering, has been practiced for a few decades now. Singhal and Kiremidjian (1998) presented a Bayesian updating technique for RC buildings using a likelihood constructed from building damage data collected during the 1994 Northridge earthquake. They constructed confidence bounds around the median values of fragility curves to represent uncertainties. This updating process is based on observed data; thus, it may not be suitable when the class of structures under study does not match the characteristics of the structures from which the damage statistics were collected. Another approach to update fragility curves is to use experimental test results. This approach permits full control over the experimental data, and indirectly over the test specimen, used in updating the analytical fragility curves. For example, fragility curves derived from the expert-opinion approach or from a less representative numerical model can be updated using experimental or field data. Jaiswal et al. (2012) took the latter approach in developing fragility curves for global building types.
Bayesian updating (Gelman et al. 2013; Box and Tiao 1992) of fragility curves was also extensively investigated by Porter et al. (2006). Porter and co-workers presented a simplified method based on the principles of the Unscented Transformation (UT) (Julier and Uhlmann 2000) that was shortly after adopted by the ATC-58 framework. In this approach, the prior fragility function is a joint probability function represented by a few discrete points (typically five) and their corresponding weights. The weights assigned to the discrete points are eventually updated using a likelihood constructed from experimental data. Even though the approach is simple and efficient, such an oversimplified representation may introduce bias in the output of the updating process. Consequently, stochastic frameworks for Bayesian updating have been explored in the last decade. For example, Koutsourelakis (2010) used the Markov Chain Monte Carlo (MCMC) approach in the context of Bayesian inference to update fragility curves. Li et al. (2013) took a similar approach while updating the fragility curves of a bridge overcrossing of the Meloland Road in California. In the latter, fragility curves generated from incremental dynamic analyses of the bridge were updated using hybrid tests conducted on eight 1:25-scale RC piers while the rest of the bridge was modeled numerically.
Shaking table testing is, for obvious reasons, a reliable means of simulating the seismic response of structures. From this perspective, this paper proposes and examines the Bayesian updating of fragility curves for RC structures through shaking table tests. Nonetheless, the cost of building several test specimens can make this approach prohibitive. To limit this economic burden, an approach to maximize the output of a shaking table test is explored in this paper, assuming the conventional approach for conducting shaking table tests: a single virgin specimen subjected to progressively increasing intensity of the ground motion input. Besides, to expose the potential and limitations of the proposed approach, fragility curves based on different damage state models, including strain-based damage states, are investigated in this work.

Fragility modeling
Seismic fragility curves describe the probability of exceeding a given performance level or damage state as a function of an intensity measure (IM) of an earthquake, or as a function of an engineering demand parameter (EDP) such as the maximum inter-story drift (ISD_max). Damage limit states can be defined in terms of thresholds on EDPs such as ISD_max, plastic rotation, peak roof displacement, maximum strain, etc. (Choudhury and Kaushik 2018). In this paper, ISD_max and the maximum strain EDPs are considered for defining damage states. The probability that a damage state k (DS_k) is reached or exceeded is described using a conditional probability, commonly represented by a lognormal distribution:

P(DS ≥ DS_k | IM = im) = Φ( ln(im / im_m) / β )    (1)

where im_m and β stand for the median intensity measure, such as the peak ground acceleration (PGA) corresponding to a particular damage state, and the logarithmic standard deviation (dispersion), respectively, and Φ is the standard normal cumulative distribution function. The median value has the units of the intensity measure chosen, whereas the dispersion is a dimensionless term. It should be noted that the dispersion parameter is associated with the nature of the median parameter (a ground motion intensity or an EDP). The dispersion parameter is a composite form of the aleatoric and epistemic uncertainties represented by fragility curves, which reads:

β = √(β_R² + β_U²)    (2)

where β_R and β_U refer to the aleatoric and epistemic uncertainties, respectively. The inherent randomness of a fragility curve is represented by the aleatoric uncertainty. It is mainly due to record-to-record variability of the ground motion suite used for deriving fragility curves. On the other hand, uncertainties in the material strength, geometry of the structure, or the reliability of the models adopted are sources of epistemic uncertainty. Moreover, the dispersion parameter may include uncertainty in defining the damage states' thresholds (FEMA 2001).
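As an illustration, the lognormal fragility of Eq. (1) and the composite dispersion of Eq. (2) can be evaluated with a few lines of code (a minimal sketch; the numeric values are illustrative, not taken from the case study):

```python
import math

def fragility(im, im_m, beta):
    """P(DS >= DS_k | IM = im), Eq. (1): lognormal CDF with
    median intensity im_m and logarithmic dispersion beta."""
    return 0.5 * (1.0 + math.erf(math.log(im / im_m) / (beta * math.sqrt(2.0))))

def composite_dispersion(beta_r, beta_u):
    """Eq. (2): SRSS combination of aleatoric (beta_r) and
    epistemic (beta_u) dispersions."""
    return math.hypot(beta_r, beta_u)

beta = composite_dispersion(0.3, 0.4)          # illustrative dispersions
p = fragility(im=0.25, im_m=0.25, beta=beta)   # at the median IM, P = 0.5
```

By construction, the curve passes through 0.5 at the median intensity, which is a quick sanity check for any implementation.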

Bayesian updating of fragility curves through experimental tests
Bayesian inference is a statistical method whereby Bayes' theorem is applied to update a probability distribution as more information becomes available. In the proposed method, information is obtained from experimental tests, namely shaking table tests. Bayesian updating is particularly helpful for studying dynamic systems and estimating their parameters. It has a great deal of application in many disciplines where prior beliefs are updated once additional information is available (from observation or experiments). In Bayesian inference, the posterior distribution p(θ | data) is obtained using a likelihood function p(data | θ), i.e., a statistical model developed from the additional data, and the prior distribution p(θ):

p(θ | data) = p(data | θ) p(θ) / p(data)    (3)

and the posterior point estimate, θ̂″, is therefore calculated as:

θ̂″ = E[θ | data]    (4)

where E is the expectation operator.

Bayesian updating using unscented transformation (UT)
This approach seeks simplicity and is thus less rigorous than a full-fledged Bayesian inference framework. The unscented transformation takes advantage of the fact that Bayesian updating can be considered a nonlinear transformation of the prior distribution through a nonlinear likelihood function. This problem can be solved by approximating one of them: the prior or the likelihood function. However, the likelihood function is generally complex, which makes approximating it difficult and thus impractical. On the other hand, the joint prior distribution can be approximated easily by considering a few discrete points, which is the principle of the unscented transformation (Julier and Uhlmann 2000). In this formulation, 2n + 1 sigma points are defined to cover the entire distribution space. Julier and Uhlmann (2000) showed that these sigma points are enough to approximate at least the first two moments of an n-dimensional distribution. The median and logarithmic dispersion parameters of the prior distribution, µ′ and β′, are then transformed through the nonlinear likelihood function, and the posterior parameters, µ″ and β″, are evaluated at the end.
The first step in this approach is to define the coordinates of the sigma points, s_i. The fragility curve is then modeled as a joint distribution of the two random variables, µ′ and β′, at the sigma points. Hence, five sigma points (2 × 2 + 1, where n = 2), one at the origin and the remaining four points symmetrically spaced along their respective axes, can be conveniently used to approximate the first two moments of the joint distribution. The coordinates of the five sigma points are given in Table 1.
In Table 1, λ is a scaling parameter and P_x,i are the elements of a covariance matrix. The scaling parameter determines the number of moments that can be matched through this principle, and it is calculated as:

λ = α²(n + κ) − n    (5)

where n is the dimension of the probability distribution, κ is a free parameter, which is taken here as 1, and α controls the spread of the sigma points.
The covariance matrix, P, of the two independent random variables can be written as:

P = [ σ²_µ′  0 ;  0  σ²_β′ ]    (6)

The weight of a sigma point, w_i′, is a function of its position i, the scaling parameter λ, and the dimension of the probability distribution n:

w_0′ = λ / (n + λ),    w_i′ = 1 / (2(n + λ)),  i = 1, …, 2n    (7)

Now, let us consider an experiment with N virgin sample structures that are tested. In a shaking table test, it would be highly beneficial if this were also equivalent to conducting N stages during a shaking table test of a single virgin structure, which is commonly executed by progressively increasing the input intensity to a shaking table, as discussed in Sect. 4. Considering a given damage state, DS_k, a vector of binary numbers, ε, can be built to represent the exceedance or non-exceedance of DS_k during a test. Therefore, the likelihood function related to DS_k at each sigma point i can be constructed using the exceedance identifier ε:

L_i = ∏_{j=1}^{N} [F_i(im_j)]^{ε_j} [1 − F_i(im_j)]^{1−ε_j}    (8)

where F_i(·) is the fragility function evaluated with the parameters of sigma point s_i, and im_j is the magnitude of the input motion in the jth test or stage of a shaking table test. Note that the size of vector ε is N. Subsequently, the prior weights, w_i′, of the joint distribution can be easily updated via the Bayesian principle after calculating the normalizing constant, p_t:

p_t = ∑_{i=0}^{2n} w_i′ L_i    (9)

w_i″ = w_i′ L_i / p_t    (10)

The updated weights, w_i″, of the five sigma points are then used in calculating the posterior estimates of the median term, µ″, and of the logarithmic dispersion, β″:

µ″ = ∑_i w_i″ µ_i ,    β″ = ∑_i w_i″ β_i    (11)

The number of moments that can be matched may be further improved by appropriately selecting the coordinates of the sigma points. Herein, the accuracy of the ATC-58 method (Porter et al. 2007) was explored by replacing the five sigma points with seven sigma points. However, the accuracy of the posterior estimates did not improve in comparison with the five-point approach. Given the extra computational effort of using seven sigma points, the approach with five sigma points is found more efficient and is thus adopted in this paper.
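The five-point updating procedure above can be sketched in code as follows (a simplified illustration, assuming a diagonal prior covariance with assumed standard deviations `sig_mu` and `sig_beta`; the spread must be chosen so that no sigma point falls on a negative dispersion):

```python
import math

def lognormal_cdf(im, mu, beta):
    """Fragility function with median mu and dispersion beta (Eq. 1)."""
    return 0.5 * (1 + math.erf(math.log(im / mu) / (beta * math.sqrt(2))))

def ut_update(mu_p, beta_p, sig_mu, sig_beta, ims, eps, lam=1.0):
    """Sketch of the five-sigma-point (n = 2) Bayesian update.
    mu_p, beta_p: prior median/dispersion; sig_mu, sig_beta: assumed
    prior standard deviations; ims: test intensities; eps: 0/1 flags
    marking exceedance of the damage state in each test."""
    n = 2
    c = math.sqrt(n + lam)
    # sigma points: centre plus symmetric pairs along each axis (Table 1)
    pts = [(mu_p, beta_p),
           (mu_p + c * sig_mu, beta_p), (mu_p - c * sig_mu, beta_p),
           (mu_p, beta_p + c * sig_beta), (mu_p, beta_p - c * sig_beta)]
    w = [lam / (n + lam)] + [1 / (2 * (n + lam))] * (2 * n)   # prior weights
    # likelihood at each sigma point from the exceedance vector (Eq. 8)
    like = []
    for mu_i, beta_i in pts:
        L = 1.0
        for im_j, e_j in zip(ims, eps):
            p = lognormal_cdf(im_j, mu_i, beta_i)
            L *= p if e_j else (1 - p)
        like.append(L)
    pt = sum(wi * Li for wi, Li in zip(w, like))               # Eq. (9)
    w_post = [wi * Li / pt for wi, Li in zip(w, like)]         # Eq. (10)
    mu_post = sum(wi * p[0] for wi, p in zip(w_post, pts))     # Eq. (11)
    beta_post = sum(wi * p[1] for wi, p in zip(w_post, pts))
    return mu_post, beta_post
```

A hypothetical call: `ut_update(0.4, 0.45, 0.08, 0.05, ims=[0.2, 0.3, 0.4, 0.5, 0.6], eps=[0, 0, 1, 1, 1])` shifts the posterior weights toward the sigma points most consistent with exceedance starting at 0.4 g.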

Markov chain Monte Carlo (MCMC) approach for Bayesian updating
The application of Bayesian inference in updating fragility curves yields a posterior probability distribution that is complex and, in many cases, mathematically intractable, mainly due to the normalizing term (total probability) of the posterior distribution derived from Bayes' theorem. The Markov Chain Monte Carlo (MCMC) method is a potential technique for tackling this problem. It eliminates the need to calculate the normalizing term and approximates the posterior distribution using Markov chains (Gelman et al. 2011).
Markov chain simulation, like importance sampling, is a general method for drawing samples of a parameter from an assumed distribution, and it corrects the sampling process for a better approximation of a target distribution (Lynch 2007). The principle of MCMC is to simulate a random walk in the space of parameters which eventually converges to a stationary distribution. Samples are drawn sequentially, each depending on the previously drawn sample, forming a Markov chain. The principle behind Markov chain sampling can be expressed as:

π P = π    (12)

where P is a state-transition matrix and π is a stationary distribution on the state space S whose entries are non-negative and sum to one. The random walk algorithm continuously generates samples until the stationarity criterion, described by Eq. (12), is met. However, convergence is not checked in practice. Instead, simple tests are done to ensure that a stationary distribution is approximately achieved. Such tests include ensuring proper mixing of samples and consistent posterior estimates from the first and second halves of the generated data. Unlike in the frequentist approach, the prior distribution plays an important role in Bayesian statistics, which can make it relatively subjective. However, if a representative prior is used, accurate results can be obtained without large computational effort. A poorly chosen prior distribution, on the other hand, can be a source of bias. The weight given to the prior distribution has thus been debated for many years, with some researchers giving more weight to the prior information than to the new information, and vice versa. Diffuse priors may be chosen in some cases, for instance if large experimental datasets (reliable and with good coverage) are available and/or if the prior is associated with large uncertainty.
In the context of fragility curves derived from reliable analytical models, the prior distribution should carry significant weight; consequently, a diffuse prior may not be realistic. Besides, Bayesian inference using a small number of experiments is the focus of this paper, which also contradicts the conditions for adopting a diffuse prior established above. Overall, special care must be taken in modelling the prior information.
Herein, the two parameters of a fragility function are designated as θ₁ and θ₂, representing the median of the lognormal distribution and the logarithmic dispersion, respectively. θ₁ and θ₂ are modeled using lognormal and gamma probability distributions, respectively. The gamma distribution is used for the latter mainly to keep θ₂ strictly positive.
The value of σ²_θ1 can be assigned depending on the knowledge of θ₁. If the analytical fragility curves are associated with large uncertainty, perhaps due to an unreliable numerical model, a large σ²_θ1 can be considered; otherwise, small values of σ²_θ1 can be taken. On the other hand, it may not be fully known how σ²_θ2 relates to the accuracy of the numerical model, but it can be estimated from experimental data. Note that the parameters of the gamma distribution are computed from θ₂ and σ²_θ2. In Bayesian inference, the ground motion suite from which the analytical seismic fragility curves are derived and the ground motions used during shaking table tests may be different. In this scenario, updating the total logarithmic dispersion of the analytical fragility curve may not be justified since the experimental tests do not fully model the record-to-record dispersion. However, if shaking table tests are conducted using ground motions that represent the seismic hazard considered in deriving the analytical fragilities, the component of the dispersion parameter θ₂ that originates from the uncertainty in the seismic hazard may also be updated. This entails conducting several experimental tests, which is neither economically justifiable nor the intent of this paper. Instead, representative ground motion records can be used for conducting a limited number of experimental tests (for instance, the record of the ground motion suite used in deriving the analytical fragilities that best fits the median response spectrum of the return period considered). Likewise, if uncertainty in the capacity of a structure is considered while deriving the analytical fragilities, one cannot update the dispersion parameter based on experimental tests conducted on a handful of test specimens.
In this situation, the portion of θ₂ that corresponds to the uncertainty in the capacity needs to be identified and deducted from θ₂; the remaining value of θ₂ can then be updated. In this paper, uncertainty in structural capacity is not considered, to help us focus on the remaining ingredients that make up a fragility function. According to Porter et al. (2006), using the principle of compound distribution, σ_θ1 = θ₁θ₂/√2. However, the coefficient of variation (COV) of θ₂ can be selected depending on the reliability of its prior value, as mentioned above. Based on observed data, Porter et al. (2007) adopted the range [0.5θ₂, 1.5θ₂], which corresponds to the 98% confidence interval for θ₂. This translates to a 98% probability of finding θ₂ in that interval assuming a normal distribution, i.e., a coefficient of variation equal to 0.21. These recommendations are adopted in this paper as well. Now, the prior distribution of a fragility curve can be constructed as a joint probability distribution considering θ₁ and θ₂ as independent random variables.
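The construction of this prior can be sketched as follows (a minimal illustration; the moment-matching formulas for the gamma and lognormal parameters are standard results, and the numeric values are hypothetical):

```python
import math

def gamma_params(mean, cov):
    """Shape/scale of the gamma prior on theta_2, matched to its
    prior mean and coefficient of variation (COV)."""
    var = (cov * mean) ** 2
    shape = mean ** 2 / var      # = 1 / cov**2
    scale = var / mean
    return shape, scale

def lognormal_params(median, cov):
    """Parameters of the lognormal prior on theta_1 (median and COV):
    returns the mean and std of ln(theta_1)."""
    sigma = math.sqrt(math.log(1 + cov ** 2))
    return math.log(median), sigma

# COV = 0.21 reproduces the [0.5*theta2, 1.5*theta2] 98% interval:
# 0.5*theta2 = 2.33 * sigma  ->  sigma/theta2 ≈ 0.21 (normal approximation)
shape, scale = gamma_params(mean=0.45, cov=0.21)
mu_ln, sig_ln = lognormal_params(median=0.4, cov=0.2)
```

Note that the gamma shape depends only on the COV (shape = 1/COV²), so the reliability assigned to the prior dispersion directly controls how concentrated the prior on θ₂ is.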
The likelihood function in MCMC is identical to the one given in Eq. (8), but a continuous density function is used instead of the discrete formulation. The posterior distribution is therefore proportional to the product of the prior and likelihood distributions (proportionality is more relevant in MCMC simulation since the normalizing term is not evaluated explicitly):

p(θ | data) ∝ p(data | θ) p(θ)    (13)

The Metropolis-Hastings (MH) algorithm, which is commonly used to perform MCMC, generates a sequence of correlated random samples whose distribution converges to a target distribution. The algorithm uses a proposal distribution from which samples are drawn, and it sets an acceptance criterion to accept or reject samples. The parameters of the algorithm include the starting point and the proposal distribution. The steps conducted by the algorithm are presented in Table 2:

Step 1: Generate a candidate θ* from a proposal distribution q(θ_t | θ_{t−1}).
Step 2: Calculate the acceptance ratio α = min{1, [p(θ* | data) q(θ_{t−1} | θ*)] / [p(θ_{t−1} | data) q(θ* | θ_{t−1})]}.
Step 3: Accept the candidate, θ_t = θ*, with probability α; otherwise set θ_t = θ_{t−1}.
In MCMC simulation, a random walk proposal distribution q defined by a bivariate normal distribution was found to be sufficient in all applications of the MCMC sampling (Koutsourelakis 2010), i.e.:

q(θ_t | θ_{t−1}) = N(θ_{t−1}, Σ)    (14)

The variance of q is selected after a few exploratory runs by ensuring proper mixing of samples. An acceptance ratio in the interval 10%-50% can be used as a rule of thumb to ensure adequate mixing of the Markov chains (Koutsourelakis 2010). At the end of the simulation, samples from the posterior distribution are post-processed before calculating the point estimates of θ₁ and θ₂. If the initial value chosen is not close to the true solution, the simulation may take longer to attain equilibrium; consequently, the first b samples, generated before the equilibrium condition is reached, are discarded. The discarded samples are commonly termed burn-in samples. This makes the estimation of the posterior parameters independent of the initial condition. To gain flexibility in the prior and likelihood probabilistic functions used for the Bayesian updating, the MH algorithm was implemented in a MATLAB program.
Another important aspect of MCMC sampling is that θ_t and θ_{t−1} are not independent and may be highly correlated. Hence, the samples retained after the burn-in process are downsampled with a lag of n points, a procedure termed thinning. After thinning, (N_sim − b)/n samples are left for estimating the statistics of the posterior distributions. The autocorrelation function (ACF) of the samples obtained after burn-in can be used to estimate the downsampling factor n. A downsampling factor that yields near-zero autocorrelation among the samples is a good choice. For instance, a 95% confidence interval around ACF = 0 can be used to ensure that the final samples are independent. Note that MCMC sampling characterized by a slowly decaying ACF may need a large number of samples to be generated, thus increasing the computational cost of the sampling process.
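The random-walk MH sampler with burn-in and thinning can be sketched in a few lines (an illustrative implementation, not the authors' MATLAB program; the prior, data, and tuning values in any call are placeholders):

```python
import math, random

def log_post(theta, ims, eps, log_prior):
    """Log posterior up to a constant: log prior + log likelihood (Eq. 8 form)."""
    mu, beta = theta
    if mu <= 0 or beta <= 0:
        return -math.inf                       # reject non-physical parameters
    lp = log_prior(mu, beta)
    for im, e in zip(ims, eps):
        p = 0.5 * (1 + math.erf(math.log(im / mu) / (beta * math.sqrt(2))))
        p = min(max(p, 1e-12), 1 - 1e-12)      # guard against log(0)
        lp += math.log(p) if e else math.log(1 - p)
    return lp

def metropolis_hastings(log_target, start, step, n_sim, burn_in, thin, seed=0):
    """Random-walk MH with independent normal proposals per parameter,
    burn-in removal, and thinning."""
    rng = random.Random(seed)
    theta, lp = list(start), log_target(start)
    chain = []
    for t in range(n_sim):
        cand = [theta[0] + rng.gauss(0, step[0]),   # propose around theta_{t-1}
                theta[1] + rng.gauss(0, step[1])]
        lp_c = log_target(cand)
        if math.log(rng.random()) < lp_c - lp:      # accept with prob min(1, ratio)
            theta, lp = cand, lp_c
        if t >= burn_in and (t - burn_in) % thin == 0:
            chain.append(tuple(theta))              # keep every thin-th sample
    return chain
```

Because the random-walk proposal is symmetric, the q-ratio in the acceptance ratio of Table 2 cancels, leaving the plain Metropolis rule used above.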

Maximal benefit from shaking table tests
A shaking table test of a structure, or a structural component, is commonly conducted by progressively increasing the intensity of the input motion (Martinelli and Filippou 2009). For instance, a shaking table test of a portal frame may be conducted in stages by scaling an earthquake record by 0.1, 0.3, 0.5, 0.7, and so on. Shaking table tests can be used in Bayesian updating of fragility curves, as discussed above, and the accuracy of this process is expected to improve with the increasing number of experiments. However, strictly speaking, this condition can only be fulfilled by repeating the shaking table test, each time using a virgin specimen, which is financially burdensome.
The mitigation of this impediment is the motivation behind the proposal for maximizing the output benefit of a stage-wise shaking table test. This study attempts to find an adequate representation for an equivalent intensity of ground motion input whereby each stage of the shaking table test can be considered as an experiment conducted on a virgin specimen. A method based on the input energy was explored in the past (Coelho), but without conclusive results. In this paper, the severity of damage of an RC structure is measured by a damage index. The accumulation of damage during the stage-wise tests is therefore accounted for by formulating a damage-based equivalent intensity measure.

Damage-based equivalent intensity measure
Several damage indices are available in the literature for quantifying damage of RC structures in the PBEE framework (Rodriguez-Gomez and Cakmak 1990; Skjaerbaek et al. 1997). In this study, the Park-Ang damage index (Park et al. 1987) is adopted because it is widely used to quantify damage of RC structures. The Park-Ang damage index (DI) is a linear combination of normalized displacement and normalized hysteretic energy. In a sequential (stage-wise) shaking table test, the Park-Ang damage index at the jth input stage is computed as:

DI_j^seq = d_max,j / d_ult + (β / (F_y d_ult)) ∑_{i=1}^{j} E_h,i    (17)

where d_max,j is the maximum displacement attained in a structural member; β is a degradation parameter that represents the effect of cyclic response on the damage of a structure, typically taken as 0.05 (Kunnath et al. 1992); E_h,i is the hysteretic energy dissipated in the ith stage; F_y is the yield force; and d_ult is the ultimate displacement of the structure. Likewise, the damage index of a non-sequential shaking table test (a virgin specimen is used in each test) at the jth input stage is computed as:

DI_j^nonseq = d_max,j / d_ult + (β / (F_y d_ult)) E_h,j    (18)

In Eq. (17), the damage index at the jth input stage in the sequential testing depends on the cumulative hysteretic energy from the last j experiments. Conversely, the damage index at the jth scale of input in the non-sequential testing (Eq. (18)) is calculated based only on the hysteretic energy of the jth experiment. In this study, d_ult corresponds to the post-peak capacity at a 20% reduction from the maximum force capacity of the structure. This assumption is consistent with the approach adopted for bilinearizing the capacity curves while defining the yield force, F_y.
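Assuming the classical Park-Ang form, in which the hysteretic-energy term is normalized by F_y·d_ult, the sequential and non-sequential stage-wise damage indices can be sketched as (an illustration; units and values in any call are hypothetical):

```python
def park_ang_sequential(d_max, e_h, f_y, d_ult, beta=0.05):
    """DI at each stage of a sequential test: the hysteretic
    energy accumulates over all stages up to the current one."""
    di, e_cum = [], 0.0
    for d, e in zip(d_max, e_h):
        e_cum += e                                # energy carried over
        di.append(d / d_ult + beta * e_cum / (f_y * d_ult))
    return di

def park_ang_non_sequential(d_max, e_h, f_y, d_ult, beta=0.05):
    """DI of virgin-specimen tests: only the current stage's energy."""
    return [d / d_ult + beta * e / (f_y * d_ult) for d, e in zip(d_max, e_h)]
```

With identical stage responses, the sequential index is never smaller than the non-sequential one, which is precisely the accumulation effect the equivalent intensity measure has to compensate for.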
Furthermore, engineering demand parameters (EDPs), such as drift, damage index, etc., may be related to the intensity of ground motion input. In Shome et al. (1998), the natural logarithm of an EDP is linearly related to the natural logarithm of an IM, which reads:

ln(EDP) = a + b ln(IM)    (19)

where a and b are the fitting coefficients that represent the offset and slope of the curve, respectively. In Eq. (19), the Park-Ang damage index (DI) represents the EDP, while the spectral acceleration at the fundamental period, S_a(T_1), is taken as the IM to formulate the damage-based equivalent intensity measure, S_a(T_1)_eq. The prime goal of this formulation is to estimate the S_a(T_1)_eq of a non-sequential shaking table test that would produce the same damage as the jth sequential shaking table stage, whose ground motion input IM is S_a(T_1)_j^stg. Therefore, Eqs. (17), (18) and (19) are combined to derive S_a(T_1)_eq assuming the same fitting coefficients for the sequential and non-sequential tests. This assumption introduces simplicity to the proposed approach, ensuring equivalence between the two scenarios by changing only the IM. Rearranging the terms, S_a(T_1)_eq can be shown to be:

S_a(T_1)_eq,j = S_a(T_1)_j^stg (DI_j^seq / DI_j^nonseq)^{1/b}    (20)

where b is the slope of Eq. (19) estimated from the non-sequential shaking table test. The proposed method is examined in a parametric study in the subsequent section.
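Under the assumption that the rearrangement of Eqs. (17)-(19) yields the ratio form S_eq = S_stg · (DI_seq / DI_nonseq)^(1/b), i.e., the stage IM scaled by the damage ratio raised to 1/b, the conversion is a one-liner (a sketch; the function name and values are hypothetical):

```python
def equivalent_im(sa_stage, di_seq, di_nonseq, b):
    """Damage-based equivalent IM: scales the stage IM by the
    sequential-to-non-sequential damage ratio raised to 1/b,
    so equal damage maps to an equivalent virgin-specimen intensity."""
    return sa_stage * (di_seq / di_nonseq) ** (1.0 / b)
```

Since accumulated damage makes DI_seq ≥ DI_nonseq and b > 0, the equivalent intensity is at least the stage intensity, consistent with a virgin specimen needing a stronger input to reach the same damage.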

Numerical study
In this parametric analysis, a numerical model is subjected to progressively increasing intensity of a ground motion input. The first step in this analysis is similar to repeating a shaking table test, each time using a different test specimen and a different intensity of the input motion. For clarity, it will hereinafter be referred to as the non-sequential analysis. In the second step, a sequential time-history analysis is conducted by combining all the ground motions used in the first step, thus forming a long record whose intensity increases progressively. The second step is referred to as the sequential analysis in subsequent discussions.
To fully damp out all vibrations before starting any stage of the sequential analysis, a 10 s idle time is included between adjacent stages, during which the structure vibrates freely. The responses of two reinforced concrete structures, a cantilever column and a portal frame, under the sequential and non-sequential analyses are then evaluated to examine the formulation proposed for S_a(T_1)_eq. The Park-Ang damage indices are calculated at the end of each stage of the two analyses, and they are compared in a stage-wise manner. Besides, the maximum inter-story drift (MISD) response from the two analyses is also explored to quantify the contribution of hysteretic energy to the damage indices.
The geometric characteristics and reinforcement details of the two RC structures are shown in Fig. 1. The cantilever column, with a 250 × 250 mm² cross-section, is 2.5 m in height, and its mass is lumped at the top. The parametric modeling of the RC column is achieved by changing its lumped mass and steel reinforcement. Four natural frequencies, 1 Hz, 2 Hz, 3 Hz and 5 Hz, are chosen to represent the common operational frequencies of RC buildings. In the cantilever column, the first three natural frequencies are achieved by changing the lumped mass. The reinforcement layout of the column cross-section is also varied to study the influence of ductility on S_a(T_1)_eq. The three cross-sections of the column shown in Fig. 1 represent low, mid and high seismic code design levels, respectively. Overall, a total of nine RC cantilever columns are generated and analyzed. Likewise, the two columns of the portal frame are identical to the cantilever column. The portal frame has a 4 m-long rigid beam with a 250 × 400 mm² cross-section. The masses of the portal frame are also lumped at the top joints, and they are varied together with the columns' cross-section, resulting in nine different portal frames. The three cross-sections are identified as Column-A, Column-B, and Column-C, as shown in Fig. 1.
The OpenSees software is adopted for modelling the RC structures to facilitate the parametric analysis; all member elements are modeled with the distributed plasticity approach as force-based elements having five integration points. The steel rebar has a yield stress of 550 MPa and a strain hardening ratio equal to 0.5%. The Giuffré-Menegotto-Pinto model (Steel02 uniaxial material), without isotropic hardening, is adopted for modelling the steel rebars. The concrete has a compressive strength of 28.5 MPa, and all RC members have 25 mm of concrete cover. Both the confined and unconfined concrete are modelled using the Concrete04 uniaxial material (Popovics 1973). The properties of the confined concrete, including the ultimate stress, are evaluated in a MATLAB program using the Mander et al. (1989) equations, and the ultimate compressive strain of the unconfined concrete is taken as 0.03 to prevent a sudden drop in the post-peak region inherent to the Concrete04 model. In this study, the tensile capacity of concrete is also considered, taking 0.002 as the ultimate strain, while the ultimate stress is 14% of the compressive strength. 5% Rayleigh damping, based on the tangent stiffness matrix, is also applied to the numerical model.
The capacity curves of the two RC structures, presented in Fig. 2, are bilinearized using the equal area method based on their initial stiffnesses; subsequently, the ultimate displacement, d_ult, is determined. The post-peak properties of the cantilever columns with 1 Hz and 2 Hz natural frequencies are significantly different: the former shows a steep post-peak localization. Contrarily, only a slight difference is observed between the capacity curves of the columns with 2 Hz and 3 Hz natural frequencies. On the other hand, the extra longitudinal and hoop reinforcement, as well as the reduced hoop spacing, in Columns B and C brought important changes in the capacity and ductility of the cantilever column. Similar characteristics are apparent in the post-peak region of the portal frame when comparing the different column cross-sections and natural frequencies.
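The equal-area bilinearization step can be sketched as follows; this is a minimal illustration, not the authors' implementation, assuming a single-valued capacity curve sampled as (displacement, force) pairs, a known initial stiffness, and purely hypothetical numbers:

```python
import numpy as np

def bilinearize(disp, force, k_init):
    """Equal-area bilinearization with fixed initial stiffness k_init:
    choose the yield displacement dy so that the bilinear curve encloses
    the same area (energy) as the original capacity curve up to d_ult."""
    disp, force = np.asarray(disp, float), np.asarray(force, float)
    area = 0.5 * np.sum(np.diff(disp) * (force[1:] + force[:-1]))  # trapezoid rule
    d_ult = disp[-1]
    # area = 0.5*k*dy**2 + k*dy*(d_ult - dy)  =>  0.5*k*dy**2 - k*d_ult*dy + area = 0
    dy = (k_init * d_ult - np.sqrt((k_init * d_ult) ** 2 - 2.0 * k_init * area)) / k_init
    return dy, k_init * dy  # yield displacement and yield force

# an already-bilinear curve is reproduced exactly (hypothetical values)
dy, fy = bilinearize([0.0, 0.1, 0.5], [0.0, 1.0, 1.0], k_init=10.0)
```

Fixing the initial stiffness leaves only the yield displacement as an unknown, so the equal-area condition reduces to a quadratic equation, of which the smaller root is the physically meaningful one.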
For the dynamic time-history analyses, sixteen earthquake records in the interval Mw = 6-6.5 are selected that are representative of the ground motion input stipulated in EN 1998-5:2019. These earthquake records are diverse in faulting mechanism, Joyner-Boore distance R_JB, and mean record period T_m. Throughout this paper, the spectral acceleration at the fundamental frequency, Sa(T1), is used as the intensity measure (IM) for an earthquake input motion, since it can be a more reliable IM when the response is highly dependent on the first natural frequency (Hancilar and Çaktı 2015). The parametric study is intended to mimic a five-stage shaking table test. Hence, for each earthquake record, the non-sequential analysis is conducted five times by progressively scaling Sa(T1), whereas the sequential analysis is executed only once using the combined record mentioned above.
The scaling of the ground motion necessarily depends on the selected IM. For example, scaling based on peak ground acceleration (PGA) controls the structural responses better in stiff structures than in flexible structures. On the other hand, scaling based on Sa(T1) controls the structural responses better near the dominant frequency of the structure. Therefore, the ground motion inputs of the two RC structures, shown in Fig. 3, are scaled at their natural frequencies based on Sa(T1). The earthquake records are scaled at 0.5a_y, a_y, 2a_y, 3a_y and 4a_y, where a_y is the yield acceleration (F_y/m) of the structures. Consequently, each combination of frequency and cross-section of the two structures is uniquely scaled, since the scaling factor depends on both the fundamental frequency (T1) and the yield acceleration, a_y.
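The stage-wise scale factors described above can be sketched as follows; the record spectral values and yield acceleration are hypothetical, chosen only to illustrate that each record and each structure gets its own set of factors:

```python
# Hypothetical unscaled Sa(T1) values (in g) of two records at the structure's
# fundamental period; a_y is the yield acceleration Fy/m of the structure.
sa_record = {"rec01": 0.35, "rec02": 0.52}
a_y = 0.25
stage_multiples = [0.5, 1.0, 2.0, 3.0, 4.0]  # targets 0.5*a_y ... 4*a_y

# record-specific scale factors so the scaled Sa(T1) hits each stage target
scale = {rec: [m * a_y / sa for m in stage_multiples]
         for rec, sa in sa_record.items()}
```

Because the factors divide by each record's own Sa(T1), two records scaled for the same stage generally receive different multipliers, which is exactly why the scaling is unique per frequency and cross-section combination.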
The ground motion suite shown in Fig. 3, with magnitude Mw in the interval 6-6.5, was recently selected to serve for the revision of EN 1998-5 (EN 1998-5:2005) and was taken from the PEER NGA West2 ground motion database. The Joyner-Boore distance (R_JB) of the ground motion suite is smaller than 30 km, and its average shear wave velocity (V_S,30) ranges from 200 m/s to 600 m/s. No pulse records are included, and fault mechanisms were not considered during the selection process. Besides, European records were given priority.

Fig. 3 Spectral acceleration of 30 earthquake records, Mw = 6-6.5, adopted in EN 1998-5:2019

Results and discussion
After the non-sequential and sequential analyses, the Park-Ang damage indices are evaluated at the end of each stage (scaling) of the analyses. The median value of the damage indices obtained from all earthquake records is used in assessing the effect of cumulative damage apparent in the sequential analysis; note that the damage index (DI) in the subsequent discussions refers to this median value. Besides, the DI calculated at the end of each stage can be plotted against the corresponding maximum lateral displacement to get a general picture of the contribution of hysteretic energy to the damage indices. Considering the non-sequential analysis, an approximately linear plot is obtained, as shown in Fig. 4a, since the contribution of the hysteretic energy is relatively small in comparison to that of the maximum displacement. Conversely, the damage indices of the 4th and 5th stages of the sequential analysis have a significant contribution from the hysteretic energy, making the plot slightly curved. This plot gains more curvature as the nonlinearity of a structure becomes stronger, as shown in Fig. 4b. The RC cantilever column with 1 Hz natural frequency, which has the Column-A cross-section, suffers important damage at the 5th stage of the sequential analysis, i.e., the DI increases dramatically from 0.9 (near-collapse) to 1.7 (collapse). This is mainly attributed to the combined effect of small ductility and large inertial action on the cantilever column. On the contrary, the damage indices of the remaining cantilever columns increase only slightly between the 4th and 5th stages of the analysis.
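The two contributions discussed above (peak displacement and hysteretic energy) can be made explicit with the standard form of the Park-Ang index; the numerical values below are purely illustrative, not results from the paper:

```python
def park_ang(d_max, d_ult, e_hyst, f_y, beta=0.05):
    """Park-Ang damage index: peak-displacement demand over ultimate
    capacity, plus a hysteretic-energy term weighted by the model
    parameter beta (a typical value near 0.05 is assumed here)."""
    return d_max / d_ult + beta * e_hyst / (f_y * d_ult)

# early stage: the displacement term dominates (little dissipated energy)
di_early = park_ang(d_max=0.02, d_ult=0.10, e_hyst=0.0, f_y=50.0)
# late sequential stage: accumulated hysteretic energy inflates the index
di_late = park_ang(d_max=0.08, d_ult=0.10, e_hyst=40.0, f_y=50.0)
```

With no dissipated energy the index is simply the displacement ratio (0.2 in the first call), while the accumulated energy in the second call pushes the index well past the displacement ratio alone, which is the curvature effect seen in Fig. 4.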
Similar arguments are valid for the portal RC frame. Nevertheless, the maximum drift dominates the damage indices in both analyses (sequential and non-sequential). The portal frame with 2 Hz natural frequency, constructed from the Column-A cross-section, reached the collapse level too early. The evolution of damage indices with increasing intensity of input motion in the sequential and non-sequential analyses, shown in Figs. 6 and 7, can now be compared. In both structures, as the frequency increases, the damage index gets smaller. It is also evident that the damage experienced by both structures during the sequential and non-sequential analyses is nearly identical in the first three or four stages of the analysis (corresponding to a maximum input acceleration equal to double, or triple, a_y). This implies that the hysteretic energy accumulated during the sequential analysis is only important when the RC cantilever column undergoes large plastic excursions. The column with the Column-C cross-section registered the smallest damage because it is stiffer than the other two columns. Similar comments apply to the evolution of the damage indices, with increasing level of ground motion input, of the RC portal frame. The portal frame with 1 Hz natural frequency, having the Column-A cross-section, failed too early during the non-sequential analysis (when the Sa(T1) of the input acceleration is scaled to 2a_y). Unlike in the cantilever column, important discrepancies in the damage indices between the sequential and non-sequential analyses are apparent in the portal frame, as shown in Fig. 7.
It was pointed out earlier that the slope parameter, b, is necessary for calculating Sa(T1)eq, and it is determined from the logarithmic fit between DI and Sa(T1) of the non-sequential analyses. The fitting is conducted as a function of the frequency of the structure since it is more relevant to Sa(T1). This gives rise to three fitting parameters for each structure, as shown in Fig. 8, where m = b and c = ln a. The dispersion of the logarithmic fitting of the portal frame is larger than that of the cantilever column, perhaps because of the contribution of higher modes in the former. This study considers the adjusted R² for quantifying the dispersion of the logarithmic fitting.
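The logarithmic fit above amounts to a linear regression in log space; the following minimal sketch (with synthetic data, not the paper's results) recovers the slope b and intercept c = ln a, and reports the adjusted R² used to quantify dispersion:

```python
import numpy as np

def log_fit(sa, di):
    """Fit DI = a * Sa**b in log space (ln DI = b*ln Sa + c, c = ln a)
    and report the adjusted R**2 of the fit."""
    x, y = np.log(sa), np.log(di)
    b, c = np.polyfit(x, y, 1)                 # slope first, then intercept
    resid = y - (b * x + c)
    r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
    n = len(y)
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - 2)  # single predictor
    return b, c, r2_adj

# synthetic check: DI = 2 * Sa**1.5 must return b = 1.5 and c = ln 2
sa = np.array([0.1, 0.2, 0.4, 0.8])
b, c, r2_adj = log_fit(sa, 2.0 * sa ** 1.5)
```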
The next step is to determine the ratio of damage indices from Eq. (20). It is evaluated as the ratio of the damage index calculated from the sequential analysis to the damage index derived from the non-sequential analysis. This ratio is calculated for each combination of earthquake record and stage of the analyses. The median value of the damage index ratio and the slope of the fitting, b, are then inserted into Eq. (20) to calculate Sa(T1)eq. Theoretically, this ratio should increase with increasing Sa(T1). In this paper, the damage index ratio is fitted to a second-order polynomial curve to easily interpolate for Sa(T1)eq. The polynomial is constrained to pass through one at the IM of the first stage of the analysis because the damage index ratio at this stage is practically one. From Fig. 9, it is evident that for Sa(T1) as large as 1.0 g, the damage index ratio of the RC cantilever column remains below 1.05, whereas the damage index ratio of the portal frame is very close to one.
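The constrained quadratic fit can be written as a two-parameter least-squares problem by building the constraint r(Sa1) = 1 directly into the model; a minimal sketch with synthetic ratios (not the paper's data):

```python
import numpy as np

def fit_ratio(sa, ratio, sa1):
    """Least-squares quadratic for the damage-index ratio r(Sa), constrained
    to pass through r = 1 at the first-stage intensity sa1. Writing
    r(Sa) = 1 + c1*(Sa - sa1) + c2*(Sa - sa1)**2 enforces the constraint."""
    x = np.asarray(sa, float) - sa1
    A = np.column_stack([x, x ** 2])
    c1, c2 = np.linalg.lstsq(A, np.asarray(ratio, float) - 1.0, rcond=None)[0]
    return lambda s: 1.0 + c1 * (s - sa1) + c2 * (s - sa1) ** 2

# synthetic ratios following a known quadratic are recovered exactly
sa = [0.2, 0.4, 0.6, 0.8]
ratio = [1.0 + 0.1 * (s - 0.2) + 0.5 * (s - 0.2) ** 2 for s in sa]
r = fit_ratio(sa, ratio, sa1=0.2)
```

The returned function can then be evaluated at any intensity to interpolate the damage index ratio needed in Eq. (20).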
After calculating Sa(T1)eq for every earthquake record and stage of analysis, the non-sequential analyses are repeated (hereinafter referred to as validation analyses or Nonseq_Mod) using the modified input, Sa(T1)eq. The responses recorded from the validation analyses and those of the sequential analysis are then compared, as shown in Figs. 10 and 11. The ideal output of this damage index ratio therefore needs to be close to one. However, this result cannot be guaranteed due to the statistical nature of the approach taken in this paper; in detail, only median quantities are used in evaluating Sa(T1)eq. Nonetheless, an improvement in the damage index ratio obtained from the validation analyses, i.e., a damage index ratio close to one, is invaluable to the applicability of the proposed approach. Overall, with the introduction of Sa(T1)eq, a slight shift in the damage indices is recorded in both structures. In most of the structures, this shift is in alignment with the goals of the proposed approach. However, a few unfavourable cases can be seen in Fig. 10c.

Fig. 8 Logarithmic fitting: (a) RC cantilever column; and (b) RC portal frame
In the global sense, the damage index ratio of the RC cantilever column has significantly improved after modifying the input intensity, as shown in Fig. 12a, which partially validates the proposed approach. However, the damage indices of the portal RC frame, presented in Fig. 12b, are not in full agreement: results in the upper interval of Sa(T1) appear favorable, but not those obtained in the lower interval of Sa(T1). In the validation phase, the initial value of the fitting is not constrained, as shown in Fig. 12, since the damage indices evaluated from the sequential analysis are derived using Sa(T1), whereas the damage indices obtained from the validation analysis are derived using Sa(T1)eq.
Considering the modification factor given by Sa(T1)eq/Sa(T1), the application of the proposed approach to shaking table tests needs information about the damage index ratio and the fitting coefficient. It is noteworthy that Sa(T1)eq cannot be estimated from experimental tests alone, because it requires numerical modeling and analysis of the test structure to estimate the above two quantities. Furthermore, the proposed approach does not account for the influence of the maximum responses of a test structure in the stages, during a stage-wise shaking table test, preceding the stage of interest. In addition, fluctuations in axial force can occur during dynamic analysis, which is not consistent with the constant axial force assumption of the Park-Ang damage index. A closed-form expression for Sa(T1)eq that includes all of its inter-dependencies is therefore challenging to derive. It should be noted that the parametric study conducted in this paper does not represent a large portfolio of RC structures, but it is adequate for investigating the reliability of the proposed approach. To summarize the results of the parametric study, it is perhaps reasonable to take the modification factor as one when the damage index is below 0.75 (representing a severe damage level).

Fig. 12 Validation of DI ratio: RC cantilever column (left) and RC portal frame (right)

Optimization in the Bayesian updating framework
It is prudent to ask the question 'how many virgin shaking table tests, or stages during a shaking table test, are required to get a reliable update of a fragility curve?'. To partially answer this question, a simulated study on the convergence of the posterior estimates of a fragility curve is conducted herein by interpolating and extrapolating the results of a shaking table test. The Bayesian updating is carried out using the ATC-58 method, since a larger number of evaluations are required in the MCMC approach, making it impractical for this task. For clarity, the optimization problem is broken down into two questions. The first question addresses the effect of updating the fragility curve of a given DS using experiments where the structural responses (e.g., maximum inter-story drift) do not exceed the response corresponding to the damage state of interest. The input acceleration, Sa(T1), interval between the first and fifth stages of the shaking table test of the RC frame is uniformly divided into twenty points. Next, between the first and any of the twenty points, five points on the acceleration axis (representing a five-stage experiment) are uniformly interpolated and their corresponding maximum inter-story drifts are evaluated using Lagrange polynomials. In this manner, twenty simulated experiments, each having five stages, are generated.
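The interpolation step above can be sketched with a plain Lagrange evaluator; the stage intensities and drifts below are hypothetical placeholders for the measured test data:

```python
import numpy as np

def lagrange_eval(xn, yn, x):
    """Evaluate the Lagrange interpolating polynomial through (xn, yn) at x."""
    xn, yn = np.asarray(xn, float), np.asarray(yn, float)
    out = 0.0
    for i in range(len(xn)):
        others = np.delete(xn, i)
        out += yn[i] * np.prod((x - others) / (xn[i] - others))
    return out

# hypothetical five-stage test: Sa(T1) levels (g) and measured max drifts (mm)
sa_test = [0.10, 0.20, 0.32, 0.52, 0.72]
drift_test = [1.1, 2.9, 5.4, 11.8, 24.0]

# one simulated five-stage experiment between the first stage and Sa = 0.45 g
sa_sim = np.linspace(sa_test[0], 0.45, 5)
drift_sim = [lagrange_eval(sa_test, drift_test, s) for s in sa_sim]
```

Repeating the last two lines for each of the twenty interval endpoints yields the twenty simulated five-stage experiments used in the convergence study.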
In Fig. 13, DS1, DS2, and DS3 represent the maximum inter-story drift values corresponding to the slight, moderate, and extensive damage states, respectively. The posterior estimates of the median parameter for the slight and moderate fragility curves present a non-uniform descent throughout the simulated experiments. The median Sa(T1) of the slight DS appears to converge at DS3, whereas the median Sa(T1) of the complete DS continues to overshoot from its initial value, mainly because it is not exceeded during the experimental test. The median Sa(T1) of the complete DS has increased significantly due to the experimental data from the first two damage states of the structure, but its rate of increase drops significantly as the experimental tests approach DS3. Likewise, the median Sa(T1) of the extensive DS increases by approximately 10% before the moderate damage state is exceeded, but then it drops slightly upon exceeding DS3 (Fig. 14).

Fig. 13 Evolution of posterior estimates before and after a DS of interest is exceeded in experiments
The posterior estimate of the dispersion, β, gradually increased in steps in the slight and moderate damage states whereas, in the extensive DS, it is nearly insensitive to the exceedance of damage states. It is noteworthy that the median parameter of the extensive DS is likewise less sensitive to the exceedance of DSs in the experiments. This property is perhaps caused by the way the simulated experiments are generated. The complete DS, on the other hand, showed a considerable rise due to the exceedance of the first two DSs, but maintained a constant amplitude afterward. Overall, the simulation results suggest that the posterior estimates of a DS can be sensitive to its exceedance during experimental testing.
Furthermore, the optimum number of experiments is important to obtain unbiased posterior estimates while, at the same time, making the process of updating fragility curves cost-effective. The maximum inter-story drift recorded during the shaking table test of the RC frame is first extrapolated to ensure that all four DSs (i.e., HAZUS DSs) are exceeded. Fifty experiments are generated by extrapolating the response until the frame reaches a maximum drift of 127 mm (the complete DS corresponds to a 118 mm maximum drift). Each experiment has n stages, where n ranges from 2 to 50, distributed between the maximum drift of the first stage of the experiment, 7.88 mm, and 127 mm.
In this simulated study, the top two DSs attained convergence with fewer than 10 experiments. However, the moderate DS reached convergence at 20 experiments, while the slight DS needed around 40 experiments to reach its plateau value. The above argument is also valid for the dispersion estimate, except for the first two DSs, which assumed constant values at around 25 and 40 experiments, respectively. It appears that the dispersion parameters of the first two damage states are sensitive to experimental data after being exceeded. To sum up, a reliable update of all damage states may only be ascertained if all damage states are exceeded. Besides, estimating the optimum number of shaking table tests in simulated studies, before an experimental campaign, can help to improve the fidelity of Bayesian updating. Note that the ATC-58 recommends at least six experimental tests on virgin specimens (Porter et al. 2006). Although the simulated study is not exhaustive, and the results are particular to the experimental data considered, the optimum number of experimental tests appears to depend on the DSs and on the experimental data itself.
Application to a RC frame structure

Description
The Bayesian inference is applied to a 2D two-bay two-story RC frame structure that was tested at the LNEC shaking table facility. The experimental test of the RC frame was carried out during the Teixeira Duarte award, 2014, which involved a blind-prediction competition for predicting the response of non-seismically designed RC frames. The test structure is a 1:1.5 reduced model of the prototype structure; it has square outer columns, 20 cm in side length, and an internal column with a 20 × 27 cm² cross-section. The foundation beam has a 60 × 20 cm² cross-section whereas the upper-floor beams have a 20 × 33 cm² cross-section. The axial forces on the columns were applied through Ф26 pre-stressed tendons inserted into holes inside the columns, as shown in Fig. 15. The pre-stress forces applied to the tendons in the outer and inner columns are 22.8 kN and 35.4 kN, respectively; after pre-stressing, the tendons are clamped at both ends. The floor masses of the structure were represented by blocks of mass placed at the mid-span of the beams. The first floor has two masses, each weighing 1.18 t. Likewise, the second floor has one mass on each bay, each weighing about 1.13 t. The RC frame was constructed using C10/12 concrete grade and A500 steel rebars.

Fig. 15 Geometry and reinforcement details of a two-dimensional RC frame

Numerical modeling and derivation of fragility curves
In this study, the Seismostruct software is adopted for the numerical modelling and analysis of the RC structure. All structural elements of the frame are modeled as force-based elements, whereas the foundation beam is constructed using elastic elements. To simulate the rigid connection between the foundation beam and the platen of the shaking table, the foundation beam is assumed to be rigidly connected to the base. The pre-stressed steel rods are modeled using elastic elements since they are expected to remain elastic during the experiment. Constraints are applied at the floor levels so that the pre-stress rods deform together with the columns. It should be noted that the approach taken in modeling the pre-stressing rods can reproduce the recentering behavior of the rods observed during the experimental test. The steel rebars of the structure are modeled using the Menegotto-Pinto hysteretic model (Menegotto and Pinto 1973) with a 0.5% strain hardening ratio, and the concrete material is represented by the nonlinear concrete model of Mander et al. (1989). Rayleigh damping of 1.5% is considered for the structure. The nonlinear time-history analyses were conducted using the Hilber-Hughes-Taylor integration scheme (Hilber et al. 1977), taking α = -0.1, β = 0.3025 and γ = 0.6, at 0.005 s time-steps.
Incremental dynamic analysis (IDA) is a rigorous approach for predicting the performance of structures under seismic loads (Vamvatsikos and Allin Cornell 2002). The IDA of the RC frame is conducted using the suite of 30 ground motion records introduced above and shown in Fig. 3. All records are initially scaled to 0.1 g at the fundamental frequency of the RC frame, i.e., Sa(T1) = 0.1 g, and the IDA is performed by progressively increasing the scale factor until collapse or instability of the structure.
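The IDA loop described above can be sketched as a thin driver around the structural solver; `run_analysis` is a hypothetical callback standing in for the Seismostruct run, and the step sizes and collapse drift below are illustrative only:

```python
def ida_curve(run_analysis, record, im_start=0.1, im_step=0.1, collapse_drift=0.10):
    """Sketch of an IDA driver: increase the Sa(T1) scale until the solver
    callback reports collapse-level drift or numerical instability (None).
    `run_analysis` is a hypothetical wrapper around the structural solver."""
    points, im = [], im_start
    while True:
        drift = run_analysis(record, im)  # max inter-story drift at this IM
        if drift is None:                 # dynamic instability: stop the run
            break
        points.append((im, drift))
        if drift >= collapse_drift:       # collapse-level response reached
            break
        im += im_step
    return points

# dummy stand-in for the solver: drift grows linearly with the IM
curve = ida_curve(lambda rec, im: 0.06 * im, record="rec01")
```

Each record produces one such list of (IM, drift) points, i.e., one IDA curve of the suite shown in Fig. 16.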
In this research, the capacity curve derived from the IDA of each earthquake record is constructed by plotting the maximum inter-story drift of the first floor against the intensity measure (IM) of the input motion. The spectral acceleration at the fundamental frequency,S a (T 1 ) , is adopted as the intensity measure of the input motion. Figure 16 shows the capacity curves of the frame as a function of this IM.
The HAZUS (FEMA 2001) and Homogenized Reinforced Concrete (HRC) (Rossetto and Elnashai 2003) damage models, which depend on the maximum inter-story drift (ISDmax), are used for constructing the analytical fragility curves of the RC frame. The Bayesian updating of these analytical fragility curves is then performed through the MCMC and ATC-58 methods. In addition, during the IDA, whose results are shown in Fig. 16, the strains of the concrete and steel rebars at the base of the 1st story columns, indicated by red dots in Fig. 15, were also monitored to derive fragilities related to strain-based damage states. The strains of the concrete material in the cover and core sections, together with the strain of the longitudinal rebars, are therefore recorded during the IDA. A strain in the concrete cover exceeding +0.01% indicates the initiation of cracking, while compressive (negative) strains larger than 0.2% translate into spalling of the concrete cover. Also, compressive strains of concrete larger than 0.6% mark crushing of the concrete core. For each exceedance criterion, the corresponding ISDmax of the first floor is recorded during the IDAs. Eventually, fragility curves pertaining to concrete cracking, spalling, and crushing are derived. The dispersion of the recorded ISDmax is characterized by outliers, which might be the result of having few monitored points. Besides, some of the severe strain-based damage states cannot be flagged in a few earthquake records because the structure reaches an instability condition too early.

Fig. 16 Numerical modelling (left) and derivation of capacity curve using IDAs (right)
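The last step, turning the per-record ISDmax values into a fragility curve, is commonly done with a lognormal fit; the sketch below uses the strain limits quoted in the text and hypothetical first-exceedance drifts, not the paper's data:

```python
import numpy as np

# strain criteria from the text (tension positive): cracking, spalling, crushing
STRAIN_LIMITS = {"cracking": +0.0001, "spalling": -0.002, "crushing": -0.006}

def fragility_from_isd(isd_values):
    """Fit a lognormal fragility to the ISDmax values at which a strain
    criterion was first exceeded across the record suite; returns the
    median and the logarithmic dispersion (beta)."""
    ln = np.log(np.asarray(isd_values, float))
    return float(np.exp(ln.mean())), float(ln.std(ddof=1))

# hypothetical first-exceedance drift ratios for the cracking criterion
median, beta = fragility_from_isd([0.004, 0.005, 0.006, 0.008])
```

The median is the geometric mean of the recorded drifts, so the outliers mentioned above inflate beta rather than shifting the median strongly, which is one reason the lognormal form is convenient here.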

Experimental test
The shaking table test of the RC frame was carried out using the LNEC's 3D shaking table. The floor masses were represented using blocks of mass attached to the beam elements. The axial forces on the columns were applied by pre-stressing the tendons that are placed inside the columns, and load cells were connected, at both ends of the tendons, to record the axial forces. As shown in Fig. 17, the RC frame was tested inside a 3D steel guiding frame to be effectively guided in the in-plane direction while preventing out-of-plane movements. The guiding structure has roller bearings at the top which guides the movement of the top-most beam. The guiding structure and the RC frame were firmly connected to the platen of the shaking table. The foundation beam was rigidly attached to the platen using four triangular-shaped steel connectors.
The steel rebars of the RC frame were tested in tension before the experiment. Three samples each of the Φ6 and Φ8 rebars were tested for their tensile strength. During these tests, the Φ6 rebars showed higher yield stress and lower ductility than the Φ8 rebars, and they are characterized by a small post-yield plateau without strain-hardening behavior, which is typical of cold-formed steel. On the other hand, the Φ8 bars have a yield plateau, close to 550 MPa, before the strain-hardening branch of the stress-strain curve. To this end, the yield strength of the longitudinal bars is taken to be 550 MPa.
The seismic action on the structure is defined by a reference spectrum constructed from the limiting values of acceleration, velocity, and displacement of the shaking table, considering the 84th percentile of the amplification factors proposed by Newmark and Hall (1982). The target acceleration of the shaking table corresponds to an artificial accelerogram generated from the reference spectrum, and it is scaled to 0.1 g, 0.2 g, 0.32 g, 0.52 g, and 0.72 g peak horizontal accelerations, resulting in five test stages.
The experimental test was thus conducted in five stages by progressively increasing the intensity of the target motion. Figure 18 compares the input and measured commands of the shaking table. During the test, the fundamental frequency of the structure progressively dropped to approximately 50% of its initial value, as shown in Fig. 19. In addition to the maximum inter-story drift and maximum acceleration responses at the floor levels of the structure, video and camera recordings of the experimental test are used to identify the initiation of important physical (observable) damage states of the RC frame.

Fig. 17 Test setup: mounting the RC frame (left) and connecting the guiding structure (right)

Bayesian updating of analytical fragilities
To explore in depth the Bayesian updating of the fragilities of the RC frame, the updating is performed considering two definitions of structural damage: one based on the maximum inter-story drift and another depending on the maximum strain of the concrete fibers. The latter is updated through visually observed data, primarily through the degree of damage observed during the shaking table test. The damage models based on the maximum inter-story drift include the HAZUS and Homogenized Reinforced Concrete (HRC) damage states, as pointed out before. The exceedance parameter, ε, is determined from the maximum inter-story drift of the first floor measured during the shaking table test. Recalling that the strain-based fragilities are constructed from a limited number of locations in the frame, the prior distribution may be biased; nevertheless, the locations with the highest probability of damage are chosen so as to obtain a conservative estimate of the strain levels responsible for the observed damage. The ideal approach to relate the strain levels with the observed damage would be to consider the median value of strains from the fibers of all representative cross-sections of the frame. However, such an evaluation is onerous, particularly in large and complex structures.

The HAZUS guideline classifies the damage of RC buildings into four damage states (FEMA 2001), namely slight, moderate, extensive, and complete, based on average inter-story drift ratios. The case study structure falls into the Low-Rise and Low-Code category of the HAZUS classification. Accordingly, the four fragility curves are derived from the results of the IDAs of the numerical model. In this paper, the complete damage state is modified to represent the damage state corresponding to a 20% drop in the maximum capacity of the RC frame (hereinafter referred to as "Complete*"). The Bayesian updating is therefore carried out using the results of the five stages of the shaking table test.
The damage observed at the end of the test falls between the extensive and complete damage states. Consequently, the intensity of the input motion used during the stage-wise shaking table test is not modified, in accordance with the conclusions made for the damage-based equivalent intensity measure, i.e., Sa(T1) = Sa(T1)eq. The likelihood function is constructed using the exceedance parameter, which is shown in Table 3. Its binary evaluation is derived by comparing the maximum inter-story drift, measured at each stage of the shaking table test, with the drift values stipulated for the HAZUS damage states.
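The binary evaluation described above reduces to a simple comparison of each stage's measured drift against the damage-state thresholds; both the thresholds and the stage drifts below are hypothetical illustrations, not the values of Table 3:

```python
# hypothetical drift-ratio thresholds for the four HAZUS damage states
DS_LIMITS = {"slight": 0.004, "moderate": 0.008, "extensive": 0.020, "complete": 0.050}

# hypothetical maximum inter-story drift ratios measured at the five stages
stage_isd = [0.0021, 0.0046, 0.0093, 0.0188, 0.0281]

# binary exceedance parameter per damage state and stage
epsilon = {ds: [int(isd >= limit) for isd in stage_isd]
           for ds, limit in DS_LIMITS.items()}
```

Each row of the resulting dictionary is one row of the exceedance matrix that feeds the likelihood function.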
In this paper, the Bayesian updating of the analytical fragilities of the frame is conducted using the MCMC and ATC-58 (UT) approaches, and the results of the two approaches are compared. In the MCMC, one million samples are generated using the Metropolis-Hastings algorithm, and the first 2000 samples are discarded to eliminate the influence of the initial conditions. The remaining samples are then downsampled by a factor of 20, as shown in Fig. 20; this factor is estimated using the 95% confidence interval about a zero-mean autocorrelation function (ACF). Finally, the point estimates of the posterior fragility curves are obtained by fitting the retained samples to their respective distribution types (see Table 4). For a consistent comparison, the prior fragilities of the ATC-58 method and the MCMC approach are identical. During the MCMC sampling, a 30-35% acceptance ratio (AR) is achieved, which justifies the adequacy of the variance matrix considered for the proposal distribution. Due to the relatively narrow prior distribution of θ2, the MCMC point estimates are of the same order as the ATC-58 estimates. In both approaches, the posterior estimates of the median IM of the HAZUS fragilities have increased. Furthermore, the posterior estimates of the median IM of all four damage states derived from the ATC-58 method predict a less fragile structure compared to the estimates of the MCMC approach, i.e., the ATC-58 method overestimates the capacity of the structure. This is perhaps the result of the approximations made in the ATC-58 method.
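The sampling scheme (random-walk Metropolis with burn-in discard and thinning) can be sketched as follows. This is a minimal stand-in, not the authors' implementation: the lognormal-fragility likelihood on binary exceedance data and lognormal priors on the median θ1 and dispersion θ2 follow the paper's setup, but the chain length, step size, prior medians/dispersions, stage IMs, and exceedance vector used below are all hypothetical:

```python
import numpy as np
from statistics import NormalDist

PHI = NormalDist().cdf  # standard normal CDF

def log_post(theta, im, exc, prior_med, prior_beta):
    """Unnormalized log-posterior: lognormal-fragility likelihood of the
    binary exceedance data plus independent lognormal priors on (θ1, θ2)."""
    t1, t2 = theta
    if t1 <= 0.0 or t2 <= 0.0:
        return -np.inf
    p = np.clip([PHI(np.log(s / t1) / t2) for s in im], 1e-12, 1.0 - 1e-12)
    exc = np.asarray(exc)
    ll = float(np.sum(exc * np.log(p) + (1 - exc) * np.log(1.0 - p)))
    lp = sum(-0.5 * (np.log(t / m) / b) ** 2 - np.log(t)
             for t, m, b in zip(theta, prior_med, prior_beta))
    return ll + lp

def metropolis(im, exc, prior_med, prior_beta,
               n=20000, burn=2000, thin=20, step=0.05, seed=0):
    """Random-walk Metropolis sampler with burn-in discard and thinning."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(prior_med, float)
    lp = log_post(theta, im, exc, prior_med, prior_beta)
    chain, accepted = [], 0
    for _ in range(n):
        cand = theta + step * rng.standard_normal(2)  # symmetric proposal
        lp_c = log_post(cand, im, exc, prior_med, prior_beta)
        if np.log(rng.random()) < lp_c - lp:          # Metropolis acceptance
            theta, lp, accepted = cand, lp_c, accepted + 1
        chain.append(theta)
    return np.asarray(chain)[burn::thin], accepted / n
```

Tuning `step` controls the acceptance ratio, which is the quantity the paper monitors (30-35%) to judge the adequacy of the proposal variance.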
As the level of damage increases, the dispersion of the posterior estimate of θ1 evaluated by the MCMC gets larger, whereas the dispersion estimates of the posterior logarithmic dispersion (θ″2) remained nearly constant. In the latter, the experimental data appears not to add extra information to the posterior estimates because the prior and posterior estimates of θ2 are marked by approximately equal COV values. Contrarily, the variance of the posterior distribution of the median parameter (θ″1) has changed significantly from the prior belief. The characteristics mentioned above are depicted in Fig. 21.
The Homogenized Reinforced Concrete (HRC) damage model includes the slight, light, moderate, extensive, partial collapse, and collapse damage states. As shown in Table 5, these damage states are only partially exceeded during the shaking table test of the RC frame. As for the HAZUS damage states, the analytical fragility curves of the HRC damage states are subjected to Bayesian updating after constructing the exceedance matrix shown in Table 5.
The results of the Bayesian updating of the HRC fragilities obtained from the MCMC and ATC-58 approaches are presented in Fig. 22. Contrary to the observations reported for the HAZUS damage states, the ATC-58 method, in many cases, resulted in conservative estimates compared to the MCMC approach, i.e., the MCMC estimates are less fragile than those of the ATC-58 method in all damage states except the slight DS. It is also interesting to note the significant increase of the median IM predicted by both methods for the DSs that are not attained in the test (extensive DS and beyond).
Furthermore, the analytical fragility curves related to the physical damage of the structure (strain-based damage states), namely cracking, spalling, and crushing of the concrete, are updated using data from the experimental testing. The exceedance criterion is determined by visually inspecting the damage of the structure during the experimental test; the camera and video recordings of the test are primarily used in constructing the exceedance matrix. For illustration, some photos showing the initiation of important structural damage of the RC frame are shown in Fig. 23. Note that the principle of the damage-based equivalent intensity of input motion does not apply to the strain-based damage states, because their fragilities are functions of the maximum inter-story drift and not of the intensity measure of the input motion. In fact, the effect of cumulative damage, if any, is manifested in the severity of the physical damage observed during the experimental test.
Considering the MCMC Bayesian updating, the posterior estimate of the median value of the ISDmax for the initiation of cracking is larger than that of the prior fragility curve. Conversely, the ISDmax estimate for the initiation of spalling of the cover concrete reduces (see Fig. 24a). In the crushing damage state, the posterior estimate of the ISDmax in the MCMC approach also reduces, as opposed to the ATC-58 method. The characteristics displayed by the MCMC estimates are therefore not in agreement with the observations made for the HAZUS damage states. On the other hand, the ATC-58 estimates of θ″1, both in the HAZUS and strain-based damage states, are larger than their analytical counterparts.
During the IDAs, the estimate for crushing of the concrete core was less reliable than those for the cracking and spalling damage states, since the analytical fragility curve of the former did not include some ground-motion records that reached the plateau region of the capacity curve before exceeding the crushing-strain limit of the monitored fibers. Therefore, one additional evaluation is performed using a diffuse prior, which may be adopted in the presence of an unreliable prior. It is constructed by keeping the prior median equal to its analytical estimate while doubling the former coefficient of variation (COV) of the dispersion parameter, i.e., COV = 0.42. The results of the Bayesian updating, presented in Table 6, show important discrepancies between the two approaches.
It is evident from Fig. 24c that the MCMC estimate for the median IM of crushing of concrete has now shifted due to the increase in the dispersion of the prior distribution. On the other hand, the estimate from the ATC-58 method has resulted in a reduction in the dispersion of the fragility curve. Diffuse priors may bias posterior estimates; therefore, analytical fragility curves derived from a calibrated numerical model are preferable when only a small set of experimental data is available. In the absence of such a reliable model, however, the MCMC approach can be more robust than the ATC-58 method, as indicated in Table 6 and Fig. 24.
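The construction of the diffuse prior described above can be sketched as follows, assuming lognormal priors on the fragility parameters (the median value 0.35 is a hypothetical placeholder; the COV values mirror the doubling from 0.21 to 0.42 implied in the text):

```python
import math

def lognormal_prior_params(median, cov):
    """Mean and std of ln(x) for a lognormal prior with a given median and COV."""
    sigma = math.sqrt(math.log(1.0 + cov ** 2))  # log-space std from the COV
    mu = math.log(median)                        # log-space mean from the median
    return mu, sigma

# Informative prior on the dispersion parameter (median value hypothetical):
mu_i, sig_i = lognormal_prior_params(median=0.35, cov=0.21)

# Diffuse prior: same median, but the COV is doubled to 0.42:
mu_d, sig_d = lognormal_prior_params(median=0.35, cov=0.42)

# The median is unchanged, while the log-space spread roughly doubles:
print(mu_d - mu_i, sig_d / sig_i)
```

Keeping the median fixed while inflating the COV is what lets the experimental data, rather than the (unreliable) analytical estimate, dominate the posterior.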

Conclusion
This paper proposes and discusses Bayesian updating of seismic fragilities through experimental tests, and the adequacy of the proposed approach is demonstrated by applying it to a case study structure. One of the subjects discussed was how to extract the maximal benefit from the output of a stage-wise shaking table test, necessary for Bayesian updating, which was explored in a parametric study of a damage-based correction for the intensity of the ground motion. The correction method was found to be more adequate for a RC cantilever column than for a RC portal frame. Hence, it was concluded that the approach is more suitable for simple structures than for complex structures with a large number of elements. Furthermore, the applicability of this approach may be more pragmatic for severe damage levels such as the near-collapse and collapse damage states. In the future, an exhaustive parametric study is needed to fully understand the adequacy of the proposed method for RC structures. Another topic of discussion was the optimum number of shaking table tests needed for an accurate Bayesian updating process. The influence of the number of tests preceding a DS of interest on the Bayesian updating of that particular DS, as well as the number of tests required for accurate updating, were addressed. The simulated studies showed that posterior estimates are more realistic if the damage level achieved during the experiments exceeds the DS of interest. Besides, this study pointed out that, prior to experimental testing, it is important to predict the optimum number of experimental tests (equivalent to the input intensity spacing in a stage-wise experiment) required for the convergence of posterior estimates. Considering the case study presented in this paper, approximately 10 shaking table experiments, or stages of a shaking table test, can yield the balance needed between accuracy and cost-effectiveness of the updating process.
In general, too small a spacing in the input intensity adds no extra experimental information to the updating process, and thus posterior estimates remain unchanged, while an excessively large spacing can bias posterior estimates.
Finally, some lessons taken from the Bayesian updating that was performed on the 2D RC structure can be summarized as: (1) The tendency of posterior estimates to become greater or smaller than the prior parameters, apart from the limitations in deriving analytical fragilities, depends on the proximity between the inter-story drift (or any other EDP) corresponding to a DS of interest, and the EDP achieved during a shaking table test. This entails the need for designing experiments so that the damage level, expressed through EDP, in the experimental test closely resembles the damage level used in defining the analytical fragilities.
(2) The ATC-58 method resulted in less fragile estimates compared to the MCMC approach both in the HAZUS and strain-based damage states. Conversely, the MCMC approach yields less fragile estimates in the HRC damage model except for the slight damage state. Hence, no generalization can be made concerning the conservatism of either of the two approaches.
(3) The advantages and pitfalls of using the MCMC approach for Bayesian updating are explored systematically. Firstly, the choice of prior distribution contributes to the accuracy of the updating process. Secondly, the choice of proposal distribution in the Metropolis algorithm is essential, and its adequacy can be monitored using the acceptance ratio of samples. Thirdly, the parameters of a prior distribution must be chosen carefully, accounting for the nature and reliability of the experimental data. Finally, in the thinning process, although no rules exist in the literature, the 95% CI about a zero-mean ACF can be useful. Besides, Bayesian updating of an unreliable prior distribution, which can be represented by a diffuse distribution, through the MCMC approach or the ATC-58 method, using an inadequate or small sample of experimental data, can be precarious. Nonetheless, the MCMC approach can be more robust than the ATC-58 method in the presence of diffuse prior distributions. Under such a condition, the simplifications introduced in the ATC-58 method can be a source of bias.
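The two MCMC diagnostics named in item (3), the acceptance ratio of the Metropolis sampler and the ACF-based choice of thinning lag, can be sketched as follows. This is a generic illustration on a toy one-dimensional target, not the sampler used in this study; the step size and chain length are hypothetical:

```python
import math
import random

def metropolis(log_post, x0, step, n):
    """Random-walk Metropolis; returns the chain and the acceptance ratio."""
    chain, x, accepted = [x0], x0, 0
    lp = log_post(x0)
    for _ in range(n):
        cand = x + random.gauss(0.0, step)          # Gaussian proposal
        lp_cand = log_post(cand)
        if random.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand
            accepted += 1
        chain.append(x)
    return chain, accepted / n

def acf(chain, lag):
    """Sample autocorrelation of the chain at a given lag."""
    n = len(chain)
    mean = sum(chain) / n
    var = sum((c - mean) ** 2 for c in chain) / n
    cov = sum((chain[i] - mean) * (chain[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var

def thinning_lag(chain, max_lag=100):
    """Smallest lag whose ACF falls inside the 95% CI about zero (~1.96/sqrt(n))."""
    ci = 1.96 / math.sqrt(len(chain))
    for lag in range(1, max_lag + 1):
        if abs(acf(chain, lag)) < ci:
            return lag
    return max_lag

# Toy target: standard-normal log-density (stand-in for a fragility posterior).
random.seed(0)
chain, acc = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=2.4, n=5000)
print(f"acceptance ratio = {acc:.2f}, thinning lag = {thinning_lag(chain)}")
```

An acceptance ratio far from the commonly targeted range (roughly 0.2 to 0.5 for random-walk samplers) signals a poorly scaled proposal, and retaining only every `thinning_lag`-th sample yields approximately uncorrelated draws for the posterior statistics.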
In conclusion, the potential of using shaking table test results in the Bayesian updating of RC fragility curves is demonstrated in this paper. Additionally, a path for important future studies was delineated. For instance, the implications of the Bayesian updating of fragility curves within the framework of seismic risk assessment remain a subject for future study.