Tri-level Framework for Realistic Estimation of Concrete Strength Using Bayesian Data Fusion of UPV and Guided Coring

Identification of the in situ concrete characteristic strength (CCS) of deteriorating RC structures directly affects their rehabilitation. Contemporary practice either fails to yield the CCS or, at best, produces an unrealistic estimate, because: (i) core tests are few, owing to technical and economic constraints, and their positioning is random, inducing bias in the strength estimate; (ii) non-destructive tests (NDT) are themselves not good estimators of project-specific strength; and (iii) tests are only local indicators, and most practising engineers are unaware of how to process the spatial variability. This paper overcomes these limitations with a novel and implementable framework that enables an asset manager to make scientific estimates of the CCS. The framework has three levels: (1) NDT is performed to capture the spatial variability of the investigated concrete; (2) core extractions are guided by the prior NDT information, to avoid bias in test location selection; and (3) NDT and coring results are fused via Bayesian NDT calibration and updating of the probability distribution of the concrete strength. The proposed framework is successfully implemented for the assessment and rehabilitation of a critical hospital complex in India.

Generally, the collapse of RC framed structures is strongly influenced by a deficiency in concrete strength, which greatly affects the capacity of columns to carry compressive forces [8,9]. Investigating the in situ mechanical properties of concrete is a crucial step in evaluating the structural capacity of both old constructions (affected by gradual and possibly shock deterioration, such as earthquakes or fire) and new ones (to check non-conformity of strength), thereby assisting in rehabilitation measures (e.g. [10]) when necessary. Under-estimation of the concrete strength results in excessive rehabilitation, incurring unnecessary costs, while overestimation compromises the safety of the users if the structure continues to be used with little or no intervention. A realistic estimate of the concrete characteristic strength, on the other hand, can maintain a balance between safety and economy.
For an existing structure, although the concrete is of a designated grade, variability of in-place strength is expected due to within-batch, batch-to-batch and systemic (i.e., within a member and between any two members) variations [11]. Typically, each floor of a building is part of a different concrete casting phase. Even within a floor construction phase, at least two concrete batches can be expected, because the beams are cast together with the floor slabs, while the columns are cast separately. This causes a point-to-point, or spatial, variation in the concrete crushing strength (f_c) values in that structure. A simple but effective way to represent this variability is to use statistical distribution functions, such as the probability density function (PDF) or the cumulative distribution function (CDF). However, practising engineers handling everyday designs are not habituated to incorporating variability and its quantification (both in construction materials and in the geometry of the structure) into their design process [12].
The first basic step in the evaluation of a deteriorated concrete structure is a visual inspection (VI). The material condition assessment output of such VIs is usually qualitative ('very good' to 'very bad') and likely to vary from one engineer to another. The actual in situ concrete strength and its variation are not quantified in a VI, and the rehabilitation is directly affected by the recommendation of an individual expert. In the second step, the inspector may decide to refine the evaluation of material quality by extracting and testing concrete cores from the structure. However, this is generally performed with a more or less random selection of core locations. Some of these cores (or even all of them) may be extracted at locations where the strength is a statistical 'outlier', far from representative of the concrete strength in the building. On the other hand, cores may be extracted from a location solely because of its ease of access. It is difficult for any inspection engineer to actually detect the spatial variation through VI and specify appropriate core locations. Such random coring can induce significant bias in the concrete strength estimate, particularly when the number of core samples is very small; engineers often limit themselves to only three cores (e.g. IS 456 [1] recommends a minimum of three cores). Such a strength estimate is unrealistic because it does not include the statistical uncertainty, i.e., the uncertainty arising from the fact that the estimate was obtained from a small sample that is not representative of the concrete in general [13]. No wonder the code requires 30 concrete samples to make an acceptable statistical inference of strength [14].
NDTs can be effectively used to supplement the limited coring, permitting a more economical and widespread investigation of the in situ concrete in a structure [8,15]. The most widely used testing methods are the ultrasonic pulse velocity (UPV) test and the rebound hammer (RH) test [16]. However, the strength estimation capability of these NDTs is rather poor. For example, the strength estimate from RH is affected by various local test conditions, leading to erroneous estimates [17], and UPV provides only a qualitative judgment on concrete based on the measured velocity values [16]. In the authors' experience, these NDTs cannot be claimed to provide accurate estimates of concrete strength (for the whole structure) all by themselves, and there are no universal corrections/adjustments that can be applied to all measurements. Besides, there is typically no effort to tally the outcomes of different tests in a scientific manner, or to arrive at a resolution in case two evaluations, say RH and UPV, do not match.
Based on the current field practices, the shortcomings in the estimation of in situ concrete strength in an existing structure, toward decision making on the necessity, type and quantity of repair, can be summarised as:
• The variation in concrete strength across a structure is not properly accounted for, possibly resulting in severe over- or under-estimation of repair/rehabilitation requirements
• At best, parametric values are recommended based on a statistically inadequate number of (core) tests, as low as just three in many cases
• Common NDTs, such as UPV and RH tests, supplementing the core tests, do not provide an accurate quantification of the crushing strength
• The outcomes of NDT(s) and core tests (CT) are treated individually, without any attempt at a holistic, scientific view of the material condition (and without any option for resolution in case the outcomes of different tests are not congruent)
Considering these challenges, the present work proposes a novel framework which will assist practising engineers to obtain a realistic and representative estimate of the in situ concrete characteristic strength f_ck (or specified compressive strength, f_c) in order to implement a suitable rehabilitation scheme for deficient and ageing RC structures. The proposed framework integrates NDT and "guided" CT with the uncertainty quantification techniques of Bayesian calibration and Bayesian data fusion to arrive at a representative statistical characterisation of concrete strength, from which f_ck is obtained. The proposed methodology is briefly described in the next section. The following sections demonstrate how this method is applied to a case study building in a hospital rehabilitation project. The last section of this article presents a summary and major conclusions of this work.

A Novel Integrated Framework
The shortcomings in the existing methods for estimating in situ concrete strength discussed in the previous section have been addressed, albeit partially, in the past literature. Masi et al. [15] proposed a minimum number of NDTs and cores in a given building structure, conditioned on the number of structural elements of that building. This work overcame the subjectivity of choosing the number of tests to be performed for a given project. Moreover, the outcomes of the NDTs were used to identify zones of concrete with similar properties, indicating element-wise variability. The generally adopted random selection of core locations was transformed by Pfister et al. [18] into a systematic process. Their basic idea was that core locations defined according to information previously provided by NDT data offer a better representation of the variability in strength of the concrete under inspection. Since cores are generally few in number, a conversion model between NDT and core crushing strength is useful for estimating local strength values by converting NDT results obtained at different test locations within the structure. A deterministic regression calibration of these models was adopted [19,20]; however, the uncertainty in the calibration process cannot be quantified in a deterministic regression. Ali-Benyahia et al. [21,22] proposed the minimum number of core samples required to sufficiently calibrate a conversion model. Further, the NDT values converted to strength were used to obtain the mean and standard deviation of the strength values [15]. However, they did not provide a methodology to obtain a robust probability distribution of concrete strength for different zones of a building; a non-robust estimate can give a false overestimation of concrete strength [23]. Models were also developed which provide the variability of concrete strength for individual NDT test values [24].
However, their work did not provide a method that engineers/inspectors can adopt to combine results from multiple tests and multiple locations in the repair-maintenance decision-making process. While these scattered efforts tried to address the uncertainties involved in relating NDT results with more reliable tests such as core tests, none of them provided a holistic and systematic approach towards decision-making for maintenance, repair or rehabilitation of a deteriorated RC structure, addressing (i) identification of zones having different concrete strengths, (ii) incorporation of uncertainty in the measuring instrument, avoiding point estimates in favour of a full probabilistic characterisation, (iii) interpretation of concrete strength from a small number of test data, (iv) quantification and handling of NDT uncertainty and on-site variability, and (v) probabilistic NDT data fusion to obtain f_ck.
In the proposed framework, at first, all the components of a structure, such as beams, columns and slabs, are identified and numbered. Among these, the components to which access is available are separately noted. In the next step, the conventional practice of conducting NDT and CT in parallel (or irrespective of each other) is intentionally avoided. Rather, an extensive scanning of the structure is performed using only an NDT method: the relatively inexpensive NDT is carried out on accessible components to capture the spatial variability of the concrete. Following this, a coring scheme is adopted, guided by the NDT results, at a location within each homogeneous zone. This guided coring reduces the probability of selecting cores from less frequent concrete strength ranges. Thus, it removes the outliers' bias and helps in acquiring data from the whole structure with a minimum number of expensive core tests.
The guided coring is followed by a Bayesian NDT calibration that incorporates the test uncertainty under the site conditions. This allows a statistical characterisation of the crushing strength at every NDT location, without performing the expensive core test at each of them. Moreover, this uncertainty quantification captures the scatter and makes the analysis closer to reality. Then, a statistical characterisation of the concrete strength distribution is performed for each homogeneous zone using Bayesian data fusion of the outcomes of CT and NDT. This combination resolves the issue of non-congruent outcomes on a sound mathematical basis. The characteristic strength (f_ck), the specified compressive strength (f_c), or any other strength parameter can then be easily computed from this distribution. The steps of the proposed framework are illustrated in Fig. 1. This framework is successfully demonstrated for a hospital building case study.

Project Description
The structure under consideration is a "Ground + 2" floor hospital complex, C-shaped in plan (Fig. 2a) and constructed circa 1995. The building is located at Vasai, near Mumbai in India. The building was found to be visibly distressed in many places (Fig. 2b), as a result of which the hospital was vacated despite a huge inflow of patients. Weighing the huge demand for this critical infrastructure facility, the owner decided that it would be quicker to rehabilitate the hospital rather than demolish and completely rebuild it. The structural assessment and rehabilitation are performed in three stages (Fig. 2a). This stage-wise breakup is opted for to allow delicate and expensive medical equipment to be moved from one part of the building to another while the first part is being tested and rehabilitated.
Considering the absence of original structural drawings, "as-is" architectural and virtual structural models for this building are created in the very first step of this rehabilitation project. This work involves detailed visual reconnaissance and physical measurements of the whole premises and of the structural members. Non-invasive techniques are used to locate and acquire reinforcement details, and the residual diameters of corroded rebars are measured. In the same reconnaissance phase, structural components which are accessible for conducting tests are also identified. After data assimilation, the structural plans of the building are prepared (Fig. 2c).

Level 1: Non-destructive Testing of the Building
Non-destructive testing forms the next stage after the detailed reconnaissance. Usually both RH and UPV tests are conducted for such investigations; however, past research showed that in most cases using just the UPV test yields more realistic results [18]. Further, the relevant Indian Standard recommends that RH should not be performed if the concrete is designated as 'doubtful' according to the UPV test [16], which is the case in the present project. It should be noted that RH is greatly affected by carbonation, overestimating the concrete strength by 50% [16] and leading to erroneous estimates [17], whereas UPV results are negligibly affected by carbonation [25]. Considering these factors, the UPV test is selected as the preferred and only NDT for this project.

NDT Design
As discussed previously, NDTs widely explore the structure, and thus their number should be decided so as to adequately cover its expanse. The minimum number of NDTs to be conducted depends on the number of primary structural components and the total floor area. For an extended level of testing, Masi et al. [15] suggested using at least 9 test points, or 11% of the total primary components, as NDT test points for a density of 80 structural components (columns + beams) per floor area of 300 m². Using this density for the present structure (104 components per 168.25 m²), along with the minimum values prescribed for an extended level of testing, we arrive at either (i) seven UPVs or (ii) five UPVs. Adopting the higher value, seven tests per floor are performed on the columns. Since it is usually difficult to perform tests on beams and slabs, only 25% of the total tests are performed on these components, with at least three test points [15]; it is decided to perform four tests each on beams and slabs per floor. Thus 15 test locations per floor, and in total 45 locations in the building, are chosen for NDT in this project. These suggestions (i.e. [15]) should not be considered rigid, and the number of NDT points can be decided based on the conditions of the project at hand. For example, when long-span frames are involved, the density of structural components is low and the minimum number of NDTs as per this suggestion comes out very small. The UPV test locations are selected only in the accessible parts of the building. Since 'guided coring' (introduced later) is to follow in sub-domains of the UPV test regions, the NDT test points are chosen such that future coring at these locations would not weaken the structure in any significant way. Additionally, a rebar locator is used to verify that these corings would not cut through the rebars underneath. For beams, it is best to test the lower part of the vertical face at mid-span.
The reason is that this part of the beam is in tension under sustained/gravity loads, so core extraction does not affect its capacity much. For slabs, it is decided that only the UPV test shall be conducted, since it is very difficult to core a slab. For the columns, recognising that the static pressure due to consolidation after concrete casting causes the strength to vary along the member height [11], tests are carried out at column mid-height to average out such effects. Besides, this is generally the position with the least bending moment and the least shear reinforcement. Other recommended precautions, as suggested by the relevant codes of practice, are incorporated in this process.
The nomenclature used for a test location is 'XYZ', where X indicates the floor: G (ground), F (first) or S (second); Y is the structural component type: C (column), B (beam) or S (slab); and Z is the serial number of the particular component. The UPV test locations are shown in Fig. 3 for the ground, first and second floors, respectively. On each structural component shown in the figures, a single UPV test is conducted at the marked location. The UPV values (V) for different locations around the hospital building are presented in Table 1.

Identification of Homogeneous Concrete Zones
Concrete can be considered homogeneous when the coefficient of variation (CoV) of the concrete compressive strength lies within the range of 10 to 20% [26]. Non-homogeneity of concrete over a structure can arise for multiple reasons, including replacement of the contractor or the concrete production company, change in the source of construction material supply, and changes in exposure after casting, in addition to the casting sequence of different components and floors mentioned earlier. It is intuitively expected that components within each separate casting phase will have relatively similar characteristics. However, homogeneous areas may also exist within the same floor and across multiple floors.
The important aspect to consider in processing the obtained test results (Table 1) is therefore whether the entire collected data comes from a single population (implying that the structure is homogeneous in terms of concrete strength) or whether, as often happens due to the various construction stages of the building, there exist multiple homogeneous areas (i.e., multiple populations). The UPV test in this project is used to recognize areas which can be considered homogeneous with respect to concrete quality based on the velocity values, since the mechanical properties of concrete correspond to the UPV test [8]. The homogeneous parts of the building are identified in three steps.
In the first step, the structure is divided into sub-groups that can be treated as statistically homogeneous a priori: C0, C1 and C2 are the groups of columns on the ground, first and second floors, respectively; BS0, BS1 and BS2 are the groups of beams + slabs on the ground, first and second floors, respectively. For statistical analysis, working with a Gaussian distribution is simple; however, as UPV values take positive real numbers, they may not be properly represented by a Gaussian distribution. To still take advantage of the Gaussian simplicity, we consider the logarithm of the UPV (ln V) values henceforth. The variability of the UPV test results for these groups is assessed through the CoV of the test results. In this project we have adopted a limiting CoV of 15% for homogeneity; thus, a group is itself homogeneous if its CoV ≤ 15%. Table 2 reports the sample size (n), sample mean (m), unbiased sample standard deviation (s) and CoV for the tests. From this table, we see that each group has a CoV well below 15%, and therefore each group is considered homogeneous within itself. These statistics can be visualized with a two-sided confidence interval (CI) for the mean: CI = m ± t_(1−α/2),n−1 · s/√n, where t_(1−α/2),n−1 is the value corresponding to probability 1 − α/2 of a Student's t-distribution with (n − 1) degrees of freedom. Figure 4a shows the CI for a significance value α = 0.20.
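The group statistics and two-sided confidence interval described above can be sketched as follows. This is a minimal illustration in Python; the ln V readings shown are hypothetical stand-ins, not the values of Table 2.

```python
import numpy as np
from scipy import stats

def group_stats(ln_v, alpha=0.20):
    """CoV and two-sided (1 - alpha) confidence interval for the mean of ln V."""
    ln_v = np.asarray(ln_v, dtype=float)
    n = ln_v.size
    m = ln_v.mean()
    s = ln_v.std(ddof=1)                  # unbiased sample standard deviation
    cov = s / m                           # coefficient of variation
    half = stats.t.ppf(1 - alpha / 2, df=n - 1) * s / np.sqrt(n)
    return m, s, cov, (m - half, m + half)

# hypothetical ln V readings for one sub-group (illustrative only)
m, s, cov, ci = group_stats([8.25, 8.31, 8.19, 8.28, 8.22])
homogeneous = cov <= 0.15                 # the 15% homogeneity criterion
```

The same routine, run once per sub-group, reproduces the columns of Table 2 and the CI bars of Fig. 4a.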
The second step is to determine whether the homogeneous sub-groups belong to the same or different populations. A simple test adopted here consists of comparing the means of ln V of two groups; if equal, they belong to the same population, otherwise they do not. For any two groups P and Q, of sizes n_P and n_Q, with sample means m_P, m_Q and sample standard deviations s_P, s_Q, the two alternative hypotheses are H_0: the means are equal, and H_1: the means are not equal. The test statistic is t = (m_P − m_Q) / (s_pool √(1/n_P + 1/n_Q)), with the pooled variance s_pool² = [(n_P − 1)s_P² + (n_Q − 1)s_Q²] / (n_P + n_Q − 2); H_0 is accepted if |t| < t_c, the critical value corresponding to a probability 1 − α/2 for a Student's t-distribution with (n_P + n_Q − 2) degrees of freedom. There is a probability that a wrong hypothesis may be selected; in this regard there are two possible errors [27]:
1. Type I error → rejecting H_0 when in fact it is true
2. Type II error → accepting H_0 when in fact it is false
The selection of α depends on the importance attributed to the two types of errors. In the present case, it is preferable to assume two actually homogeneous zones to be inhomogeneous rather than to commit the opposite error; for this reason, a larger value of α = 0.20 is used. The t-test is used to compare the available groups one pair at a time. Table 3 provides the results of the hypothesis testing; for |t| < t_c, the groups in a pair are considered similar.
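The pairwise comparison can be implemented compactly; the sketch below uses the pooled-variance two-sample t-test with α = 0.20, and the ln V arrays are illustrative, not the project data.

```python
import numpy as np
from scipy import stats

def same_population(lnv_p, lnv_q, alpha=0.20):
    """Pooled two-sample t-test on the means of ln V; True if |t| < t_c."""
    p, q = np.asarray(lnv_p, float), np.asarray(lnv_q, float)
    n_p, n_q = p.size, q.size
    # pooled variance with (n_p + n_q - 2) degrees of freedom
    sp2 = ((n_p - 1) * p.var(ddof=1) + (n_q - 1) * q.var(ddof=1)) / (n_p + n_q - 2)
    t = (p.mean() - q.mean()) / np.sqrt(sp2 * (1 / n_p + 1 / n_q))
    t_c = stats.t.ppf(1 - alpha / 2, df=n_p + n_q - 2)
    return bool(abs(t) < t_c)
```

Two groups with near-identical means pass the test, while a clearly shifted group fails it, mirroring the accept/reject column of Table 3.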
Based on this analysis, it is possible to create "hyper groups" comprising many original groups. Each original group is checked against all others to see if they are similar as per the t-test, and similar groups are combined into a hyper group. Figure 4b, c and d show three possible combinations, or hyper groups, based on Table 3. As C2 has failed to be similar to any of the groups as per the t-test, it is treated as a separate group by itself. Groups C1, BS1 and BS2 show similarity to C0 (Fig. 4b) and to BS0 (Fig. 4d) separately. However, the pair C0, BS0 itself has failed the t-test (Table 3). As such, Fig. 4c can be supposed to comprise these two cases, and it must be tested whether the inclusion of C0 and BS0 in a single hyper group causes it to fail the hypothesis of belonging to the same population.
In the third step, ANalysis Of VAriance (ANOVA) is used to test whether the hyper groups belong to the same population. For any groups P, Q, R, with means m_P, m_Q, m_R, the two alternative hypotheses are H_0: m_P = m_Q = m_R (all groups share the same population mean) and H_1: at least one mean differs. When applying ANOVA, a significance level of α = 0.20 is used; the null hypothesis is accepted if the test statistic F < F_c (the critical value). Table 4 reports the findings of the ANOVA test, indicating that all hyper groups are accepted. We choose the hyper group C0-C1-BS0-BS1-BS2 as an all-encompassing group for further analysis. Thus we have two major homogeneous zones (Z_1, Z_2) for the building, viz.:
1. Z_1: C0-C1-BS0-BS1-BS2
2. Z_2: C2
The CIs for these two new hyper groups are shown in Fig. 4e. The CIs of the two groups are apart, confirming that they can be treated as distinct populations. Comparing Fig. 4a and e shows that C2 was always separate from the other groups since its inception. Thus, before conducting the t-test in step two, the CI can be used to eliminate groups whose CIs do not overlap with any of the others, saving the computational effort of the t-test.
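The ANOVA acceptance check can be sketched with `scipy.stats.f_oneway`; the group values below are illustrative, not the building data.

```python
import numpy as np
from scipy import stats

def hyper_group_ok(groups, alpha=0.20):
    """One-way ANOVA across candidate groups of ln V; True if F < F_c."""
    f_stat, _ = stats.f_oneway(*groups)
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    f_c = stats.f.ppf(1 - alpha, dfn=k - 1, dfd=n_total - k)
    return bool(f_stat < f_c)

# three groups with similar means form an acceptable hyper group;
# shifting one of them should break the hypothesis
similar = [[8.20, 8.25, 8.22, 8.30], [8.24, 8.21, 8.28], [8.23, 8.26, 8.20]]
shifted = similar[:2] + [[9.23, 9.26, 9.20]]
```

This mirrors the accept/reject outcome reported in Table 4 for each candidate hyper group.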

Level 2: Guided Coring
First, to decide the number of cores to extract, an initial reference is made to the work of Masi et al. [15]. Accordingly, for an extended level of testing, at least 4 cores, or 5% of the total number of primary components, must be extracted per homogeneous zone. Through this, we arrive at (i) a minimum of 4 cores, or (ii) 2 and 1 cores as a percentage of the total primary components in Z_1 and Z_2, respectively. We could choose 4 cores per zone; however, we find it inequitable to take equal numbers of cores in both zones, considering that Z_1 has 38 elements while Z_2 has only 7, and that at least 3 cores are suggested by IS 456 [1]. Seeking reference elsewhere, we find that Giannini et al. [8] suggested using at least 5 cores, although they did not provide any basis for this number. On the other hand, Ali-Benyahia et al. [21] concluded that when using a single NDT method, a minimum of 9 cores should be used for effective assessment. This conclusion was based on a deterministic model fitting using two indicators: the root mean square error and the coefficient of determination. Although such deterministic criteria are challenged by probabilistic methods [28], we have nevertheless used 9 cores in total for this project, distributing N_c,1 = 6 cores in Z_1 and N_c,2 = 3 cores in Z_2. If these cores were extracted randomly, the outcome might introduce bias in the estimation of concrete strength, particularly when the number of cores is small (in this project, it is merely 2.88% of the total number of structural components in the building). Since the mechanical properties of concrete correspond to the UPV test [8], the ingenuity is to select the core locations using the "prior" information provided by the NDT exploration [18,26]. Since the core locations are guided by the previously performed UPV tests, we call this method 'guided coring'.
Fundamentally, we choose the locations of the cores in such a way as to provide an unbiased sample and to reduce the probability that all cores belong to a limited range of strength. Guided coring thus only requires that NDT test results are available before extracting cores and, as such, does not incur any additional cost.
Fig. 5: a, b distribution of the frequencies of the UPV; c, d distribution of cores proportional to the frequency of the UPV; e cores extracted as per the guided coring scheme
The ln V values of each zone (Z_l, l = 1, 2, ...) are arranged from the lowest to the highest value. This data is then subdivided into n_l subsets for each zone, and the guided coring proceeds in either of two ways:
1. n_l < N_c,l: the number of cores in each subset is selected proportional to the relative frequency of the UPV data in that subset. The core location, corresponding to an existing UPV location, can be chosen to cover the range of that UPV subset.
2. n_l = N_c,l: one core is selected within each UPV subset, or at the median value of the data in that subset.
We adopt the first way, dividing Z_1 into n_1 = 3 parts and Z_2 into n_2 = 2 parts. The histograms of UPV are presented in Fig. 5a and b for Z_1 and Z_2, respectively. For zone Z_1, 2 cores are allocated to the lower subset, 3 to the middle and 1 to the upper subset; the distribution of cores is shown in Fig. 5c. Similarly, for Z_2, 2 cores are allocated to the lower subset and 1 to the upper subset, as shown in Fig. 5d. One can see that the shape of the core distribution is similar to that of the histogram of the ln V values. These cores are positioned at locations corresponding to the UPV locations of Fig. 3. The cores extracted following the guided coring scheme are shown in Fig. 5e.
Fig. 6: a UPV testing of extracted cores; b uncertainty quantification in measurement
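The proportional allocation of cores to UPV subsets (the first way above) amounts to rounding the bin frequencies while preserving the core total; one simple scheme is largest-remainder rounding, sketched below with illustrative ln V values whose bin frequencies mirror those of Z_1.

```python
import numpy as np

def allocate_cores(ln_v, n_bins, n_cores):
    """Split the ln V range into n_bins equal-width subsets and allocate
    n_cores proportionally to the relative frequency of each subset,
    using largest-remainder rounding to keep the total exact."""
    counts, _ = np.histogram(ln_v, bins=n_bins)
    quota = n_cores * counts / counts.sum()
    alloc = np.floor(quota).astype(int)
    # hand out any remaining cores to the bins with the largest remainders
    for i in np.argsort(quota - alloc)[::-1][: n_cores - alloc.sum()]:
        alloc[i] += 1
    return alloc

# illustrative ln V data: 4 low, 6 middle, 2 high readings
ln_v = [8.10] * 4 + [8.20] * 6 + [8.30] * 2
alloc = allocate_cores(ln_v, n_bins=3, n_cores=6)
```

With these frequencies the six cores split 2/3/1 across the lower, middle and upper subsets, matching the allocation adopted for Z_1.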

Bayesian UPV Test Calibration
NDTs are affected by many site conditions, which cause considerable measurement uncertainty [6]. Inevitably, interpreting actual results from the uncertain measurements requires calibration of such tests [12,23,29]. In order to perform such a calibration, prior to crushing, the extracted concrete cores are tested with UPV (Fig. 6a). Then the cores are crushed to generate the f_c values. The results of this exercise are reported in Table 5; this training dataset is denoted as D. The measurement conversion model (MCM) relating the concrete crushing strength (f_c) and the UPV (V) is [8]: ln f_c = β_0 + β_1 ln V + ε, where β_0 ∈ ℝ and β_1 ∈ ℝ+ are called the constant bias and the scaling bias, respectively, collectively known as the systematic biases of an NDT instrument [6]. Typically, they represent the epistemic uncertainty. ε ∼ N(0, σ_ε) denotes the additive random error in measurements, which represents the aleatory uncertainty. Thus the MCM parameters are θ = {β_0, β_1, σ_ε}, which are to be identified/updated with the training data D.
There are three basic approaches to parameter identification [23]: (i) the least squares method, (ii) maximum likelihood estimation and (iii) the Bayesian method. The first two have the drawback that they give only point estimates of the model parameters. In contrast, the Bayesian method also provides the distribution of the model parameters. A Bayesian framework is thus deemed more suitable and is adopted for calibration. As we do not have specific prior information on these values, assuming an improper prior for θ, the posterior estimative distribution for Y = ln f_c, given X = ln V, is a Student's t-distribution [30], f(y|x, D) = ST(y; n − 2, ŷ(x), s_y(x)), where ST(·) indicates the Student's t-distribution, here with (n − 2) degrees of freedom, location ŷ(x) on the regression line and a scale s_y(x) accounting for both the residual scatter and the parameter uncertainty. The probabilistic estimation of the calibrated UPV is shown in Fig. 6b for the training data. The uncertainty in relating NDT with cores is quantified in the process. It is evident that, compared with a deterministic regression, the Bayesian calibration can capture the scatter of the training data. The posterior estimative distribution of f_c itself follows by the change of variable f_c = e^Y; applying the derivative of this transformation, it follows that f(f_c|V, D) = f(ln f_c | ln V, D)/f_c (Eq. 3). Once the UPV test is thus calibrated, for every value of V measured in situ a PDF for f_c is obtained. It thus becomes possible to estimate the strength of concrete without extracting an actual core from all the test points in the structure; moreover, the NDT has been tuned to the specific structure. For future projects using similar equipment, the calibration results obtained in this work can be treated as a prior, to be updated using data collected from that specific project.

Data Fusion
Recognizing that the core crushing values are the most reliable but few in number, while the UPV readings are numerous, practically scanning the entire structure under investigation, it is a natural conclusion to combine the two in order to characterize the concrete strength and its variability. Combining multiple tests has been suggested to improve the reliability of the interpretation of results [31]. The concept of combining information from multiple sources or tests in a Bayesian setting is called Bayesian data fusion [6,32].
In general, the normal and lognormal distribution models are considered for describing concrete strength [33]. Usually a normal distribution is suitable for pristine concrete where quality control is maintained. When assessing existing or degraded structures, however, using a normal distribution may yield unrealistic results: due to the high variability of strength in such structures, the normal distribution may extend well into the negative domain and can carry considerable probability there, even exceeding the 5% fractile that defines f_ck. Therefore, in this project a lognormal model is adopted for the concrete strength (Eq. 4): f(f_c | μ, σ) = [1/(f_c σ √(2π))] exp[−(ln f_c − μ)²/(2σ²)]. The objective is to identify the model parameters {μ, σ}, which describe the variability characteristics of the concrete in each zone Z_1 or Z_2, through Bayesian data fusion.
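The point about the negative domain can be checked numerically. With illustrative degraded-concrete parameters (mean 10 MPa, CoV 70%, assumed here purely for demonstration), a normal model already places more than 5% of its probability below zero, whereas a lognormal model, being supported on (0, ∞), places none.

```python
from scipy import stats

# illustrative degraded-concrete statistics (MPa); not project values
mean_fc, cov = 10.0, 0.70
sd_fc = cov * mean_fc

# probability mass below zero under a normal model
p_negative = stats.norm(loc=mean_fc, scale=sd_fc).cdf(0.0)

# a lognormal model assigns zero probability to negative strengths
p_negative_ln = stats.lognorm(s=0.5, scale=mean_fc).cdf(0.0)
```

Here `p_negative` exceeds 0.05, i.e. the normal model's 5th percentile (the f_ck fractile) would be negative, which is physically meaningless.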

Prior Information
The model parameters are μ ∈ ℝ and σ ∈ ℝ+. We define σ in an alternate form, σ = e^{s/2} (so that s = ln σ² ∈ ℝ); thus the parameter set becomes ξ = {μ, s}. Prior distributions of the concrete strength parameters may be of three types: (i) informative; (ii) partially informative (based on information provided by the relevant standard); (iii) non-informative [32]. Jalayer et al. [34] constructed a prior concrete strength distribution to represent typical values found in post-World War II construction in Italy. Caspeele and Taerwe [35] proposed a set of informative priors for different concrete classes based on concrete production data from Germany. Although their prior may be adapted to different countries, its applicability to RC structures for which there is no information regarding the expected concrete class is difficult. In the present case study of the hospital building, the original design details are unavailable; thus, constructing an appropriate informative prior for the concrete strength is not feasible. A prior for the original concrete strength might still be adopted based on the minimum strength requirement of the design code prevalent at the site at the time of construction; however, that prior would be valid only for the pristine concrete and not for the degraded present condition. As the specific information on the concrete used for the building is not available, we assume a non-informative prior distribution: a uniform density defined over the entire admissible range of each parameter, which is (−∞, +∞) [36]. The joint prior distribution is taken as the product of the densities of the individual parameters, assuming each parameter to be independent of the other: f(ξ) = h_1 × h_2, where h_1, h_2 are very small constant values.

Likelihood Formulation
Two types of data are available as outcomes of the investigation: (i) core crushing values, as point values, and (ii) local distributions of f_c based on the UPV readings. Such information is available in both zones Z_1 and Z_2.
The likelihood of a single core crushing value f_cj is f(f_cj | ξ) from Eq. 4. For N_c^l core values in a given zone Z_l of this project, the likelihood, considering each value independent of the others, is

$L_{\text{core}}(\xi) = \prod_{j=1}^{N_c^l} f(f_{c_j} \mid \xi)$.  (5)

In each zone (Z_1, Z_2), at the locations where a core has not been extracted, Eq. 3 is used to estimate f_c from the UPV reading. The f_c thus obtained from the calibrated UPV test is not a point value (unlike a core) but a distribution (Eq. 3) for each UPV reading V_i. Thus Eq. 4 cannot be used directly to form the likelihood function as in the case of the core values.
The likelihood function of f_c for a single measurement V_i is therefore formulated as [6,32]

$L(\xi \mid V_i) = \int f(f_c \mid \xi)\, f(f_c \mid V_i, D)\, df_c \approx \frac{1}{S} \sum_{k=1}^{S} f(f_c^{(k)} \mid \xi)$,  (6)

where the f_c^{(k)} are S samples drawn from f(f_c | V_i, D). More samples give a better approximation of Eq. 6, but also increase the computation. Here we evaluate the integral with S = 30 samples obtained by Latin Hypercube Sampling from f(f_c | V_i, D). The number 30 is chosen analogous to the code's recommendation of 30 samples for a satisfactory statistical assessment [14]; it is later verified (Fig. 7) that 30 samples are indeed sufficient to approximate the likelihood function. An important aspect to note is that the UPV values at the locations where cores are extracted are not used to form the likelihood, since the core values themselves are available there. Thus, of the total N_v^l UPV values in each zone, only N_v^l − N_c^l are considered:

$L_{\text{UPV}}(\xi) = \prod_{i=1}^{N_v^l - N_c^l} L(\xi \mid V_i)$.  (7)

The overall likelihood combining the point values and the PDF values of f_c is

$L(\xi) = L_{\text{core}}(\xi) \times L_{\text{UPV}}(\xi)$.
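The two likelihood contributions can be sketched as follows. All data here are synthetic placeholders, not the project's calibration: the core values, the assumed local distribution f_c | V_i (a lognormal with median 9 MPa), and the trial parameter values are illustrative only.

```python
import numpy as np
from scipy.stats import lognorm, qmc

def fc_pdf(fc, mu, s):
    """Lognormal strength model f(fc | xi) with xi = (mu, s), sigma = exp(s)."""
    return lognorm.pdf(fc, s=np.exp(s), scale=np.exp(mu))

def core_loglik(cores, mu, s):
    """Log-likelihood of independent core crushing values (point data)."""
    return float(np.sum(np.log(fc_pdf(np.asarray(cores), mu, s))))

def upv_loglik(fc_samples_per_reading, mu, s):
    """Each UPV reading V_i yields a *distribution* of fc; its likelihood is
    approximated by averaging f(fc | xi) over S samples from that distribution."""
    total = 0.0
    for fc_samples in fc_samples_per_reading:      # one sample array per reading
        total += np.log(np.mean(fc_pdf(fc_samples, mu, s)))
    return float(total)

# Synthetic stand-in for the calibrated UPV output at one reading:
S = 30                                             # samples per reading
u = qmc.LatinHypercube(d=1, seed=0).random(S).ravel()
fc_samples = lognorm.ppf(u, s=0.2, scale=9.0)      # hypothetical fc | V_i

# Overall log-likelihood at a trial parameter point (median 9 MPa, sigma 0.25):
ll = (core_loglik([8.5, 10.2, 9.1], mu=np.log(9.0), s=np.log(0.25))
      + upv_loglik([fc_samples], mu=np.log(9.0), s=np.log(0.25)))
```

Mapping the Latin Hypercube uniforms through the inverse CDF (`ppf`) yields a stratified sample of the local strength distribution, which is what keeps S = 30 sufficient.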

Posterior Estimation
The concrete distribution parameters are updated by Bayesian updating with the non-informative prior [6]:

$f(\xi \mid D) \propto L(\xi)\, f(\xi)$,  (8)

from which N samples ξ_i (i = 1, …, N) are drawn. A robust distribution of f_c, which includes the variability due to the sampled values of ξ, is obtained as [6]

$\tilde{f}(f_c) = \int f(f_c \mid \xi)\, f(\xi \mid D)\, d\xi \approx \frac{1}{N} \sum_{i=1}^{N} f(f_c \mid \xi_i)$.  (9)

Using Eq. 9, a distribution of the concrete crushing strength f_c is obtained for both zones of the structure, Z_1 and Z_2.
In research, the sample mean $\bar{\xi} = \frac{1}{N}\sum_{i=1}^{N} \xi_i$, or the sample median, is often used as the parameter value for inference, in contrast to Eq. 9 [6,23]. The approximate distribution in such cases is

$\hat{f}(f_c) = f(f_c \mid \bar{\xi})$.

The purpose here is to check whether this approximation can effectively reproduce Eq. 9 (which requires more computation) and thereby simplify the estimation.
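A minimal grid-based sketch of this update, contrasting the robust predictive with the plug-in (posterior-mean) approximation. The data values and grid limits are illustrative assumptions standing in for the project's fused core and UPV information:

```python
import numpy as np
from scipy.stats import lognorm

# Synthetic "fused" strength data (placeholder for core + calibrated-UPV values):
data = np.array([7.8, 9.5, 8.9, 10.4, 9.0, 8.2])

# Discrete grid over xi = (mu, s), with sigma = exp(s); a wide grid
# stands in for the improper uniform prior.
mus = np.linspace(1.5, 3.0, 120)
ss = np.linspace(-3.0, 0.0, 120)
MU, SS = np.meshgrid(mus, ss, indexing="ij")

loglik = np.zeros_like(MU)
for x in data:                                     # independent observations
    loglik += lognorm.logpdf(x, s=np.exp(SS), scale=np.exp(MU))
post = np.exp(loglik - loglik.max())
post /= post.sum()                                 # discrete posterior f(xi | D)

# Robust predictive: average the strength PDF over the posterior
fc_grid = np.linspace(0.1, 30.0, 400)
robust = np.array([np.sum(post * lognorm.pdf(fc, s=np.exp(SS), scale=np.exp(MU)))
                   for fc in fc_grid])
mass = np.sum(robust) * (fc_grid[1] - fc_grid[0])  # should be close to 1

# Plug-in approximation: use posterior-mean parameters instead
mu_hat, s_hat = np.sum(post * MU), np.sum(post * SS)
plugin = lognorm.pdf(fc_grid, s=np.exp(s_hat), scale=np.exp(mu_hat))
```

The robust density mixes over the full parameter uncertainty, so it has heavier tails than the plug-in curve; the comparison of the two is exactly the check the text describes.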

Estimation of Characteristic Strength (f ck )
The characteristic value f_ck, which is the 5th-percentile value of f_c, is computed by solving

$\int_{0}^{f_{ck}} \tilde{f}(f_c)\, df_c = 0.05$.  (10)

Similarly, the specified strength f_c can be obtained as the 1st-percentile value of the crushing strength. To check the effect of the number of samples S (used to approximate the likelihood function in Eq. 6) on the evaluation of f_ck, the estimate (Eq. 10) is computed for increasing values of S, the samples being drawn from Eq. 3 by Latin Hypercube Sampling. When only a single sample is drawn (S = 1), the expected value of the distribution is taken as the sample. From the result in Fig. 7a it is seen that the estimate of f_ck initially fluctuates but stabilizes beyond S = 30; therefore S = 30 samples are adopted for the present project as a reliable approximation.
The concrete strength distributions for the two zones are shown in Fig. 7b.
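The percentile computation reduces to a one-dimensional root find on the CDF of the robust distribution. A minimal sketch, in which the posterior draws of ξ are hypothetical stand-ins for the project's actual samples:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import lognorm

# Hypothetical posterior draws of xi = (mu, s), with sigma = exp(s) listed
# directly; the robust predictive is the equal-weight mixture over the draws.
mu_draws = np.array([2.15, 2.20, 2.10, 2.25, 2.18])
sg_draws = np.array([0.20, 0.25, 0.22, 0.30, 0.24])

def robust_cdf(fc):
    """CDF of the robust (posterior-mixture) strength distribution."""
    return float(np.mean(lognorm.cdf(fc, s=sg_draws, scale=np.exp(mu_draws))))

# Characteristic strength f_ck: 5th percentile of the robust distribution
f_ck = brentq(lambda fc: robust_cdf(fc) - 0.05, 1e-9, 100.0)
# Specified strength: 1st percentile
f_c_spec = brentq(lambda fc: robust_cdf(fc) - 0.01, 1e-9, 100.0)
```

Bracketing the root between a near-zero strength and a clearly large one is safe here because the mixture CDF is monotone and spans (0, 1) on that interval.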

Rehabilitation Scheme
For the scope of this paper, a single column in zone Z_1 on the ground floor (GC4 in Fig. 3) is chosen to demonstrate the rehabilitation process adopted; the process is repeatable for the other structural members. A detailed structural analysis model of the hospital building is created in ETABS [37]. The dead load and the imposed load are assigned as per Parts 1 and 2, respectively, of IS 875 [38]. Additionally, a load of 0.5 kN/m² is assigned to the top slab to account for the solar panels installed on the roof of the hospital. Seismic loads and the corresponding load combinations are adopted from IS 1893 (Part 1) [39], and a dynamic seismic analysis following the response spectrum method is performed. The load combination 0.9DL + 1.5EQ is found to be the most critical, where DL is the dead load and EQ is the seismic load. The demand on the column is evaluated as: axial load P = 257.10 kN, moments M_11 = 44.80 kNm and M_22 = 130.56 kNm, where axes 11 and 22 are parallel to the shorter and longer sides of the column, respectively.
The original size of this column is measured to be 230 mm × 400 mm. Since much of the cover concrete (40 mm) has spalled, the contribution of the cover concrete to the capacity computation is ignored; for strengthening, the loose cover concrete is chipped off. The reinforcement in this column is found to have corroded. The rust is cleaned off with a wire brush and the residual diameters of the bars are measured with a digital Vernier caliper, on exposed bars only. For the main bars the measured diameters are [9.4, 10.3, 10.0, 11.3, 11.8, 10.8, 8.6, 10.6, 11.3, 11.0, 11.4] mm; their mean value of 10.5 mm is used in the capacity computation. Using this information and the value f_ck = 4.42 MPa evaluated earlier with the proposed framework, the existing capacity of the column is evaluated using the standard guidebook SP 16 [40]. This capacity is deducted from the demand on the column, and rehabilitation with a concrete jacket is proposed, following IS 15988 [41], for the residual demand. Because the f_ck used is realistic, the existing-capacity estimate is realistic, and the rehabilitation scheme is neither under-designed nor excessive, thus balancing safety and economy. The strengthened column and its section are shown in Fig. 8.
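The residual-steel bookkeeping above can be sketched as follows. The original bar diameter of 12 mm used here is an assumption for illustration (it is not reported in the survey), so the computed section loss is purely indicative:

```python
import math

# Measured residual diameters (mm) of the exposed main bars, from the survey:
d_res = [9.4, 10.3, 10.0, 11.3, 11.8, 10.8, 8.6, 10.6, 11.3, 11.0, 11.4]
d_mean = sum(d_res) / len(d_res)                   # mean residual diameter

def bar_area(d_mm):
    """Cross-sectional area of a round bar, in mm^2."""
    return math.pi / 4.0 * d_mm ** 2

# Section loss relative to an ASSUMED original 12 mm bar (hypothetical value;
# the as-built diameter is unknown since the design details are unavailable):
loss_pct = 100.0 * (1.0 - bar_area(d_mean) / bar_area(12.0))
```

The mean residual diameter, together with the realistic f_ck, is what feeds the SP 16 capacity computation for the existing column.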

Conclusions
Existing RC structures can be inherently deficient due to lack of sufficient detailing, corrosion of reinforcement, missing members, overloading, damage, etc. In all cases, determining the actual in situ concrete characteristic strength f_ck (or the specified compressive strength f_c) is one of the most important steps in the assessment and rehabilitation of existing structures. Due to the lack of an available framework to conduct this process rigorously, and a general lack of awareness among practising engineers about systematic assessment techniques, this essential parameter is often not determined in a scientific way.
This paper provides a strategic framework for inspection and assessment engineers to evaluate the realistic in situ strength of concrete, particularly in projects dealing with deteriorated structures. The framework attempts to eliminate the subjectivity inherent in most current assessment practices (elaborated in the opening of this paper) and gives practising engineers a systematic methodology for investigating the structure and deciding on the necessary rehabilitation. It is based on establishing and quantifying the spatial variability and identifying strength-homogeneous zones (parts of the structure with similar concrete) in the domain of investigation through extensive NDT, which is impossible through visual inspection alone. The bias in the strength estimate due to the limited number of cores and the random selection of core locations is overcome through guided coring: the proposed coring strategy uses the variability captured by the NDT to determine core locations, reducing the chance of sampling cores from only a limited strength range. Subsequently, the disconnect between any NDT and the site concrete is addressed through Bayesian calibration of the NDT data. Through calibration it becomes technically possible to establish the crushing strength at every NDT location without actually extracting a core there, which overcomes the barrier of the 30 concrete samples required for a satisfactory statistical assessment.
This framework is recommended for a realistic estimation of concrete strength, and hence of the capacity of the structure, prior to rehabilitation. In this paper the proposed framework is implemented for a deteriorated hospital complex in India which had to be rehabilitated quickly due to a large inflow of patients. For such critical health infrastructure an unrealistic assessment of concrete strength could have catastrophic consequences, which made a strong case for the application of the proposed framework.
Future evolution of this framework will focus on maximising the information gained from the inspection and testing stages when these are performed under economic constraints.