Progetto MICHe

Introduction

Seismic hazard analysis (SHA) aims to define hazard curves and ground-motion characteristics, such as PGA, acceleration, velocity and displacement spectra, or reference seismograms, for a given area or site. This information is indispensable for land-use planners and earthquake engineers to define building thresholds and standards.

Starting from the decision-making process, which concerns where, and at what level of detail, it is relevant to carry out investigations aimed at estimating expected shaking levels, the analysis method is influenced by the intensity of the human activities carried out in an area, such as the presence of strategic services and the potential economic and human losses.

Thus hazard, which is commonly assumed to be one of the independent variables of risk (together with exposure and vulnerability), turns out to be influenced by exposure, i.e. by intended uses, in that the reference building assumed as the target drives the choice of the analysis approach and the assumptions on the characteristic earthquake considered. This effect intensifies at finer scales (local, urban, site): for example, areas prone to earthquake amplification are selected for specific investigations on both geological and economic grounds (i.e. possible losses); moreover, at site scale, some sets of in situ or laboratory analyses are not even advised for conventional buildings resting on hard soils.

Seismic hazard estimation is a socially sensitive subject in several ways. First, it has much to do with economic resources, because the construction of buildings with relevant functions (power plants, dams, railways, schools, hospitals, etc.) or, as in the case at hand, the protection policies for extremely sensitive assets such as cultural-heritage buildings, depend on the professional judgment of a group of experts. Second, seismic hazard estimation involves a variety of expertise that partially overlaps and sometimes barely interconnects: geophysicists, geologists, seismologists, structural engineers and architects, but also politicians and practitioners. Third, the results of the estimations can lead to heavy public-order consequences and human losses.

Earthquake phenomenology is relatively well understood and is basically modelled by the Gutenberg-Richter (G-R) law (Gutenberg and Richter, 1956), which relates the energy released at the rupture (magnitude) to the frequency of the event, and by the Omori law (Omori, 1895), which gives the decay in time of the post-event activity. Successive investigations refined the G-R law to introduce a high-magnitude cut-off (Kanamori, 1977) and to take into account different magnitude measures (Kagan, 2002). The Omori law, in turn, was refined through the concepts of time and space clustering of events (Kagan and Jackson, 1991; Utsu et al., 1995) and of scale invariance (Bak et al., 2002; Corral, 2003, 2004). However, much of the effort in the history of seismological research has been devoted to understanding the kinematics of the phenomenon rather than the physics of its occurrence (Ben-Menahem, 1995).
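The two laws above can be sketched numerically. In the following snippet the G-R parameters (a, b) and the modified-Omori parameters (K, c, p) are purely illustrative assumptions, not estimates for any real region:

```python
# Gutenberg-Richter law: log10 N(M) = a - b*M, where N(M) is the annual
# number of events with magnitude >= M. Parameter values are illustrative.
def gutenberg_richter_rate(magnitude, a=4.0, b=1.0):
    """Annual rate of events with magnitude >= `magnitude`."""
    return 10.0 ** (a - b * magnitude)

# Modified Omori law: n(t) = K / (c + t)**p gives the aftershock rate
# t days after the mainshock (K, c, p again are illustrative values).
def omori_rate(t_days, K=100.0, c=0.1, p=1.1):
    """Aftershock rate (events/day) t_days after the mainshock."""
    return K / (c + t_days) ** p

# With b = 1, a unit increase in magnitude divides the rate by 10:
rate_m5 = gutenberg_richter_rate(5.0)   # 10**(4-5) = 0.1 events/year
rate_m6 = gutenberg_richter_rate(6.0)   # 10**(4-6) = 0.01 events/year
```

The snippet shows the qualitative behaviour only: large events are exponentially rarer than small ones, and aftershock activity decays roughly as a power law in time.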

Seminal seismic-risk mitigation policies carried out in history on territories hit by earthquakes have an inherently deterministic, or, in other words, data-gathering nature. Indeed, historical records, consisting of documents attesting the occurrence and the visually recorded intensity of damage, were employed to assign a primal seismic propensity to an area, and have contributed to the compilation of catalogues of historical events, thanks to which ascertained seismic activity can be associated with specific cities or areas. Such qualitative/quantitative information represents the basic tool from which any modern hazard analysis starts. However, historical catalogues cannot be considered wholly reliable and are far from complete: for example, in historically uninhabited areas settled only recently, or in areas where fault ruptures occur at significantly different depths, resulting in likewise different superficial effects in terms of intensity and extent of the affected area.

Similarly, the second indispensable ingredient of SHA is the geological knowledge of the territory, intended as the position and nature of active faults, i.e. those recorded as having ruptured over time. Again, this group of faults cannot be assumed to be the only ones actually active; in other words, there could be faults that have not yet ruptured within the observation window, since human time is insignificant compared with geological eras.

The limitations reported above regarding the availability and reliability of data are inherently inevitable, must be accepted by modern earthquake engineering, and are often referred to as epistemic uncertainties, i.e. uncertainties connected to the lack of knowledge of the phenomenon itself. Independently of the hazard estimation method chosen, the two classes of uncertainty, epistemic and aleatory, should be clearly explained by engineers and seismologists to stakeholders, politicians and citizens in order to promote a true culture of seismic preparedness.

The two basic approaches for developing design ground motions commonly used in practice for the construction of building design codes or regional seismic hazard maps are the deterministic and the probabilistic one. Both have been widely employed in the last 30 years, although the most used at national scale is the probabilistic method, also known as Probabilistic Seismic Hazard Analysis (PSHA). Although adopted by the most important national design codes (USA, Eurocodes, Japan, Italy and many others), PSHA is now under a "due diligence" process, intensified very recently by several cases of hazard underestimation around the world and after years of extensive debate (Mulargia, 2017).

The difference between the probabilistic and the deterministic approach is that, in the probabilistic approach, the severity level of the shaking is determined by assuming a specified return period for the action and accepting that the specified level has some, small enough, probability of being exceeded. Indeed, since the worst earthquake scenario would have a large impact on the cost of the design and would be so rare, its use could not be justified for all buildings. The back-off process from the worst-case ground motion is a sort of optimization between the cost-efficiency of the building process and the rarity of the event in relation to the life expectancy of the building itself. To this end, all the earthquake scenarios are ranked in order of severity and the related annual rates are summed up while stepping down the list. Plotting the sum of rates against the ground-motion parameter forms a hazard curve. On that curve, the hazard level for events that are not too rare is fixed depending on the acceptable consequences of failure and on societal risks.
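The ranking-and-summing step described above can be sketched as follows. The scenario list (PGA levels and annual rates) is entirely hypothetical and only illustrates the mechanics of building a hazard curve:

```python
# Hypothetical scenario list: (peak ground acceleration in g, annual rate).
scenarios = [
    (0.05, 0.20), (0.10, 0.10), (0.20, 0.03),
    (0.30, 0.01), (0.50, 0.002), (0.80, 0.0003),
]

def hazard_curve(scenarios):
    """Rank scenarios by severity and, stepping down the list, sum the
    annual rates of all scenarios at least as severe; each PGA level is
    thus paired with its annual rate of exceedance."""
    ranked = sorted(scenarios, key=lambda s: s[0], reverse=True)
    curve, cumulative = [], 0.0
    for pga, rate in ranked:
        cumulative += rate
        curve.append((pga, cumulative))
    return sorted(curve)  # ascending PGA, ready for plotting

curve = hazard_curve(scenarios)
# The rate of exceeding 0.30 g sums the rates of the 0.30, 0.50 and
# 0.80 g scenarios: 0.01 + 0.002 + 0.0003 = 0.0123 per year, i.e. a
# return period of 1 / 0.0123, roughly 81 years.
```

Fixing a design hazard level then amounts to reading the curve at the annual exceedance rate (or return period) deemed acceptable for the building class at hand.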

 
last update: 23-July-2020
Unifi Dipartimento di Architettura
