
CrustalTrudger t1_j7sldjy wrote

> Can the static tension of tectonic plates be quantified?

So, the way we as geologists would discuss this would be in terms of measuring the magnitude and direction of stress(es) within the crust. There are a variety of ways we can directly measure stress, e.g., borehole breakouts, overcoring, etc., which we can then use to produce maps of stress like the World Stress Map. Ultimately though, while maps of stress are useful for some aspects of assessing earthquake hazard, we cannot directly apply these to "predicting" earthquake hazards as this would require knowing much more about the stress history (as opposed to short term measurements), how stress changes with depth, the amount of accumulated strain on individual faults, the strength of individual faults, along with a whole host of other properties. Stress maps and estimates are one part of what we can do to assess hazards though.
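To make "magnitude and direction of stress" a bit more concrete: the sketch below (Python, with invented numbers for a hypothetical stress state, not taken from any real stress map) resolves a crustal stress tensor onto a fault plane via Cauchy's relation to get the normal and shear stresses that matter for assessing a fault:

```python
import math
import numpy as np

def tractions_on_plane(stress, normal):
    """Resolve a 3x3 stress tensor onto a plane with the given normal
    vector (Cauchy's relation t = S.n); returns (normal, shear) stress."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    t = stress @ n                                # traction vector on the plane
    sigma_n = float(t @ n)                        # component normal to the plane
    tau = float(np.linalg.norm(t - sigma_n * n))  # component within the plane
    return sigma_n, tau

# Hypothetical stress state (values invented for illustration), with axes
# x = north, y = east, z = down: 100 MPa N-S, 40 MPa E-W, 60 MPa vertical.
S = np.diag([100e6, 40e6, 60e6])
# Unit normal to a plane striking N-S and dipping 60 degrees to the east:
dip = math.radians(60)
n = [0.0, -math.sin(dip), math.cos(dip)]
sigma_n, tau = tractions_on_plane(S, n)  # ~45 MPa normal, ~8.7 MPa shear
```

Even with a perfect map of these quantities at the surface, the caveats above (stress history, depth dependence, fault strength) still stand between this arithmetic and anything predictive.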

> how are predictions about future quakes made?

Here we want to be explicit about what we can and can't do and, moreover, what is implied by specific terms when used by professionals. Geologists, seismologists, and others who work on natural hazards often draw an important distinction between forecasts and predictions. This may seem pedantic, but these two terms imply very different things when used by people like myself who work on natural hazards. Forecasts are hopefully partially intuitive from weather forecasting, and we can use this to explore the implications of these two terms in this context. A weather forecast would be something like, "There is an 80% chance of rain today in this region," whereas a weather prediction would be, "There will be exactly 1 cm of rain, falling at a rate of 1 cm/hr, starting at exactly 4 pm at this precise location." I.e., for something to be a prediction implies certainty in time, location, and magnitude. Generally, we can forecast the weather, but we cannot predict it, and the same is true for earthquakes. The reason we cannot predict earthquakes is much the same reason we cannot predict weather, i.e., incomplete data characterizing a non-linear dynamic (i.e., chaotic) system. The utility of the two is also the same, i.e., even though we can't predict the weather in a perfect sense, the forecast helps us plan (i.e., if you saw the forecast from above, you'd probably bring a rain coat or umbrella with you, etc.). If you want to read even more about why we can't predict earthquakes in the true sense of the word, this FAQ goes into more detail.

For earthquakes, where do the forecasts come from? Mixtures of basic mapping of fault locations and geometries, theoretical understanding of earthquake mechanics from both observations and modeling, a variety of geodetic measurements and measurements of stress (like from the first part), and records of earthquake histories from paleoseismology, historical seismology, and/or archaeoseismology. From all of these, we build assessments of how often particular faults have earthquakes, what the variability in style/size of those earthquakes is, the time since the last event, and other various details we can glean from the geologic record. In the end, we end up with things largely similar to our weather forecast example, i.e., a probabilistic seismic hazard assessment, like the various ones for the US. These focus on different regions and consider different lengths of time (going back to the weather forecast analogy, largely equivalent to the difference between a daily forecast and the 10-day forecast, etc.). If you look at many of these, you'll see they are presented in a somewhat similar way to weather forecasts, i.e., the probability that a particular area will experience significant shaking in the relevant time frame covered by the map. Just like the weather forecast, while not a prediction, it provides a tool for us to assess risk and make preparations. I.e., much in the same way a forecast of sunny skies vs. a chance of rain might determine your choice of clothes for the day, living in an area with a 20% chance of experiencing significant shaking in the next 10 years has very different implications than living in an area with a 1% chance, and you (and governments, etc.) would/could/should respond accordingly.
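To make the "probability of significant shaking in a time window" framing concrete, here's a toy sketch (Python; the 200-year recurrence interval is invented) of the time-independent Poisson model that underlies the simplest probabilistic seismic hazard calculations:

```python
import math

def prob_at_least_one(recurrence_years, window_years):
    """P(>= 1 event in a window) under a time-independent Poisson model
    with the given mean recurrence interval between events."""
    rate = 1.0 / recurrence_years          # events per year, on average
    return 1.0 - math.exp(-rate * window_years)

# A fault with a ~200-year mean recurrence, viewed over different windows:
p_30 = prob_at_least_one(200, 30)    # ~0.14 over 30 years
p_1 = prob_at_least_one(200, 1)      # ~0.005 over 1 year
```

Real hazard assessments combine many faults, magnitude-frequency distributions, and ground-motion models, and often use time-dependent renewal models instead, but the probabilistic framing is the same.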

67

labadimp t1_j7t2g8r wrote

Sounds like you know your stuff. Awesome post. I am curious, does this mean that you could give a risk assessment to certain areas? If so, where do you think the next big earthquake will be? I know that generally contradicts exactly what you just said, but pretend you have to choose somewhere. Where would it be?

2

CrustalTrudger t1_j7u01sw wrote

Yeah, so, this is completely antithetical to everything I just laid out. I.e., you're effectively asking for a prediction after I just spent a significant amount of time trying to explain why these are not possible. PSHA maps for a given region are going to be the best bet for, effectively, background risk. These are of course updated as new events occur and we consider whether a large event has increased or decreased risk in a certain place (e.g., through loading or unloading of related faults via Coulomb stress transfer, etc.) and as we learn more about an area (i.e., expanded paleoseismology records, faults discovered through mapping, etc.). Similarly, there will be specific short-duration forecasts related to individual large earthquakes, i.e., aftershock forecasts. Beyond that, even within the area of the world I specifically focus on (and for which I understand the local geology and earthquake hazards reasonably well), there is no meaningful way for me, or anyone else, to make statements like what you're asking for. Anyone who does is either irresponsible or trying to sell you something.
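For the Coulomb stress transfer mentioned above, the standard bookkeeping is the change in Coulomb failure stress, ΔCFS = Δτ + μ′Δσn (tension-positive convention). A minimal sketch, with invented numbers:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault (Pa).
    d_shear: shear stress change in the fault's slip direction.
    d_normal: normal stress change, positive = unclamping (tension-positive).
    A positive result brings the fault closer to failure."""
    return d_shear + mu_eff * d_normal

# A nearby mainshock adds 0.2 MPa of shear stress and unclamps the
# receiver fault by 0.1 MPa (values invented for illustration):
dcfs = coulomb_stress_change(0.2e6, 0.1e6)   # +0.24 MPa: loading
```

Changes on the order of 0.1 MPa are commonly treated as meaningful in triggering studies, though translating a ΔCFS value into a revised probability still requires all the forecast machinery described earlier.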

As a relevant aside, for anyone musing on the potential benefits of true earthquake prediction in the sense outlined in my earlier answer (and sidestepping all of the reasons why we don't generally think it's possible), I would highly recommend this opinion piece by Dave Petley (a geologist who works on quantifying natural hazards). The general thesis is that, unless predictions (as defined above) are 100% accurate (which they never could be, even in the rosiest view of our future capabilities), they are unlikely to improve outcomes any more than forecasts (as defined above) and would likely have significant negative consequences, potentially making "predictions" worse than "forecasts"; i.e., the risks associated with either false negatives or false positives are very large, in both an economic and a human-life sense.

5

turtley_different t1_j7uqpav wrote

To build on this, as people still commonly ask "but surely we can predict earthquakes": even if we had perfect precise knowledge of the stress field in the crust (we don't, it's a rough approximation even with measurements), the literal process of an earthquake is one of fracture and failure.

The difference between fault slip causing a tiny earthquake and a major earthquake is whether the crack/slip propagates or not. And that is entirely dependent on the exact micro- and macrostructure of the rock around the initial slippage. Even atomic-level imperfections in crystal structure could be the difference between propagation and not.

(In fact, even the location of the initial slip is determined by whichever point is relatively weakest under the current stress field.)

Therefore, to even begin attempting prediction, we need atomic-level understanding of the crust and stress field dozens of km deep. Completely impossible.

1

UnamedStreamNumber9 t1_j7ut0dj wrote

Question for you about factors that contribute to earthquake forecasting. I notice the recent Turkey/Syria quakes occurred on the day of the full moon. Since tidal stresses peak at new and full moons, this seems like an interesting coincidence. Is there any correlation between quake timing and moon phase?

I’ve previously also seen a study indicating that more large earthquakes occur during a certain phase of a roughly 30-year cycle in the variation of Earth’s rotation rate (day length). The prediction was that more earthquakes would occur in the 5 years following the peak of the variation cycle. The peak was in 2017. Has there been any validation of an increase in large-magnitude quakes during the past 5 years?

1

CrustalTrudger t1_j7uza0v wrote

> I notice the recent Turkey/Syria quakes occurred on the day of the full moon. Since tidal stresses peak at new and full moons, this seems like an interesting coincidence. Is there any correlation between quake timing and moon phase?

While the Turkey/Syria events are slightly under the magnitude cutoff for the particular analysis in this paper, Hough, 2018 succinctly sums up the extent to which lunar phase has anything to do with earthquakes. As discussed in this paper (with citations to relevant papers), there are a variety of suggestions that there may be real correlations between lunar phase and some details of earthquake statistics in certain magnitude ranges or settings. Importantly though, and especially in the context of forecasting, these tend to be global correlations. E.g., for certain earthquakes and certain systems, there might be a slightly higher probability of earthquakes occurring in relation to tidal stresses, but this tells you nothing about specific risk on any specific fault or location, so it has pretty minimal utility for actual, meaningful prediction or even as a contribution to forecasting.
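For anyone curious how such tidal/lunar correlations are actually tested, studies in this area commonly use something like the Schuster test: assign each earthquake a tidal phase and ask whether the phases are non-uniformly distributed. A toy sketch (Python, with synthetic phases, not real catalog data):

```python
import math
import random

def schuster_p(phases):
    """Schuster test p-value for the null hypothesis that event phases
    (radians, e.g. the tidal phase at each earthquake's origin time) are
    uniformly distributed. Small p = evidence of a phase preference."""
    n = len(phases)
    c = sum(math.cos(t) for t in phases)
    s = sum(math.sin(t) for t in phases)
    return math.exp(-(c * c + s * s) / n)

rng = random.Random(0)
no_signal = [rng.uniform(0, 2 * math.pi) for _ in range(500)]
tidal_pref = [rng.gauss(0.0, 0.3) for _ in range(500)]  # phases bunched near 0
# schuster_p(no_signal) is typically O(0.1-1); schuster_p(tidal_pref) is ~0
```

Even a statistically significant result of this kind only says something about populations of earthquakes, which is exactly why it doesn't translate into fault-specific forecasts.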

> I’ve previously also seen a study indicating that more large earthquakes occur during a certain phase of a roughly 30-year cycle in the variation of Earth’s rotation rate (day length). The prediction was that more earthquakes would occur in the 5 years following the peak of the variation cycle. The peak was in 2017. Has there been any validation of an increase in large-magnitude quakes during the past 5 years?

Without any real detail to go on there, I'm going to guess you're thinking of this paper by Bendick & Bilham, 2017, which was published in 2017, not suggesting a peak in 2017? There has been a follow-up in the sense of later papers like Bendick & Mencin, 2020 finding additional support for "synchronization" in global earthquake catalogs. The crucial bit (and this is also discussed more directly in Hough, 2018) is that papers like this are generally fundamentally misinterpreted by the media and lay audiences. Both the Bendick & Bilham and later Bendick & Mencin papers are pretty explicit about how these observations have very limited utility for earthquake prediction, e.g., from Bendick & Bilham, "Global seismic synchronization has no utility for the precise prediction (in a strict sense) of specific damaging earthquakes," or Bendick & Mencin, "The most notable shortcoming of this outcome is that the empirical synchronization approach provides no useful constraints on the location of events in a developing cluster; they occur globally."

So in the end for both of these types of potential correlations (and any real underlying causation), the extent to which these provide anything actionable is unclear. I.e., does saying that the risk of an earthquake for all places, globally, already at a moderate to high risk for earthquakes are slightly higher on full moons help anything? Is everyone in a seismically active area across the entire globe going to do something different around every full moon as a result based on something like this? Studies like these are useful in the sense of working out all of the myriad controls on aspects of the seismic cycle, but their real world applications in the sense of forecasting are pretty limited.

3

lapeni t1_j7vjbpy wrote

This was a very masturbatory response. It's wildly over-complicated. Forecast and predict are synonyms; they both imply estimation. You can check a dictionary if you don't believe me. There is a much simpler, more straightforward answer to the question here; check my direct reply to the post.

0

CrustalTrudger t1_j7vnxi1 wrote

To the main point, the distinction between forecast and prediction as drawn in my original comment is common within natural hazards risk assessment, e.g., this chapter or this discussion for laypeople with specific application for how we use these terms in the context of earthquakes.

Speaking briefly as a moderator of this subreddit, this comment is rude and unhelpful (and incorrect in context). Please consider our guidelines regarding civility before commenting in the future.

3

lapeni t1_j7wjej2 wrote

I don't think a person's paper overrules a dictionary. I can't comment on the chapter you linked, as it's behind a paywall.

That aside, we all understand what the OP is asking, hence my opinion that a lengthy paragraph explaining how a very niche group of people differentiate between two words that the majority of people (including OP) and the dictionary define as synonyms is masturbatory.

I mean no offense. My comment is not intended to upset you. It is intended as critique. I apologize if it came across in an attacking manner.

−1

CrustalTrudger t1_j7wnaex wrote

It's not "masturbatory" to explain the terminology used by the domain scientists who are relevant for a question (of which I am one, i.e., a professional geologist with a Ph.D. who studies natural hazards, and specifically earthquakes, as part of their research). If you choose not to believe me in terms of the pervasiveness of these terms and their usage in this context, how about the USGS?

More broadly, there are myriad examples where the specific use of terminology within a branch of science is different than common usage. In this case, the distinction drawn between these two words in the context of the scientific community of interest is useful in terms of describing what we can and cannot do (and very specifically why the community of scientists who study these make the distinction that you are complaining about). Ultimately, the point of this subreddit is for people with specific expertise to communicate that knowledge to interested parties. If you're not interested in learning, you're welcome to not read or comment on future posts in this subreddit.

5

LillBur t1_j7tgnsm wrote

Has no one fed all historical earthquake data into some pattern-recognition machine?

1

CrustalTrudger t1_j7u44hi wrote

In the simplest sense, you're guaranteed to get a pattern, one that we already know, i.e., seismic hazard is the highest around plate margins. Beyond that, sure, there's been a lot of interest in considering whether various machine learning or AI approaches might have value in forecasting. For example, there's been interest in using such approaches to perform "nowcasting", e.g., Rundle et al., 2022, which is basically trying to leverage ML techniques to figure out where in the seismic cycle we might be for particular areas (and thus improve the temporal resolution of our forecasts, i.e., trying to narrow down how far into the future we might expect a large earthquake on a given system).

Ultimately though, for anyone who's even dabbled with ML approaches (and specifically with supervised-learning-type approaches, which are largely what's relevant for an attempt to forecast something), you'll recognize that the outcomes of these are typically only as good as the training data you can provide, and this is where we hit a pretty big stumbling block. We are considering processes that, in many cases, have temporal scales of 100s to 1000s of years at minimum, but may also have significant variations occurring over timescales of 100,000s to 1,000,000s of years. In terms of relatively robust and complete datasets from global seismology records, we have maybe 50 years of data. The paleoseismology and archaeoseismology records are important for forecasting, but also very spotty, so we are missing huge amounts of detail, such that trying to include them in a training dataset is pretty problematic. Beyond that, there are significant problems generally with the expectation that a method (which is agnostic to the mechanics of a system) will be able to fully extrapolate behaviors based on a super limited training dataset.

At the end of the day, sure, you could pump global seismicity into a variety of ML or AI techniques (and people have), but it's problematic to have expectations of reasonable performance of these approaches when you're only able to train such methods with fractions of a percent of the data necessary to adequately characterize the system beyond very specific use cases (like those highlighted above).
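The training-data problem is easy to demonstrate. The sketch below (Python, a purely synthetic Gutenberg-Richter catalog, not real seismicity) shows how a ~50-year observation window systematically misses the rare large events that dominate hazard:

```python
import math
import random

def synthetic_catalog(years, rate_per_year=10.0, b=1.0, m_min=4.0, seed=1):
    """Synthetic catalog with Gutenberg-Richter magnitudes: exponentially
    distributed above m_min with slope b. Purely illustrative."""
    rng = random.Random(seed)
    beta = b * math.log(10)
    return [m_min + rng.expovariate(beta)
            for _ in range(int(rate_per_year * years))]

long_record = synthetic_catalog(5000)        # the "true" long-term behavior
observed = long_record[: int(10.0 * 50)]     # the ~50 years we have data for
# max(long_record) is typically well above max(observed): the biggest,
# rarest events are simply absent from the short training window.
```

Any model trained only on the short window has never seen the events that matter most, regardless of how sophisticated the ML technique is.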

6

jaynkumz t1_j7ws4m2 wrote

Not necessarily how it works. If you had active 3D seismic with high vertical and temporal resolution (4D), high-resolution passive seismic from USGS surface arrays, and monitoring wells with high-resolution VSP arrays surrounding the site in a shallow, seismically active area, you might be able to refine a process specific to that site that might work in other places, but it's also going to require the same amount of input data as at the control site. And the controlling processes that initiate an event can range anywhere from the mantle down to induced reactivation and slip tendency, which could involve anything from lubrication to critical stress thresholds.

Then it becomes the challenge of depth and resolution of what’s actually useful since many of those technologies aren’t penetrating to the depths necessary to really get a good set of data.

This process is being worked on by others, and one day it'll probably be generalized enough to give better predictive models. But seismic monitoring beyond what the USGS does is extremely expensive and typically carried out by private companies during oil & gas exploration, so that data isn't going public or going to be shared any time soon. It's also not typically collected in the areas that would be most useful, due both to where exploration happens and to the limits of current tech.

1

darkrtsideofwrong t1_j7yt1ms wrote

One method to quantify stresses in tectonic plates is seismological analysis, which involves studying the seismic waves generated by earthquakes to infer the mechanical properties of the Earth's interior. The analysis can provide information about the distribution of stresses in the plates and the depths of plate boundaries.

1