By Anand Hari and Daria Petrovic, K2 International
Modellers are challenged to navigate an increasing level of uncertainty regarding the frequency, severity and associated characteristics of events.
It is no surprise global warming has become a focus for the industry and catastrophe modellers in recent years. Unprecedented events are occurring more frequently across the world, presenting new challenges, but also opportunities for re/insurers able to narrow the insurance gap.
Just look at this summer’s wildfires that have ravaged California and large swathes of central and southern Europe. Only a couple of years ago, few would have confidently forecast such extreme events occurring with such frequency. According to the Insurance Information Institute (III), occurrences of wildfires across the US have risen roughly 14% over the past year.
It is not just secondary perils such as wildfires and floods causing all the damage. For the first time in history, the US National Weather Service issued winter storm warnings for all 254 counties in Texas this year.
Events that previously went largely unnoticed are now being more widely reported because of growing populations in the areas they hit.
As these events become more commonplace and extreme, people are sitting up and taking notice of the effects of climate change. The evidence is stark, and the scientific consensus is clear: the planet is warming. The insurance industry has a key part to play as the situation develops – communities with access to effective re/insurance recover much more quickly after a catastrophe.
Embracing the uncertainty
As modellers, we must navigate this growing uncertainty around the frequency, severity and associated characteristics of events. We constantly ask the question: “What if our assumptions and models are wrong?” We then work with teams across the organisation to build an enhanced view of risk, and systems and processes that give our stakeholders confidence.
Issues can also arise from a lack of standardised data. A big part of our role is to calculate losses after an event, but in locations such as Japan, for example, we do not have access to basic location data. In these scenarios we draw on experience and expertise to find alternative ways of compensating for the missing data, allowing us to continue pricing effectively and competitively.
Other secondary perils, such as flooding, bring different challenges. Inland flooding is becoming increasingly difficult to map with confidence. While climate change is driving water levels higher, populations are also growing, and many people are either drawn to living by the sea or cannot avoid living in flood-prone parts of the world such as Bangladesh. Exposure levels are shifting dramatically as a result.
When we categorise something as a one-in-500-year event, can we do so with the same confidence as in the past? Will we have to start reassessing flood bandings? And if so, how often will we have to revisit them?
Flooding is by far the hardest peril to model. An increase of just a couple of inches in water levels can mean the difference between significant losses and none at all.
Global warming and rising sea levels have only exacerbated the problem, and we now have to reassess our flood bandings more regularly to ensure they reflect ever-changing environmental conditions.
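The sensitivity to a couple of inches comes from the shape of depth-damage curves, which step up sharply once water crosses the floor threshold. The numbers below are invented purely for illustration, not drawn from any real vulnerability curve:

```python
# Hypothetical depth-damage curve (illustrative numbers only): the share of a
# building's value lost at a given flood depth. The jump just above zero is
# why a couple of inches can separate no loss from a large one.

def damage_ratio(depth_inches: float) -> float:
    if depth_inches <= 0:    # water stays below floor level: no loss
        return 0.0
    if depth_inches < 2:     # water just over the threshold
        return 0.15
    if depth_inches < 12:    # living space flooded
        return 0.40
    return 0.65              # deep inundation

building_value = 300_000
for depth in (-1, 1, 3):
    print(depth, building_value * damage_ratio(depth))  # 0, 45000, 120000
```

A small shift in modelled water level moves many properties across that first step at once, which is what makes portfolio-level flood losses so volatile.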
Despite all this, advances in technology are helping us wrestle back some control. Cloud computing, for example, has transformed the world of risk modelling. It has made models cheaper and much more accessible, and done away with having to maintain costly local infrastructure. Vendors can easily push models out via the cloud and we are able to pick and choose between different models for different peril regions.
Tied to this, the internet of things and smart technology are enabling parametric underwriting, allowing individual perils to be priced more accurately. Sources of real-time data now extend to cargo container sensors, flood depth monitors, vehicle telematics and even smartphone health and wellbeing apps – all providing intelligence that can transform the way we price and monitor risk.
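The mechanics of a parametric cover can be sketched in a few lines. This is a minimal illustration with invented thresholds and payouts: the payout depends only on a measured index, such as flood depth from a sensor, with no post-event loss adjustment.

```python
# Minimal parametric trigger sketch (hypothetical thresholds and payouts):
# the contract pays a fixed amount once a measured index crosses a tier.

PAYOUT_TIERS = [       # (minimum measured flood depth in cm, payout in USD)
    (100, 250_000),
    (50, 100_000),
    (20, 25_000),
]

def parametric_payout(measured_depth_cm: float) -> int:
    """Return the payout for the highest tier the measurement reaches."""
    for threshold, payout in PAYOUT_TIERS:   # tiers ordered deepest first
        if measured_depth_cm >= threshold:
            return payout
    return 0

print(parametric_payout(15))   # 0 - below the lowest trigger
print(parametric_payout(60))   # 100000
```

The appeal is speed and certainty: settlement follows directly from the sensor reading, though the insured carries basis risk if actual damage diverges from the index.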
Machine learning is also allowing insurers to process large amounts of data to make more informed underwriting decisions. We can now process high-resolution satellite images that give us much clearer insights into regions and how they are affected by specific perils.
At K2 International, the team’s approach to climate change centres on regularly assessing and challenging our view of risk. That means working with underwriters on their rating approach and adjusting risk appetites to make sure we are as comfortable as we can be in a world of increasing uncertainty.
Exposure management and cat modelling are still the best tools we have to help us prepare for, and hopefully pre-empt, the unexpected. However, they are rarely cheap and can never guarantee us the right answers. As a cat modelling team, we always have to be mindful of managing expectations.
A big part of our job is also to test assumptions and identify any deficiencies in the tools and data the business relies on, and then suggest where credits/debits on technical rates might be beneficial.
The dual impact of Covid and Hurricane Ida is a good example. The pandemic alone was likely to generate both demand surge and labour shortages, and a significant natural catastrophe such as Ida only makes the situation worse.
The models simply are not calibrated for this scenario, so this is where our advice can bring real value to the business before the wind season and in our post-event loss estimates.
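One simple way such advice can feed into a loss estimate is as multiplicative adjustments on top of the raw model output. The factors below are invented for illustration; in practice they would be judgement calls informed by claims data and market conditions.

```python
# Illustrative post-model adjustment (invented factors): scaling a modelled
# loss for conditions the vendor model is not calibrated for, such as
# pandemic-era demand surge and labour shortages inflating repair costs.

modelled_loss = 50_000_000       # hypothetical raw model output, in USD

adjustments = {
    "demand_surge": 1.20,        # materials and labour price inflation
    "labour_shortage": 1.10,     # longer repairs extend business interruption
}

adjusted_loss = modelled_loss
for name, factor in adjustments.items():
    adjusted_loss *= factor

print(f"{adjusted_loss:,.0f}")   # 66,000,000
```

Keeping the adjustments explicit and named, rather than baked into a single fudge factor, makes it easier to explain to stakeholders why the in-house view diverges from the vendor model.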
Anand Hari is head of exposure management and Daria Petrovic is senior catastrophe modeller at K2 International
This first appeared in Insurance Day.