CAT Loss Modeling and Analytics
March 20, 2015
Prepared for:
Society for Risk Management Consultants
passion. innovation. accountability.
Introduction to Beecher Carlson
Who we are
ABOUT US
Beecher Carlson is a large account broker and risk management consultant that delivers expertise through industry focus and product specialization. By leveraging our deep risk management expertise, we are able to help clients manage their business risks, protect and enhance their capital and fulfill their corporate mission. Beecher Carlson is a subsidiary of Brown & Brown, Inc. and is headquartered in Atlanta, GA. Brown & Brown, based in Daytona Beach, FL, is one of the nation’s leading independent insurance intermediaries and is ranked as the sixth largest insurance brokerage in the United States and the seventh largest brokerage worldwide by Business Insurance magazine.
PREMIUM
VOLUME
In 2013, Beecher Carlson and Brown & Brown collectively placed
more than $12.3 Billion
in premium volume.
$12.3B
REVENUE
In 2014, Brown & Brown revenue reached $1,567,460,000.
$1.6B
DEDICATED
At Beecher Carlson, more than 40% of our corporate expense is dedicated to claims, risk control and analytics.
40%
SPECIALIZED INDUSTRIES
Hospitality
Real Estate
Manufacturing
Healthcare
Energy
Financial Services
Technology
7
RECOGNIZED
Ranked #1 as surveyed by Greenwich Associates, with over 700 risk managers, for Favorability based on the key attributes of: Ethicality, Flexibility, Likelihood to Recommend, Client Satisfaction, Prompt Follow-up, Addressing Policy Issues with Adequate Time, Compensation & Pricing, and Innovativeness of Brokerage.
#1
Specialized Insurance Brokerage
ENERGY
FINANCIAL SERVICES
HEALTHCARE
HOSPITALITY
MANUFACTURING
REAL ESTATE
We specialize in specific industry verticals, and our brokerage teams further focus on specific product lines within those industries.
Specialization has given Beecher Carlson true expertise in our markets, allowing us to offer differentiated, customized and value-added client solutions.
Technology
Business Insurance Rankings
July 21, 2014
www.businessinsurance.com
SPECIAL REPORT
100 LARGEST BROKERS
Ranked by 2013 brokerage revenue
2014 rank | 2013 rank | Company | 2013 U.S. broker revenue | % change
1  | 1  | Aon P.L.C.                              | $5,561,106,600 | 4.6%
2  | 2  | Marsh & McLennan Cos. Inc.              | $5,521,500,000 | 5.2%
3  | 3  | Arthur J. Gallagher & Co.               | $2,111,340,000 | 10.7%
4  | 4  | Willis Group Holdings P.L.C.            | $1,743,840,000 | 7.3%
5  | 6  | BB&T Insurance Holdings, Inc.           | $1,582,443,400 | 6.9%
6  | 7  | Brown & Brown Inc.                      | $1,355,502,535 | 14.0%**
7  | 5  | Wells Fargo Insurance Services USA Inc. | $1,350,022,000 | (14.3%)
8  | 8  | Lockton Cos. LLC                        | $826,448,280   | 12.4%**
9  | 10 | USI Holdings Corp.                      | $782,207,827   | 9.8%
10 | 11 | Hub International Ltd.                  | $768,865,200   | 21.5%**
Beecher Carlson is the large accounts retail broker entity of Brown & Brown, the 6th largest U.S. insurance broker firm.
passion. innovation. accountability.
Agenda
What We Will be Discussing
A Historical Review of CATs
Overview of CAT Models
Terminology – Key Terms
Data and Data Formats
Uses and Users of CAT Models
Components of CAT Models
Data Quality
Secondary Modifiers
A Closer Look at 20 Secondary EQ Modifiers
- Two case studies
What’s Next?
For those who want to know more:
Appendix 1: CAT Modeling Terminology
Appendix 2: Secondary Modifier Tables
Appendix 3: Wind/Storm Surge Secondary Modifier: Roof Anchors
Appendix 4: Frequently Asked Questions
Appendix 5: Sources
This will be the focus of today’s discussion
passion. innovation. accountability.
A Historical Review of CATs
Definition of NatCAT events
Swiss Re, 2011: The term “natural catastrophe” refers to an event caused by natural forces. Such an event generally results in a large number of individual losses involving many insurance policies. The scale of the losses resulting from a catastrophe depends not only on the severity of the natural forces concerned, but also on man-made factors, such as building design or the efficiency of disaster control in the affected region. Natural catastrophes are subdivided into the following categories: flood, cold wave/frost, storm, hail, earthquake, tsunami, drought/forest fire/heat wave, and other natural catastrophes.
While the number of CAT events is steadily increasing…
…the total amount of both overall and insured losses has been declining in recent years
Insured losses are getting larger
Eight of the ten most costly insurance losses world-wide have occurred during the last 15 years (original values)
Recent Headlines

March 14, 2015 – AccuWeather
Deadly Cyclone Pam Leaves Vanuatu, Targets New Zealand

March 12, 2015 – Business Insurance
Giant quake, tsunami risks identified
More than 20 subduction zones could produce giant earthquakes and tsunamis such as those that devastated the Tohoku, Japan, area in 2011, according to a tsunami risk study released by Risk Management Solutions Inc.

March 12, 2015 – Advisen FPN
'Reawakened' faults could trigger big Okla. earthquakes
Long-dormant, 300-million-year-old fault lines across Oklahoma are being "reawakened" by recent small earthquakes that have been previously linked to fracking, scientists reported in a new study out this week.

March 9, 2015 – Los Angeles Times
Risk of 8.0 earthquake in California rises, USGS says
Estimates of the chance of a magnitude 8.0 or greater earthquake hitting California in the next three decades have been raised from about 4.7% to 7%, the U.S. Geological Survey said Tuesday.

February 19, 2015 – Reuters
Australia's northeast braces for double cyclone hit

There is always a risk of Nat CAT events
Hurricane History
From 1949 in the Pacific; from 1851 in the Atlantic
Source: NOAA / NWS
There is always a risk of Nat CAT events
passion. innovation. accountability.
Overview of CAT Models
What is Catastrophe Modeling?
A CAT model is a computerized system that generates a robust set of simulated events and estimates the magnitude, intensity, and location of each event in order to determine the amount of damage and calculate the insured loss resulting from a catastrophic event such as a hurricane or an earthquake.
Modeled Nat CAT perils include:
– Hurricane (incl. storm surge)
– Earthquake (incl. fire following and EQSL)
– Tornado/Hail
– Winter Storm
– Flood
– Brushfire
– Others
Sample: RMS
Purpose and Users of Catastrophe Modeling
Why Are Catastrophe Models Run?
Management of Exposures
– Control writings in regions
– Scenario testing
– Capital Costs
– Probability of Ruin
– Reinsurance Buying
– Rating Agency Needs
– Determining Limits Needed
Ratemaking
– Primary Insurers
– Reinsurers
Users of Catastrophe Models
– Underwriters
– Reinsurers
– (Re-)Insurance Brokers
– Capital Market (pricing of Cat Bonds)
– Regulators (solvency requirements)
– Rating Agencies (S&P, A.M. Best)
– Insurance Buyers
History of NatCAT Vendor Models
Property Insurance
– Mapping the risk on a wall-hung map, 1800–1960
– Development of Geographic Information Systems
Natural Hazard Science
– Understanding the nature and impact of natural hazards (measuring hazard intensity)
– 1800: seismograph, anemometer
– 1970: study about the frequency of NatCAT events
Computer-based models
– Provide estimates of NatCAT losses by overlapping the property at risk with the potential natural hazard sources in the geographical area
– AIR (1987); RMS (1988); EQE (1994)
September 21, 1989 – Hurricane Hugo, $4bn insurance loss (South Carolina)
October 17, 1989 – Earthquake Loma Prieta, $6bn insurance loss (San Francisco)
August 1992 – Hurricane Andrew, $15.5bn loss (Florida), AIR estimated $13bn (9 insurance companies became insolvent)
Need to estimate NatCAT risk more precisely:
1997 – HAZUS – open-source FEMA model to assess EQ risk in the US
2004 – HAZUS-MH – included Wind and Flood
Choices of Models
Main CAT Model Vendors
– Risk Management Solutions (RMS), 1988 at Stanford University
Market share leader with reinsurers; unavoidable
RMS(one) – new platform in limited release, wider use in 2015
New flood model will cover the US in Fall 2015 into 2016
Model updates (addressing building code upgrades)
– AIR Worldwide (AIR), 1987 in Boston
Strong and growing presence
Some technical advantages specific to public entities
Better code choices for tanks, other structures
Touchstone released in January 2013, updated in 2014
Released US flood model
– EQECAT, 1994, EQE International, then ABS Consulting
Recently purchased by CoreLogic
– Broker Models
– Company Proprietary Models – FM Global, Swiss Re, Munich Re, and others
– Open Systems – Oasis
How CAT Models Work: 4 Modules
Hazard – Stochastic events are simulated against the exposures. Each event has an associated probability.
Exposures – Models start with the exposure distribution (geography, construction, occupancy, etc.).
Vulnerability – This is the amount of damage expected to result from an event, based on the exposure characteristics and event intensity.
Financial Perspectives – Finally, varying perspectives of the loss are generated (application of primary insurance conditions and facultative and treaty reinsurance).
How CAT Models Work
CAT models are used to answer these questions:
How much limit is needed?
– 100/250/500 etc. year “PMLs” (return periods)
How much exposure will be retained (deductibles)?
How should the insurance program be layered?
What is the estimated loss cost for a given layer?
– Average Annual Loss (AAL) by layer, including the retained deductible
Which locations are the biggest drivers of the modeled loss estimates?
Would better data reduce the loss estimates?
Should this information be used for premium allocation within a portfolio?
Asking for the 1-in-250 return period is asking for the monetary loss in the range of outcomes where only 1/250 = 0.4% of potential outcomes are worse. In mathematical terms this is the 1 – 0.4% = 99.6% confidence point, and you are stating that you are ‘99.6% confident’ that losses will not be larger than this value.
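The return-period arithmetic above can be sketched in a few lines; this is only an illustration of the 1/T relationship, with the function names being our own:

```python
def exceedance_probability(return_period_years: float) -> float:
    """Annual probability that the return-period loss is exceeded (1/T)."""
    return 1.0 / return_period_years

def confidence_level(return_period_years: float) -> float:
    """Confidence that annual losses stay at or below the return-period loss."""
    return 1.0 - exceedance_probability(return_period_years)

# The 1-in-250-year loss: 0.4% annual exceedance, i.e. the 99.6% confidence point
print(exceedance_probability(250))  # 0.004
print(confidence_level(250))        # 0.996
```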
How CAT Models Work: Input
Insured’s data: including location data, exposure characteristics, and values
Coverage terms: policy deductibles, sublimits, layers
Settings (storm surge, demand surge, EQSL included?)
Proprietary catalogues of theoretical events:
– Storm path/EQ epicenter, severity, probability
– Different sets may be selected, including specific events
Damageability functions:
– For occupancy types
– For building characteristics
How CAT Models Work: Processing
Cursory data quality checks: “Will it run?”; “Does it make sense?”
Geocoding engine
– Separate module to transform street addresses into coordinate data: 6-digit decimal latitude/longitude
– Module bypassed if latitude/longitude provided by user
Calculation engine
– Power users make big investments in technology to stay on the cutting edge for computing speed
– Modeling portfolios used to take days, then hours, now minutes
– Running several scenarios is easy: e.g., including or excluding certain locations to ascertain the change in risk after acquisitions or divestitures
How CAT Models Work: Output
Summary of expected loss estimates
– Ranked list of theoretical events: estimated loss, probability
– “PMLs” at 50/100/250/500 etc. return periods – cumulative probability thresholds
– Average Annual Loss (AAL) – sum of loss estimates x probability
– Ranked list of policies/locations (highest AAL)
– Basic data quality metrics
– Marginal portfolio impact analysis
Output depends on whether a portfolio of locations is modeled or whether a portfolio of policies covering a multitude of locations is modeled.
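The AAL and PML outputs described above can be sketched from a toy event loss table. The rates and loss figures below are made up purely for illustration, and real models work with far larger event sets:

```python
# Toy event loss table: (annual_rate, modeled_loss), illustrative numbers only
events = [
    (0.002, 120_000_000),
    (0.010, 40_000_000),
    (0.040, 10_000_000),
    (0.100, 2_000_000),
]

# Average Annual Loss: sum of each event's loss estimate x its probability
aal = sum(rate * loss for rate, loss in events)

def pml(return_period: float) -> float:
    """Loss at which the cumulative annual exceedance rate reaches 1/return_period."""
    threshold = 1.0 / return_period
    cumulative = 0.0
    for rate, loss in sorted(events, key=lambda e: -e[1]):  # largest loss first
        cumulative += rate
        if cumulative >= threshold:
            return loss
    return 0.0
```

Sorting events by loss and accumulating their rates is the standard way an exceedance-probability curve is read: the 1-in-100-year PML is the loss level whose cumulative exceedance rate first reaches 1%.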
passion. innovation. accountability.
Terminology – Key Terms
Terminology
Storm Surge (SS) – Quickly rising ocean water levels associated with windstorms that can cause widespread flooding. Measured as the difference between the predicted astronomical tide and the actual height of the tide when it arrives. Caused by the lower barometric pressure associated with tropical or extra-tropical cyclones, and by the action of the wind in piling up the surface of the water. The amount of surge depends on a storm's strength, the path it is following, and the contours of the ocean and bay bottoms as well as the land that will be flooded.
Tornado/Hail (TH) – Non-hurricane wind events.
Earthquake Shake (EQ) – A sudden or abrupt movement along a fault or other pre-existing zone of weakness in response to accumulated stresses.
Fire Following Earthquake (FFEQ) – Hazard presented by fires which commonly occur following an earthquake, typically due to the rupture of natural gas lines or other structures carrying combustible materials.
Earthquake Sprinkler Leakage (EQSL) – Direct damage to the building or contents caused by the leakage or discharge of water or other substances from an automatic sprinkler system due to earthquake or volcanic action.
Demand Surge/Loss Amplification (DS) – Post-event inflation.
– Shortages of labor and materials cause prices to rise.
– Supply/demand imbalances delay repairs, resulting in structural deterioration.
– Faced with the magnitude of the disaster and under pressure from politicians, insurers are encouraged to settle claims generously and to expand the terms of coverage beyond those strictly defined in contracts.
Terminology
Exceedance Probability (EP) – Also known as “exceeding probability,” this is the probability of exceeding specified loss thresholds. The EP curve defines the probability of various levels of potential loss for a defined structure or portfolio of assets at risk of loss from natural hazards. By combining probabilities of occurrence with the loss levels of all potential events, the probability of exceeding certain loss levels in a given year (the return period loss) can be calculated.
Expected Annual Loss (Average Annual Loss, AAL) or Pure Premium – Sum of all modeled event losses divided by the number of years modeled. This is the annual premium required to cover the loss exposure over time. The expected annual loss cost rate load is a good index of relative risk between programs and accounts. Loss cost rate loads can be developed by dividing the expected annual loss by the sums insured per hundred.
Terminology
• The Standard Deviation (SD) is a dollar measure of the deviation of potential losses away from the mean Average Annual Loss (AAL) for a layer.
• The Coefficient of Variation (CoV, CV) represents the proportional deviation from the AAL and is calculated as SD/AAL. The proportional nature of the CoV means it can be compared across layers to identify the level of loss uncertainty for different layer structures; a lower CoV indicates lower loss uncertainty.
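The CoV comparison can be sketched with two hypothetical layers; the SD and AAL figures here are invented solely to show why the ratio, not the dollar spread, is what gets compared across layers:

```python
def coefficient_of_variation(sd: float, aal: float) -> float:
    """CoV = SD / AAL; proportional, so comparable across layer structures."""
    return sd / aal

# Two illustrative layers with the same AAL but different spread of outcomes
working_layer = coefficient_of_variation(sd=500_000, aal=1_000_000)   # 0.5 -> lower uncertainty
excess_layer = coefficient_of_variation(sd=3_000_000, aal=1_000_000)  # 3.0 -> higher uncertainty
```

High excess layers typically show larger CoVs than working layers: they are hit rarely, so their modeled losses are dominated by a few extreme events.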
Terminology: Return Periods and Probability
The way to present the findings on the 500-year event line is: “There is a 0.2% probability in any 1 year that the insured will suffer a single loss exceeding the dollar amount shown in the RMS analysis.”
Another way to say it is: “There is a 0.2% probability in any 1 year that the insured can expect at least one event to occur that will cause at least the dollar amount shown in the RMS 500-year analysis of ground-up loss.”
A third way to say it is: “There is a 99.8% chance in any 1 year that the insured won’t have an event that exceeds the dollar amount shown in the RMS 500-year analysis.”
If you want to look at a longer time window than a year – say 50 years, as many lending institutions do – multiply 0.2% by 50 years; this equals 10%. Then you can say: “There is a 10% probability that during the next 50 years the insured will suffer a loss exceeding the dollar amount shown in the RMS analysis for a 500-year event.” This represents a 90% confidence level that a single loss won’t exceed that amount for a 500-year event during the next 50 years.
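The 50-year figure above uses a linear approximation (0.2% x 50 = 10%). If each year is treated as independent, the compound probability is slightly lower; this sketch shows both, and which figure to quote is a presentation choice:

```python
annual_p = 1 / 500                   # 0.2% annual exceedance for the 500-year loss
years = 50

approx = annual_p * years            # linear approximation used on the slide: 10%
exact = 1 - (1 - annual_p) ** years  # compound probability, independent years: ~9.5%
```

For small annual probabilities and moderate windows the two agree closely, which is why the simpler multiplication is commonly quoted to lenders.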
passion. innovation. accountability.
Data and Data Formats
Common Data Formats Overview
Raw detailed data
– Format differs by model
– Format into the model(s) you want to use (RMS, AIR, EQECAT, etc.)
EDM – detailed data in RiskLink format (RMS product)
UNICEDE file – aggregated data in AIR format
UNICEDE/2 file – aggregated data in AIR format
UNICEDE/px (UPX) – detailed data in AIR format
Data collected but not coded = missed opportunity
It all begins with good Raw Data – “No COPE – No hope”
Raw Data – Basic Data
Address – state, county, city, zip code, and street address
Construction
Occupancy
Values by coverage – building, contents, time element
Limits
Deductibles
Peril-specific deductibles and/or sub-limits
Year built
Number of stories
Building sprinklered / non-sprinklered
Not required, but good to have: Secondary Characteristics – they can make a huge difference in the results!
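The basic fields listed above could be captured in a location record like this sketch. The field names are our own illustration, not any vendor's schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Location:
    """One modeled location; illustrative field names, not a vendor schema."""
    # Primary attributes (the "COPE" data the slide refers to)
    street_address: str
    construction: str
    occupancy: str
    building_value: float
    contents_value: float
    time_element_value: float   # business interruption values
    year_built: int
    stories: int
    sprinklered: bool
    # Secondary characteristics: optional, but they can move modeled results
    roof_geometry: Optional[str] = None
    roof_anchorage: Optional[str] = None
    latitude: Optional[float] = None    # supplying lat/long bypasses the geocoder
    longitude: Optional[float] = None

    @property
    def total_insured_value(self) -> float:
        return self.building_value + self.contents_value + self.time_element_value
```

Making the secondary fields optional mirrors the point in the text: the model runs without them, but leaving them blank is a missed opportunity.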
It all begins with good Raw Data
Report values by building, rather than by site or location.
Report all structures.
passion. innovation. accountability.
Uses and Users of CAT Models
Uses and Users of CAT Models
CAT Modeling – Impacts on Program Pricing, Capacity, and Structure
Modeling is all about the data. Models are sophisticated, but depend on the information given to them. For example, buildings in a similar location that are constructed differently may respond to the same event differently (e.g., a brick building may fare better in a windstorm than one made of wood). Models are capable of developing loss levels for a range of building types, ages, sizes, and occupancies.
Modeling helps answer a variety of questions:
Uses and Users of CAT Models
A valuable tool for Insurers
• Risk pricing
Using local software, a quick, repeatable risk assessment can be run over the known locations of a risk being offered. In addition to supporting the calculation of a robust internal technical price, the CAT modeling process can provide a wealth of additional information regarding the potential hazard exposure. This includes: susceptibility to hurricanes or earthquakes; proximity to liquefaction (the process of soil moving from a solid to a liquid mass); storm surge risks such as flooding; or vulnerability assessments (for example, which building standard code was in place when the locations were built and what relative impact this could have). While there is still a lot of uncertainty and complexity in assessing such risks, the benchmark figures produced allow relative comparisons between risks, and over time. All of this is intended to supplement an underwriter’s wider knowledge about a risk and lead to optimal decision making over the long term, including calculating the correct price.
Uses and Users of CAT Models
• Portfolio management
As for individual risks, so for an entire portfolio of risks: CAT modeling is used to rapidly accumulate across a portfolio to communicate the combined profile. For example, acting as a common currency, CAT modeling can put a high-value industrial facility in the US in direct comparison with a warehouse in Belgium. At this level CAT modeling supports business strategy, both identifying areas of concern (such as too great an accumulation of correlating risks) and identifying opportunities (where diversifying risks could be added to the portfolio with marginal impact).
• Capital requirements
The robust, standardized approach to assessing CAT risk that CAT modeling provides can benefit other processes undertaken by insurers. The main usage is in calculating solvency and other regulatory or economic capital requirements, where the output from a CAT modeling process provides a risk profile that can be combined with other forms of business risk to inform capital requirements.
Uses and Users of CAT Models
A valuable tool for Insurance Buyers
Insureds often use CAT models to guide them as to what sublimits they should buy for hurricane (windstorm) and earthquake exposures. Typically, insureds look to buy to the 1-in-250-year return period, which is the generally accepted return period. More conservative return periods of 500 or 1,000 years can also be used.
All models produce data in the form of tables. The RMS table (illustrated on the next slide) helps clients understand what the expected losses may be from various CAT events, thus helping insureds or insurers set acceptable program sublimits.
Uses and Users of CAT Models
Loss Summary: Post-Deductible Loss
In this example, the expected loss from earthquake and hurricane for the 250-year return period is approximately $134.7 million and $8.5 million, respectively.
We typically focus on the aggregate exceedance probability (AEP) versus the occurrence exceedance probability (OEP). The AEP is the probability that the associated loss level will be exceeded by the aggregated losses in any given year, and is used when the insurance program is written on an aggregate basis. The OEP is the probability that the associated loss level will be exceeded by any single event in any given year. It is used when the insurance program is written on an occurrence basis, or when the loss associated with one event is important.
Uses and Users of CAT Models
A valuable tool for Insurance Brokers
Insurance brokers use modeling results to help design the program structure, as modeling can be performed on each individual layer as well as on the overall program. This allows brokers to analyze various options, such as insureds self-insuring layers that may be too costly, or transferring risk to various insurers where they see value and efficiency in doing so.
Additionally, modeling allows brokers to look at average annual loss (AAL) figures, which represent the minimum annual charge (premium) over an infinite time period that would need to be collected to fund the expected loss. This is often referred to as the “technical premium.” Carriers often use a multiple of this figure to determine the actual annual premium charged. Accordingly, comparing a company’s AAL for earthquake and windstorm perils versus the actual premium paid can help clients determine how well priced (or not) their program is overall.
Uses and Users of CAT Models
Modeling should be considered a best practice and “baked into” any buying/renewal strategy.
Know “why you buy” what you buy, and how CAT modeling impacts program pricing, capacity, and structure.
Uses and Users of CAT Models
Models are just one of many tools
It is important for re/insurers to remember that catastrophe models are just one tool that an underwriter has at his or her disposal when analyzing a policy or portfolio. While a model's stochastic event data set is designed to simulate all events that could take place, a storm, flood or earthquake with characteristics that are not contemplated can occur. While these events, described as "Black Swans," are not part of a model, a disciplined method of risk management, used in conjunction with a CAT model, will minimize or eliminate shock losses that could affect a portfolio.
CAT Modeling Practice
Operating the software is only a small part of what it takes to effectively utilize CAT modeling within a business. As with any model that attempts to simplify and represent real-world phenomena, it is vital that there is a strong understanding of the appropriate usage and limitations of the model.
passion. innovation. accountability.
Components of CAT Models
Components of CAT Models
A CAT model is built up of a number of modules that must all operate in coordination to produce the desired risk assessment. It is important to note that two of these (the hazard and vulnerability modules) could be considered individual models in their own right, and the combination of one feeding the other brings with it challenges that need to be understood.
The vendors of CAT models create and fix a set of events. While these are a small subset of the range of potential outcomes, they provide a sensible number of scenarios that represent the underlying hazard, while remaining at a practical level for making quick decisions. These events are run consistently each time a model is operated, so there is no random element involved, and they can be compared between risks and between different companies using the same model. As they only represent a subset, increased uncertainty should be applied to events at the extreme tail.
Components of CAT Models
MODULE 1: Exposure data module
Every CAT model needs an input of risks against which an assessment is to be made. This usually consists of capturing multiple details about a risk, along with recording insurance policy terms. The two essential features of a risk that need to be known are the geo-location and an insured value. After this, depending on the type of risk, there might be options to enter primary characteristics, such as construction type or year built for a property risk, and even secondary characteristics such as roof type. Some models provide approaches for geocoding locations based on addresses, and for estimating characteristics if unknown.
MODULE 2: Hazard module
Each generated event is tagged with the core components relevant to the hazard. For a hurricane this might be landfall location and direction of travel, peak wind speeds and central pressure; for an earthquake this would normally be the epicenter and magnitude. The hazard module must combine this information with the exposure data being provided and any information the model has on salient features such as surface roughness (for windstorm hazard) or soil type (for earthquake hazard) at each location. For each event, an assessment of the hazard impact at each location being assessed must be established.
Components of CAT Models
MODULE 3: Vulnerability module
The resulting output of the hazard module is then passed to the vulnerability module. The hazard at any one location is independent of the risk that is actually there, but what we are interested in is how the risk at that location will respond to the predicted hazard conditions. The vulnerability module contains a number of vulnerability curves, with the appropriate one chosen depending on the primary characteristics of the risk. These are often derived from engineering studies or past experience, and represent how a risk will respond under different conditions. For an earthquake, for example, peak ground acceleration (PGA) is often the most important factor when considering how badly damaged a building will be. As the PGA increases, so does the expected damage. The relationship between the two is described in the vulnerability curve. Secondary characteristics, if provided, will often be used to make minor modifications to the vulnerability curves. The result of these calculations is a damage ratio to be applied to the risk at the given location.
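A vulnerability curve of the kind described above can be sketched as a lookup with interpolation. The PGA breakpoints and damage ratios below are invented for illustration and do not come from any vendor's curve:

```python
import bisect

# Illustrative curve: damage ratio as a function of peak ground acceleration (g)
PGA_POINTS = [0.0, 0.1, 0.3, 0.6, 1.0]
DAMAGE_RATIO = [0.0, 0.02, 0.15, 0.45, 0.80]

def damage_ratio(pga: float) -> float:
    """Linear interpolation along the curve; clamps beyond the last point."""
    if pga >= PGA_POINTS[-1]:
        return DAMAGE_RATIO[-1]
    i = bisect.bisect_right(PGA_POINTS, pga) - 1
    x0, x1 = PGA_POINTS[i], PGA_POINTS[i + 1]
    y0, y1 = DAMAGE_RATIO[i], DAMAGE_RATIO[i + 1]
    return y0 + (y1 - y0) * (pga - x0) / (x1 - x0)
```

Secondary modifiers would then shift this curve slightly up or down for a given risk, which is exactly why providing them changes modeled results.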
Components of CAT Models
MODULE 4: Financial module
Armed with the expected damage ratios for each location being assessed, the CAT model can then begin to accumulate upwards through the financial and insurance terms. Starting with a calculation of the ground-up loss to the individual location, the financial module will typically accumulate through location-level terms, then policy and program-level conditions, at each stage applying limits, deductibles, and special conditions that have been coded into the model. The resulting output is an Event Loss Table that provides an assessment of the financial risk exposure to individual events. This can then be combined into an exceedance probability (EP) curve to give further measures for the entire risk.
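The core step of the financial module, applying a deductible and limit to a ground-up loss, can be sketched in one function. The dollar figures are illustrative only, and real programs layer many such terms:

```python
def insured_loss(ground_up: float, deductible: float, limit: float) -> float:
    """Loss net of the deductible, capped at the policy limit."""
    return min(max(ground_up - deductible, 0.0), limit)

# A $5M ground-up loss against a $1M deductible and $3M limit yields $3M insured
loss = insured_loss(5_000_000, 1_000_000, 3_000_000)
```

In a full model this function would be applied repeatedly: first at the location level, then again as losses roll up through policy and program conditions.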
passion. innovation. accountability.
Data Quality
Data Quality is Key
Ever since the first commercial catastrophe models became available (AIR Worldwide – 1987, Risk Management Solutions – 1988, EQECAT – 1994), there have been questions about their reliability. But one thing is certain: the quality of data that goes into the model plays a pivotal role in the quality of results that are generated.
As CAT models and their results have become an established part of the insurance and reinsurance landscape, the industry has become more reliant on their results. Modeled results contribute to rate making, aggregation potential, developing capital contributions and adding risk to the portfolio, among other things.
With each new release the modeling companies:
– expand their catalog of available regions and perils
– update methodologies based on lessons learned from past events and new science
– improve functionality through the betterment of the design and technology.
Similarly, model users have made improvements in data capture and granularity.
Data Quality is Key
Models can accommodate extensive additional refinements through secondary modifiers beyond the required primary attributes, or data. It is essential that the data be complete and accurate; if it is not, the model will produce inaccurate results, potentially affecting pricing, capacity offered, and limits purchased.
What if data is missing? The models can accommodate missing information to some extent; however, this increases the uncertainty around the modeled results. The more uncertainty, the more compensation, in terms of premium, that insurers will likely need.
Models also “keep score” and look at a number of factors based upon the primary and secondary attributes provided. A “bad” score in any one of the data categories can hurt an insured both in relation to pricing, capacity, and limits purchased, and in an insurer’s confidence that the insured understands its risk.
Let’s review an actual portfolio and how it scored after the first model run…
Data Quality is Key
Exposure Stratification
In the example provided, the secondary modifiers score is poor at 11%, and knowledge of locations’ construction types could be improved from 89%. Conversely, the geocoding and occupancy scores are quite high, demonstrating the insured’s understanding of these categories.
Data Quality is Key
Accurate Primary and Secondary Attributes/Modifiers Are Critical
A broker should help clients accurately obtain their primary attributes/modifiers (including addresses, geocoding, construction, occupancy, number of stories, and year built) as well as their secondary attributes/modifiers (including roof geometry, roof anchorage, maintenance programs, presence of parapets, equipment on roof, external ornamentation, and roof sheathing).
We strongly recommend insureds invest in CAT modeling or work with a broker who provides modeling as part of their compensation. Properly used, CAT models maximize clients’ buying power, allowing them to:
– make informed decisions
– proactively design a pre-emptive marketing strategy
– differentiate their risks for the negotiation of favorable terms
– create transparency around the sharing of assumptions with underwriters and internal decision makers, and
– implement risk-based allocations.
Qualified modelers working alongside engineers ensure such a process.
Data Quality is Key
Challenges
to ensuring data quality
Data collection is both difficult and costly. Insurers write many policies and cover many
locations within their book of business. It takes a great number of man-hours and dollars to inspect new and renewal business.
Insurers‘ methods of storing data can also be problematic. Some legacy systems do not transfer schema standards to the database as well as others. But with most, if not all, insurers using one or more of the models in house, this should become less of an issue over time.
There is a need for detailed and accurate data collection by insurers which captures:
Values – proper insurance-to-value (ITV) is a significant factor in the model's ability to simulate a loss close to what the actual loss would be;
Limits – specifically for commercial/industrial business, the more accurate the business interruption (BI) limits, the closer simulated results are going to be to an actual event;
53Slide54
Data Quality is Key
Challenges to ensuring data quality
Modeled results contribute to rate making, aggregation potential, developing capital contributions and adding risk to the portfolio, among other things.

With each new release, the modeling companies expand their catalog of available regions and perils, update methodologies based on lessons learned from past events and new science, and improve functionality through better design and technology. Similarly, model users have made improvements in data capture and granularity.
Data quality remains an industry-wide issue and will require continued cooperation from all members (insurers, reinsurers, brokers, and modeling companies) in order to continually improve exposure information. Such efforts should ensure the industry remains robust and able to withstand future catastrophic events, while providing essential cover for those exposed to windstorms, earthquakes and other natural perils.
54Slide55
Data Quality is Key
Resolution/geocoding – the ability to model a street address as opposed to a lower-level resolution (e.g. ZIP code) can have a dramatic impact on the modeled loss, specifically in coastal regions affected by wind events;
Primary characteristics – construction and occupancy information; and
Secondary characteristics – differing by exposed peril (e.g. roof type, year built, square footage, etc. for hurricane models and soil type, number of stories, etc. for earthquake models).

Once the data has been collected, care needs to be taken when creating the database so as to ensure the information is interpreted correctly by the model. It is easy to notice information that is missing, but more difficult to identify where something has been entered or coded incorrectly, especially when looking at large datasets.
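The point about miscoded entries being harder to catch than missing ones can be illustrated with simple range checks. The field names and plausibility bounds below are assumptions for illustration, not any model vendor's schema.

```python
# Illustrative sketch: validation checks that flag missing or implausible
# entries in an exposure schedule before modeling. Field names and valid
# ranges are hypothetical.
locations = [
    {"id": 1, "construction": "Masonry", "year_built": 1978, "stories": 4},
    {"id": 2, "construction": "UNKNOWN", "year_built": 1999, "stories": 12},
    {"id": 3, "construction": "Steel",   "year_built": 2099, "stories": 0},
]

def validate(loc):
    issues = []
    if loc["construction"] in ("", "UNKNOWN"):
        issues.append("missing construction type")  # easy to spot
    if not 1800 <= loc["year_built"] <= 2015:
        issues.append("implausible year built")     # miscoded, harder to spot
    if loc["stories"] < 1:
        issues.append("implausible story count")
    return issues

for loc in locations:
    for issue in validate(loc):
        print(f"Location {loc['id']}: {issue}")
```

Location 3 would pass a simple "is the field populated?" check even though two of its values are clearly miscoded, which is exactly the failure mode described above.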
55Slide56
Data Quality is Key
The implications of data quality (insurers)
CAT modeling results are largely ineffective without quality data collection. For insurers, the key risk is that poor data quality could lead to a misunderstanding of their exposure to potential catastrophic events. This in turn will have an impact on portfolio management, possibly leading to unwanted exposure distribution and unexpected losses, which will affect both insurers' and their reinsurers' balance sheets.

CAT modeling results are also used by insurers to anticipate the financial effect a catastrophic event may have on their portfolio/balance sheet and to assist with the purchasing of reinsurance limits. If results are skewed as a result of poor data quality, this can lead to incorrect assumptions, inadequate capitalization and the failure to purchase sufficient reinsurance protection.

Buildings come in many shapes and sizes, old and new. All of these buildings are very different, but they can look the same to a CAT model if the proper defining data elements are not maintained in a dataset.
56Slide57
Data Quality is Key
The implications of data quality (reinsurers)
While data collection is the responsibility of the insurer, reinsurers place a high level of importance on the quality of the exposure data that is provided, as it has an effect on their underwriting decisions and portfolio profitability. The higher the quality of data an insurer can provide, the greater the credibility a reinsurer will give to a modeled result.

Insurers can tap many sources of information (modeling companies, CAT management consultants, reinsurers) to improve data quality within their portfolio. An insurer's ability to provide a high level of data quality as part of its reinsurance submission would only enhance its reputation within the reinsurance marketplace.

Companies like Marshall & Swift/Boeckh (MSB), ISO and AIR Worldwide have developed products to aid re/insurance companies in determining the value of a structure. The systems utilize databases with structured algorithms and capture the building characteristics in calculating the value (residential and commercial). While these systems are not infallible, they provide a structured and consistent approach to the assessment of value.
57Slide58
Data Quality is Key
The implications of data quality (the industry as a whole)
Standardized data is also an important step towards improving data quality for the industry as a whole. The ability to use standardized data across different platforms will improve accuracy and simplify the compiling of data. As a result, many re/insurers have adopted or are planning to adopt ACORD (XML) standards.

Facilitating this development of standardized data was the main impetus behind the formation of ACORD, a nonprofit standards development organization serving the insurance industry and related financial services industries.
Slide59
Data Quality is Key
Understanding uncertainty
For reinsurers, a multi-model approach can only improve the analysis, as the modeling companies have differing views of catastrophic events. While the analysis results from various models tend to converge for industry-wide portfolios, differences can be significant at a more granular level. A comprehensive understanding of the strengths and weaknesses of the models will allow a company to appropriately weight the results of the model that works best for a specific peril and region.

When a specific data attribute is not available, it is often coded as unknown. Examples include: the year a structure was built, the number of stories of a building, the basic construction type and how the building is being occupied. These four primary building attributes were once elusive and often not completed or set to a default value. Now, many organizations can accurately extract this information from their core processing system, making it part of the information value chain.

Nevertheless, when one of these data attributes is not known, a model will utilize an "average" value based on research results for that particular region. The year-built attribute is a field that has become far more important in determining potential loss. The year a structure was built in the state of Florida, for example, has a significant impact on its ability to withstand a hurricane.
59Slide60
Data Quality is Key
However, when the year built is unknown, a catastrophe model will use an average value, which then increases the uncertainty of the result. The difference between the expected loss and the "real" value can be significant (plus or minus), and the uncertainty around that figure can be a factor greater than the known value.

Drilling past the primary characteristics, CAT models also reflect secondary building characteristics to help companies differentiate between the finer features a risk may have. These include the shape of the roof, architectural elements, parapets and overhangs, and many other fields too numerous to mention.

Improving the models
While insurer and reinsurer data collection is an essential ingredient in improving the accuracy of modeled results, it is not the only ingredient. Improvements made to the catastrophe models themselves, whether through advances in computing power, new scientific knowledge or lessons learned from actual events, will also help elevate accuracy.
60Slide61
Improved Models
Modeling companies can influence the industry by setting data standards and guidelines that are important in modeling. Converting from a building's fire classification to an actual building type (i.e. non-combustible versus reinforced concrete) is one example of how the quality of data has matured over the last ten years.

There are many lessons to be learned from each catastrophic event that occurs, and these opportunities are well utilized by the modeling companies. After every event, teams of scientists and engineers survey the damaged regions to study how structures perform. Post-event claims analysis is conducted and combined with the on-site survey results to refine the models' vulnerability functions. Every event is viewed as an opportunity to calibrate the models and improve their ability to simulate perils with greater accuracy.

The modeling companies also work extensively with insurers to increase understanding of the models' capabilities. This includes emphasizing the benefits to be gained from committing time and effort to collecting quality data. The modelers continue to push for detailed (street address if possible) data collection as opposed to aggregated data, which may use the centroid of a region and add significant uncertainty to the hazard assumption. The collection of detailed exposure data provides insurers with a better knowledge of their portfolio and its risk. This knowledge can be passed along to reinsurers, who are then able to use it with other submission details to develop a comfort level and better understanding of the insurer and its business.
61Slide62
passion. innovation. accountability.
Secondary Modifiers
>Slide63
Secondary Modifiers
What are secondary modifiers (aka characteristics)?
There are certain data points required to model a given risk; secondary modifiers are additional data points that provide more detailed information on structural integrity and building characteristics, including construction quality, roofing details, cladding, opening protections such as storm shutters, and so on. The list is comprehensive and changes occasionally with the upgrade of modeling versions, so it is important to periodically review the Statement of Values (SOV) so the models can make the proper interpretations.

Why are secondary modifiers used?
Proper assessment and inclusion of modifiers can have a significant impact on the modeling results. Accurate secondary modifiers can help the underwriter, broker and insured better understand the exposure inherent in an SOV. This knowledge of expected losses helps insureds and insurers set an acceptable program structure and sublimits.
63Slide64
Secondary Modifiers
CAT Modeling: The Benefit of Including Secondary Modifiers
In recent years, catastrophe (CAT) modeling for hurricanes and earthquakes has become an essential resource for all players in property insurance. In particular:
Underwriters use CAT models to accurately assess risk and determine capacity and pricing;
Brokers look to the models to help in program design; and
Insureds use modeled results to better understand exposures in their statement of values (SOV).
As reliance on CAT modeling grows, so does the need to better understand the numerous features that impact results, including secondary modifiers.
64Slide65
How does the market use secondary modifiers?
Markets prefer information that is as detailed and accurate as possible for their own analysis because it bolsters confidence in their underwriting decisions. Full disclosure of information, both good and bad, helps protect against unforeseen losses, and underwriters are more apt to bind the coverage if they are comfortable with the information provided. Data accuracy also bolsters the relationship with the market, as an aim for accuracy, rather than the lowest price, builds trust.

The benefit of including secondary modifiers is that underwriters and actuaries should feel more comfortable with the model output when the quality of the input data is better. Better data leads to better decisions. Even if the available information is unfavorable, it helps identify the risk – for both the insurer and the insured – so that there are no surprises, unanticipated gaps or skewed expectations in the unfortunate event of a catastrophe. Regardless of the impact of modifiers, insureds should declare and include this information to avoid allegations of misrepresentation.
Secondary Modifiers
65Slide66
Secondary Modifiers
How do secondary modifiers benefit insureds?
Accurate secondary modifier information helps underwriters customize coverage to suit the specific risk involved. CAT models are instrumental in showing insureds where their greatest exposures lie. One product of CAT modeling is a heat map, an analysis that isolates SOV locations that are heavy drivers of average annual loss (AAL) and probable maximum loss (PML). Heat maps can help identify specific locations where secondary modifiers could impact or influence coverage within a portfolio of properties. Further analysis can determine construction upgrades that will strengthen buildings against catastrophes.

Including secondary modifiers also makes an account more attractive to underwriters; if secondary modifiers are left out of the equation, underwriters will make decisions using an assumption of "average" for model inputs. Depending on the peril being modeled, the potential PML differential when secondary modifiers are present can swing results by 50 percent or more, in either direction. Information used in modeling, including secondary modifiers, should come from a qualified third-party group to help ensure its veracity.
66Slide67
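The swing described above can be pictured as a set of multiplicative adjustments replacing the model's "average" defaults. The relativity factors below are invented purely for illustration; real vulnerability adjustments are peril-, region- and model-specific.

```python
# Toy illustration: replacing "unknown" (default) secondary modifiers with
# known, favorable values can swing a modeled PML materially. The factors
# here are hypothetical, not any vendor's actual relativities.
base_pml = 10_000_000  # PML with all secondary modifiers left at defaults

# hypothetical relativities: < 1.0 reduces modeled loss, > 1.0 increases it
known_modifiers = {"roof_geometry": 0.85, "roof_anchorage": 0.90,
                   "opening_protection": 0.80}

adjusted = base_pml
for factor in known_modifiers.values():
    adjusted *= factor

swing = (adjusted - base_pml) / base_pml
print(f"Adjusted PML: ${adjusted:,.0f} ({swing:+.0%})")
```

Unfavorable known values (factors above 1.0) would push the result the other way, which is why the swing can go "in either direction."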
Secondary Modifiers
Conclusion
Beyond potentially affecting pricing and capacity offered, it is worth noting that certain modifiers are rather easy to obtain and can provide more accurate modeled results. For example, if a structural engineer develops a certified plan that details design review (including building characteristics such as the age and shape of the roof), it can provide a better picture of what will happen when the structure is exposed to certain stresses or forces. Further, best-case scenarios can be used for a cost/benefit analysis of upgrades, such as adding hurricane shutters at a particular location.

Gathering complete information and data points is essential for predictive modeling. Detailed information about a building's construction, coupled with its usage, provides a predictive image of how the building will react in the event of a catastrophe. And with better data come more informed decisions for insureds and markets.
67Slide68
Secondary Modifiers
The Location Import Template captures both primary and secondary characteristics
68Slide69
Secondary Modifiers
Earthquake Secondary Modifiers Impact Guide
The Earthquake Secondary Modifiers Impact Guide shows how various secondary modifiers affect the loss estimates of certain construction classes. Depending on the construction class of a given building, the impact can range from an increase in the loss estimate (by >20%, 5-20%, or <5%), through no change, to a decrease in the loss estimate (by <5%, 5-20%, or >20%). In some cases, the modifier is not relevant and no change is contemplated in the model.
69Slide70
passion. innovation. accountability.
A Closer Look at 20 Secondary EQ Modifiers
>Slide71
1. Base Isolation – Affects how much energy of the EQ enters the structure. (Yes / No)
2. Cladding Type – Little or no structural value, but damage can be severe. (Glass / Precast Concrete / Unreinforced Masonry)
3. Construction Quality – Considers workmanship and quality of construction materials. (Good / Average / Poor)
4. Engineered Foundation – Foundations that are explicitly designed to withstand soil deformations anticipated for landslides or liquefaction cause the building to perform better. (No / Yes) "Yes" will affect the model results based on the degree of landslide or liquefaction hazard present at the location modeled.
71Slide72
5. Cripple Walls – Could lead to total loss of the building. (No / Braced / Unbraced CW)
6. Equipment Support Maintenance – Buildings showing fatigue, distress, cracking, etc. will likely sustain above-average damage. (Good / Average / Poor)
7. Frame Foundation Connection – Lack of a "positive" connection between the structure and its foundation can cause a building to slide off its foundation. (Bolted / Unbolted)
72Slide73
8. Mechanical and Electrical Equipment EQ Bracing – Is equipment properly anchored to the floor or roof and/or against structural elements? (Well braced / Somewhat braced / Unbraced)
9. Ornamentation – Decorative elements (parapet walls, cornices, etc.) can fall off during an EQ. (Little or none / Average / Extensive)
10. Plan Irregularity – Irregularly shaped buildings tend to twist in addition to shaking laterally. (Regular / Irregular)
11. Pounding – Occurs when there is little or no clearance between adjacent buildings. (No / Yes)
73Slide74
12. Purlin Anchoring – Addresses the connections between tilt-up walls and the roof framing system to resist the load due to EQ shaking. (Properly anchored / Not properly anchored)
13. Soft Story – Addresses buildings that have shear walls or infill walls at upper floors that are interrupted at the first floor to provide open space for parking. (No / Yes)
14. Short Column – When columns in a reinforced concrete moment frame used to resist seismic loads are effectively shortened in height by the presence of spandrel beams or infill walls used as architectural elements. Increases shear forces. (No / Yes)
74Slide75
15. Sprinkler Leakage Susceptibility – How susceptible are contents, interior partitions and fixtures to water damage? (Low / High)
16. Sprinkler Type – The model automatically assumes that a building's sprinkler system is 70% wet pipe and 30% dry pipe. This modifier is used where the system is known to be wet or dry. (Wet / Dry)
17. Structural Upgrade (non-URMs) – Applies to a building that has been retrofitted to provide superior EQ performance relative to other buildings of similar construction, occupancy, height, and vintage. Used if the upgrade conforms to a more stringent building code than that in use when the building was originally designed and constructed. (No / Yes)
18. Unreinforced Masonry Partitions or Chimneys – Unreinforced masonry is extremely vulnerable to EQ-induced ground motions. (No / Yes)
75Slide76
19. Unreinforced Masonry Retrofit – Unreinforced masonry is extremely vulnerable to EQ ground shaking. This modifier applies to the performance of load-bearing unreinforced masonry walls only. See also the "Cladding Type" and "Unreinforced Masonry Partitions and Chimneys" modifiers. (No / Yes)
20. Vertical Irregularity – Significant setbacks and overhangs can create stress concentrations that will experience above-average levels of damage during an EQ. (Regular (No) / Irregular (Yes))
76Slide77
Case Study: Sample Portfolio
77Slide78
RMS 13.0 Results based on Primary Characteristics Only
EQ / EQSL
250-year and 500-year return period PMLs: $69.9M and $98.7M, respectively.
AAL: $1,491,209
Windstorm / Storm Surge
250-year and 500-year return period PMLs: $26.7M and $38.5M, respectively.
AAL: $622,834
Slide79
The Impact of Secondary Characteristics
For the purpose of this presentation, we applied the following secondary characteristics to the original Statement of Values:

YEAR UPGRADED
For any building more than 20 years old, a "YEAR UPGRADED" was added to reflect "normal" time frames used to upgrade both commercial and residential structures (dates varied from 2002-2010).

EARTHQUAKE
Plan Irregularity: all CA locations were modified from "UNKNOWN" to Option 1 "Regular".
Soft Story: all CA locations were modified from "UNKNOWN" to Option 1 "NO".
Vertical Irregularity: all CA locations were modified from "UNKNOWN" to Option 1 "NO".
Short Column: all CA locations were modified from "UNKNOWN" to Option 1 "NO".
Ornamentation: all CA locations were modified from "UNKNOWN" to Option 1 "Little or None".
Cripple Walls: all CA locations were modified from "UNKNOWN" to Option 1 "No Cripple Walls".
Construction Quality: all CA locations were modified from "UNKNOWN" to Option 1 "GOOD".
Pounding: all CA locations were modified from "UNKNOWN" to Option 1 "NO".
Engineered Foundation: all CA locations built in 1985 and later were modified from "UNKNOWN" to Option 1 "YES".

WIND/STORM SURGE
Roof Covering: modified from "UNKNOWN" to Option 4 "Built Up Roof or Single Ply Membrane Roof" (with the presence of gutters) for commercial buildings and Option 7 "Normal Shingle (55mph)" for residential buildings.
Roof Age/Condition: modified from "UNKNOWN" to a mixed blend of 6-10 years/11+ years for commercial buildings and a blend of 0-5/6-10 years for residential buildings, depending on the age of the original roof.
Roof Geometry: modified from "UNKNOWN" to Option 1 "Flat Roof with Parapets" for all commercial buildings and Option 5 "Gable Roof (slope < 26.5 degrees)" for residential buildings.
Cladding Type: modified from "UNKNOWN" to Option 1 "Brick Veneer" for residential locations built prior to 1990 and Option 4 "EIFS/Stucco" for locations built in 1990 and later.
79Slide80
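The override step just described amounts to replacing "UNKNOWN" fields with assumed values before rerunning the model. A minimal sketch, with a hypothetical record layout and field names mirroring the slide:

```python
# Sketch of applying the earthquake overrides above to a location record.
# The dict-based record layout is an assumption for illustration; real SOVs
# live in spreadsheets or exposure databases.
eq_overrides = {
    "plan_irregularity": "Regular",
    "soft_story": "NO",
    "vertical_irregularity": "NO",
    "short_column": "NO",
    "ornamentation": "Little or None",
    "cripple_walls": "No Cripple Walls",
    "construction_quality": "GOOD",
    "pounding": "NO",
}

def apply_overrides(location):
    """Replace UNKNOWN secondary characteristics on CA locations."""
    if location.get("state") == "CA":
        for field, value in eq_overrides.items():
            if location.get(field, "UNKNOWN") == "UNKNOWN":
                location[field] = value
        # engineered foundation only assumed for 1985-and-later construction
        if (location.get("year_built", 0) >= 1985
                and location.get("engineered_foundation", "UNKNOWN") == "UNKNOWN"):
            location["engineered_foundation"] = "YES"
    return location

loc = apply_overrides({"state": "CA", "year_built": 1990, "soft_story": "UNKNOWN"})
print(loc["soft_story"], loc["engineered_foundation"])
```

Note that only "UNKNOWN" fields are touched, so genuinely collected data is never overwritten by an assumption.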
RMS 13.0 Results with Secondary Modifiers Applied
EQ / EQSL
250-year and 500-year return period PMLs: $44.2M (-37%) and $63.5M (-36%), respectively.
AAL: $886,660 (-41%)
Windstorm / Storm Surge
250-year and 500-year return period PMLs: $16.2M (-39%) and $23.8M (-38%), respectively.
AAL: $301,006 (-52%)
Slide81
RMS 13.0 Results with Secondary Modifiers Applied
81Slide82
RMS 13.0 Results with Secondary Modifiers Applied
When we run the model by insurance layer, we gain valuable insights into loss expectancies and AALs for the different tranches of the insurance program. [We did not run all the layers, which is why the ground-up totals are slightly larger than the sum of all totals for layers up to $100M.]

Layer                  AAL        250-year       500-year
Primary $10m           $436,941   $10,000,000    $10,262,347
$15m xs $10m           $221,331   $15,000,000    $15,000,000
$25m xs $25m           $135,124   $18,433,148    $24,998,769
$50m xs $50m           $73,613    $0             $12,264,092
Total                  $867,009   $43,433,148    $62,525,208
Delta to "Ground Up"   -$19,651   -$819,277      -$978,253
Slide83
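Carving a ground-up loss into tranches like those in the table is standard excess-of-loss arithmetic. The sketch below shows the layer function with the same layer structure; the sample event loss is made up for illustration.

```python
# Sketch: allocating a ground-up loss to excess-of-loss layers. The layer
# structure mirrors the table above; the event loss amount is hypothetical.
def layer_loss(ground_up, attachment, limit):
    """Loss to a layer of `limit` excess of `attachment`."""
    return min(max(ground_up - attachment, 0.0), limit)

layers = [  # (attachment, limit) in dollars
    (0, 10_000_000),            # Primary $10m
    (10_000_000, 15_000_000),   # $15m xs $10m
    (25_000_000, 25_000_000),   # $25m xs $25m
    (50_000_000, 50_000_000),   # $50m xs $50m
]

ground_up_event = 43_000_000  # hypothetical single-event ground-up loss
for att, lim in layers:
    loss = layer_loss(ground_up_event, att, lim)
    print(f"${lim/1e6:.0f}m xs ${att/1e6:.0f}m: ${loss:,.0f}")
```

Running this per simulated event and averaging the layer losses over all simulation years is, in essence, how the per-layer AALs in the table are produced.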
RMS 13.0 Results with Secondary Modifiers Applied
AAL calculations can also be used to:
Identify where a closer look at the data provided may be warranted
Compare the relative severity of exposure at different locations
Establish priorities for loss mitigation efforts
Assist in the allocation of premium
Improve disaster recovery plans
Slide84
Case Study: 101-year-old Smith Tower (Seattle, WA)
Smith Tower is a skyscraper in Pioneer Square in Seattle, Washington. Completed in 1914, the 38-story, 149 m (489 ft) tower is the oldest skyscraper in the city and was the tallest office building west of the Mississippi River until the Kansas City Power & Light Building was built in 1931. It remained the tallest building on the West Coast until the Space Needle overtook it in 1962. Smith Tower is named after its builder, firearm and typewriter magnate Lyman Cornelius Smith, and is a designated Seattle landmark.

Factors:
Ornamentation
Pounding
Plan Irregularity
Vertical Irregularity
Structural Upgrade
Age
Mechanical Bracing
Base Isolation?
Historical Building Valuation
Slide85
passion. innovation. accountability.
What’s Next?
>Slide86
What’s Next?
Frustration with Property CAT models is leading to change
Traditional models:
are too focused on the aggregation of risk that insurers tend to calculate, rather than individual exposures and properties;
tend to change from year to year in ways that do not reflect actual changes in loss exposures;
were never designed with risk managers in mind, but with insurers in mind.

Risk managers:
are looking for a clearer picture when it comes to CAT loss modeling, an area fraught with confusion and increasing criticism;
want more control over how their specific exposures generate loss estimates and how those estimates are calculated;
want to be able to drill down into second- and third-tier modifiers;
want to eliminate the "blind spot" on how models project losses.

There is now a move towards "open" systems and "transparency".
86Slide87
What’s Next?
Frustration with Property CAT models is leading to change
"Open source" models – jointly developed by scientists, engineers and industry sectors
Insurance Journal: The Oasis Loss Modeling Framework has unveiled what it describes as "the most significant development in the modelling of natural catastrophe losses for 20 years" – the launch of an independent, global, open framework for use by any party with an interest in creating a catastrophe model. It is owned by its members and is not-for-profit. It is designed to "bring down the cost of modeling, as well as providing transparency and greater flexibility for users."
The membership fee is £20,000. As other revenue sources come on stream, this figure is expected to reduce substantially. Members get direct access to the code and participation in the community working parties.
87Slide88
What’s Next?
Oasis Loss Modeling Framework
London-based nonprofit representing 25 insurers, reinsurers and brokers
strives to offer lower cost, transparency and greater flexibility via a program that is open to anyone with an interest in creating a new CAT risk model
offers access to best-of-breed models tailored for specific hazards and regions (e.g. ImageCat with a focus on EQ; Spa Risk LLC; RiskInsight; JBA Risk Management)
single portal: a risk manager can download the software for free, search for a model of the geographical region and peril in question, and negotiate a fee with the provider
allows the user to look at many different models' views
88Slide89
What’s Next?
"Open" systems – allowing user interface – and "transparency"
Touchstone [AIR Worldwide]
open platform, allowing users to import third-party hazard layers or run multiple alternative models on a single platform for a more complete view of risk
users can overwrite some of AIR's assumptions to better reflect their experience
firms providing data and models through Touchstone include Ambiental, ERN, EuroTempest, HIS Inc., KatRisk, Met Office, PERILS, and SSBN
Risk Quantification & Engineering (RQE) [CoreLogic EQECAT]
inherently open platform; additional models or components (e.g. hazards, vulnerabilities) can be added
high granularity of reports, down to individual site levels; very extensive documentation; analyses of drivers of risk
aids in dealing with regulators (Solvency II in Europe, ORSA in the US)
RMS(One) [RMS]
system of record for all of the risk items in the business
can run RMS and other models; helps understand the impact of different scenarios
exposure- and model-agnostic
89Slide90
What’s Next?
RMS – North Atlantic Hurricane Models Version 15.0
How will it affect model results?
In summary, the Aggregate Exceedance Probability (AEP) and Average Annual Loss (AAL) loss changes for wind and surge from v13.0 to v15.0 will be as follows:
All US (including TX, Gulf, Florida and Hawaii): will reduce by 0% to 10%
Southeast, Mid-Atlantic & Northeast: will increase by 0% to 10%
The pressure on underwriters to further reduce or hold existing wind/surge rates is obvious, based on the location of risk.
Release: March 31, 2015
Slide91
passion. innovation. accountability.
CAT Modeling Terminology
>
Appendix 1Slide92
CAT Modeling Terminology
The CAT modeling industry is full of terminology and acronyms, many of which have been borrowed from mathematics or actuarial modeling. What follows is an explanation of some of the most common ones used by CAT modelers.

EP Curve
An EP curve communicates the probability of any given financial loss being exceeded. It can be used in one of two ways: provided with a financial loss, the EP curve can be read to give you the probability of this loss (or a greater loss) occurring; alternatively, provided with a probability level, the EP curve can be read to show you the financial loss level that this corresponds to. It is important to note that this refers to a loss being exceeded, and not the exact loss itself. This approach is used for CAT modeling, as it is beneficial to identify attachment or exhaustion probabilities, calculate expected losses within a given range, or provide benchmarks for comparisons between risks or over time. Calculating the probability of an exact financial loss is of little value.
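Both ways of reading an EP curve can be sketched from a set of simulated annual losses. The ten-year loss sample below is invented; a real curve is built from many thousands of simulated years.

```python
# Minimal sketch: an empirical EP curve from simulated annual losses, read
# in both directions. Loss values are illustrative only.
losses = sorted([0, 0, 1e6, 2e6, 5e6, 8e6, 12e6, 20e6, 35e6, 60e6],
                reverse=True)  # largest loss first
n = len(losses)

def exceedance_prob(threshold):
    """Probability that the annual loss equals or exceeds `threshold`."""
    return sum(1 for x in losses if x >= threshold) / n

def loss_at_prob(p):
    """Loss level whose exceedance probability is approximately p."""
    k = max(int(round(p * n)), 1)  # the k-th largest loss
    return losses[k - 1]

print(exceedance_prob(20e6))  # chance of a year with >= $20M of loss
print(loss_at_prob(0.10))     # 1-in-10-year loss level
```

Reading "given a loss, what probability?" uses `exceedance_prob`; reading "given a probability, what loss?" uses `loss_at_prob`, mirroring the two uses described above.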
92Slide93
OEP and AEP Curves
OEP stands for Occurrence Exceedance Probability; AEP stands for Aggregate Exceedance Probability. The OEP represents the probability of seeing any single event within a defined period (typically one year) with a particular loss size or greater; the AEP represents the probability of seeing total annual losses of a particular amount or greater. They can be used in tandem to assist with managing exposure both to single large events and to accumulations of multiple events across a period.
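The OEP/AEP distinction is easiest to see from simulated years that each contain individual event losses. The five-year sample below is made up for illustration.

```python
# Sketch: OEP vs AEP at a $10M threshold, from simulated years of events.
# Each inner list holds the individual event losses in one simulated year.
years = [
    [],                    # a year with no events
    [5e6],
    [6e6, 7e6],            # two moderate events, large annual total
    [20e6],
    [1e6, 15e6, 4e6],
]
n = len(years)
threshold = 10e6

# OEP: probability that any single event in a year reaches the threshold
oep = sum(1 for yr in years if any(x >= threshold for x in yr)) / n
# AEP: probability that the annual total reaches the threshold
aep = sum(1 for yr in years if sum(yr) >= threshold) / n

print(f"OEP at $10M: {oep:.0%}   AEP at $10M: {aep:.0%}")
```

The third year shows why AEP can exceed OEP: no single event reaches $10M, but the annual aggregate does, which is exactly the multi-event accumulation the AEP is designed to capture.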
CAT Modeling Terminology93Slide94
VaR and TVaR (1 of 2)
VaR stands for Value at Risk; TVaR stands for Tail Value at Risk. They are both mathematical measures used in CAT modeling to represent a risk profile, or range of potential outcomes, in a single value.

Value at Risk is equivalent to Return Period, and measures a single point of a range of potential outcomes corresponding to a given confidence level or fixed position. When used to compare two risks, in conjunction with the mean loss, it communicates a measure of uncertainty in the loss assessment.

Tail Value at Risk (or Tail Conditional Expectation) measures the mean loss of all potential outcomes with losses greater than a fixed point. It helps to communicate ‘how bad things could get’. When used to compare two risks, along with mean loss and Value at Risk, it helps communicate how quickly potential losses tail off.
CAT Modeling Terminology
94Slide95
VaR and TVaR (2 of 2)
With current modeling techniques, any EP curve is limited by the number of theoretical events or simulation years used to make it up. In the tail of a distribution there can be large jumps between individual points. Value at Risk points read at high return period / confidence levels can behave strangely, as the limited number of sample points makes figures jump back and forth between assessments. The TVaR measure provides a small amount of protection against this effect: by considering the average of all points in the tail, it is less sensitive to such effects and can provide a more stable measure. However, the TVaR necessarily relies on the quality of modeling in the tail of the distribution, where models will always be fairly weak.
CAT Modeling Terminology
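The two measures can be sketched side by side from the same simulated annual losses. Ten made-up "years" are used here; real analyses use many thousands of simulated years, which is precisely why tail estimates from small samples jump around.

```python
# Sketch: empirical VaR and TVaR at a given exceedance probability from
# simulated annual losses. The loss sample is illustrative only.
losses = sorted([0, 0, 1e6, 2e6, 5e6, 8e6, 12e6, 20e6, 35e6, 60e6],
                reverse=True)  # largest loss first

def var(p):
    """Value at Risk: the loss at exceedance probability p (1/p-year return period)."""
    k = max(int(round(p * len(losses))), 1)
    return losses[k - 1]

def tvar(p):
    """Tail Value at Risk: mean of all simulated losses at or beyond the VaR point."""
    k = max(int(round(p * len(losses))), 1)
    tail = losses[:k]
    return sum(tail) / len(tail)

print(f"VaR(20%):  ${var(0.2):,.0f}")   # the 1-in-5-year loss
print(f"TVaR(20%): ${tvar(0.2):,.0f}")  # average of the worst 20% of years
```

Because TVaR averages the whole tail rather than reading a single point, it is the more stable of the two, matching the discussion above.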
CAT Modeling Terminology95Slide96
Event Loss Table (ELT)
An ELT is a collection of theoretical catastrophes (hurricanes, earthquakes, etc.) along with the modeled losses estimated to occur from each event. This forms the raw data that is used to build up EP curves and calculate other measures of risk.

Coefficient of Variation (CoV)
The CoV is the standard deviation divided by the mean (annual average loss). The wider the variation in the distribution of data, the higher the CoV.
CAT Modeling Terminology96Slide97
Difference between Near Term, Long Term and Historical rates
Models for North Atlantic hurricanes need to take into account the strong influence that global climate and oceanic conditions have on them, potentially affecting everything from frequency and strength to landfall location.

Long Term or Historical analyses use all available information on past hurricane activity (stretching back to around 1850) to advise on likely frequencies to be seen in the coming year. Near Term analyses by AIR (referred to as Medium Term analyses by RMS) attempt to better represent current conditions. AIR does this by marking each historic year as either having the Atlantic in a "warm phase" (where sea surface temperatures in the Atlantic are warmer than the long-term average) or a "cold phase". At present we are assessed to be in a "warm phase", so AIR uses only historic years in a similar phase to advise on likely frequencies for the model.

RMS takes a different approach, instead eliciting a number of academic "models" designed to forecast the next five years of events. It then applies a weight to each model according to how accurately it is able to represent the previous five years, to form a blended assessment of future frequencies.
CAT Modeling Terminology
97Slide98
Difference between Ground Up, Gross, Net and Final Net losses
- Ground-up loss: the loss to the policyholder or risk insured
- Gross loss: typically the claim made to the insurer
- Net loss: typically the gross loss net of reinsurance
- Final net loss: typically the gross loss net of reinsurance and reinstatements
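A minimal sketch of how a single loss might flow through these layers, assuming a simple per-occurrence deductible, a policy limit, and a quota-share reinsurance cession (all figures and parameter names are hypothetical):

```python
def gross_loss(ground_up, deductible, limit):
    """Insurer's loss: ground-up less the policyholder's deductible, capped at the limit."""
    return min(max(ground_up - deductible, 0), limit)

def net_loss(gross, ceded_share):
    """Gross loss net of a simple quota-share reinsurance cession."""
    return gross * (1 - ceded_share)

ground_up = 10_000_000  # loss suffered by the insured
gross = gross_loss(ground_up, deductible=1_000_000, limit=5_000_000)  # 5,000,000
net = net_loss(gross, ceded_share=0.30)                               # 3,500,000
print(gross, net)
```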
passion. innovation. accountability.
Secondary Modifier Tables
>
Appendix 2
RMS will “default” to the “worst case” characteristic when a field is left BLANK.
EQ Secondary Characteristics
Wind/SS Secondary Characteristics
passion. innovation. accountability.
Wind/Surge Secondary
Modifier: Roof Anchors
>
Appendix 3
Wind/Surge – Secondary Modifier: Roof Anchors
Toe Nailing - No Anchoring / Clips / Single Wrap / Double Wraps / Structural
Roof anchors are used to connect the roof framing elements (i.e., rafters, trusses, or joists) to the supporting walls. Buildings that do not have properly sized and installed connections between the roof and supporting walls are susceptible to severe damage when the entire roof system is lifted off the building by a windstorm.
passion. innovation. accountability.
Frequently Asked Questions
Appendix 4
FREQUENTLY ASKED QUESTIONS (Source: Lloyd's Market Association)
1. Why is it that every time an event occurs I hear that it was not covered properly by the CAT models?
A model is only a representation of reality. Depending on the questions being asked, a model can be highly complex or extremely simple, and it is in understanding the limits of a model that its value is properly realized. First and foremost, it must be understood what the model is attempting to represent in the first place. More recently, model vendors have begun to explicitly state the elements of loss that their models are intended to represent and, more importantly, they have started to identify known elements of loss that they explicitly do not cover.
CAT models do not pretend to cover all elements of all CAT risks worldwide, and it is therefore the responsibility of individuals to ensure that they clearly understand both what is and what is not covered. Vendors continue to add to the suite of risks covered by their models, but this is a continual work in progress, driven largely by market demand. However, even within risks that are covered we would still expect to see elements that are not perfectly represented. A model of a real-world phenomenon is only as good as the information available and the investment spent studying it.
1. Why is it that every time an event occurs I hear that it was not covered properly by the CAT models? (continued)
Loss Amplification (price increases following a major event, caused by a scarcity of resources and increased demand) is a known impact, but relatively little recorded information about it is available historically worldwide, and information on how it varies between events that occur once every 10 years and events that occur once every 100 years is almost non-existent. An attempt to allow for this is included in a number of models, but it is highly likely that this will need to develop over time.
Models must also be considered in the context of the purpose for which they were designed. For most CAT models this is to assess the overall risk profile of a set of locations against particular hazards. To achieve this practically, certain assumptions and approximations are required. When a model is used for its intended purpose these simplifications should have negligible impact; however, drilling down too far into any model will reach a point below which the model is no longer appropriate. The climate simulation models used by the IPCC (Intergovernmental Panel on Climate Change) to estimate the impact of climate change on the planet would do an appalling job of telling you what the weather will be like at your house on your birthday, yet they remain valid approaches for predicting worldwide temperature changes over decades.
1. Why is it that every time an event occurs I hear that it was not covered properly by the CAT models? (continued)
When an individual event occurs and the resulting profile is compared against the CAT models, it is important to identify whether the outcome casts doubt on a key assumption relevant to the overall value of the CAT model, or whether the particular features of the event simply fall outside the subset of generated events but within the consideration of the overall model.
2. What is the impact of poor quality data on results?
A model is only as good as the data that feeds it. Even with perfect exposure data the challenge of CAT modeling is huge, and the results produced will contain numerous uncertainties. If the input data is of poor quality, however, then no amount of modeling will produce correct output. Poor quality data takes two forms: inaccurate or incomplete.
Inaccurate Data
Models are unable to identify inaccurate data, so they will continue to assess the risk as if the information were correct. This means that output will be presented back to the user with no indication that the results being analyzed are inappropriate, and if this information continues to feed further down the chain, incorrect decision making will follow. An incorrect location could put the risk further into a hazard zone, or further away from it. Incorrect primary characteristics could imply that the location is more or less vulnerable than it is in reality. If inaccuracies are minor, random, and spread through a large enough portfolio of risks, they are unlikely to cause many problems; however, if the inaccuracies are systematic, or if they occur on peak risks, they have the potential to significantly mislead.
2. What is the impact of poor quality data on results? (continued)
Incomplete Data
Incomplete data causes problems for a different reason. Models need certainty to proceed, so missing information is usually replaced by estimates. This is beneficial in that it allows a modeled analysis to proceed even with information missing, but what is not always clearly communicated is the additional uncertainty that this brings. In CAT modeling, communicating and understanding uncertainty is vital; however, in the case of incomplete information no additional uncertainty is added to the results. If there were sufficient time to reprocess the analysis with the complete range of potential inputs, it would be more obvious that the missing information introduces a far wider range of potential outcomes than is otherwise suggested. When dealing with natural CATs, the difference between building codes, or between distances from the coast or a fault line, can make the difference between a risk having no loss and being a total loss.
3. Why do I need aggregates if I have a model?
CAT modeling is just one of many tools in an arsenal for understanding and managing CAT risks. As noted, there are many elements of CAT risk that cannot currently be modeled, or that are in the early stages of being developed into a CAT model. Additionally, while modeling helps to push the boundaries of loss forecasting, the limitations and uncertainties are unlikely to go away any time soon, and one must never lose sight of common-sense approaches to managing risk. The recording and monitoring of aggregate positions provides a useful fall-back and sanity check against which the complex output of CAT models can be reviewed and challenged.
4. What is a 1 in 250 return period?
Future losses from CAT events cannot be accurately predicted. Instead, the purpose of any form of modeling is to use what knowledge we do have about the likelihood of events occurring, along with estimates of the potential impacts each event could have, to build up a picture of the range of potential outcomes.
To translate this range of outcomes into something meaningful, it is common practice to select a fixed confidence level to report against. Asking for the 1 in 250 return period is, like gambling odds, simply an easier way of asking for the monetary loss in the range of outcomes where only 1/250 = 0.4% of potential outcomes are worse. In mathematical terms this is the 1 - 0.4% = 99.6% confidence point: you are stating that you are '99.6% confident' that losses will not be larger than this value.
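This arithmetic can be sketched against a set of simulated annual outcomes; the heavy-tailed toy distribution below is purely hypothetical, not actual model output:

```python
import random

random.seed(42)
# 10,000 hypothetical simulated annual losses, sorted smallest to largest.
annual_losses = sorted(random.paretovariate(1.5) * 1_000_000 for _ in range(10_000))

# 1 in 250 return period: only 1/250 = 0.4% of outcomes are worse,
# i.e. the 99.6% confidence point of the empirical distribution.
n = len(annual_losses)
index = int(0.996 * n) - 1  # simple empirical percentile
loss_1_in_250 = annual_losses[index]

worse = sum(1 for x in annual_losses if x > loss_1_in_250)
print(worse / n)  # 0.004: exactly 0.4% of simulated outcomes exceed this loss
```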
'Return Period' figures must therefore always be considered within the context of the analysis. For example: which regions and perils have been included in the assessment, and are there additional potential losses not included? It is important to note that this is simply a way of representing how confident you are about the potential loss outcomes being reviewed; it is not directly intended to be translated into a multi-year assessment of event frequency, where other considerations would be required.
5. Why do 1 in 100 year losses happen every few years?
'1 in 100' relates to the probability of a loss in a particular region from a particular peril. Imagine you have a 100-sided die. With just one die, your chance of rolling a 100 would be 1 in 100, or 1%. However, if you had 10 dice, your chance of rolling at least one 100 would be roughly 10 times greater, so about 10%. The different dice represent the different perils and regions insured around the world, so, unfortunately, '1 in 100 year events' should be expected every 10 years, if not more frequently.
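The dice arithmetic can be made exact; the 10% figure is a rule of thumb, and the precise probability under independence is slightly lower:

```python
# Chance that at least one of 10 independent region/peril combinations,
# each with a 1-in-100 annual probability, produces its "1 in 100" loss.
p_single = 1 / 100
n_perils = 10
p_at_least_one = 1 - (1 - p_single) ** n_perils
print(f"{p_at_least_one:.1%}")  # 9.6%, close to the 10% rule of thumb
```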
6. What is an n-year event?
The "events" used by the models are theoretical, serve as vehicles to support the calculations, and should be used with caution. EP curves are built up considering all events and scenarios and how they interact with each other, and the final resulting curve should be considered separately from the individual events that make it up. This EP curve can then be used to read off your n-year loss, but there is no such thing as an n-year event. The trouble with converting the purely financial EP curve into a real-world comment on events can be seen when you consider that a rare Category 5 hurricane that skims the coast can cause far less financial loss than a more frequent Category 3 that drives onshore.
Additional useful information can be gained from looking at the events or scenarios that cause losses at the n-year level; however, it is important to remember that there are a large number of different combinations that could achieve the same result, and it will not always be possible to determine this from the model alone. For example, careful examination of the EP curve may reveal that your n-year loss is strongly driven by one country/peril or another, or that you have more or less exposure to single large events than to multiple small events.
7. Why can you not add up return period losses?
A 'Return Period Loss' is the monetary amount, given a range of potential outcomes, at which a given fixed percentage of outcomes result in worse monetary losses (see also 'What is a 1 in 250 return period?'). Combining two analyses means combining two sets of potential outcomes. In some cases the two sets may be independent, leaving you simply with a single larger set of outcomes. In other cases the two may interact; perhaps a large loss outcome from the first analysis is linked to a large loss outcome in the second, such as when both are caused by the same theoretical hurricane. The new 'Return Period Loss' for the combined analysis now depends heavily on how the two sets of outcomes interact, which cannot be seen by looking at the individual analyses alone, and must be recalculated once the grouping has been performed.
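A deterministic toy example (all figures hypothetical) of why the combined return period loss must be recalculated rather than summed: two portfolios, each with 100 equally likely simulated years, whose single bad years may or may not coincide:

```python
def one_in_100(outcomes):
    """Worst of 100 equally likely annual outcomes (the 1-in-100 loss)."""
    return max(outcomes)

a = [0] * 99 + [100]             # portfolio A: one bad year with a 100m loss
b_same_year = [0] * 99 + [100]   # portfolio B hit by the same hurricane
b_other_year = [100] + [0] * 99  # portfolio B's bad year is a different year

linked = [x + y for x, y in zip(a, b_same_year)]
independent = [x + y for x, y in zip(a, b_other_year)]

print(one_in_100(a) + one_in_100(b_same_year))  # 200: naive addition
print(one_in_100(linked))                       # 200: losses coincide, sum holds
print(one_in_100(independent))                  # 100: the bad years never coincide
```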
7. Why can you not add up return period losses? (continued)
Example to illustrate combining two event sets
8. What is Pure Premium?
The Pure Premium represents the average of all potential outcomes considered in the analysis, and can be thought of as the break-even point if such a policy were to be written a very large number of times. The nature of CAT risk means that the profit made when actual losses are lower than this assessed average is heavily outweighed by the size of the loss you could suffer when actual losses are higher, and that real experience will be very 'spikey', i.e. several years of no loss followed by a large loss. Because of this, underwriters usually add an "uncertainty" load to reach a technical premium, which the models can assist in calculating. In addition, the actual premium charged by underwriters should include consideration of potential losses not included in the modeled assessment; these can include claims handling capabilities, moral hazard, loss record, Loss Adjustment Expenses and other perils (fire, flood, theft, etc.).
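As a sketch (the outcome set and load factor are purely hypothetical), the pure premium is just the mean of the modeled outcomes, with a loading applied on top:

```python
import statistics

# Ten equally likely simulated years: nine clean, one with a $50m loss.
outcomes = [0] * 9 + [50_000_000]

pure_premium = statistics.mean(outcomes)              # 5,000,000: a 'spikey' risk
load_factor = 0.25                                    # illustrative uncertainty load
technical_premium = pure_premium * (1 + load_factor)  # 6,250,000
print(pure_premium, technical_premium)
```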
9. How can I still get a loss to a layer when the mean loss is less than the attachment point?
Future losses from CAT events cannot be accurately predicted. Instead, the purpose of any form of modeling is to use what knowledge we do have about the likelihood of events occurring, along with estimates of the potential impacts each event could have, to build up a picture of the range of potential outcomes. The mean loss given by a model is just the average of this range of outcomes: the break-even point if this scenario were to be repeated a large number of times. However, when applying financial structures, the models retain the full range of potential outcomes and use them when considering losses to insurance policies. While the average loss may be below the attachment point, the uncertainty involved in predicting exact losses means that some potential scenarios do in fact exceed the attachment. It is therefore important to consider these when calculating possible losses to the written policy.
9. How can I still get a loss to a layer when the mean loss is less than the attachment point? (continued)
The diagram shows how losses can enter a layer despite the mean loss being less than the attachment point. Red bar = layer; blue dashed line = Ground-Up Average Annual Loss (GUAAL); solid blue line = range of potential losses.
To give an example, a model might suggest that out of 10 potential future years there will be 9 clean years and one year with a single $100m loss. If you were to consider writing a $20m xs $20m policy on this risk, the mean ground-up loss is $100m/10 = $10m, which is below the attachment point. In reality, however, you would have a 9-in-10 chance of zero loss and a 1-in-10 chance of a total loss of $20m to the layer, giving an average loss to the policy of $2m.
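The worked example can be checked numerically; the layer function below is a standard excess-of-loss calculation:

```python
def layer_loss(ground_up, attachment, limit):
    """Loss to a `limit xs attachment` layer from a single ground-up loss."""
    return min(max(ground_up - attachment, 0), limit)

# Ten equally likely years: nine clean, one with a single $100m loss.
years = [0] * 9 + [100_000_000]

mean_ground_up = sum(years) / len(years)  # $10m: below the $20m attachment
mean_layer = sum(layer_loss(y, 20_000_000, 20_000_000) for y in years) / len(years)
print(mean_ground_up, mean_layer)  # 10000000.0 2000000.0
```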
10. Why is the 10,000 year loss in RMS not the worst case loss for this account or portfolio?
This question confuses the AIR/simulation approach to modeling with RMS's approach.
AIR uses a simulation methodology when building its model, running the model to create a potential year of CATs 10,000 times. Each run produces a different combination of events, selected according to pre-coded frequencies. When we run the model in-house we get the resulting losses from these 10,000 potential simulated years. The EP curve that AIR builds up is created by ranking losses in descending order and assigning each simulated year an equal likelihood of occurring. In this case the 1 in 10,000 year loss is simply the largest in the set.
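The ranking step can be sketched as follows; the simulated losses below are a hypothetical stand-in for actual model output:

```python
import random

random.seed(0)
# 10,000 hypothetical simulated annual losses, each treated as equally likely.
simulated_years = [random.paretovariate(1.8) * 1_000_000 for _ in range(10_000)]

ranked = sorted(simulated_years, reverse=True)  # largest loss first
n = len(ranked)

def return_period_loss(rp_years):
    """Loss at a given return period: the (n / rp)-th largest simulated loss."""
    rank = n // rp_years  # e.g. 1-in-100 -> the 100th largest of 10,000
    return ranked[rank - 1]

# The 1-in-10,000 year loss is simply the single largest simulated year.
print(return_period_loss(10_000) == max(simulated_years))  # True
```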
RMS takes an entirely different approach. Each event in its model represents a scenario with a range of uncertainty, and each scenario is given an "event rate" that represents a likelihood of occurrence (a weighting). EP curves are built up mathematically from all events in the catalogue, resulting in a final distribution of potential loss outcomes that stretches out as far as RMS is willing to calculate. In practice this means the model can give figures for return periods in excess of 1 in 1 million, although very little confidence should be placed in the modeling anywhere near this part of the curve.
The reality is that neither model can tell you what the "worst case" loss for the account actually is, because our knowledge of CAT risk is still developing. The only sensible answer is "total loss". Both models simply stop calculating at an extreme point.
passion. innovation. accountability.
Sources
>
Appendix 5
Sources:
- Beecher Carlson
- www.air-worldwide.com
- www.rms.com
- www.msbinfo.com
- www.acord.org
- www.eqecat.com
- www.marsh.com
- www.lmalloyds.com
- www.AmWins.com
- www.riskandinsurance.com
- Deloitte Consulting AG
- NAPCO, LLC
- www.wgains.com
- ASPERTA
- Swiss Re
- Munich Re
- Insurance Journal
- www.acetempestre.com
- www.propertycasualty360.com
- Honor Construction Inspection Service
- Harrison, Connor. Reinsurance Principles and Practices, First Edition. Maryland: Insurance Institute of America, 2004.
- Duffy, Catherine. Held Captive: A History of International Insurance in Bermuda. Private, 2004.
- Grossi, Patricia, and Kunreuther, Howard. Catastrophe Modeling: A New Approach to Managing Risk. New York: Springer, 2005.
The information contained in this presentation is intended as background information only. All information is provided "as is" with no guarantees of completeness, accuracy or timeliness, and without warranties of any kind, express or implied. Beecher Carlson is not responsible for, and expressly disclaims all liability for, damages of any kind, whether direct or indirect, consequential, compensatory, actual, punitive, special, incidental or exemplary, arising out of use of, reference to, or reliance on any information contained herein.