It has become more important than ever to unlock data opportunities, to ensure each
line of business is carrying its weight in achieving underwriting profitability
News early this month that some lines of business at some reinsurers (the mighty
Munich Re and Hannover Re, no less) are not meeting their cost of capital serves to
underline the importance of positive underwriting performance.
The confluence of soft-market pricing, low investment returns, and new competition
has made it more important than ever for underwriting operations to ensure each line
of business written is carrying its weight.
The challenge is heightened by the approaching exhaustion of surplus reserves.
Releases have carried the market for several years, but it seems likely that soft-market
underwriting and less-than-prudent reserving (for events such as Tianjin) will reverse
the trend, leading to reserve strengthening for some companies in some lines. Paid claims
ratios have been moving in the wrong direction for some time. Given this backdrop, in
theory everybody must be underwriting for profit, all of the time.
In theory. But we all know what’s continuing to happen to rates. It is impossible to push
them up in a softening market, even when you know the results of the previous year or
two have been travelling south. That makes underwriting for profit all of the time all
the more difficult. That fact and the current market situation combine to make our
ability to understand underwriting performance more crucial than ever.
Meanwhile, exposures are increasing in ways that cannot always be seen clearly when
monitoring underwriting performance through conventional methods of accessing data.
One example is the exposure creep arising under steadily increasing line sizes. This
tactic has allowed carriers to maintain premium levels in a declining market, relative
to the number of risks on their books – a key performance indicator (KPI) used by
many. However, the premium/risk KPI masks a rise in assumed risk: the net effect of
premium rate reductions and bigger lines is to crank up internal exposure.
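To make the effect concrete, here is a minimal sketch in Python using entirely invented figures: rate cuts offset by a 50% increase in average line size leave premium per risk looking unchanged, while the rate on line (premium per unit of exposed limit) falls by a third.

```python
# Hypothetical illustration of exposure creep: the premium-per-risk KPI
# holds steady while premium per unit of exposed limit deteriorates.
# All figures are invented for the sketch.

risks = 100                      # number of risks on the book (unchanged)
premium_before = 100 * 10_000    # total premium at last renewal
premium_after = 100 * 10_000     # rate cuts offset by bigger lines written

line_before = 5_000_000          # average line size at last renewal
line_after = 7_500_000           # average line size after a 50% increase

kpi_per_risk_before = premium_before / risks
kpi_per_risk_after = premium_after / risks

# Rate on line: premium earned per unit of limit exposed
rol_before = premium_before / (risks * line_before)
rol_after = premium_after / (risks * line_after)

print(f"Premium per risk: {kpi_per_risk_before:,.0f} -> "
      f"{kpi_per_risk_after:,.0f}")          # unchanged: the KPI looks healthy
print(f"Rate on line:     {rol_before:.3%} -> {rol_after:.3%}")  # down a third
```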
The practice has an impact on reserving too. The traditional approach, based on
triangulating loss ratio development, entirely ignores increases in exposed limits and the
adequacy of premium rates. In the early years, claims development is likely to look
similar, so conventional reserving methodology will forecast the same ultimate figures,
even though actual underlying exposures may be much higher. Such granular drivers
of loss development are easy to disregard, but may have a serious impact on a
company’s underwriting performance.
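The blind spot can be seen in a minimal chain-ladder sketch (all figures invented): the latest accident year has been written on much bigger lines, but because its early development looks like everyone else's, the method projects an ultimate as if nothing had changed.

```python
# Minimal chain-ladder sketch (invented figures). An accident year written
# on much bigger lines, but with claims development so far resembling prior
# years, is projected to a similar ultimate: the exposure growth is invisible.

# Cumulative paid claims by accident year (rows) and development year (cols).
triangle = [
    [100.0, 180.0, 220.0, 240.0],   # fully developed year
    [105.0, 190.0, 230.0],
    [110.0, 198.0],
    [108.0],                         # written on much bigger lines: unseen here
]

n = len(triangle)

# Volume-weighted age-to-age development factors.
factors = []
for d in range(n - 1):
    num = sum(row[d + 1] for row in triangle if len(row) > d + 1)
    den = sum(row[d] for row in triangle if len(row) > d + 1)
    factors.append(num / den)

# Project each open year to ultimate by applying the remaining factors.
for i, row in enumerate(triangle):
    ultimate = row[-1]
    for d in range(len(row) - 1, n - 1):
        ultimate *= factors[d]
    print(f"Accident year {i}: projected ultimate = {ultimate:.1f}")
```

Nothing in the triangle records that the final year's exposed limits are far larger, which is precisely the point.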
Changing policy conditions are another force behind changing development. The
hours clause is the most obvious example. Increasing the hours under the event
definition can have the effect of transforming multiple independent meteorological
events into single re/insurance events.
In the UK, for example, this year's alphabet of named winter storms (particularly
Eva-Frank and Gertrude-Henry-Imogen) has occurred in such concentration that the
extended hours clause may have a costly impact for reinsurers bereft of
reinstatements. It is difficult to determine whether such coverage extensions have
been adequately priced.
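As a minimal sketch of the mechanics, the following Python groups invented loss timestamps into events under two hypothetical clause lengths; the dates, amounts, and window sizes are all assumptions chosen for illustration. Widening the window merges back-to-back storms into a single event.

```python
# Sketch of hours-clause aggregation (dates, amounts, and windows invented).
# Losses falling within the clause window, measured here from the start of
# the running event, are pooled; a wider window merges back-to-back storms.
from datetime import datetime, timedelta

losses = [  # (time of loss occurrence, amount)
    (datetime(2015, 12, 24, 6), 40.0),   # first storm
    (datetime(2015, 12, 28, 0), 35.0),   # second storm, 90 hours later
    (datetime(2016, 1, 29, 12), 25.0),   # separate system weeks later
]

def aggregate(losses, hours):
    """Group chronologically ordered losses into events under an hours clause."""
    window = timedelta(hours=hours)
    events = []
    for when, amount in sorted(losses):
        if events and when - events[-1]["start"] <= window:
            events[-1]["total"] += amount      # falls inside the running event
        else:
            events.append({"start": when, "total": amount})
    return events

for hours in (72, 168):
    events = aggregate(losses, hours)
    print(f"{hours}h clause: {len(events)} event(s), "
          f"largest = {max(e['total'] for e in events):.0f}")
```

Under the 72-hour window the two December storms stay separate; at 168 hours they pool into one event, with obvious consequences for a programme short of reinstatements.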
The same can be said of the soft-market extension of geographic scope under many
treaties. But since rates are going down, it is perhaps safe to assume the extra
exposure is not being priced.
Sometimes underwriters, armed with technological wizardry that is largely under their
control, are able to justify broadened cover by manipulating modelled outputs. It is
easy to ignore a loss on the edge of the distribution by declaring it an
unrepresentative anomaly. As prices fall, model-tweaking becomes an ever more
tempting tactic to reduce loss projections.
Selecting the right loss ratio
The solution is to embrace a multidimensional approach to risk, one which captures, at
the organisational level, the correct performance indicators, properly calculated, upon
which to base forecast results. In other words, you have to pick the right loss ratios.
Further, though, this information needs to be available all of the time, so that
unexpected developments can lead to decisive remedial responses.
That demands a harmonious connection between underwriting and the actuarial team
creating and analysing key performance indicators. Underwriters have to be focused
on the KPIs which directly affect their day-to-day underwriting profitability, and those
KPIs have to be designed with particular risk types in mind. Selecting them well
demands that underwriters build in their product and class expertise. Generic KPIs
won’t do.
Individual classes, lines of business and even specific geographies need to have their
own KPIs, because the underlying risks are different. In political risk, for example,
10-year policies are standard. However, by the time such risks are grouped with other
classes and examined from a high level, factors critical to exposures are camouflaged.
Changes to risk due to hours clauses or discovery periods under liability policies are
very difficult to spot at that distance, through the noise of a whole portfolio.
Once the KPIs are in place, a robust management information system is required, one
that sends out signals when a certain type or geography of business is off-plan and throwing up
anomalies. A handful of claims may occur outside expectations, but they have to be
spotted to prompt action. When such anomalies are to be investigated, the process has
to take place with easy access to the relevant contextual data, which helps to separate
trends from genuine anomalies.
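A minimal sketch of the kind of signal such a system might raise follows; the segments, plan figures, and tolerance below are all invented. Each segment's actual loss ratio is compared with plan, and deviations beyond a tolerance are flagged for investigation.

```python
# Minimal off-plan signal sketch (segments, plan figures, tolerance invented).
# Flags any class/geography whose actual loss ratio drifts beyond tolerance
# from plan, prompting investigation against the contextual data.

plan_loss_ratio = {                 # planned loss ratios by segment
    ("property_cat", "UK"): 0.55,
    ("property_cat", "US"): 0.60,
    ("political_risk", "EMEA"): 0.45,
}

actual = {                          # to date: (incurred claims, earned premium)
    ("property_cat", "UK"): (7.8, 10.0),
    ("property_cat", "US"): (5.9, 10.0),
    ("political_risk", "EMEA"): (4.7, 10.0),
}

TOLERANCE = 0.10                    # flag if actual exceeds plan by 10 points

for segment, (claims, premium) in actual.items():
    loss_ratio = claims / premium
    deviation = loss_ratio - plan_loss_ratio[segment]
    if deviation > TOLERANCE:
        print(f"OFF-PLAN {segment}: {loss_ratio:.0%} vs plan "
              f"{plan_loss_ratio[segment]:.0%} (+{deviation:.0%})")
```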
Achieving such a performance-monitoring regime also depends in part on corporate
culture. It works best in companies that are agile. This agility must be
demonstrated not just in the reaction to anomalies, but in the creation (and re-creation)
of key performance indicators that relate directly to the value-added areas of a
company's business, ensuring each KPI has a direct link to underwriting
profitability.
Agility must also be sufficient to allow underwriters to convert findings into actionable
insights. Companies must be able and empowered to change their underwriting
policy quickly enough to halt trends before they become serious losses.
Achieving that responsive level of underwriting performance monitoring of course
requires that all the relevant risk, claims, and reserving data be collected and compiled
in a consistent format, so it can be compared across datasets.
For many businesses, this technical challenge simply compounds the others, but at a
fundamental level, adequate data analysis can be performed, and deviations
identified, only when data standards are at least interchangeable.
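As a simple illustration of that harmonisation step (both source record layouts below are invented), records from different systems can be mapped to one common shape before any cross-dataset comparison is attempted.

```python
# Sketch of harmonising disparate source records into one common shape.
# Both source schemas are hypothetical, invented for illustration.

def from_underwriting_system(rec):
    """Map a record from a hypothetical underwriting system."""
    return {
        "policy_id": rec["PolicyRef"],
        "class": rec["LOB"].lower(),
        "premium": float(rec["GrossPremGBP"]),
        "limit": float(rec["LineSizeGBP"]),
    }

def from_claims_system(rec):
    """Map a record from a hypothetical claims system."""
    return {
        "policy_id": rec["pol_no"],
        "class": rec["business_class"].lower(),
        "incurred": float(rec["paid"]) + float(rec["outstanding"]),
    }

uw = from_underwriting_system(
    {"PolicyRef": "P-001", "LOB": "Property",
     "GrossPremGBP": "25000", "LineSizeGBP": "5000000"})
cl = from_claims_system(
    {"pol_no": "P-001", "business_class": "Property",
     "paid": "8000", "outstanding": "4000"})

# Once in the common shape, figures join cleanly across datasets.
if uw["policy_id"] == cl["policy_id"]:
    print(f"{uw['policy_id']} loss ratio to date: "
          f"{cl['incurred'] / uw['premium']:.0%}")
```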
For many companies, the data required to unlock superior underwriting performance
is present, but exists simply as a grey mass. Making sense of it may seem impossible,
but doing so is critical to achieving profitable underwriting performance.