Portfolio steering in the soft cycle
Soft market signals: Can data help reinsurers time the market?
One of the most important jobs for line of business heads and
business leaders is to react to market conditions in different phases of the cycle and to
steer the portfolio accordingly. For example, during a soft market the portfolio is typically
better served by being less weighted towards long-tail lines of business, such as liability. At
the same time, the cycle is not uniform across all products, lines of business and geographies.
It may make sense to reduce the casualty book overall but at the same time grow in one specific
market where changes (e.g. new government regulation) will lead to results being better than
those suggested by the market rates.
One potential difficulty with data being democratised – and more data being available – is that
there will be a number of ways to read the data. Units within the organisation may try to
advance their narrow interests by promoting an interpretation of the data that favours them.
At this point, business leaders need an objective party to establish which outlook
is most accurate. Business intelligence should be used to pick out the trends that are most
important and to parse the qualitative input. While it is essential to get opinions and commentary
from within the organisation, it is important to be aware of the judgement heuristics influencing
those outlooks.
Managing pricing adequacy at the portfolio level to influence underwriting processes needs to
be done carefully. A rules-based approach that demands the same level of price adequacy for
every account can stop new-business growth and lead to negative underwriting behaviour.
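To make the point concrete, here is a minimal sketch (all thresholds and figures are illustrative assumptions, not a prescribed rule set) of how a flat per-account adequacy rule can reject accounts that a portfolio-level view would tolerate:

```python
# Minimal sketch: a flat per-account price-adequacy rule versus a
# portfolio-level view. Thresholds and figures are illustrative assumptions.
accounts = [0.92, 0.97, 1.05, 1.10]  # price adequacy per account (1.0 = adequate)

flat_rule = [a for a in accounts if a >= 1.0]  # flat rule rejects every sub-par account
portfolio_avg = sum(accounts) / len(accounts)  # 1.01: adequate in aggregate
print(f"Flat rule keeps {len(flat_rule)} of {len(accounts)} accounts")
print(f"Portfolio-level adequacy: {portfolio_avg:.2f}")
```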
Evidence for the cycle will reveal itself in both qualitative and quantitative inputs
Judging where you are in the cycle is a very difficult job.
In theory, if exposure increases while premium stays flat or falls, that is a clear indication
of softening rates, and it should be trackable. In reality, it is much harder to detect,
because a number of forces push the cycle soft or hard.
Factors that need to be considered include:
- Availability of reinsurance capital
- Availability of alternative capital (cat bonds, collateralised reinsurance)
- Rate changes
- Economic inflation
- Superimposed inflation
- ‘Black Swan’ events
- Changes in the legal framework
- Contract terms and conditions
- Emerging medical risks
- Trend for some casualty exposures to become more benign
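The premium-versus-exposure signal described above can at least be monitored systematically. A minimal sketch, assuming a simple contract-level dataset with illustrative field names and a single flat exposure measure:

```python
# Minimal sketch: track a naive rate index (premium per unit of exposure)
# by underwriting year. Field names, figures and the flat exposure measure
# are illustrative assumptions, not a prescribed data model.
from collections import defaultdict

contracts = [
    # (underwriting_year, premium, exposure)
    (2009, 1_200_000, 10_000_000),
    (2010, 1_150_000, 10_500_000),
    (2011, 1_100_000, 11_000_000),
]

totals = defaultdict(lambda: [0.0, 0.0])  # year -> [premium, exposure]
for year, premium, exposure in contracts:
    totals[year][0] += premium
    totals[year][1] += exposure

for year in sorted(totals):
    premium, exposure = totals[year]
    rate = premium / exposure  # premium per unit of exposure
    print(f"{year}: rate index = {rate:.4f}")
# A falling index while exposure grows is one (noisy) signal of softening.
```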
It will never be possible for management to know, at the time of writing, whether the business
will turn out to be unprofitable. In 2009, market sentiment was already expressing concerns
about soft rates; with the benefit of hindsight (and reserve releases) we can now see that 2009
was in fact quite a hard market. The picture will emerge over time, but before there is
certainty a really good manager will have the courage to go with his or her conviction and
steer the organisation away from business that will end up producing a loss. Many underwriters
wish they had “closed up shop” in underwriting years 2000 and 2001 rather than writing the
loss-making business that they took on. At the time, market participants recognised there had
been year-on-year drops in rates but still continued to grow (partly because of seemingly
strong asset performance).
It is therefore useful to record actual underwriting sentiment at the time of writing and use
it for future reference. For example, it may be possible to look back through the sentiments
expressed in meeting minutes and plot the emergence of the soft cycle before the quantitative
data confirms it.
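As a minimal sketch of how such a sentiment record might look, assuming a simple dated log with an illustrative scoring scale (nothing here is a prescribed methodology):

```python
# Minimal sketch: record underwriting sentiment at the time of writing and
# track its trend. The -2..+2 scale and the entries are illustrative.
from datetime import date
from statistics import mean

sentiment_log = [
    # (meeting_date, score)  score: -2 = very soft .. +2 = very hard
    (date(2009, 3, 1), -1),
    (date(2009, 6, 1), -1),
    (date(2009, 9, 1), 0),
    (date(2009, 12, 1), -2),
]

window = 3  # rolling window over consecutive meetings
scores = [s for _, s in sentiment_log]
for i in range(window - 1, len(scores)):
    avg = mean(scores[i - window + 1 : i + 1])
    print(f"{sentiment_log[i][0]}: rolling sentiment = {avg:+.2f}")
```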
Portfolio steering scenario:
Reporting is static and time-consuming
Current situation
A group of business analysts (typically in finance or a centre of competence) collects
information from numerous databases and systems, maps it, cleanses it, removes anomalies and
structures it in Excel for reporting. Typically, data processing takes up the majority of
their time. The business analysts then ask specific units within the organisation for
commentary: they send out the relevant parts of the portfolio report and receive commentary
back in emails, which are pasted into a centralised document. Simply managing these responses
is a very time-consuming undertaking. After a number of iterations of checking and reviewing
the commentary, the report is published and used for high-level portfolio steering meetings.
Business managers rely on these portfolio reports to judge performance. For example, if profits
are steadily decreasing in a certain line of business and increasing in another line, there might
be a discussion over whether to reallocate capital. The discussion is often quite heated, with
different areas of the business defending their own results.
After the report has been used for overall portfolio steering, individual reports concentrating
on specific sections of the document will then be sent to the relevant units. Overlapping
duties mean that individual units (i.e. those responsible for each class of business) end up
duplicating these tasks for their own analytical needs.
Problem
The process of putting together and distributing portfolio
reports is time-consuming and costly. By the time a report is complete, it is already out
of date (which can be relevant if new reserving/loss developments occur). Information is
not widely spread through the organisation. Reports are static rather than dynamic and granular.
The process for querying is unstructured and slow. Consequently, discussions in high-level
meetings cannot get to the bottom of issues and instead get stuck on data questions. For
example, if a portfolio is performing poorly and someone contends that the cause is a certain
set of losses in one underwriting year, it is useful to be able to apply a filter that takes
out the largest losses and immediately confirms or questions the contention.
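As an illustration of the kind of interactive filter meant here, a minimal sketch that strips out the largest losses and recomputes the loss ratio; the figures and data structures are assumptions:

```python
# Minimal sketch: remove the n largest losses from one underwriting year and
# recompute the loss ratio, to test whether a few large claims explain the
# poor result. All figures are illustrative.
losses_2009 = [5_000_000, 250_000, 180_000, 90_000, 60_000]
premium_2009 = 8_000_000

def loss_ratio(losses, premium, exclude_largest=0):
    # Sort ascending and drop the last `exclude_largest` entries (the biggest).
    kept = sorted(losses)[: len(losses) - exclude_largest] if exclude_largest else losses
    return sum(kept) / premium

print(f"All losses:         {loss_ratio(losses_2009, premium_2009):.1%}")
print(f"Excl. largest loss: {loss_ratio(losses_2009, premium_2009, 1):.1%}")
```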
If individual units are using data that disagrees with the data produced by the central
business analysts, as often happens, there will usually be a request to reconcile the two.
Huge amounts of time can be spent on such reconciliation, since each analyst wants to defend
the basis of the analysis they have completed and presented.
Ideal Situation
Data processing should be done almost entirely by software, with the results stored so that
they can be accessed online. Where data cleansing does need human input, the process must be
streamlined. For example, if a loading for capital is allocated at a region/LoB/ToB level and
the results need to be assessed at contract level, the system should offer different methods
of distributing the capital cost across contracts. Since the data is located in one online
system, it is easily accessed across the organisation with clearly controlled access rights.
Because data processing is centralised, this one system becomes the gold standard for data,
removing the need for users to pull data and manually cleanse it themselves. Data consistency
also gives users more confidence to make decisions based on the data.
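For the capital-allocation step mentioned above, one plausible distribution method is pro rata by premium. A minimal sketch, with the segment keys and contract data as illustrative assumptions:

```python
# Minimal sketch: distribute a capital loading set at region/LoB level down
# to individual contracts, pro rata by premium. This is one of several
# plausible methods; segment keys and figures are illustrative.
segment_capital_cost = {("EMEA", "Casualty"): 900_000}

contracts = [
    {"id": "C1", "region": "EMEA", "lob": "Casualty", "premium": 2_000_000},
    {"id": "C2", "region": "EMEA", "lob": "Casualty", "premium": 1_000_000},
]

for (region, lob), cost in segment_capital_cost.items():
    in_segment = [c for c in contracts if (c["region"], c["lob"]) == (region, lob)]
    total_premium = sum(c["premium"] for c in in_segment)
    for c in in_segment:
        share = cost * c["premium"] / total_premium  # pro-rata share
        print(f"{c['id']}: capital cost = {share:,.0f}")
```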
Having a repository for the data is a good first step. Ideally, the repository also enables the
quick conversion of data into standard templates so that the first level of reporting, which is
often quite homogenous across lines of business (e.g. overall premium, loss ratio and profit
development), can be produced with very little effort. Innovative or interesting approaches to
displaying data should be easily transferable across the organisation. For example, if a
marine team has a particularly good way of displaying data, its charts and graphs should be
reusable with engineering data, so that the engineering analysts do not have to design their
own from scratch. This
speed of processing frees up time for digging deeper into the trends that underlie the overall
movements in results.
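A minimal sketch of the standard-template idea: one first-level report function, parameterised by line of business and reused across books (the dataset and metric definitions are illustrative assumptions):

```python
# Minimal sketch: one standard first-level report reused across lines of
# business. The dataset and metric definitions are illustrative assumptions.
records = [
    {"lob": "Marine", "year": 2012, "premium": 4_000_000, "losses": 2_600_000},
    {"lob": "Engineering", "year": 2012, "premium": 3_000_000, "losses": 2_100_000},
]

def standard_report(data, lob):
    # Same template for every book: premium and loss ratio by year.
    for r in (r for r in data if r["lob"] == lob):
        lr = r["losses"] / r["premium"]
        print(f"{lob} {r['year']}: premium {r['premium']:,}  loss ratio {lr:.1%}")

standard_report(records, "Marine")       # one team's template ...
standard_report(records, "Engineering")  # ... repopulated with another book
```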