David Smith
12 March

Why data quality and governance matters
This is it, the heart of the insurance industry's new risk-based capital calculation - the basic solvency capital requirement in Solvency II.  What does this have to do with IT issues, you might ask? 

Well, quite a bit! The precise equation depends on the output of five underlying risk modules and the correlation factors between them. This gives the impression of scientific accuracy. But digging beneath the surface, each factor within the square root sign is itself the result of a similar calculation. And all of this depends on a raft of data which feeds into a model calibrated using a Value-at-Risk measure, with a 99.5% confidence level, over a one-year period.
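
As a rough sketch of that aggregation, the calculation boils down to BSCR = sqrt(sum over i,j of Corr_ij x SCR_i x SCR_j), where each SCR_i is the stand-alone capital requirement of a risk module. The module names, capital figures and correlation values below are illustrative placeholders, not the calibrated Solvency II parameters:

    import math

    # Stand-alone capital requirement per risk module (illustrative figures, EURm)
    scr = {"market": 120.0, "default": 30.0, "life": 80.0,
           "health": 25.0, "non-life": 60.0}

    # Correlations between modules (illustrative values; matrix is symmetric)
    corr = {("market", "default"): 0.25, ("market", "life"): 0.25,
            ("market", "health"): 0.25, ("market", "non-life"): 0.25,
            ("default", "life"): 0.25, ("default", "health"): 0.25,
            ("default", "non-life"): 0.50, ("life", "health"): 0.25,
            ("life", "non-life"): 0.0, ("health", "non-life"): 0.0}

    def correlation(i, j):
        if i == j:
            return 1.0
        return corr.get((i, j), corr.get((j, i), 0.0))

    # BSCR = sqrt( sum over i, j of Corr_ij * SCR_i * SCR_j )
    bscr = math.sqrt(sum(correlation(i, j) * scr[i] * scr[j]
                         for i in scr for j in scr))
    print(f"Aggregated BSCR: {bscr:.1f}")

Because the correlations sit below one, the aggregated figure is smaller than the simple sum of the module requirements, which is precisely why the quality of each underlying input matters so much.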

It all looks very 21st century, but perhaps the advice from my 20th century math teachers needs repeating: the work needed to arrive at the answer is more important than the answer itself.

Supporting the calculation methodology are hundreds of pages of advice and commentary on data quality requirements from CEIOPS, the Committee of European Insurance and Occupational Pensions Supervisors, which is taking a leading role in preparing the final legislation.

Long-dated insurance policies (especially whole-of-life and annuities) now on the books of insurers were written in an IT environment very different from today.  For instance, some systems were designed by insurers to support the door-to-door collection of insurance premiums, an outgrowth of 19th century burial insurance.  The same systems are now being tasked with the tracking of health and exposure-related information, something they were never designed to do.

Actuaries (applied mathematicians) who build insurers' capital models are sometimes physically separated from, and have little day-to-day contact with, the IT source-system experts who handle, and are responsible for, policy data. Updating capital calculations for regulatory and shareholder reporting can take many months; in some cases, actuaries have created their own 'proxy' databases to feed capital models, making it virtually impossible to trace the modelled data back to the source data.

New regulations require a careful re-think about data quality and IT systems governance. Data needs to be accurate, complete and appropriate. The provenance of data is critical: firms need to demonstrate how data is captured, managed and processed, and governance systems need to be in place to assure the validation of data used for capital modelling. Insurers failing to pay heed to these issues may find the door closed to the use of internal capital models and may be penalised relative to smarter competitors, with higher capital requirements translating into less competitive products.
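
As a loose illustration of what 'accurate, complete and appropriate' can mean at the level of an individual policy record (the field names, rules and thresholds here are hypothetical, not taken from the CEIOPS advice), a validation step over a policy extract might look something like this:

    from datetime import date

    REQUIRED_FIELDS = ("policy_id", "date_of_birth", "sum_assured", "premium")

    def validate_policy(record):
        """Return a list of data quality issues found in one policy record."""
        issues = []
        # Completeness: every required field must be present and populated
        for field in REQUIRED_FIELDS:
            if record.get(field) in (None, ""):
                issues.append("missing " + field)
        # Accuracy: simple plausibility checks on the populated values
        dob = record.get("date_of_birth")
        if isinstance(dob, date) and not (date(1900, 1, 1) <= dob <= date.today()):
            issues.append("implausible date_of_birth")
        sum_assured = record.get("sum_assured")
        if isinstance(sum_assured, (int, float)) and sum_assured <= 0:
            issues.append("non-positive sum_assured")
        return issues

    # Flag problem records so they can be traced back to the source system
    policies = [
        {"policy_id": "P001", "date_of_birth": date(1948, 5, 2),
         "sum_assured": 50000.0, "premium": 120.0},
        {"policy_id": "P002", "date_of_birth": None,
         "sum_assured": -1.0, "premium": 80.0},
    ]
    for p in policies:
        problems = validate_policy(p)
        if problems:
            print(p["policy_id"], "->", ", ".join(problems))

The point is less the checks themselves than the governance around them: being able to show which records failed, why, and how the failures were resolved before the data reached the capital model.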

We still hear much talk about the impact that Solvency II will have on IT and whether the associated budgets are big enough. The answer to us seems obvious: the impact is significant and budgets need to be sizeable. To justify these budgets, we see the smarter insurers embracing Solvency II, using it to drive change in inefficient finance and actuarial functions and to drive risk-based capital management into the core of their business. Are the industry and its advisors doing enough to understand the potential of Solvency II and implement the necessary changes?
