Cyber risk models – who does it best?
Cat Risk London’s cyber ‘bake off’ gave the market an excellent chance to start comparing the different cyber modelling vendors’ estimations and approaches.
Prior to the event, vendors were each given the same sample portfolio of company names and hypothetical policy data, and were required to provide loss estimates for individual companies and for the whole portfolio. As noted by Federico Waisman, who organised and moderated the model comparison, the sample portfolio had reasonable representation across both size of company and primary versus excess layers.
The results from the five vendors are shown in slide 1. One model's result (purple) is much higher than the rest, while the other four (including Corax, in blue) are closer to one another. Even so, their loss estimates still varied considerably – by factors of 3.4, 4.1 and 4.3, and by up to 6.7 at longer return periods.
However, when looking at the coefficient of variation – the standard deviation divided by the mean (slide 2) – three of the vendors are quite similar (green, blue and orange).
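The comparison above can be sketched in a few lines of Python. The vendor names and loss figures below are hypothetical, not the bake-off results; the point is only to show how the coefficient of variation gives a scale-free measure of spread that can agree across vendors even when their absolute loss estimates differ.

```python
# Illustrative sketch with made-up figures, not the bake-off data.
from statistics import mean, stdev

def coefficient_of_variation(losses):
    """Standard deviation over the mean: a scale-free measure of spread."""
    return stdev(losses) / mean(losses)

# Hypothetical portfolio losses ($m) at several return periods.
vendor_losses = {
    "vendor_a": [1.2, 3.5, 8.0, 15.0],
    "vendor_b": [0.6, 1.75, 4.0, 7.5],  # half vendor_a's scale, same shape
}

for name, losses in vendor_losses.items():
    print(f"{name}: CoV = {coefficient_of_variation(losses):.2f}")
```

Note that `vendor_b`'s losses are exactly half of `vendor_a`'s, so the two produce identical coefficients of variation despite a 2x gap in absolute estimates – which is why similar CoVs can coexist with widely differing loss levels.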
To help account for the differences in loss estimates, slide 3 shows the huge variance between vendors in their revenue estimates for each company. Controlling for industry and geography, loss estimates are broadly proportional to revenue, so given such widely differing revenue estimates it is not surprising that the loss estimates varied so much. Had the vendors started from the same revenue figures, their loss estimates may well have been much closer to one another.
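The proportionality argument can be illustrated numerically. The figures below are hypothetical: if each vendor's loss estimate scales roughly linearly with the revenue it attributes to the same company, then dividing loss by revenue should collapse widely divergent raw estimates onto similar values.

```python
# Illustrative sketch with hypothetical figures: normalising loss by
# revenue to show that the spread comes from revenue estimates, not
# from the loss models themselves.
estimates = {
    # vendor: (revenue estimate $m, loss estimate $m) for one company
    "vendor_a": (500.0, 10.0),
    "vendor_b": (120.0, 2.6),
    "vendor_c": (900.0, 17.0),
}

for vendor, (revenue, loss) in estimates.items():
    print(f"{vendor}: loss = {loss:5.1f}, loss/revenue = {loss / revenue:.3f}")
```

In this made-up example the raw losses span a factor of more than six, but loss per unit of revenue sits near 0.02 for all three vendors, mirroring the suggestion that the models would agree far more closely if fed the same revenue.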
So what? It’s clear that there is still work to be done on correctly identifying or estimating company fundamentals – i.e. confirming that the company is who we think it is, and that we have an accurate view of its industry sector, location and commercial posture. Accuracy in company fundamentals can be improved in part by having the companies themselves supply key data points, such as website domain and revenue. That matters because several vendors, including Corax, perform automated external infrastructure scanning, a process initiated from the individual company’s website domain name. That information allows a company to be identified correctly, which in turn ensures loss estimates are based on accurate information.
The variance in revenue estimates also demonstrated the importance of giving individual users of each model control over its inputs (greater flexibility). This means that:
1. users can refine the view of risk based on their actual knowledge of the company
2. with machine learning, the estimates will improve as the market feeds its own data back into the models.
The Corax platform gives exactly that power to the individual user. We allow great flexibility in how customers consume the product: our rich dataset, its modelled output, or custom analytics (where clients modify inputs to the model) can be used individually or in combination, and delivered either via API or through our online interface. And our modelled data is developed within a proprietary probabilistic AI engine that predicts the expected cost of data compromise and IT disruption with unprecedented accuracy.
Cyber is a young market, but it’s maturing quickly. It’s important that the modelling community is open about its approaches and engages with the insurance market. We believe that trust in the cyber modelling vendors will only come from moving away from a black box approach and embracing both flexibility and transparency.
It’s equally important that the insurance market engages with vendors and shares data; this will inevitably improve results for all. The fact that, as vendors, we were all willing to take part in such a public forum should give us hope. It’s a sign that as an industry we can work together – to give the insurance market a better understanding of cyber risk, and a view of where the opportunities lie for the future.