A Cyber Industry Dataset – Know the Loss Cost to Your Industry, Third Parties, and Peers

Last month, I wrote about Corax’s model update in this post. I explained the next-level aggregation tools and the Event Loss Table (ELT) suite now available. This month, Corax has even bigger news for the insurance industry: a one-of-a-kind cyber industry dataset that gives everyone a better understanding of cyber risk.

So, first of all, how did Corax do what nobody else has been able to do?

The Corax platform took three years to build. Three years is a long time in the tech world, but when you consider the work that has gone on behind the scenes, it is actually an incredible feat. Corax built the platform from the ground up, using a modern technology stack and methodologies, with automation and machine learning applied throughout. It’s complicated stuff, but why go to all that trouble?

In its first commercial year, Corax worked with its clients and partners in the (re)insurance space to validate the data, methodologies, and approach. With all of this agreed upon, Corax used its automated machine learning platform to begin the second phase of the business plan: becoming a data company.

Since the start of the year, Corax has modelled over 100,000 new detailed company profiles every single day. This is only possible because of the work done in those first three years, building a platform that can cope with enterprise-scale data and models. To put this number into perspective, competing vendors hold only around 200,000 detailed profiles across their entire platforms. Beyond that, they rely on aggregated data, which means they are taking an assumption-based approach. In the world of cyber, accuracy means everything.

What exactly is a cyber industry dataset?

Corax is approaching ten million detailed company profiles, all regularly updated and immediately searchable, and it is aiming to have the whole of the internet mapped by the end of 2020. The only things slowing this down are data constraints from its data providers and the cost of infrastructure.

To put it simply, the dataset comprises 14 main industry divisions, each with roughly eight subdivisions. The aggregated loss output for each division and subdivision is an ELT, where each event details the key drivers of loss and the return period. Aggregated exposure profiles are also available for each division and subdivision. ELTs for individual companies are available too, but clients do not yet have the infrastructure to ingest such large, detailed files.
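To make that structure a little more concrete, here is a minimal sketch (in Python) of how the division and subdivision hierarchy and an ELT could be represented. Every class, field name, and figure below is an illustrative assumption, not Corax’s actual schema or output.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EventLoss:
    """One row of an Event Loss Table (ELT): a modelled cyber event and its loss."""
    event_id: int
    return_period_years: float   # e.g. a 1-in-100-year event
    loss: float                  # aggregated loss for the division/subdivision
    key_drivers: List[str]       # e.g. ["ransomware", "business interruption"]

@dataclass
class Subdivision:
    name: str
    elt: List[EventLoss] = field(default_factory=list)    # aggregated loss output
    exposure_profile: dict = field(default_factory=dict)  # e.g. company counts, revenue

@dataclass
class IndustryDivision:
    name: str
    subdivisions: List[Subdivision] = field(default_factory=list)

# Illustrative only: one of the 14 divisions, with two of its roughly eight subdivisions.
healthcare = IndustryDivision(
    name="Healthcare",
    subdivisions=[
        Subdivision(
            name="Hospitals",
            elt=[
                EventLoss(1, 100.0, 250_000_000, ["ransomware", "business interruption"]),
                EventLoss(2, 250.0, 900_000_000, ["data breach", "third-party liability"]),
            ],
            exposure_profile={"companies": 12_000, "total_revenue_usd": 1.4e12},
        ),
        Subdivision(name="Pharmaceuticals"),
    ],
)
```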

So, with all this unique data, describing actual revenue, regions, employees, infrastructure, software, threats, expected loss, and more, what do we do with it?
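By way of illustration, a single detailed company profile might look something like the sketch below; every field name and value is a hypothetical stand-in for the kinds of attributes listed above, not real Corax data.

```python
# Hypothetical single-company profile; all names and figures are invented stand-ins.
company_profile = {
    "name": "Example Manufacturing Ltd",
    "revenue_usd": 120_000_000,
    "employees": 850,
    "regions": ["UK", "Germany", "US"],
    "infrastructure": {"cloud_providers": ["AWS"], "exposed_services": 14},
    "software": ["Windows Server 2016", "SAP ERP"],
    "threats": ["ransomware", "phishing"],
    "expected_annual_loss_usd": 1_900_000,
}
```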

Clients are using this data in several ways, for example:

Insurance Brokers – They now have the ability to understand the expected limits required by each division and subdivision, and their respective makeup and differences. This allows them to better inform clients about their peers, the threats to their industry, and the best steps to take.

Insurance Carriers – They are using this unique insight to help build global cyber pricing models and to stress test aggregation scenarios, not just on individual events but also on industry-specific events (a simple sketch of this kind of calculation follows this list).

Outside of insurance – Government entities are able to create detailed risk profiles of companies outside their jurisdiction. Manufacturers are able to understand the business interruption risk posed to them: they can evaluate the third parties in their industry, including contingent ones, and the implications, helping them manage their supply chain risk.
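As a rough sketch of the carrier use case mentioned above, the snippet below turns an illustrative ELT into a simple loss exceedance view for stress testing: each event’s return period is converted to an annual rate, and the chance of exceeding a loss threshold is computed under a standard Poisson occurrence assumption. The figures are invented and the calculation is a generic one, not Corax’s methodology.

```python
import math

# Illustrative ELT rows: (return period in years, loss in USD). Figures are invented.
elt = [
    (10.0,   50_000_000),
    (25.0,  120_000_000),
    (100.0, 250_000_000),
    (250.0, 900_000_000),
]

def exceedance_probability(elt, threshold):
    """Annual probability that at least one event exceeds `threshold`.

    Generic occurrence-exceedance calculation: convert each return period to an
    annual rate (1 / RP), sum the rates of events whose loss exceeds the threshold,
    and assume event occurrences follow a Poisson process.
    """
    total_rate = sum(1.0 / rp for rp, loss in elt if loss > threshold)
    return 1.0 - math.exp(-total_rate)

# Stress-test style question: how likely is a loss above each threshold in a year?
for threshold in (100_000_000, 200_000_000, 500_000_000):
    p = exceedance_probability(elt, threshold)
    print(f"P(loss > ${threshold / 1e6:.0f}m in a year) = {p:.3f} (roughly 1-in-{1 / p:.0f} years)")
```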

These are just a few examples of how being able to see exact industry profiles and losses is game-changing for the insurance industry and beyond. Stress testing, validation, analysis, benchmarking, and a real understanding of cyber are finally possible.