GDPR – are you ready?

Previously, we looked at the impact of the GDPR on the insurance industry in terms of consent, automated profiling and exemptions.  In this article, we look at whether postcodes constitute ‘personal data’, and at the rules on sharing data with third parties.

The GDPR defines personal data as ‘any information relating to an identified or identifiable natural person’, and this includes names and location data.

The Ordnance Survey definition of a postcode unit is “an area covered by a particular postcode”.  Postcode units are unique references and identify an average of 18 addresses.  Currently, the maximum number of addresses in a postcode is 100.  There are over 77,000 postcodes with only one residential address and around 336,000 postcodes with fewer than five residential addresses.  This is potentially a problem if the data attached to such a postcode can be deemed ‘personal’ and could be used to identify a particular individual.

No guidance has yet been issued on the number of properties a postcode must contain before statistics or data relating to it can be considered sufficient to safeguard the anonymity of the individuals living there.  Statisticians sometimes cite a figure as high as 30, but that number derives from the Central Limit Theorem and concerns producing robust, reliable statistics and estimates of the mean rather than privacy.
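In the absence of official guidance, one practical approach is a simple threshold check before releasing postcode-level statistics.  The sketch below is purely illustrative: the threshold `K` and the address counts are invented assumptions, not a recommended or official minimum.

```python
# Hypothetical sketch: screening postcode-level statistics before release.
# The threshold K is an assumption for illustration only -- as noted above,
# no official minimum has been set.
K = 5  # assumed minimum number of residential addresses per postcode

# Invented counts of residential addresses per (fictional) postcode unit
address_counts = {
    "AB1 2CD": 1,    # single-address postcode: data here may identify a person
    "EF3 4GH": 18,   # around the national average
    "IJ5 6KL": 3,
}

def safe_to_publish(postcode: str) -> bool:
    """True only if the postcode has at least K residential addresses,
    so statistics attached to it are less likely to single out an individual."""
    return address_counts.get(postcode, 0) >= K

for pc in address_counts:
    print(pc, safe_to_publish(pc))
```

A check like this is only a first line of defence; combining several ‘safe’ datasets can still re-identify individuals, which is why the threshold question matters.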

Time limits and erasure
The use of personal data should be limited to the “specific purpose” for which the processing is intended.  This change is likely to affect the insurance industry, which has up to now sought to hold on to personal data for as long as possible to maximise its potential use.  There are clearly business reasons for retaining customer data, but Article 17 gives data subjects the right to have their personal data erased without “undue delay” once it is no longer necessary for the purposes for which it was collected and there is no longer a legal requirement to retain it.

Sharing personal data with third parties
Insurers share data with industry bodies and platforms such as the Claims and Underwriting Exchange [CUE], the Insurance Fraud Bureau [IFB] and the Insurance Fraud Register [IFR] for the purposes of preventing fraud. The Regulation states that insurers will have to rigorously record and evidence how and why they are using and sharing data.

The ABI has been lobbying the government to pass legislation so that insurers can continue to use fraud indicator data and criminal conviction data.

With GDPR taking effect in less than 6 months, you will need to start thinking about the implications sooner rather than later to ensure you have everything in place to meet the May 2018 deadline.

The future of insurance – a brave new world

Technology is already shaping the future of insurance from autonomous vehicles and advanced driver assistance systems (ADAS), to inter-connected homes, artificial intelligence (AI) and machine learning.

One of the biggest challenges the insurance industry faces is adapting to this brave new world and maximising the opportunities that the new technology creates.  Established insurers face a huge threat from agile start-ups better able to harness the new technologies.  Some of the new ‘tech’ may not live up to its billing, but some of it is certain to drive rapid change.  Data and analytics, what we collect and how we extract value from the data, is one area already in motion.

The big data challenge
Big data technologies and analytics are making it easier for organisations to capture large datasets but many still struggle to generate meaningful insights from the vast amount of data.  The challenge is to convert the data into meaningful information and then connect it with and across datasets in a way that enables enrichment and deep insight.

Deep risk insights
In terms of risk management, the real value insurers are seeing comes from using data and analytics to gain deeper insight, which allows for better, more profitable decision making.  Using artificial intelligence and machine learning, patterns and trends can be identified that would otherwise have stayed hidden.  Technologies around data have emerged to handle the exponentially growing volumes, improve velocity to support real-time analytics, and integrate a greater variety of internal and external data.  Twenty years ago, many insurers couldn’t even rate a risk by postcode, particularly those distributing through brokers, due to legacy systems and IT restrictions.  Now, pricing can be based on data relating to the specific individual in real time.  Personalised pricing has allowed insurers to be more targeted in the selection of customers, to proactively cross-sell and up-sell, and to target opportunities in new segments or markets with confidence.

Why the hype surrounding AI?
Machine learning techniques such as neural networks have been around for a long time and were in use in the industry during the 1990s for predictive analytics, so what’s changed?  There are three main factors.  Firstly, computing power has significantly improved: according to Moore’s law, computing power effectively doubles every couple of years, which means algorithms can now crunch much more data within timescales that were previously out of reach.  Secondly, the volume, availability and granularity of data have radically improved.  Thirdly, the efficiency and capability of the algorithms embedded within neural networks have also markedly improved.  Together, these three factors have brought these techniques into focus for applications within insurance.

Insurers have been using AI technologies to improve their efficiency and speed up internal operations in terms of automating processes for claims and underwriting.  AI and neural networks can also be employed to help gain a deeper insight and to differentiate risk in a much more granular way.

Business Insight has developed its own AI platform called ‘Perspective’. It is a neural network that can take large volumes of records across many variables as data feeds before iteratively learning from the data, uncovering hidden patterns and forming an optimal solution. The software can take a vast number of input data points and hundreds of corresponding risk factors per case before constructing a more accurate estimate of risk and offering significant improvements in predictive accuracy compared to statistical data models.
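The iterative-learning idea behind such models can be illustrated in heavily simplified form.  The sketch below is not Business Insight’s ‘Perspective’ platform: it is a single logistic ‘neuron’ trained by gradient descent on invented data, with made-up risk factors and parameters, showing only the principle of repeatedly adjusting weights to fit observed outcomes.

```python
# Minimal sketch of iterative learning: a single logistic neuron fitted to
# synthetic data by gradient descent. All data and parameters are invented.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cases: two hypothetical risk factors each, binary claim outcome.
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, -2.0])                      # hidden "true" relationship
y = (1 / (1 + np.exp(-(X @ true_w))) > rng.random(200)).astype(float)

w = np.zeros(2)    # model weights, learned iteratively from the data
lr = 0.1           # learning rate (step size per iteration)

def loss(w):
    """Cross-entropy loss: how poorly the current weights explain the claims."""
    p = 1 / (1 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

start = loss(w)
for _ in range(500):                                # each pass nudges the weights
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= lr * (X.T @ (p - y)) / len(y)              # gradient descent step

print(f"loss before {start:.3f} -> after {loss(w):.3f}")
```

Real systems stack many such units into deep networks and feed in hundreds of risk factors per case, but the underlying loop of predict, measure error, adjust is the same.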

Changing customer needs
Behavioural analytics and advanced data analysis can also help insurance companies gain a deeper understanding of their customer base for the development of personalised products and solutions.

Millennials, having grown up with smartphones and digital interactions, want the ability to compare products quickly and easily and to find value for money at the click of a button.  They want the product best suited to them and their lifestyle, and that is what they look for from an insurer.  This is where data and technology will need to be harnessed effectively by insurers to create products for the next generation of customers.  It is this need to adapt to customer requirements and buying preferences that recently led Aviva to launch its ‘Ask it Never’ initiative.  Aimed at Millennials, the idea is to eliminate the need for applicants to answer lengthy questionnaires by pre-populating the fields using big data, streamlining the application process, saving the customer time and making the service more efficient.

Agility and change need to be embraced by traditional insurers, or some may end up going the same way as Kodak, a market-leading company that resisted change and saw its market share fall off a cliff when digital photography came along.

Product Focus – Escape of Water

Escape of Water (Non-Freeze) claims currently account for around 25% of domestic claim costs, so having an accurate measure of escape of water risk is vital for insurers.

The cost of insurance claims resulting from escape of water, such as plumbing equipment failure, burst pipes and leaks, can be significant.  Business Insight’s Escape of Water (Non-Freeze) model has been designed to predict the relative risk of escape of water claims across the UK.

Working closely with a number of insurers and data partners, Business Insight has utilised PhD-level mathematical modelling to analyse highly detailed datasets against historic claims patterns to estimate risk by postcode.  Over 100 million data records, 26 million properties, 1.7 million postcodes and substantial computing power have resulted in the most detailed study of this type of insured peril yet undertaken in the UK insurance industry.

Comprehensive information relating to property, the typical demographic make-up of the street and other key predictors has been combined to more precisely calculate the risk of an escape of water claim.  The output provides insurers with a deeper insight into the risk of an escape of water claim for enhanced risk selection and better pricing accuracy.

The model has been independently validated by a number of insurers against their experience data and has shown a high degree of predictive discrimination and potential for use as a rating factor.

Benefits include:

  • Better assessment of risk by location.
  • More precise pricing and rating.
  • Gaining insight into postcode areas where you have no experience data.
  • Discovering where you need to modify your rates to reduce exposure in higher risk areas and to optimise your profitability.

To find out more, please contact us on 01926 421408.

Why weather data matters

The insurance industry has been incorporating historical weather patterns into underwriting and pricing models for years.

How far back do weather records go? The Met Office uses 1914 as the official start of its weather records, as this was when observation stations became more uniform in how they collected data, but it holds records dating back to around 1200.

The England and Wales Precipitation series, which measures rainfall and snow, goes back to 1766.  Some weather stations have been collecting data since the 1800s.  We asked our colleagues at Weathernet for a copy of their first weather entry; the earliest dates back to 11th March 1872.


Information on rivers and peak flows has only been collected for decades rather than centuries and the first surface water yearbooks were published in the 1930s by the Inland Water Survey.

There are a number of challenges in identifying trends in the frequency and severity of extreme weather events and these include the relatively small amount of historical data available.

If we only have reliable records for the last 100 years or so, should we be concerned about our capability to understand and forecast extreme weather events like those of the recent December floods?

The rainfall in December produced a new record of 341.4mm at the Honister Pass rain gauge in Cumbria, more rain in a day than falls in an average month.  This has been estimated as greater than a 1-in-1,000-year event.  However, records keep being broken, so a more accurate assessment of the return period, or frequency, of extreme events is clearly required.
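To see why return periods are so hard to pin down, consider the simplest way of estimating them from a record of annual maxima.  The figures below are invented for illustration; with only a decade of data, the Weibull plotting-position method can never assign a return period longer than about n+1 years, which is exactly why claims of 1-in-1,000-year events rest on fitted extreme-value distributions rather than direct observation.

```python
# Hedged sketch: empirical return periods from annual-maximum rainfall.
# The data are invented; real analyses fit extreme-value distributions
# (e.g. Gumbel/GEV) to extrapolate beyond the observed record.

annual_max_mm = [120, 95, 140, 110, 210, 88, 160, 132, 101, 175]  # hypothetical

n = len(annual_max_mm)
ranked = sorted(annual_max_mm, reverse=True)

# Weibull plotting position: the m-th largest of n annual maxima has an
# estimated return period of (n + 1) / m years.
for m, value in enumerate(ranked, start=1):
    print(f"{value:>3} mm  ~ 1-in-{(n + 1) / m:.1f}-year event")
```

With ten years of data the largest observed value can only be labelled roughly a 1-in-11-year event, however extreme it actually is, so short records systematically understate the rarity of the biggest events.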

We also need accurate estimates of extreme weather, particularly rainfall, to ensure the right level of investment is put into the design of flood defences.  In the recent flooding in Carlisle, defences built to withstand events up to a 200-year return period failed, only 10 years on from the previous flood event and after a £45m investment in a new flood defence scheme.

So is climate change a factor?
There has been an increase in storminess in recent years and we do seem to be in a flood-rich era.  Many scientists accept that climate change is a contributing factor to the pattern of weather we have been experiencing in recent years.  A recent study by Oxford University and the Royal Netherlands Meteorological Institute calculated that climate change made the recent flooding events 40% more likely.

So in summary, weather data is really important in understanding, validating and helping us to assess natural perils risk.  There are many useful risk models and data sources that can give a detailed insight into weather-related risk, particularly flooding.  However, we still have some way to go in gaining a fuller understanding of more extreme events, how they will impact us going forward and what role climate change has to play in all this.


Business Insight exhibit at Flood Risk & Insurance Event

The GeoInformation Group & Business Insight are working together to bring insurers sophisticated data and solutions to help them better understand the location, physical properties and structure of buildings across the UK.

Combining the extensive experience and feature rich data resources of The GeoInformation Group with the market leading analytical modelling of Business Insight has resulted in a compelling, innovative solution for the UK Insurance Industry.

The solution combines accurate, detailed and reliable data with best of breed perils databases to provide high resolution predictions of the risk of theft, fire, freeze, storm, subsidence and flood for both residential and commercial insurance.

Both Business Insight and The GeoInformation Group will be exhibiting at the Flood Risk & Insurance 2015 Conference, which takes place on 29 October 2015.  Find out more here.