Product Focus – Theft model rebuild

ABI figures show that theft from households accounts for 13% of all claims received. Although the volume of theft claims has fallen over the last decade, it remains significant: property-related theft claims in the UK amounted to over £440 million in the past year. An accurate perils rating model that can differentiate risk at a highly granular level can make a considerable difference to improving loss ratios and boosting profitability.

The Business Insight residential theft model ‘Theft Insight’ predicts the relative risk and variation of domestic burglary across the UK and is currently used across the industry by sixteen major property insurers.

Business Insight also has a commercial property theft risk model specifically for commercial property insurers.  Both models are based on extensive research into crime patterns using the latest available data and take account of the changing economic landscape of the UK. This covers a cross-section of inner cities, large towns and suburban neighbourhoods through to small towns and more rural areas.  Built from high resolution spatial and demographic data and calibrated using sophisticated mathematical techniques, the models produce estimates of risk on a street by street level across the UK.

At Business Insight, we know our products need to add value to insurance company pricing, and they also need to beat insurers' own in-house actuarial models before an insurance company will license our products as external data feeds.  Consequently, we invest significantly in R&D to ensure that our products help insurers maintain a competitive advantage.

Some vendors build a perils risk model as a static product with little or no further refinement. Once built, the predictive accuracy of a perils risk model degrades over time, so continuity of development and a focus on improvement and refinement are very important.

We are currently rebuilding our theft models using AI techniques and refreshed data, and experimenting with a new level of geography that preserves the anonymity of people residing in those locations while being more powerful than current postcode versions. This will provide a deeper insight into crime and theft patterns across the UK and a higher level of predictive capability.

Contact the sales team for more information on 01926 421408.

GDPR – are you ready?

Previously, we looked at the impact of the GDPR on the insurance industry in terms of consent, automatic profiling and exemptions.  In this article, we look at whether postcodes constitute ‘personal data’ and sharing data with third parties.

The GDPR defines personal data as ‘any information relating to an identified or identifiable natural person’, and that includes names and location data.

The Ordnance Survey definition of a postcode unit is “an area covered by a particular postcode”.  Postcode units are unique references and cover an average of 18 addresses; currently, the maximum number of addresses in a postcode is 100. There are over 77,000 postcodes with only one residential address and around 336,000 postcodes with fewer than five residential addresses. This could be a problem if the data attached to a postcode can be deemed ‘personal’ and used to identify a particular individual.

There has so far been no guidance on the number of properties within a postcode sufficient to safeguard the anonymity of individuals residing there when using statistics or data relating to that postcode.  Statisticians often refer to a number as high as 30, though that figure relates to the Central Limit Theorem and is more to do with producing robust, reliable estimates of the mean than with privacy.
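In the absence of official guidance, a simple suppression rule of the kind statisticians commonly apply can be sketched in a few lines of Python. The threshold of five addresses and the postcode counts below are purely illustrative assumptions, not regulatory guidance:

```python
# Illustrative only: suppress postcode-level statistics where too few
# residential addresses exist to safeguard anonymity. The threshold k=5
# is an assumed value for demonstration, not official guidance.

def safe_to_publish(address_counts, k=5):
    """Keep only postcodes with at least k residential addresses."""
    return {pc: n for pc, n in address_counts.items() if n >= k}

# hypothetical postcodes and address counts
counts = {"AB1 2CD": 1, "EF3 4GH": 18, "IJ5 6KL": 4}
print(safe_to_publish(counts))  # only the 18-address postcode survives
```

A rule like this trades coverage for privacy: the higher the threshold, the more single-address and small postcodes are excluded from any published statistics.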

Time limits and erasure
The use of personal data should be limited to the “specific purpose” for which the processing is intended. This change is likely to impact the insurance industry, which until now has sought to hold on to personal data for as long as possible to maximise its potential use.  Clearly, there are business reasons for retaining customer data, but Article 17 states that data subjects are entitled to have their personal data erased or ‘forgotten’ if there is no longer a legal requirement to retain it.  It also gives the data subject the right to request that personal data be erased without “undue delay” when it is no longer necessary for the purposes for which it was collected.

Sharing personal data with third parties
Insurers share data with industry bodies and platforms such as the Claims and Underwriting Exchange [CUE], the Insurance Fraud Bureau [IFB] and the Insurance Fraud Register [IFR] for the purposes of preventing fraud. The Regulation states that insurers will have to rigorously record and evidence how and why they are using and sharing data.

The ABI has been lobbying the government to pass legislation so that insurers can continue to use fraud indicator data and criminal conviction data.

With GDPR taking effect in less than 6 months, you will need to start thinking about the implications sooner rather than later to ensure you have everything in place to meet the May 2018 deadline.

The future of insurance – a brave new world

Technology is already shaping the future of insurance, from autonomous vehicles and advanced driver assistance systems (ADAS) to inter-connected homes, artificial intelligence (AI) and machine learning.

One of the biggest challenges the insurance industry faces is adapting to this brave new world and maximising the opportunities the new technology creates. Established insurers face a huge threat from agile start-ups better able to harness the new technologies. Some of the new ‘tech’ may not live up to the billing, but some is certain to drive rapid change. Data and analytics, what we collect and how we extract value from it, is one area already in motion.

The big data challenge
Big data technologies and analytics are making it easier for organisations to capture large datasets, but many still struggle to generate meaningful insights from them.  The challenge is to convert the data into useful information and then connect it with and across datasets in a way that enables enrichment and deep insight.

Deep risk insights
In terms of risk management, insurers are seeing the real value in using data and analytics to gain a deeper insight that allows better, more profitable decision making. Using artificial intelligence and machine learning, patterns and trends can be identified that would otherwise have stayed hidden.  Technologies around data have emerged to handle exponentially growing volumes, improve velocity to support real-time analytics, and integrate a greater variety of internal and external data. Twenty years ago, many insurers couldn’t even rate a risk by postcode, particularly those distributing through brokers, due to legacy systems and IT restrictions. Now, pricing can be based on data relating to the specific individual in real time. Personalised pricing has allowed insurers to be more targeted in their selection of customers, to proactively cross-sell and up-sell, and to pursue opportunities in new segments or markets with confidence.

Why the hype surrounding AI?
Machine learning techniques such as neural networks have been around for a long time and were in use in the industry during the 1990s for predictive analytics, so what’s changed?  There are three main factors. First, computing power has significantly improved: according to Moore’s law, it effectively doubles every couple of years, which means algorithms can now crunch far more data within timescales that were previously out of reach.  Second, the volume, availability and granularity of data have radically improved.  Third, the efficiency and capability of the algorithms embedded within neural networks have markedly improved. These three factors combined have brought such techniques back into focus for applications within insurance.

Insurers have been using AI technologies to improve their efficiency and speed up internal operations in terms of automating processes for claims and underwriting.  AI and neural networks can also be employed to help gain a deeper insight and to differentiate risk in a much more granular way.

Business Insight has developed its own AI platform called ‘Perspective’. It is a neural network that can take large volumes of records across many variables as data feeds before iteratively learning from the data, uncovering hidden patterns and forming an optimal solution. The software can take a vast number of input data points and hundreds of corresponding risk factors per case before constructing a more accurate estimate of risk and offering significant improvements in predictive accuracy compared to statistical data models.
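The iterative-learning idea behind a platform like this can be sketched with a deliberately tiny example: a single sigmoid neuron fitted by gradient descent to made-up risk-factor data. Everything here, the two risk factors, the toy data and the learning rate, is an illustrative assumption and bears no relation to the actual Perspective implementation:

```python
import math

# Illustrative sketch only: one sigmoid neuron trained by gradient descent
# on invented (crime_score, property_age) pairs labelled with a claim
# indicator. A real platform would use many layers and far more data.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy, assumed training data: (crime_score, property_age) -> claim (0/1)
data = [((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.2, 0.1), 0), ((0.1, 0.3), 0)]

w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):                      # iterative learning over the data
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y                        # gradient of the log-loss
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

high = sigmoid(w[0] * 0.9 + w[1] * 0.8 + b)  # a high-risk profile
low = sigmoid(w[0] * 0.1 + w[1] * 0.2 + b)   # a low-risk profile
print(high > low)                            # the learned scores separate them
```

The interesting property, which scales up to full neural networks, is that nothing about the relationship between the inputs and the claim outcome was hand-specified; the weights were learned by repeatedly correcting the prediction error.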

Changing customer needs
Behavioural analytics and advanced data analysis can also help insurance companies gain a deeper understanding of their customer base for the development of personalised products and solutions.

Millennials, having grown up with smartphones and digital interactions, want the ability to compare products quickly and easily and to find value for money at the click of a button. They want the product best suited to them and their lifestyle, and these are the things they look for in an insurer.  This is where data and technology will need to be harnessed effectively by insurers to create products for the next generation of customers. It is this need to adapt to customer requirements and buying preferences that recently led Aviva to launch its ‘Ask it Never’ initiative.  Aimed at Millennials, the idea is to eliminate lengthy application questionnaires by pre-populating the fields using big data, streamlining the application process, saving the customer time and making the service more efficient.

Agility and change need to be embraced by traditional insurers otherwise some may end up going the same way as Kodak, a market leading company that resisted change and saw its market share fall off a cliff when digital photography came along.

Product Focus – DNA Dimensions – Uncovering the DNA of every street

DNA Dimensions is the latest in our suite of Risk Insight© products.  It has been designed to provide insurers with a Detailed Neighbourhood Analysis (DNA) across a range of demographic themes. This delivers a deep insight at a level of granularity to improve pricing models and risk selection capability.

DNA Dimensions is a set of orthogonal, uncorrelated risk scores explaining the variation across the vast range of demographic data sources held by Business Insight, including the latest Census information, geodemographic, environmental and spatial data. It provides a unique set of scores across a range of themes for every postcode in the UK. These scores can be fed directly into insurer pricing models to explain more variation in the pattern of risk and improve the accuracy of risk pricing.

DNA Dimensions applies a statistical analysis technique called ‘principal component analysis’ to the full range of demographic data assets within Business Insight to uncover the underlying dimensions present down every street.  The themes output in the solution, such as wealth, affluence, family composition, rurality and industry, are essential to understanding risk and give a detailed insight into each geographic location.  Every neighbourhood in the UK has been analysed and given a set of scores that uniquely describes it across the range of factors in the DNA Dimensions product, helping to understand:

  • The make-up of the local area
  • Affluence
  • Property turnover
  • Levels of urbanisation/ rurality
  • Housing type
  • Life stage
  • Occupation
  • Employment
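For readers curious about the mechanics, principal component analysis can be illustrated on two invented, correlated neighbourhood variables. This toy sketch finds the leading component of a 2x2 covariance matrix in closed form and confirms that the two resulting score sets are uncorrelated, the orthogonality property that lets such scores sit side by side in a pricing model. The data values are purely for illustration:

```python
import math

# Toy PCA: two made-up, correlated neighbourhood variables
# (e.g. an affluence score and a property value score).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1), (5.0, 9.8)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
centred = [(x - mx, y - my) for x, y in data]

# 2x2 sample covariance matrix
sxx = sum(x * x for x, _ in centred) / (n - 1)
syy = sum(y * y for _, y in centred) / (n - 1)
sxy = sum(x * y for x, y in centred) / (n - 1)

# leading eigenvalue/eigenvector of the symmetric 2x2 matrix (closed form)
trace, det = sxx + syy, sxx * syy - sxy * sxy
lam1 = trace / 2 + math.sqrt(trace * trace / 4 - det)
v = (sxy, lam1 - sxx)                     # unnormalised first component
norm = math.hypot(*v)
v = (v[0] / norm, v[1] / norm)
u = (-v[1], v[0])                          # second component, orthogonal

# component scores: projections of each neighbourhood onto each component
scores1 = [x * v[0] + y * v[1] for x, y in centred]
scores2 = [x * u[0] + y * u[1] for x, y in centred]

# the two score sets are uncorrelated: their covariance is ~0
cross = sum(a * b for a, b in zip(scores1, scores2)) / (n - 1)
print(abs(cross) < 1e-9)
```

The production version works in the same spirit but across many more variables, where the uncorrelated components become the thematic scores (affluence, life stage and so on) delivered per postcode.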

The scores can easily be included in risk pricing and rating models to increase accuracy and to fill gaps where insurers have little or no experience data.  Our initial tests against experience data have shown that DNA Dimensions adds considerable value to risk pricing models and has the potential to drive better risk selection and enhance underwriting performance.

Business Insight is focused on providing the insurance industry with innovative products that add value and drive business growth. Business Insight invests a significant amount in Research and Development every year, and our expertise in statistics and big data processing, together with our knowledge of insurance, has ensured DNA Dimensions is relevant, precise and effective as an external data feed.

If you would like to find out more please get in touch via your Account Manager or contact our support team on 01926 421408.


The Great Storm of 1987 – 30 years on

After the devastating effects of Storms Harvey, Irma and Maria on the US and the Caribbean islands, we revisit the great storm of October 1987.  Experts are already saying that Harvey, Irma and Maria could end up being three of the costliest storms in modern times: AIR Worldwide has put potential insured losses for the three storms at an astonishing $155bn in total. We are lucky in the UK that storms of this type do not hit our shores; indeed, major storms causing losses in excess of £1bn are rare events here. On 16th October 2017, it will be thirty years since the Great Storm of 1987.

Referred to in the industry as ‘87-J’, the storm took everyone by surprise and was classed at the time as the UK’s worst storm since 1703. It remains one of the most severe and costliest windstorms the UK has ever experienced. One in six households made a claim, and losses to the industry for commercial and residential cover exceeded £1.3bn.

Striking in the middle of the night, the 1-in-200-year storm left behind a trail of damage and devastation in the South East of England and Northern France, with 18 people losing their lives and extensive damage to property and infrastructure. Many houses were without power for several days, and fallen trees blocked roads and caused travel chaos. An estimated 15 million trees were uprooted, and Sevenoaks famously became ‘One Oak’.

The worst affected areas were parts of Greater London, the Home Counties and the East of England. The South East of England experienced unusually strong wind gusts in excess of 81 mph lasting for 3 to 4 hours and gusts of up to 122 mph were recorded at Gorleston, Norfolk.

The exact path and severity of the storm were very difficult to predict using the forecasting methods and data available at the time.  The Met Office’s Michael Fish faced a backlash for dismissing a viewer who had asked whether the UK could expect a hurricane, but at the time it was hard to forecast the precise path the storm would take. The path of the storm and the direction of the wind were very unusual, running from south to north, with the storm striking the more densely populated areas of the South of England.  The South of England has higher concentrations of sums insured, and this resulted in a large loss for the insurance industry. Subsequently, changes were made to the way forecasts are produced and the National Severe Weather Warning Service was created.

A better insight into windstorm risk

Data modelling and analytical tools to help underwrite and price property risks accurately for natural perils have come a long way since 1987 when data on individual properties was scarce and geographic risk assessed by postal district. Insurers are now much better equipped to gain an in-depth understanding of risk exposure with access to risk models that are based on up-to-date, accurate information and that take account of changing risk patterns.

Business Insight’s ‘Storm Insight’ risk rating model is based on extensive research, huge volumes of explanatory input data and cutting-edge analytical techniques. Storm Insight utilises the largest source of storm claims information available in the UK, detailed property vulnerability data for every street and over 100 million historic windspeed data points recorded in urban areas across the UK.  We also have access to an archive of actual storm event footprints over the last 150 years to gain insight into rare events such as the 87-J Storm.

What would the industry loss be if 87-J were to happen again?

In 1987, the losses from the great storm on 16th October resulted in over £1 billion in insured losses to domestic property, as well as significant damage to commercial property. Things have moved on since then in terms of housing development, levels of affluence and insured values at risk. Over the last 30 years, there has been significant housing development across the South of England in areas that were in the path of the 1987 storm.

Official figures from the ONS show the number of residential properties in England increased by 28% between 1987 and 2017; in London (Outer and Inner) the increase has been 32%. Coupled with that, prices have more than doubled over the last thirty years and, perhaps more significantly, wealth across the South East of England and London has increased enormously. Many more properties across the housing stock have been extended compared with 1987, and the total insured value at risk is an order of magnitude higher. The level of wealth is also far higher, with one in ten households now reported as having assets worth more than £1 million.

If the UK were to encounter the same storm again in October 2017, the loss to the UK Insurance Industry would not be in the same league as recently reported losses in the USA and Caribbean though it would still break all previous UK records. In our view, it is likely that losses to the UK insurance industry for such an event would exceed £6bn.
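A back-of-the-envelope re-basing shows how a figure of this order can arise. The housing-growth multiplier below comes from the ONS figure quoted above; the inflation and insured-value uplifts are illustrative assumptions for the sketch, not outputs of our storm model:

```python
# Rough, illustrative re-basing of the 1987 industry loss to 2017 terms.
# The multipliers are assumptions for demonstration only.

loss_1987 = 1.3e9       # reported 1987 industry loss (£), all property cover
housing_growth = 1.28   # ~28% more residential properties in England (ONS)
price_inflation = 2.1   # assumed: general prices roughly doubled since 1987
values_uplift = 1.7     # assumed: extensions, contents and wealth effects

estimate = loss_1987 * housing_growth * price_inflation * values_uplift
print(f"£{estimate / 1e9:.1f}bn")  # lands in the region of the £6bn view
```

Small changes to any one multiplier move the answer materially, which is why a granular, exposure-based model rather than a single scalar calculation is needed for a credible estimate.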

Product Focus – Commercial and Residential Fire Insight Update

Fire is one of the few perils that consistently meets an insurer’s estimated maximum loss expectation.  Getting a greater understanding of the geographic variation in the risk of fire is becoming more important and something that many insurers are spending more time building into rating area files for property underwriting purposes.

There are many factors that influence the risk of fire ranging from property specific factors relating to the vulnerability of different types of building through to demographic and behavioural factors describing neighbourhoods and streets that are more prone to certain fire related incidents.

Business Insight has been researching and building geographic fire risk models for the last 8 years. Having a risk model that has been well researched and that can accurately differentiate risk across the UK can add considerable value to the accuracy of your buildings and contents rating area files.

Our residential fire model is based on extensive research into residential fires and assesses the relative risk and variation of deliberate and accidental fire claims across the UK.  Our commercial fire model assesses the risk of a fire claim by commercial business category, source & frequency. Both models utilise highly complex computer algorithms and vast quantities of data relating to residential and commercial properties, the local environment and the demographic make-up by area to estimate risk more precisely.

As part of our commitment to ensuring our models are continuously enhanced and kept up-to-date, we have recently recalibrated the residential and commercial fire models with enhanced data to provide a more granular level of detail and a more accurate assessment of risk.

Both models have been validated by a number of insurers using fire claims information and have shown a high degree of discrimination between high and low-risk areas.

Key benefits include:

  • Gaining a deeper understanding of your exposure to fire claims in the UK across your existing book of business.
  • Gaining insight into postcode areas where you have no experience data.
  • Discovering where you need to modify your rates to improve your fire loss ratio.

Contact our sales team for a demonstration on 01926 421408.

AI and machine learning: things to consider

Companies are investing heavily in artificial intelligence and machine learning techniques.  Harnessing the value from data available internally and externally has become a business-critical capability for insurers. 

Using sophisticated methods and algorithms, machine learning uses automation to find patterns in data that are not always obvious to the human eye. Data can be mined from a variety of sources to help insurers build a fuller picture of their customers, and machine learning can be used in all areas of an insurer’s business, from claims processing and underwriting to fraud detection.

An advantage of machine learning is that algorithms can potentially analyse huge amounts of information quickly. Solutions can be recalibrated and redeployed rapidly by automating a process without introducing human error or bias. The desire to uncover hidden patterns and discover something the rest of the market is missing is a key driver for many companies, though it is easy to be seduced by the technology and by the fear of being left behind. There are pitfalls to avoid, and it is all too easy to concentrate on the technology and lose sight of other, perhaps more important, pieces of the jigsaw.

Neural Networks
Business Insight has been researching machine learning techniques and has developed its own AI platform that can take large volumes of records across many variables as data feeds before iteratively learning from the data, uncovering hidden patterns and forming an optimal solution. The software can take a vast number of input data points and hundreds of corresponding risk factors per case before constructing a more accurate estimate of risk. The main advantage of the neural network platform we have developed is that it can potentially offer significant improvements in predictive accuracy compared to statistical data models. There can also be significant savings in time to rebuild and redeploy by the reduction in human involvement.

Traditional statistical methods require intensive manual pre-processing of input data to identify perceived potential interactions between variables, whereas a neural network needs minimal data preparation and interactions between variables drop out automatically, saving a considerable amount of time in model building. That said, you need to ensure you are not blindly seduced by the technology, as other issues are just as important when analysing large databases.
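The manual pre-processing step is easy to picture. In a traditional statistical workflow, the analyst must guess candidate interactions and hand-code them as extra columns before fitting; the variable names below are hypothetical examples chosen for illustration:

```python
# Hypothetical illustration of manual interaction pre-processing: the
# analyst hand-codes a product column (here property_age x flat_roof)
# before fitting a statistical model. A neural network would take the raw
# columns and learn such interactions itself.

def add_interactions(rows, pairs):
    """Append a product column for each named pair of input columns."""
    out = []
    for row in rows:
        row = dict(row)
        for a, b in pairs:
            row[f"{a}_x_{b}"] = row[a] * row[b]
        out.append(row)
    return out

policies = [
    {"property_age": 80, "flat_roof": 1},
    {"property_age": 10, "flat_roof": 0},
]
expanded = add_interactions(policies, [("property_age", "flat_roof")])
print(expanded[0]["property_age_x_flat_roof"])  # 80
```

The cost is not just the coding: with dozens of variables the number of possible pairwise interactions grows quadratically, so the analyst has to choose which ones to try, and any interaction they do not think of is simply never tested.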

Pearls of wisdom
Here are a few observations from what we have learned over the years that may seem blindingly obvious yet often get ignored:

1) Focus first on data quality
The validity, veracity and completeness of the underlying data you feed into the system are paramount. Whether internal data or external data feeds, data quality is essential: ‘garbage in, garbage out’ applies if the data you are using is of inferior quality. Hidden patterns are not ‘gems’ of knowledge but costly blind alleys if the data is riddled with inaccuracies or out of date.  Quality external data is becoming more accessible to the insurance market, and investing in the best quality data will pay dividends over the long term.

2) Ensure the relevance of your input data for what you are trying to achieve
If you are asking the system to predict a particular target outcome, you should ask: is the data you are utilising fit for purpose? Is it relevant or sufficiently meaningful? And is it representative of what you are trying to achieve?

3) Ensure you have the relevant knowledge and expertise to maximise the results
Though the technology is readily available, people with a deep knowledge base, domain expertise and experience in this area are not easily found in the insurance market. A deep understanding of the market and the data, and experience of why certain risk drivers occur, is often underestimated.

The winners in the years to come will be those able to address these points, focusing not on the technology in isolation but also on the data, both internal and external, and on attracting the best talent with the relevant domain knowledge and expertise to maximise value.


The floods of Summer 2007: 10 years on

Whilst the UK has been enjoying very hot temperatures recently, 10 years ago it was a different story.

The summer of 2007 was the wettest since rainfall records began in 1766.  Heavy rain triggered two extreme rainfall events: on 25th June and again on 20th July.  The Met Office reported that from May through to July 2007 more than 387mm of rain fell across England and Wales, double the average for the period. Despite a relatively dry April, by mid-June the ground was saturated, and low sunshine levels meant there was little evaporation.

On 25th June, intense rainfall led to severe flooding in parts of northern England including Sheffield, Doncaster and Hull, areas where the penetration of insurance is low compared to other parts of the UK.  In Hull, over 6,000 properties were flooded and more than 10,500 homes evacuated as flash flooding overwhelmed drainage and sewage systems.  The flooding caused major disruption to homes and businesses, with almost half a million people without a water supply for up to three weeks, and left many residents unable to return to their homes for up to a year.

More heavy rain on 20th July caused flooding in many parts of England and Wales, with some areas hit particularly badly, such as Gloucestershire, Cambridgeshire, Wiltshire, Hampshire and Oxfordshire, where properties were flooded for the second time in less than a month.

The impact on the insurance industry

The Environment Agency (EA) estimated the total cost of the 2007 floods at £4 billion.  Around £3 billion of this was covered by insurance, making it one of the costliest events to date for the UK insurance industry.  The ABI reported around 165,000 insurance claims, 132,000 of them for damage to domestic households.  Thankfully this was a rare event, believed to be somewhere between a 1-in-500-year and a 1-in-1,000-year event, although that estimate is little more than a guess given the limited data it is based on, and a changing climate calls into question the assumptions underpinning the analysis.

How flood risk mapping has changed since 2007

Whilst it is not unusual for the UK to experience extreme rainfall in the summer, a much higher proportion of the flooding in the summer of 2007 was due to surface water flooding rather than other types of flood risk (e.g. river flooding). By its very nature, surface water flooding is highly localised and caused by large volumes of rainwater, making it very difficult to predict exactly where flooding will occur.

At the time, there were no surface water flood maps and insurers did not factor surface water risk into their ratings.  Today, over 3 million properties in the UK are estimated to be at risk of surface water flooding.

Following the 2007 floods, the Pitt Review found that work was needed to improve the management of flooding from surface water and poor drainage.  It also identified the need for surface water flood maps for England and Wales. Subsequently, JBA Consulting developed the first nationally produced model of surface water flooding to supply to the EA.

The Flood Map for Surface Water (FMfSW) in England and Wales was developed in 2009 and included:

  • an additional rainfall probability
  • the influence of buildings
  • reduction of effective rainfall through taking account of drainage and infiltration
  • a better digital terrain model that incorporated the Environment Agency’s high-quality LIDAR data.

In 2013, an updated Flood Map for Surface Water (uFMfSW) was produced.  The new surface water flood map for England and Wales shows the worst-case flood extents, depths, velocities and hazard ratings for the 30, 100 and 1,000-year return period storm events of one, three and six-hour durations.

The EA maps were not intended to be used for insurance purposes to assess the risk to a particular property, but rather to provide an indication of whether an area may be affected by surface water flooding and to what extent.

Lessons learned for the future?

Recent flooding events have revealed the UK’s vulnerability to extreme rainfall events.  Peter Stott, Head of the Met Office’s climate monitoring and attribution team, believes there is strong evidence that extreme rainfall events are increasing and are likely to become more frequent in future years.

The general scientific consensus, however, is that the summer 2007 floods were not a “climate change event” but rather the consequence of a combination of unusual yet natural conditions: prolonged heavy rainfall and saturated soil unable to absorb the additional rainfall.

One thing that is clear is that this problem is not going away anytime soon. The NFRR (National Flood Resilience Review) concluded in September 2016 that it was plausible that rainfall experienced over the next ten years could be between 20% and 30% higher than normal.

Insurers are ensuring they are better equipped to deal with the impact of extreme weather events by using data models that are based on up-to-date information and that take account of changing risk patterns to better predict, assess and monitor risk. However, this is not just an insurance issue; it involves government, house builders, local authorities and insurers all working together to ensure the UK becomes more resilient to flooding. With a changing climate and potentially more frequent and more severe flood events in the future, we need to make sure that we take action considering what could happen – failure to adapt is not an option.

Current research indicates that if we are unable to control the average rise in global temperatures, we will see a significant increase in the risk of flooding. For example, failure to constrain the average global temperature rise to within 4 degrees could see the overall risk of UK flooding increase by 150%. It’s a problem that won’t go away and one that needs to be addressed now, not after the next cluster of events.


Highlights of the MGAA Conference

Business Insight attended this year’s Managing General Agents’ Association (MGAA) conference which took place on 4 July 2017 in London.  The conference brought together over 600 MGAs, capacity providers, brokers and a selection of data and insurance software providers.

The theme of the event was ‘Evolution and Revolution’ looking at the MGA business model and focusing on how MGAs can continue to grow despite increased competition and regulation.

In his opening speech, MGAA Chairman Charles Manchester talked about how far the MGA sector has come, the advances made and how it is now widely accepted as one of the most innovative and entrepreneurial sectors of the insurance industry.

The panel discussion was around the ‘InsureTech Revolution’ and what it means for MGAs.


Business Insight returns to BIBA 2017

Business Insight will be exhibiting at BIBA 2017, the British Insurance Brokers’ Association annual conference in Manchester next week.

The conference, which features a panel discussion on planning for a post-Brexit future with former Director General John Longworth and Nigel Farage, is one of the insurance industry’s most prestigious events.

The Business Insight team will be on stand B79, where we will be demonstrating our location matters software solution designed to assist insurers with underwriting, exposure management and risk selection. The solution combines state-of-the-art risk mapping technology with best-of-breed perils and geodemographic data to provide insurers with interactive maps displaying property location, risk, perils, policies and claims for a deeper, more powerful insight.