AI and machine learning: things to consider

Companies are investing heavily in artificial intelligence and machine learning techniques.  Harnessing the value from data available internally and externally has become a business-critical capability for insurers. 

Machine learning uses sophisticated algorithms to automate the discovery of patterns in data that are not always obvious to the human eye. Data can be mined from a variety of sources to help insurers build a fuller picture of their customers, and machine learning can be applied across an insurer’s business, from claims processing and underwriting to fraud detection.

An advantage of machine learning is that algorithms can analyse huge amounts of information quickly. By automating a process, solutions can be recalibrated and redeployed rapidly without introducing human error or bias. The desire to uncover hidden patterns and discover something the rest of the market is missing is a key driver for many companies, though it is easy to be seduced by the technology and by the fear of being left behind. There are pitfalls to avoid, and it is all too easy to concentrate on the technology and lose sight of other, perhaps more important, pieces of the jigsaw.

Neural Networks
Business Insight has been researching machine learning techniques and has developed its own AI platform that can take large volumes of records across many variables as data feeds before iteratively learning from the data, uncovering hidden patterns and forming an optimal solution. The software can take a vast number of input data points and hundreds of corresponding risk factors per case before constructing a more accurate estimate of risk. The main advantage of the neural network platform we have developed is that it can potentially offer significant improvements in predictive accuracy compared to statistical data models. There can also be significant savings in time to rebuild and redeploy by the reduction in human involvement.

Traditional statistical methods require intensive manual pre-processing of input data to identify perceived potential interactions between variables, whereas a neural network needs minimal data preparation: interactions between variables drop out automatically, which saves a considerable amount of time in model building. That said, you need to ensure you are not blindly seduced by the technology, as other issues are just as important when analysing large databases.
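As an illustrative sketch only (toy data, not Business Insight’s actual platform), a small neural network trained by gradient descent can pick up an interaction between two variables without that interaction being specified in advance, whereas a purely additive linear model cannot represent it at all:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "risk" data where the target depends on an interaction (x1 * x2)
# that a purely additive linear model cannot capture.
X = rng.uniform(-1, 1, size=(400, 2))
y = X[:, 0] * X[:, 1]

# Linear baseline (with intercept), fitted by least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
linear_mse = np.mean((A @ coef - y) ** 2)

# One hidden layer of tanh units, trained by full-batch gradient descent.
H = 16
W1 = rng.normal(0, 0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, size=H);      b2 = 0.0
lr = 0.1
for _ in range(10000):
    Z = np.tanh(X @ W1 + b1)           # hidden activations
    err = (Z @ W2 + b2) - y            # prediction error
    gW2 = Z.T @ err / len(X)           # gradients of 0.5 * MSE
    gb2 = err.mean()
    dZ = np.outer(err, W2) * (1 - Z ** 2)
    gW1 = X.T @ dZ / len(X)
    gb1 = dZ.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

nn_mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"linear MSE: {linear_mse:.4f}, network MSE: {nn_mse:.4f}")
```

The network discovers the x1-by-x2 interaction purely from the data, which is the sense in which interactions "drop out automatically".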

Pearls of wisdom
Here are a few observations from what we have learned over the years that may seem blindingly obvious yet often get ignored:

1) Focus first on data quality
The validity, veracity and completeness of the underlying data you are feeding into the system is paramount. Whether internal data or external data feeds, data quality is essential. The saying ‘garbage in, garbage out’ applies: hidden patterns are not ‘gems’ of knowledge but costly blind alleys if the data you are using is riddled with inaccuracies or out of date. Quality external data is becoming more easily accessible to the insurance market, and investing in the best quality data will pay dividends over the long term.

2) Ensure the relevance of your input data for what you are trying to achieve
If you are asking the system to predict a particular target outcome you should ask: is the data you are utilising fit for purpose, is it relevant or sufficiently meaningful, and is it representative of what you are trying to achieve?

3) Ensure you have the relevant knowledge and expertise to maximise the results
Though the technology is readily available, people with a deep knowledge base, domain expertise and experience in this area are hard to find in the insurance market. The value of a deep understanding of the market, the data and why certain risk drivers behave as they do is often underestimated.

The winners in the market will be those able to address these points, focusing not on the technology in isolation but also on the data, both internal and external, and on attracting the best talent with the relevant domain knowledge and expertise to maximise value. Those that invest in the technology, the people and the appropriate data assets will drive their business forward in the years to come.

 

The floods of Summer 2007: 10 years on

Whilst the UK has been enjoying very hot temperatures recently, 10 years ago it was a different story.

The Summer of 2007 was the wettest since rainfall records began in 1766. Heavy rain triggered two extreme rainfall events, on 25 June and again on 20 July. The Met Office reported that from May through to July 2007 more than 387mm of rain fell across England and Wales, double the average for the period. Despite a relatively dry April, by mid-June the ground was saturated and low sunshine levels meant that there was little evaporation.

On 25 June, intense rainfall led to severe flooding in parts of Yorkshire and the Humber, including Sheffield, Doncaster and Hull, areas in which the level of insurance penetration is low compared to other parts of the UK. In Hull, over 6,000 properties were flooded and more than 10,500 homes evacuated as flash flooding overwhelmed drainage and sewage systems. The flooding caused major disruption to homes and businesses, left almost half a million people without a water supply for up to three weeks and meant many residents were unable to return to their homes for up to a year.

More heavy rain on 20 July caused flooding in many parts of England and Wales, with some areas, such as Gloucestershire, Cambridgeshire, Wiltshire, Hampshire and Oxfordshire, hit particularly badly; some properties were flooded for the second time in less than a month.

 The impact on the insurance industry

The Environment Agency (EA) estimated the total cost of the 2007 floods to be £4 billion. Around £3 billion of this loss was covered by insurance, making this one of the costliest events to date for the UK insurance industry. In terms of insurance claims, the ABI reported around 165,000 claims, with 132,000 of those for damage to domestic households. Thankfully this was a rare event, believed to be somewhere between a one-in-500-year and a one-in-1,000-year event. This estimate is, however, highly uncertain given the limited data it is based on, and a changing climate calls into question the assumptions underpinning the analysis.

How flood risk mapping has changed since 2007

Whilst it is not unusual for the UK to experience extreme rainfall in the Summer, a much higher proportion of the Summer 2007 flooding was due to surface water flooding rather than other types of flood risk (e.g. river flooding). By its very nature, surface water flooding is highly localised and caused by large volumes of rainwater, making it very difficult to predict exactly where flooding will occur.

At the time, there were no surface water flood maps and insurers did not factor surface water flooding into their ratings. Today over 3 million properties in the UK are estimated to be at risk of surface water flooding.

Following the 2007 floods, the Pitt Review found that work was needed to improve the management of flooding from surface water and poor drainage.  It also identified the need for surface water flood maps for England and Wales. Subsequently, JBA Consulting developed the first nationally produced model of surface water flooding to supply to the EA.

The Flood Map for Surface Water (FMfSW) in England and Wales was developed in 2009 and included:

  • an additional rainfall probability
  • the influence of buildings
  • reduction of effective rainfall through taking account of drainage and infiltration
  • a better digital terrain model that incorporated the Environment Agency’s high-quality LIDAR data.

In 2013, an updated Flood Map for Surface Water (uFMfSW) was produced.  The new surface water flood map for England and Wales shows the worst-case flood extents, depths, velocities and hazard ratings for the 30, 100 and 1,000-year return period storm events of one, three and six-hour durations.
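Return periods like those above are easy to misread as guarantees of a once-a-generation event. A hedged way to interpret them is as an annual exceedance probability of 1/T, which implies the chance of seeing at least one such event over an n-year horizon is 1 - (1 - 1/T)^n:

```python
# Probability of at least one event of a given return period T occurring
# over an n-year horizon, assuming independent years.
def prob_at_least_one(T, years):
    return 1 - (1 - 1 / T) ** years

# The three uFMfSW return periods, over an illustrative 25-year mortgage term.
for T in (30, 100, 1000):
    p = prob_at_least_one(T, 25)
    print(f"1-in-{T}-year event over 25 years: {p:.1%}")
```

For example, a "1-in-100-year" event has roughly a 22% chance of occurring at least once over 25 years.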

The EA maps were not intended to be used for insurance purposes to assess the risk to a particular property but were intended to provide an indication of whether your area may be affected by surface water flooding and to what extent.

Lessons learned for the future?

Recent flooding events have revealed the UK’s vulnerability to extreme rainfall events.  Peter Stott, Head of the Met Office’s climate monitoring and attribution team, believes there is strong evidence that extreme rainfall events are increasing and are likely to become more frequent in future years.

The general scientific consensus is, however, that the summer 2007 floods were not a “climate change event” but rather a consequence of a combination of unusual (though not unprecedented) circumstances, such as prolonged heavy rainfall falling on saturated soil that was unable to absorb the additional water.

One thing that is clear is that this problem is not going away anytime soon. The NFRR (National Flood Resilience Review) concluded in September 2016 that it was plausible that rainfall experienced over the next ten years could be between 20% and 30% higher than normal.

Insurers are ensuring they are better equipped to deal with the impact of extreme weather events by using data models that are based on up-to-date information and that take account of changing risk patterns to better predict, assess and monitor risk. However, this is not just an insurance issue; it involves government, house builders, local authorities and insurers all working together to ensure the UK becomes more resilient to flooding. With a changing climate and potentially more frequent and more severe flood events in the future, we need to make sure that we take action considering what could happen – failure to adapt is not an option.

Current research indicates that if average global temperature rises are not controlled, we will see a significant increase in the risk of flooding. For example, failure to constrain average global temperature rises to within 4 degrees could see the overall risk of UK flooding increase by 150%. It’s a problem that won’t go away and one that needs to be addressed now, not after the next cluster of events.

 

Highlights of the MGAA Conference

Business Insight attended this year’s Managing General Agents’ Association (MGAA) conference which took place on 4 July 2017 in London.  The conference brought together over 600 MGAs, capacity providers, brokers and a selection of data and insurance software providers.

The theme of the event was ‘Evolution and Revolution’ looking at the MGA business model and focusing on how MGAs can continue to grow despite increased competition and regulation.

In his opening speech, Chairman of the MGAA, Charles Manchester talked about how far the MGA sector has come, the advances made and how it is now widely accepted as one of the most innovative and entrepreneurial sectors of the insurance industry.

The panel discussion was around the ‘InsureTech Revolution’ and what it means for MGAs.

Highlights of the conference can be found here.

Business Insight returns to BIBA 2017

Business Insight will be exhibiting at BIBA 2017, the British Insurance Brokers’ Association annual conference in Manchester next week.

The conference, which features a panel discussion about planning for a post-Brexit future from the former Director General, John Longworth and Nigel Farage, is one of the Insurance industry’s most prestigious events.

The Business Insight team will be on stand B79, where we will be demonstrating our Location Matters software solution designed to assist insurers with underwriting, exposure management and risk selection. The solution combines state-of-the-art risk mapping technology with best-of-breed perils and geodemographic data to provide insurers with interactive maps displaying property location, risk, perils, policies and claims for a deeper, more powerful insight.

 

Product Update – Data Dimensions

‘Data Dimensions’ is the latest data product launched by Business Insight.

Using principal component analysis across a vast database of demographic variables, ‘Data Dimensions’ is a suite of orthogonal or uncorrelated scores by postcode describing different demographic features such as wealth, affluence, family composition, rurality and industry.

Every neighbourhood in the UK has a different set of scores that uniquely describes each location across the range of factors in the ‘Data Dimensions’ product. The scores can be easily included in risk pricing and rating models to increase accuracy and to fill gaps where insurers have little or no experience data. Our initial tests against experience data for both motor and household have shown ‘Data Dimensions’ to add considerable value for risk selection, underwriting and pricing.
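As a minimal sketch of the technique named above (on toy data, not the actual demographic database), principal component scores derived via the singular value decomposition of a centred data matrix are uncorrelated by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a postcode-by-variable demographic table: three
# independent base variables plus two noisy copies, so the raw
# variables are correlated with one another.
base = rng.normal(size=(500, 3))
raw = np.hstack([base, base[:, :2] + 0.3 * rng.normal(size=(500, 2))])

# Principal component analysis via SVD of the centred data.
centred = raw - raw.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ Vt.T        # one score per "postcode" per component

# The resulting scores are orthogonal: off-diagonal correlations vanish.
corr = np.corrcoef(scores, rowvar=False)
off_diag = corr - np.diag(np.diag(corr))
print("max off-diagonal correlation:", np.abs(off_diag).max())
```

Uncorrelated scores are convenient in rating models because each one adds information the others do not already carry.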

For more details or to see a demonstration, contact the sales team on 01926 421408

Flood, building on flood plains and the profile of those at risk

It is estimated that 1 in 6 properties in Great Britain, some 4.7 million, are currently at risk from flooding, with 2.7 million at risk from river and sea flooding alone. Between 2001 and 2011, around 200,000 new homes were built on land with a significant chance of flooding from a river or the sea. During the 1990s, this figure was even higher, as there was less focus on flood risk and no obligation on planners to carry out an analysis of flood risk at the time.

Despite the devastating effects of last winter’s storms and the subsequent costs to the insurance industry, building residential properties on flood plains continues, although admittedly not in the same volumes.

Recent figures obtained by the i newspaper under the Freedom of Information Act show permission has been given to build more than 1,200 new homes on flood plains despite official objections from the Environment Agency about the risk of flooding on such sites. With all the publicity and available data relating to flood risk, it does seem slightly unbelievable that construction even at these levels is allowed to proceed, or at least without an obligation on builders to ensure that properties are built to be flood resilient.

New housing built in areas thought to be protected by flood defences may also be more at risk than first thought. Flood defences are built to withstand a certain magnitude of event, e.g. a flood with an estimated return period of 1 in 200 years, yet the underlying estimates are based on extreme value theory modelled from relatively small data samples and are sensitive to the underlying assumptions. Whatever sceptics in senior roles around the world may believe, climate change undermines the accuracy of these models and means that defences may be more vulnerable than when first constructed. A good recent example is Carlisle, which flooded in 2005 (1,925 homes and businesses) when its flood defences were breached. The defences were improved at a cost of £38 million, yet failed again in 2015 following a more extreme event than had been considered in the planning.
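A hedged illustration of that sensitivity: fitting a Gumbel distribution (a simple extreme value model, estimated here by the method of moments purely for illustration, not necessarily the method used in defence design) to different 40-year records drawn from the same underlying rainfall process gives widely varying estimates of the 1-in-200-year level:

```python
import numpy as np

rng = np.random.default_rng(2)

def gumbel_return_level(annual_maxima, T):
    """Method-of-moments Gumbel fit and the T-year return level."""
    m, s = annual_maxima.mean(), annual_maxima.std(ddof=1)
    scale = s * np.sqrt(6) / np.pi
    loc = m - 0.5772 * scale           # Euler-Mascheroni constant
    return loc - scale * np.log(-np.log(1 - 1 / T))

# A "true" Gumbel(loc=50, scale=12) annual-maxima process, observed for
# only 40 years at a time -- roughly the record lengths designs rely on.
def sample(n):
    return 50 - 12 * np.log(-np.log(rng.uniform(size=n)))

estimates = [gumbel_return_level(sample(40), 200) for _ in range(200)]
true_level = 50 - 12 * np.log(-np.log(1 - 1 / 200))
print(f"true 1-in-200 level: {true_level:.1f}")
print(f"range across 40-year records: {min(estimates):.1f} to {max(estimates):.1f}")
```

Even with the model family known exactly, short records spread the estimated design level over a wide band; a changing climate adds further error the model cannot see.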

Profiling those areas hardest hit by flooding
We used our geodemographic risk profiling tool Resonate© to analyse the demographic profile of those affected by the Winter 2015 floods in Carlisle and areas of Cumbria. Our analysis revealed an over-representation of flooded properties from working-class and disadvantaged rural areas.

Further analysis of these areas revealed that a large number of the properties flooded were from the Resonate lifestyle group ‘Rural & Village Survivors’, and those worst affected were predominantly from ‘Blue Collar Heartlands’, which is characterised by blue-collar workers in pre-war terraced properties, where the proportion of terraced properties is almost thirteen times the UK average. There is a high proportion of this type of neighbourhood in Carlisle.

Looking at all the areas across the UK with a high risk of flooding reveals an over-representation of older, disadvantaged and more vulnerable neighbourhoods. In the future, we will no doubt continue to see more occurrences similar to Carlisle, with poorer and more deprived neighbourhoods disproportionately hit.

Conclusion
As long as there is demand for new houses, building on flood plains will continue. Demand for new housing is increasing, particularly in the South East, in areas where flood defences do exist, though climate change may limit the level of protection envisaged when some of these defences were built. A geodemographic analysis of the make-up of high-risk flood areas is quite startling: older, more disadvantaged and more vulnerable members of society dominate.

This highlights the important role that insurance plays and how the availability of affordable flood insurance for everyone is essential.  The introduction of Flood Re goes some way towards offering flood-prone properties a degree of cover but does not yet guarantee affordable insurance for everyone. The Government will need to put more investment in maintaining and improving flood defences and will need to look at helping make properties in the highest risk areas more resilient to damage from flooding.

The insurance sector and GDPR implications

Technology is connecting us in ways not seen before. Over a third of the world’s population use social media platforms such as Facebook and there are currently more mobile devices than people on the planet.  The avalanche of data being created not only brings with it analytical challenges to find value but also concerns relating to privacy, who owns the data we generate and a perceived over-reliance on automatic decision making.

The EU’s General Data Protection Regulation (GDPR) due to come into effect in May 2018 is an attempt to address some of the concerns and brings considerable change for European-based organisations in terms of capturing, processing and using personal data. Some of the changes might be viewed as draconian and could have a major impact on the use of data in the insurance industry.

Personal data is defined as “any data that directly or indirectly identifies or makes identifiable a data subject, such as names, identification numbers, location data and online identifiers.”

Designed to improve protection for consumers, the new legislation focuses on how personal data is collected and processed and how long it is held for, and includes more obligations for transparency and portability. Under the new rules, breaches must be reported within 72 hours, and organisations face tougher penalties for non-compliance of up to €20m or 4% of global annual turnover, whichever is greater.

Consent to process the data
Insurance by its very nature involves collecting large amounts of personal data on customers. Under the new regulations, organisations will need to show how and when consent was obtained for processing the data.

Consent must be explicitly obtained rather than assumed, and it must be obtained separately for each specific purpose. This means that data collected in one area of the business cannot be used in another unless explicitly agreed upfront by the customer.

This could be a problem for insurance companies, as data collected at the underwriting and claims stages is often then used for a variety of different purposes, including fraud prevention, marketing, claims management and risk profiling. Some of the individual data collected via credit agencies or aggregators and then reused for another purpose, such as the real-time rating and pricing of insurance, could also potentially fall into this category.
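A minimal sketch of how per-purpose consent could be tracked (the class and field names are illustrative assumptions, not a real compliance system): each purpose is recorded with when consent was obtained, and reuse for an unconsented purpose is refused.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    # purpose -> ISO timestamp of when consent was obtained
    purposes: dict = field(default_factory=dict)

    def grant(self, purpose: str, when: str) -> None:
        self.purposes[purpose] = when

    def may_process(self, purpose: str) -> bool:
        # Data may only be processed for purposes explicitly consented to.
        return purpose in self.purposes

record = ConsentRecord()
record.grant("underwriting", "2018-05-25T09:00:00")

print(record.may_process("underwriting"))   # consented at quote stage
print(record.may_process("marketing"))      # would need separate consent
```

In practice such a check would sit in front of every downstream use of the data, so that data captured for underwriting cannot silently flow into marketing or profiling.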

Time limits and erasure
To ensure that data is not held for any longer than necessary, use of personal data should be limited to the specific purpose for which the processing was intended. This change is likely to impact the insurance industry, which up to now has sought to hold on to personal data, such as historical claims experience, for as long as possible to maximise its potential use.

Customers will have the right to demand that insurers delete their personal data where it is no longer required for its original purpose, or where they have withdrawn their consent.

Right to data portability
Individuals will be able to request copies of their personal data to make transferring to another provider easier. The regulations specify that the data needs to be in a commonly-used format.  This might be problematic for insurers and intermediaries where data may be held on separate systems or in different formats.

Profiling
The GDPR provides safeguards for individuals against decisions based solely on automated processing which includes ‘profiling’. Profiling is defined as “any form of automated processing of personal data consisting of the use of personal data to evaluate personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements.”

This new right is significant to the insurance industry as the underwriting process relies heavily on the pooling of information, building generalised models of risk to estimate future claims propensity and the profiling of individuals.   There are also other areas where decisions are made based on processes that are automated including claims analysis, fraud prevention and marketing.

Exemptions
The right does not apply to profiling using personal data if any resulting decision is deemed necessary for entering into, or the performance of, a contract between the data controller and the data subject. It is not clear what happens when the processing concerns a third party’s personal data. Many insurance policies involve processing a third party’s personal data in the form of a beneficiary under the policy, for example a second driver under a motor policy.

The other exemption is if the data has been anonymised – as this is no longer classed as personal data because the person cannot be identified from the data itself.
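One simple, illustrative check on whether aggregated data is effectively anonymised is k-anonymity: every combination of quasi-identifiers should cover at least k individuals. This is only a heuristic, distinct from the techniques mentioned later in this newsletter, and the GDPR’s anonymisation standard is stricter than any single test; the records below are invented for illustration.

```python
from collections import Counter

# Illustrative records of (postcode sector, age band) quasi-identifiers.
records = [
    ("CV34 6", "30-39"), ("CV34 6", "30-39"), ("CV34 6", "30-39"),
    ("CV34 6", "40-49"), ("CV34 6", "40-49"), ("B1 1", "30-39"),
]

def k_anonymity(rows):
    """Smallest group size across quasi-identifier combinations."""
    return min(Counter(rows).values())

# A value of 1 means at least one person is uniquely identifiable
# from the quasi-identifiers alone, so the data is not anonymised.
print(k_anonymity(records))
```

Here the single ("B1 1", "30-39") record makes that individual re-identifiable, so the dataset would fail the anonymisation exemption.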

As far as profiling activities for underwriting are concerned, these are likely to be permissible as they can be considered necessary for the performance of a contract. However, profiling for marketing purposes will not be exempt.

How does the Regulation affect the use of big data?
The process of combining large amounts of data from disparate sources and analysing it by automatic or semi-automatic means to spot trends and find hidden patterns and anomalies has a number of benefits for the insurance industry. These include a greater understanding of risk across a book of business, more accurate pricing and improved competitiveness. Data providers such as Business Insight are undoubtedly giving GDPR some thought and building in technology to ensure their data products will be compliant, or at least they should be.

Business Insight
At Business Insight, we invest a significant amount in research and development every year and look to continually future-proof our products. We use a range of advanced analytical, statistical and mathematical techniques in researching and building models from large data sets, which helps guard against inaccuracies and errors.

We build models from data that has already been anonymised using various anonymisation techniques such as Bayesian Inference Swapping. We have also been developing methodologies and IP to improve the accuracy and robustness of our perils risk models as well as ensuring compliance with the forthcoming GDPR legislation. Our next generation of perils models and solutions will be unveiled in the Summer.

Challenges ahead
In summary, the GDPR brings with it quite a few changes and challenges to the way data is collected, processed and stored.  Insurance organisations should be taking the time now to review their data management practices and systems to ensure compliance.  New technologies emerging will only serve to increase the pace of data generation and collection.  A lot of thought will need to be given by companies to ensure they remain compliant in terms of what they currently do and new solutions they are thinking of implementing.

In terms of the application of GDPR to big data, there are going to be obstacles to overcome as the legislation will force more of a balance between the potential benefits of analytics and protecting an individual’s right to privacy.  This could have a big impact in some areas and limit some of the analysis currently undertaken.  Whether Brexit eventually results in some of the legislation being softened remains to be seen, though with GDPR taking effect in May next year you will need to start thinking about the implications sooner rather than later.

Location Matters – the next step forward for underwriting UK property insurance

The increasing challenges faced by insurers include driving business growth in a highly competitive market and ensuring customer loyalty.  Remaining competitive involves optimising underwriting performance, an in-depth understanding of exposure to risk and more accurate pricing. What if there was a tool that could help you do all this?

Location Matters© is Business Insight’s powerful new visualisation and risk mapping tool that gives insurers a unique real-time view of perils risk exposure by location. Using the latest mapping technology, it combines market-leading property risk models and demographic data into a single, easy to use, affordable decision tool.

Through interactive maps displaying property location, risk, perils, policies and claims data, Location Matters© is designed to help insurers with underwriting decisions at point of sale as well as accumulation and exposure management.

Property risk depends on a range of factors linked to location, including the local environment, the types and construction of buildings, local crime rates, the demographic make-up of the area and physical hazards such as flooding or storm. By viewing and analysing customers against hazard data by location, underwriters and pricing analysts gain a deeper understanding of risk exposure, insight into accumulations of risk across their entire book of business and, as a result, more profitable risk selection.

Location Matters© is the next step forward for underwriting UK property insurance. It brings together best-of-breed perils and geodemographic data to visualise risk at property level and allows an underwriter to log in to a web browser from any location and interrogate a postcode (or address) to gain a deeper understanding of geographic risk and the make-up of the local area.

Being able to visualise risk at such a granular level provides a greater insight and accuracy for underwriters but also helps strengthen your market position with a more in-depth view of the risk price as you write the business.

The risk mapping software has a number of other benefits. For marketing departments, these include an accurate, in-depth understanding of their target market to generate new pipelines and a clearer view of where to focus campaigns. For claims departments, it can be used to assess the validity of individual claims and to prioritise claims handling resources, which in turn helps strengthen customer loyalty and retention by focusing efforts on legitimate claims.

Location Matters© – for a more profitable risk selection and greater exposure management.

Find out more here

Long range Winter forecast 2016/17

The past few days have been distinctly chilly and have fuelled speculation of a harsh winter to come. After last year’s mild and very wet winter, we look at early indications of whether that speculation is well founded.

There is an art to reading and understanding seasonal forecasts issued by the various weather services of the world.   Very few of the available forecasts use the same metrics making a consensus very difficult.

The Met Office released their 3-month outlook at the beginning of November, highlighting the risk of a cold start to the winter, but they were quick to point out that “This does not necessarily imply that the UK will experience cold and snow – in fact, the most likely outcome is for conditions to be relatively normal on average over the next 3 months.”

We asked our data partners at Weathernet if there were any indicators to suggest we are heading for the severe winter the press is speculating about.  The Weathernet team advise that beyond two weeks ahead, all forecasts should be treated as very speculative.

However, they report that cold days – and night-time frosts – are set to persist for at least another week. According to Steve Roberts of Weathernet, this is due to a combination of factors: ENSO (El Nino Southern Oscillation) in a neutral state, the QBO (Quasi-Biennial Oscillation) in its easterly phase, very warm sea surface temperatures (SSTs) around Newfoundland, and the record lack of Arctic ice. So, the odds are already stacked significantly in favour of a December that is considerably colder (and drier) than normal.

Beyond then, from late January into February, things are less clear, or certain – but there are some grounds to believe conditions might revert to stormy and wet, leaving winter 2016-17 as a whole only a little colder and drier.


If we do see temperatures as low as those of winter 2010/11, the insurance industry should brace itself for a large number of freeze claims.

5 key benefits of data enrichment for the Insurance Industry

Data enrichment provides both insurers and brokers with an opportunity to leverage the vast amount of information they already hold and combine it with external data sources to improve business acquisition and enable them to more accurately assess and price risk at the point of quote.

In the past, insurers and brokers had little choice but to rely on information collected at the point of quotation, most often provided by the proposer.  But now with increasing levels of new business being shopped for and written online, there is access to a wealth of public and private data which includes data relating to the individual, their location, property, demographic and lifestyle information.

This data can be used to predict customer behaviour, analyse trends, uncover new patterns and better manage risk exposure. Real-time data validation at the point of quote allows additional facts relevant to the risk to be discovered. This has a number of key benefits for insurers and brokers, which include:

  1. Increased fraud detection rates

Insurers are experiencing unprecedented levels of application fraud. ABI research shows that in 2014 insurers uncovered 212,000 attempted dishonest applications for motor insurance, equivalent to just over 4,000 every week. Statistics show that drivers who lie on their initial application are 66% more likely to make a claim in the future, so the more focus insurers and brokers can put on their initial assessment of drivers, the better. Patterns, trends and anomalies can be spotted more quickly, and cost savings can be made through earlier assessment of fraud and identification of early cancellation cases.

  2. Improved competitive position

Data enrichment helps to provide insurers with a single customer view by combining public and private data with quote intelligence. Insurers are therefore able to more accurately assess their customer base and be more selective in the risks they want to underwrite, avoiding poor-performing risks and more easily identifying their best customers and those with the highest lifetime value for improved profitability.

  3. Enhanced customer loyalty

Data enrichment can provide insurers and brokers with a richer, deeper understanding of their existing customers. Adding valuable business data to individual records in your database can transform your customer data into customer intelligence. A wider knowledge of your customers’ behaviour and lifestyle means that products can be specifically tailored thereby enhancing customer loyalty and retention.

  4. Greater cross-selling opportunities

A better understanding of your customers leads to more relevant targeting and more opportunity to cross-sell complementary products. By verifying the customer is who they say they are at point of quote and assessing their creditworthiness, those customers with a higher propensity to purchase add-ons can be identified.

  5. Reduced costs in settling claims

The claims process is time-consuming and demands a lot of resources. Assessing the customer’s propensity to fulfil their credit commitments at application stage means that scrutiny of the data at claims stage is reduced, enabling claims to be dealt with more quickly, with less time and fewer resources spent in settling them, ultimately improving profitability.
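The enrichment step underlying the benefits above can be sketched as a simple lookup-and-merge at the point of quote. This is a hypothetical illustration: the field names and scores are invented, not a description of any real data product.

```python
# Hypothetical external postcode-level scores (illustrative values only).
external_scores = {
    "CV34 6AB": {"flood_risk": 0.12, "crime_index": 31, "affluence": 0.7},
}

def enrich(quote, lookup):
    """Merge external location scores into a quote record, if available."""
    extra = lookup.get(quote["postcode"], {})
    return {**quote, **extra}

quote = {"quote_id": "Q1001", "postcode": "CV34 6AB", "vehicle": "hatchback"}
enriched = enrich(quote, external_scores)
print(enriched)
```

The enriched record carries both the proposer-supplied fields and the external location scores, giving the rating engine a fuller picture of the risk than the application form alone.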

The future of data enrichment
In today’s technologically driven society, new ways of exploiting data to gain competitive advantage and new data sources will always be found. Insurers will continue to embrace new data sources and the greater visibility and insight this brings.

Business Insight has a range of products designed to support quote enrichment, risk selection and claims validation as well as the pricing and underwriting of insurance.  We have recently built our own data hub and will be launching the next generation of high resolution property level geographic risk models next year. This will allow users access to more accurate perils information at the point of quote. More details to follow on this in our next newsletter.