Product Focus – DNA Dimensions – Uncovering the DNA of every street

DNA Dimensions is the latest in our suite of Risk Insight© products. It has been designed to provide insurers with a Detailed Neighbourhood Analysis (DNA) across a range of demographic themes, delivering deep insight at a level of granularity that improves pricing models and risk selection capability.

DNA Dimensions is a set of orthogonal, or uncorrelated, risk scores explaining the variation across the vast range of demographic data sources held by Business Insight, including the latest Census information and geodemographic, environmental and spatial data. DNA Dimensions provides a unique set of scores across a range of themes for every postcode in the UK. These scores can be fed directly into insurer pricing models to explain more variation in the pattern of risk and improve the accuracy of risk pricing.

DNA Dimensions is built using a statistical technique called ‘principal component analysis’, applied to the full range of demographic data assets within Business Insight to uncover the underlying dimensions present down every street. The themes output in the solution are essential to understanding risk, such as wealth, affluence, family composition, rurality and industry. These explanatory risk themes give a detailed insight into, and increase the understanding of, each geographic location. Every neighbourhood in the UK has been analysed and given a set of scores that uniquely describes it across the range of factors in the DNA Dimensions product. This helps to understand:

  • The make-up of the local area
  • Affluence
  • Property turnover
  • Levels of urbanisation/rurality
  • Housing type
  • Life stage
  • Occupation
  • Employment
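
For readers curious about the underlying technique, the sketch below shows how principal component analysis turns a set of correlated inputs into orthogonal scores. The variables and data here are synthetic and purely illustrative; they are not Business Insight's actual inputs.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical demographic variables per postcode (synthetic, for illustration)
rng = np.random.default_rng(0)
n_postcodes = 500
income = rng.normal(30_000, 8_000, n_postcodes)
house_price = income * 5 + rng.normal(0, 20_000, n_postcodes)  # correlated with income
population_density = rng.lognormal(3, 1, n_postcodes)

X = np.column_stack([income, house_price, population_density])
X_std = StandardScaler().fit_transform(X)   # PCA needs variables on comparable scales

pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)           # one orthogonal "dimension" score set per postcode

# Principal component scores are uncorrelated by construction
corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(3, dtype=bool)]
print(np.allclose(off_diag, 0, atol=1e-6))  # True
```

Because the resulting scores are uncorrelated, each one can enter a pricing model without double-counting the information carried by the others.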

The scores can be easily included in risk pricing and rating models to increase accuracy and to fill gaps where insurers have little or no experience data. Our initial tests against experience data have shown that DNA Dimensions adds considerable value to risk pricing models, with the potential to drive better risk selection and enhance underwriting performance.

Business Insight is focused on providing the insurance industry with innovative products that add value and drive business growth. Business Insight invests a significant amount in Research and Development every year, and our expertise in statistics and big data processing, combined with our knowledge of insurance, has ensured DNA Dimensions is relevant, precise and effective as an external data feed.

If you would like to find out more, please get in touch via your Account Manager or contact our support team on 01926 421408.

 

The Great Storm of 1987 – 30 years on

After the devastating effects of Storms Harvey, Irma and Maria on the US and Caribbean islands, we revisit the great storm of October 1987. Experts are already saying that Harvey, Irma and Maria could end up being three of the costliest storms in modern times; AIR Worldwide has put potential insured losses for the three storms at an astonishing $155bn in total. We are lucky in the UK that storms of this type do not hit our shores. Indeed, major storms causing losses in excess of £1bn are rare events in the UK. On 16th October 2017, it will be thirty years since the Great Storm of 1987.

Referred to in the industry as ‘87-J’, the storm took everyone by surprise and at the time was classed as the UK’s worst storm since 1703. It remains one of the most severe and costliest windstorms the UK has ever experienced. One in six households made a claim, and losses to the industry for commercial and residential cover exceeded £1.3bn.

Striking in the middle of the night, the 1-in-200-year storm left behind a trail of damage and devastation in the South East of England and Northern France, with 18 people losing their lives and extensive damage to property and infrastructure. Many houses were without power for several days, and fallen trees blocked roads and caused travel chaos. An estimated 15 million trees were uprooted, and Sevenoaks famously became ‘One Oak’.

The worst affected areas were parts of Greater London, the Home Counties and the East of England. The South East of England experienced unusually strong wind gusts in excess of 81 mph lasting for 3 to 4 hours and gusts of up to 122 mph were recorded at Gorleston, Norfolk.

The exact path and severity of the storm were very difficult to predict using the forecasting methods and data available at the time. The Met Office’s Michael Fish faced a backlash for dismissing a viewer who had asked whether the UK could expect a hurricane, but it was then genuinely hard to forecast the precise path the storm would take. The path of the storm and the direction of the wind were very unusual, running from south to north, with the storm striking the more densely populated areas of the South of England. The South of England has higher concentrations of sums insured, and this resulted in a large loss for the insurance industry. Subsequently, changes were made to the way forecasts are produced and the National Severe Weather Warning Service was created.

A better insight into windstorm risk

Data modelling and analytical tools to help underwrite and price property risks accurately for natural perils have come a long way since 1987 when data on individual properties was scarce and geographic risk assessed by postal district. Insurers are now much better equipped to gain an in-depth understanding of risk exposure with access to risk models that are based on up-to-date, accurate information and that take account of changing risk patterns.

Business Insight’s ‘Storm Insight’ risk rating model is based on extensive research, huge volumes of explanatory input data and cutting-edge analytical techniques. Storm Insight utilises the largest source of storm claims information available in the UK, detailed property vulnerability data for every street and over 100 million historic windspeed data points recorded in urban areas across the UK. We also have access to an archive of actual storm event footprints over the last 150 years, giving insight into rare events such as the 87-J storm.

What would the industry loss be if 87-J were to happen again?

In 1987 the great storm of 16th October caused over £1 billion in insured losses to domestic property, as well as significant damage to commercial property. Things have moved on since then in terms of housing development, levels of affluence and insured values at risk. Over the last 30 years, there has been significant housing development across the South of England in areas that were in the path of the storm in 1987.

Official figures from the ONS show the number of residential properties in England increased by 28% between 1987 and 2017; in London (Outer and Inner) the increase has been 32%. Coupled with that, prices have more than doubled over the last thirty years and, perhaps more significantly, wealth across the South East of England and London has increased enormously. Many more properties across the housing stock have been extended since 1987, and the total insured value at risk is an order of magnitude higher. The level of wealth is also far higher, with one in ten households now reported as having assets worth more than £1 million.

If the UK were to encounter the same storm again in October 2017, the loss to the UK Insurance Industry would not be in the same league as recently reported losses in the USA and Caribbean though it would still break all previous UK records. In our view, it is likely that losses to the UK insurance industry for such an event would exceed £6bn.
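
As a rough illustration of how a figure of that order can be reached, the back-of-envelope calculation below scales the 1987 industry loss by the growth factors quoted above. The individual multipliers are assumptions chosen for illustration only, not the outputs of a catastrophe model.

```python
# Illustrative scaling of the 1987 loss; all factors are assumptions drawn
# from the figures quoted in the article, not an actual exposure model.
loss_1987 = 1.3e9            # 1987 insured loss, £ (industry figure)
inflation_factor = 2.2       # assumption: prices "more than doubled" since 1987
housing_growth = 1.28        # ONS: 28% more residential properties in England
insured_value_uplift = 1.6   # assumption: extensions and higher sums insured

loss_2017 = loss_1987 * inflation_factor * housing_growth * insured_value_uplift
print(f"£{loss_2017 / 1e9:.1f}bn")   # roughly £5.9bn, broadly consistent with the £6bn view
```

Changing any one multiplier moves the answer materially, which is precisely why the £6bn figure should be read as an order-of-magnitude view rather than a precise estimate.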

Product Focus – Commercial and Residential Fire Insight Update

Fire is one of the few perils that consistently meets an insurer’s estimated maximum loss expectation. Gaining a greater understanding of the geographic variation in the risk of fire is becoming more important, and many insurers are spending more time building it into rating area files for property underwriting purposes.

There are many factors that influence the risk of fire ranging from property specific factors relating to the vulnerability of different types of building through to demographic and behavioural factors describing neighbourhoods and streets that are more prone to certain fire related incidents.

Business Insight has been researching and building geographic fire risk models for the last 8 years. Having a risk model that has been well researched and that can accurately differentiate risk across the UK can add considerable value to the accuracy of your buildings and contents rating area files.

Our residential fire model is based on extensive research into residential fires and assesses the relative risk and variation of deliberate and accidental fire claims across the UK. Our commercial fire model assesses the risk of a fire claim by commercial business category, source and frequency. Both models utilise highly complex computer algorithms and vast quantities of data relating to residential and commercial properties, the local environment and the demographic make-up of each area to estimate risk more precisely.

As part of our commitment to ensuring our models are continuously enhanced and kept up-to-date, we have recently recalibrated the residential and commercial fire models with enhanced data to provide a more granular level of detail and a more accurate assessment of risk.

Both models have been validated by a number of insurers using fire claims information and have shown a high degree of discrimination between high and low-risk areas.

Key benefits include:

  • Gaining a deeper understanding of your exposure to fire claims in the UK across your existing book of business.
  • Gaining insight into postcode areas where you have no experience data.
  • Discovering where you need to modify your rates to improve your fire loss ratio.

Contact our sales team for a demonstration on 01926 421408.

AI and machine learning: things to consider

Companies are investing heavily in artificial intelligence and machine learning techniques.  Harnessing the value from data available internally and externally has become a business-critical capability for insurers. 

Using sophisticated methods and algorithms, machine learning uses automation to find patterns in data that are not always obvious to the human eye. Data can be mined from a variety of sources to help insurers build a fuller picture of their customers, and machine learning can be used in all areas of an insurer’s business, from claims processing and underwriting to fraud detection.

An advantage of machine learning is that algorithms can potentially analyse huge amounts of information quickly. Solutions can be recalibrated and redeployed rapidly by automating a process without introducing human error or bias. The desire to uncover hidden patterns and discover something the rest of the market is missing is a key driver for many companies, though it is easy to be seduced by the technology and by the fear of being left behind. There are pitfalls to avoid, and it is all too easy to concentrate on the technology and lose sight of other, perhaps more important, pieces of the jigsaw.

Neural Networks
Business Insight has been researching machine learning techniques and has developed its own AI platform that can take large volumes of records across many variables as data feeds before iteratively learning from the data, uncovering hidden patterns and forming an optimal solution. The software can take a vast number of input data points and hundreds of corresponding risk factors per case before constructing a more accurate estimate of risk. The main advantage of the neural network platform we have developed is that it can potentially offer significant improvements in predictive accuracy compared to statistical data models. There can also be significant savings in time to rebuild and redeploy by the reduction in human involvement.

Traditional statistical methods require intensive manual pre-processing of input data to identify perceived potential interactions between variables, whereas a neural network needs minimal data preparation: interactions between variables drop out automatically, which saves a considerable amount of time in model building. That said, you do need to ensure you are not blindly seduced by the technology, as there are other issues just as important when analysing large databases.
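
A toy example illustrates the point, using the open-source scikit-learn library rather than our own platform, on synthetic data where the outcome depends purely on an interaction between two inputs. A linear model given only the raw inputs cannot represent the interaction, while a small neural network picks it up from the same inputs with no manual feature engineering.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Synthetic data where the outcome is a pure interaction of two factors --
# the kind of pattern manual pre-processing would have to find by hand
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 2))
y = X[:, 0] * X[:, 1]                      # pure interaction, no main effects

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

# A linear model on the raw inputs cannot represent x1 * x2
linear = LinearRegression().fit(X_train, y_train)

# The network learns the interaction automatically from the raw inputs
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X_train, y_train)

print("linear R^2:", round(linear.score(X_test, y_test), 3))   # near zero
print("network R^2:", round(net.score(X_test, y_test), 3))     # close to one
```

The same caveats set out above still apply: the network only finds what is genuinely in the data, so data quality and relevance matter just as much as the algorithm.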

Pearls of wisdom
Here are a few observations from what we have learned over the years that may seem blindingly obvious yet often get ignored, specifically:

1) Focus first on data quality
The validity, veracity and completeness of the underlying data you are feeding into the system are paramount. Whether internal data or external data feeds, data quality is essential. The saying ‘garbage in, garbage out’ holds: hidden patterns are not ‘gems’ of knowledge but costly blind alleys if the data you are using is riddled with inaccuracies or is out of date. Quality external data is becoming more easily accessible to the insurance market, and investing in the best quality data will pay dividends over the long term.

2) Ensure the relevance of your input data for what you are trying to achieve
If you are asking the system to predict a particular target outcome you should ask: is the data you are utilising fit for purpose, is it relevant and sufficiently meaningful, and is it representative of what you are trying to achieve?

3) Ensure you have the relevant knowledge and expertise to maximise the results
Though the technology is readily available, people with a deep knowledge base, domain expertise and experience in this area are not easy to find in the insurance market. A deep understanding of the market and the data, and experience of why certain risk drivers arise, is often underestimated.

The winners in the market will be those able to address these points, focusing not on the technology in isolation but also on the data, both internal and external, and on attracting the best talent with the relevant domain knowledge and expertise to maximise value. Those that invest in the technology, the people and the appropriate data assets will be the ones driving their business forward in the years to come.

 

The floods of Summer 2007: 10 years on

Whilst the UK has been enjoying very hot temperatures recently, 10 years ago it was a different story.

The Summer of 2007 was the wettest since rainfall records began in 1766. Heavy rain triggered two extreme rainfall events: on 25th June and again on 20th July. The Met Office reported that from May through to July 2007 more than 387mm of rain fell across England and Wales, double the average for the period. Despite a relatively dry April, by mid-June the ground was saturated, and low sunshine levels meant that there was little evaporation.

On 25th June, intense rainfall led to severe flooding in parts of Yorkshire and the Humber, including Sheffield, Doncaster and Hull; areas in which insurance penetration is low compared with other parts of the UK. In Hull, over 6,000 properties were flooded and more than 10,500 homes were evacuated as flash flooding overwhelmed drainage and sewage systems. The flooding caused major disruption to homes and businesses, leaving almost half a million people without a water supply for up to three weeks and many residents unable to return to their homes for up to a year.

More heavy rain on 20th July caused flooding in many parts of England and Wales, with some areas, such as Gloucestershire, Cambridgeshire, Wiltshire, Hampshire and Oxfordshire, hit particularly badly; some properties were flooded for the second time in less than a month.

The impact on the insurance industry

The Environment Agency (EA) estimated the total cost of the 2007 floods at £4 billion. Around £3 billion of this was covered by insurance, making this one of the costliest events to date for the UK insurance industry. In terms of insurance claims, the ABI reported around 165,000 claims, 132,000 of them for damage to domestic households. Thankfully this was a rare event, believed to be somewhere between a 1-in-500-year and a 1-in-1,000-year event. This estimate, though, is highly uncertain given the limited data on which it is based, and a changing climate calls into question the assumptions underpinning the analysis.

How flood risk mapping has changed since 2007

Whilst it is not unusual for the UK to experience extreme rainfall in the summer, a much higher proportion of the flooding of Summer 2007 was due to surface water flooding rather than any other type of flood risk (e.g. river flooding). By its very nature, surface water flooding is very localised, caused by large volumes of rainwater overwhelming local drainage, making it very difficult to predict exactly where flooding will occur.

At the time, there were no surface water flood maps and insurers did not factor it into their ratings.  Today over 3 million properties are estimated to be at risk of surface water flooding in the UK.

Following the 2007 floods, the Pitt Review found that work was needed to improve the management of flooding from surface water and poor drainage.  It also identified the need for surface water flood maps for England and Wales. Subsequently, JBA Consulting developed the first nationally produced model of surface water flooding to supply to the EA.

The Flood Map for Surface Water (FMfSW) in England and Wales was developed in 2009 and included:

  • an additional rainfall probability
  • the influence of buildings
  • reduction of effective rainfall through taking account of drainage and infiltration
  • a better digital terrain model that incorporated the Environment Agency’s high-quality LIDAR data.

In 2013, an updated Flood Map for Surface Water (uFMfSW) was produced.  The new surface water flood map for England and Wales shows the worst-case flood extents, depths, velocities and hazard ratings for the 30, 100 and 1,000-year return period storm events of one, three and six-hour durations.
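
A return period is sometimes misread as a guarantee of a quiet few decades; in probability terms it is simply the inverse of the annual chance of the event. The short sketch below, an illustration rather than part of the EA mapping itself, converts a return period into the chance of experiencing such an event at least once over a given horizon.

```python
# A return period of T years corresponds to an annual exceedance probability
# of 1/T; the chance of at least one such event over n years is 1 - (1 - 1/T)**n
def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

# Chance of seeing the "100-year" storm at least once over a 25-year mortgage term
print(round(prob_at_least_one(100, 25), 3))   # → 0.222
```

So even a "1-in-100-year" event has better than a one-in-five chance of occurring during a typical mortgage term, which is why these maps matter to lenders and insurers alike.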

The EA maps were not intended to be used for insurance purposes to assess the risk to a particular property; rather, they were intended to provide an indication of whether an area may be affected by surface water flooding and to what extent.

Lessons learned for the future?

Recent flooding events have revealed the UK’s vulnerability to extreme rainfall events.  Peter Stott, Head of the Met Office’s climate monitoring and attribution team, believes there is strong evidence that extreme rainfall events are increasing and are likely to become more frequent in future years.

The general scientific consensus, however, is that the summer 2007 floods were not a “climate change event” but rather the consequence of a combination of unusual, though naturally occurring, conditions: prolonged heavy rainfall falling on saturated soil that was unable to absorb the additional water.

One thing that is clear is that this problem is not going away anytime soon. The NFRR (National Flood Resilience Review) concluded in September 2016 that it was plausible that rainfall experienced over the next ten years could be between 20% and 30% higher than normal.

Insurers are ensuring they are better equipped to deal with the impact of extreme weather events by using data models that are based on up-to-date information and that take account of changing risk patterns to better predict, assess and monitor risk. However, this is not just an insurance issue; it involves government, house builders, local authorities and insurers all working together to ensure the UK becomes more resilient to flooding. With a changing climate and potentially more frequent and more severe flood events in the future, we need to make sure that we take action considering what could happen – failure to adapt is not an option.

Current research indicates that if we are not able to control the average rise in global temperatures then we will subsequently see a significant increase in the risk of flooding. For example, failure to constrain average global temperature rises to within 4 degrees will see the overall risk of UK flooding increase by 150%. It’s a problem that won’t go away and one that needs to be addressed now, not after the next cluster of events.

 

Highlights of the MGAA Conference

Business Insight attended this year’s Managing General Agents’ Association (MGAA) conference which took place on 4 July 2017 in London.  The conference brought together over 600 MGAs, capacity providers, brokers and a selection of data and insurance software providers.

The theme of the event was ‘Evolution and Revolution’ looking at the MGA business model and focusing on how MGAs can continue to grow despite increased competition and regulation.

In his opening speech, the Chairman of the MGAA, Charles Manchester, talked about how far the MGA sector has come, the advances made and how it is now widely accepted as one of the most innovative and entrepreneurial sectors of the insurance industry.

The panel discussion centred on the ‘InsureTech Revolution’ and what it means for MGAs.

Highlights of the conference can be found here.

Business Insight returns to BIBA 2017

Business Insight will be exhibiting at BIBA 2017, the British Insurance Brokers’ Association annual conference in Manchester next week.

The conference, which features a panel discussion about planning for a post-Brexit future from the former Director General, John Longworth and Nigel Farage, is one of the Insurance industry’s most prestigious events.

The Business Insight team will be on stand B79, where we will be demonstrating our ‘Location Matters’ software solution designed to assist insurers with underwriting, exposure management and risk selection. The solution combines state-of-the-art risk mapping technology with best-of-breed perils and geodemographic data to provide insurers with interactive maps displaying property location, risk, perils, policies and claims for a deeper, more powerful insight.

 

Product Update – Data Dimensions

‘Data Dimensions’ is the latest data product launched by Business Insight.

Using principal component analysis across a vast database of demographic variables, ‘Data Dimensions’ is a suite of orthogonal or uncorrelated scores by postcode describing different demographic features such as wealth, affluence, family composition, rurality and industry.

Every neighbourhood in the UK has a different set of scores that uniquely describes each location across the range of factors in the ‘Data Dimensions’ product. The scores can be easily included in risk pricing and rating models to increase accuracy and to fill gaps where insurers have little or no experience data. Our initial tests against experience data for both motor and household have shown ‘Data Dimensions’ to add considerable value for risk selection, underwriting and pricing.

For more details or to see a demonstration, contact the sales team on 01926 421408.

Flood, building on flood plains and the profile of those at risk

It is estimated that 1 in 6 properties in Great Britain (4.7 million in total) are currently at risk from flooding, with 2.7 million at risk from river and sea flooding alone. Between 2001 and 2011, around 200,000 new homes were built on land that has a significant chance of flooding, either from a river or the sea. During the 1990s, this figure was even higher, as there was less focus on flood risk and no obligation on planners to carry out an analysis of flood risk at the time.

Despite the devastating effects of last winter’s storms and the subsequent costs to the insurance industry, building residential properties on flood plains continues, although admittedly not in the same volumes.

Recent figures obtained by the i newspaper under the Freedom of Information Act show permission has been given to build more than 1,200 new homes on flood plains despite official objections from the Environment Agency about the risk of flooding on such sites. With all the publicity and available data relating to flood risk, it does seem slightly unbelievable that construction even at these levels is allowed to proceed, or at least without an obligation on builders to ensure that properties are built to be flood resilient.

New housing built in areas thought to be protected by flood defences may also be more at risk than first thought. Flood defences are built to withstand a certain magnitude of event, e.g. a flood with an estimated return period of 1 in 200 years, yet the underlying estimates are modelled from relatively small data samples using extreme value theory, which is sensitive to its underlying assumptions. Whatever sceptics in senior roles may say, climate change undermines the accuracy of these models and means that defences may be more vulnerable than when first built. A good recent example is Carlisle, which flooded in 2005 (1,925 homes and businesses) when its flood defences were breached. The defences were improved at a cost of £38 million, yet failed again in 2015 following a more extreme event than had been considered in the planning.
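
To illustrate that sensitivity, the sketch below fits a generalised extreme value (GEV) distribution, the standard tool behind such return-period estimates, to two synthetic 40-year records drawn from the same hypothetical river-level process, and shows how the estimated 1-in-200-year level can move between them. All parameters here are invented for illustration.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical GEV parameters for annual-maximum river levels (illustrative only)
true_c, true_loc, true_scale = -0.1, 100.0, 15.0

levels = []
for seed in (10, 11):
    # A 40-year record of annual maxima -- about as long as many real gauge records
    annual_maxima = genextreme.rvs(true_c, loc=true_loc, scale=true_scale,
                                   size=40, random_state=seed)
    c, loc, scale = genextreme.fit(annual_maxima)          # maximum-likelihood fit
    levels.append(genextreme.isf(1 / 200, c, loc, scale))  # estimated 1-in-200-year level

# Two records from the same underlying process give different design levels
print([round(level, 1) for level in levels])
```

The point is not the particular numbers but that two equally plausible records yield different design levels, so a defence built to one estimate may be undersized relative to the other, even before climate change shifts the underlying distribution.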

Profiling those areas hardest hit by flooding
We used our geodemographic risk profiling tool Resonate© to analyse the demographic profile of those affected by the winter floods of 2015 in Carlisle and areas of Cumbria. Our analysis revealed an over-representation of working-class and disadvantaged rural areas among the properties flooded.

Further analysis revealed that a large number of the properties flooded were from the Resonate lifestyle group ‘Rural & Village Survivors’, and those worst affected were predominantly from ‘Blue Collar Heartlands’, characterised by blue-collar workers in pre-war terraced properties where the proportion of terraced housing is almost thirteen times the UK average. There is a high proportion of this type of neighbourhood in Carlisle.

Looking at all the areas across the UK that have a high risk of flooding does reveal that there is an over-representation of older, disadvantaged and more vulnerable neighbourhoods. In the future, we will no doubt continue to see more occurrences similar to that of Carlisle with poorer and more deprived neighbourhoods being disproportionately hit.

Conclusion
As long as there is demand for new houses, building on flood plains will continue. Demand for new housing is particularly strong in the South East, in areas where flood defences do exist, though climate change may limit the level of protection envisaged when some of these defences were built. A geodemographic analysis of the make-up of the high-risk flood areas is quite startling: older, more disadvantaged and more vulnerable members of society dominate.

This highlights the important role that insurance plays and how the availability of affordable flood insurance for everyone is essential.  The introduction of Flood Re goes some way towards offering flood-prone properties a degree of cover but does not yet guarantee affordable insurance for everyone. The Government will need to put more investment in maintaining and improving flood defences and will need to look at helping make properties in the highest risk areas more resilient to damage from flooding.

The insurance sector and GDPR implications

Technology is connecting us in ways not seen before. Over a third of the world’s population use social media platforms such as Facebook and there are currently more mobile devices than people on the planet.  The avalanche of data being created not only brings with it analytical challenges to find value but also concerns relating to privacy, who owns the data we generate and a perceived over-reliance on automatic decision making.

The EU’s General Data Protection Regulation (GDPR) due to come into effect in May 2018 is an attempt to address some of the concerns and brings considerable change for European-based organisations in terms of capturing, processing and using personal data. Some of the changes might be viewed as draconian and could have a major impact on the use of data in the insurance industry.

Personal data is defined as “any data that directly or indirectly identifies or makes identifiable a data subject, such as names, identification numbers, location data and online identifiers.”

Designed to improve protection for consumers, the new legislation focuses on how personal data is collected and processed and how long it is held for, and includes more obligations around transparency and portability. Under the new rules, breaches must be reported within 72 hours, and organisations face tougher penalties for non-compliance of up to 4% of global turnover or €20m, whichever is greater.

Consent to process the data
Insurance by its very nature involves collecting large amounts of personal data on customers. Under the new regulations, organisations will need to show how and when consent was obtained for processing the data.

Consent must be explicitly obtained rather than assumed, and it must be obtained for each specific purpose. This means that data collected in one area of the business cannot be used in another unless the customer has explicitly agreed upfront.

This could be a problem for insurance companies as often data collected at underwriting and claims stages is then used for a variety of different purposes including fraud prevention, marketing, claims management and risk profiling.  Also, some of the individual data collected via credit agencies or aggregators and then reused for another purpose such as the real-time rating and pricing of insurance could potentially fall into this category.

Time limits and erasure
To ensure that data is not held for any longer than necessary, use of personal data should be limited to the specific purpose for which the processing was intended. This change is likely to affect the insurance industry, which up to now has sought to hold on to personal data, such as historical claims experience, for as long as possible to maximise its potential use.

Customers will have the right to demand that insurers delete their personal data where it is no longer required for its original purpose, or where they have withdrawn their consent.

Right to data portability
Individuals will be able to request copies of their personal data to make transferring to another provider easier. The regulations specify that the data needs to be in a commonly-used format.  This might be problematic for insurers and intermediaries where data may be held on separate systems or in different formats.

Profiling
The GDPR provides safeguards for individuals against decisions based solely on automated processing which includes ‘profiling’. Profiling is defined as “any form of automated processing of personal data consisting of the use of personal data to evaluate personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements.”

This new right is significant to the insurance industry as the underwriting process relies heavily on the pooling of information, building generalised models of risk to estimate future claims propensity and the profiling of individuals.   There are also other areas where decisions are made based on processes that are automated including claims analysis, fraud prevention and marketing.

Exemptions
The right does not apply to profiling using personal data if any resulting decision is deemed necessary for entering into, or the performance of, a contract between the organisation and the individual. The GDPR states that the contract needs to be between the data controller and the data subject; it is not clear what happens when the processing concerns a third party’s personal data. Many insurance policies involve processing a third party’s personal data in the form of a beneficiary under an insurance policy, for example a second driver under a motor policy.

The other exemption is if the data has been anonymised – as this is no longer classed as personal data because the person cannot be identified from the data itself.

As far as profiling for underwriting is concerned, this is likely to be permissible, as it can be considered necessary for the performance of a contract. However, profiling for marketing purposes will not be exempt.

How does the Regulation affect the use of big data?
The process of combining large amounts of data from disparate sources and analysing it by automatic or semi-automatic means to spot trends and find hidden patterns and anomalies has a number of benefits for the insurance industry.  These include a greater understanding of risk across a book of business, more accurate pricing and improved competitiveness.  Data providers such as Business Insight are all undoubtedly giving GDPR some thought and building in technology to ensure their data products will be compliant, or at least they should be.

Business Insight
At Business Insight, we invest a significant amount in research and development every year and continually look to future-proof our products. We use a range of advanced analytical, statistical and mathematical techniques in researching and building models from large data sets, which helps guard against inaccuracies and errors.

We build models from data that has already been anonymised using various anonymisation techniques such as Bayesian Inference Swapping.  We have also been developing methodologies and IP to improve the accuracy and robustness of our perils risk models as well as ensuring compliance with the forthcoming GDPR legislation.  Our next generation of perils models and solution will be unveiled in the Summer.
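
To give a flavour of what data swapping involves, here is a deliberately simplified sketch in which values of a sensitive column are exchanged between randomly paired records: the column's overall distribution is preserved for modelling while the link to any individual record is broken. This generic example is not our Bayesian Inference Swapping methodology, which is considerably more sophisticated.

```python
import numpy as np
import pandas as pd

def swap_column(df: pd.DataFrame, column: str, swap_fraction: float = 0.2,
                seed: int = 0) -> pd.DataFrame:
    """Exchange `column` values between randomly paired rows (simple data swapping)."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    n_swaps = int(len(df) * swap_fraction / 2) * 2          # even number of rows
    idx = rng.choice(len(df), size=n_swaps, replace=False)  # distinct rows to pair up
    a, b = idx[:n_swaps // 2], idx[n_swaps // 2:]
    col = out.columns.get_loc(column)
    vals_a, vals_b = df.iloc[a, col].to_numpy(), df.iloc[b, col].to_numpy()
    out.iloc[a, col] = vals_b                               # swap the paired values
    out.iloc[b, col] = vals_a
    return out

df = pd.DataFrame({"postcode": [f"PC{i}" for i in range(10)],
                   "claims": range(10)})
anon = swap_column(df, "claims", swap_fraction=0.4)
print(sorted(anon["claims"]) == sorted(df["claims"]))       # distribution preserved: True
```

Because the swapped column retains its overall distribution, aggregate models built on the anonymised data remain useful while no individual row can be relied upon as a true record.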

Challenges ahead
In summary, the GDPR brings with it quite a few changes and challenges to the way data is collected, processed and stored.  Insurance organisations should be taking the time now to review their data management practices and systems to ensure compliance.  New technologies emerging will only serve to increase the pace of data generation and collection.  A lot of thought will need to be given by companies to ensure they remain compliant in terms of what they currently do and new solutions they are thinking of implementing.

In terms of the application of GDPR to big data, there are going to be obstacles to overcome, as the legislation will force more of a balance between the potential benefits of analytics and protecting an individual’s right to privacy. This could have a big impact in some areas and limit some of the analysis currently undertaken. Whether Brexit eventually results in some of the legislation being softened remains to be seen, though with GDPR taking effect in May next year, insurers need to start thinking about the implications sooner rather than later.