Product Focus – Theft model rebuild

ABI figures show that theft from households accounts for 13% of all claims received. Although the volume of theft claims has fallen over the last decade, it remains significant: property-related theft claims in the UK amounted to over £440 million in the last year. An accurate perils rating model that can differentiate risk at a highly granular level can make a considerable difference to loss ratios and profitability.

The Business Insight residential theft model ‘Theft Insight’ predicts the relative risk and variation of domestic burglary across the UK and is currently used across the industry by sixteen major property insurers.

Business Insight also has a commercial property theft risk model specifically for commercial property insurers.  Both models are based on extensive research into crime patterns using the latest available data and take account of the changing economic landscape of the UK. This covers a cross-section of inner cities, large towns and suburban neighbourhoods through to small towns and more rural areas.  Built from high resolution spatial and demographic data and calibrated using sophisticated mathematical techniques, the models produce estimates of risk on a street by street level across the UK.

At Business Insight, we know our products need to add value to insurance company pricing, and they also need to beat insurers’ own in-house actuarial models for an insurance company to licence our products as external data feeds.  Consequently, we invest significantly in R&D to ensure that our products help insurers maintain a competitive advantage.

Some vendors build a peril risk model which is a static product with little or no further refinement. Once built, the predictive accuracy of a perils risk model degrades over time so the continuity of development and focus on improvement and refinement is very important.

We are currently working on rebuilding our theft models using AI techniques, refreshed data and experimenting with a new level of geography that ensures the anonymity of people residing in those locations but that is also more powerful than current postcode versions. This will provide a deeper insight into crime and theft patterns across the UK and a higher level of predictive capability.

Contact the sales team for more information on 01926 421408.

The future of insurance – a brave new world

Technology is already shaping the future of insurance, from autonomous vehicles and advanced driver assistance systems (ADAS) to inter-connected homes, artificial intelligence (AI) and machine learning.

One of the biggest challenges the insurance industry faces is adapting to this brave new world and maximising the opportunities that the new technology creates. Established insurers face a huge threat from agile start-ups better able to harness the new technologies. Some of the new ‘tech’ may not live up to its billing, while some is certain to drive rapid change. Data and analytics, what we collect and how we extract value from it, is one area already in motion.

The big data challenge
Big data technologies and analytics are making it easier for organisations to capture large datasets but many still struggle to generate meaningful insights from the vast amount of data.  The challenge is to convert the data into meaningful information and then connect it with and across datasets in a way that enables enrichment and deep insight.

Deep risk insights
In terms of risk management, where insurers are seeing the real value is using data and analytics to gain a deeper insight which allows for better, more profitable decision making. Using artificial intelligence and machine learning, patterns and trends can be identified that would otherwise have stayed hidden.  Technologies around data have emerged to handle the exponentially growing volumes, improve velocity to support real-time analytics, and integrate a greater variety of internal and external data. Twenty years ago, many insurers couldn’t even rate a risk by postcode, particularly those distributing through brokers, due to legacy systems and IT restrictions. Now, pricing can be based on data relating to the specific individual in real time. Personalised pricing has allowed insurers to be more targeted in the selection of customers, to proactively cross and up-sell and to target opportunities in new segments or markets with confidence.

Why the hype surrounding AI?
Machine learning techniques such as neural networks have been around for a long time and were in use in the industry during the 1990s for predictive analytics, so what’s changed?  There are three main factors. Firstly, computing power has significantly improved: according to Moore’s law, it effectively doubles every couple of years, which means algorithms can now crunch much more data within timescales that were previously out of reach.  Secondly, the volume, availability and granularity of data have radically improved.  Thirdly, the efficiency and capability of the algorithms embedded deep within neural networks have also markedly improved. These three factors combined have brought these techniques into focus for applications within insurance.

Insurers have been using AI technologies to improve their efficiency and speed up internal operations in terms of automating processes for claims and underwriting.  AI and neural networks can also be employed to help gain a deeper insight and to differentiate risk in a much more granular way.

Business Insight has developed its own AI platform called ‘Perspective’. It is a neural network that can take large volumes of records across many variables as data feeds before iteratively learning from the data, uncovering hidden patterns and forming an optimal solution. The software can take a vast number of input data points and hundreds of corresponding risk factors per case before constructing a more accurate estimate of risk and offering significant improvements in predictive accuracy compared to statistical data models.
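The iterative learning described above can be illustrated with a deliberately tiny sketch. This is not the ‘Perspective’ platform itself, just the core idea of gradient-based learning: a model’s parameters are nudged repeatedly in the direction that reduces prediction error. A single linear unit is used here, where a real neural network stacks many such units with non-linearities; all data below is synthetic.

```python
import numpy as np

# Synthetic data with a known relationship: y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 200)

# Iteratively nudge a weight and bias to reduce mean squared error.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * x + b) - y           # current prediction error
    w -= lr * 2 * (err * x).mean()  # gradient step on the weight
    b -= lr * 2 * err.mean()        # gradient step on the bias
```

After a few hundred passes the learned parameters settle close to the true slope and intercept; the same loop, scaled up across many layers and millions of records, is what ‘iteratively learning from the data’ amounts to.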

Changing customer needs
Behavioural analytics and advanced data analysis can also help insurance companies gain a deeper understanding of their customer base for the development of personalised products and solutions.

Millennials, having grown up with smartphones and digital interactions, want the ability to compare products quickly and easily and to find value for money at the click of a button. They want the product best suited to them and their lifestyle, and that is what they look for from an insurer.  This is where insurers will need to harness data and technology effectively to create products for the next generation of customers. It is this need to adapt and evolve to match customer requirements and buying preferences that recently led Aviva to launch its ‘Ask it Never’ initiative.  Aimed at Millennials, the idea is to eliminate lengthy application questionnaires by pre-populating the fields using big data, streamlining the application process, saving the customer time and making the service more efficient.

Agility and change need to be embraced by traditional insurers otherwise some may end up going the same way as Kodak, a market leading company that resisted change and saw its market share fall off a cliff when digital photography came along.

Product Focus – DNA Dimensions – Uncovering the DNA of every street

DNA Dimensions is the latest in our suite of Risk Insight© products.  It has been designed to provide insurers with a Detailed Neighbourhood Analysis (DNA) across a range of demographic themes, delivering deep insight at a level of granularity that improves pricing models and risk selection capability.

DNA Dimensions is a set of orthogonal, or uncorrelated, risk scores explaining the variation across the vast range of demographic data sources held by Business Insight, including the latest Census information, geodemographic, environmental and spatial data. DNA Dimensions provides a unique set of scores across a range of themes for every postcode in the UK. Crucially, these scores can be fed directly into insurer pricing models to explain more of the variation in the pattern of risk and improve the accuracy of risk pricing.

DNA Dimensions applies a statistical technique called ‘principal component analysis’ to the full range of demographic data assets within Business Insight to uncover the underlying dimensions present down every street.  The themes output in the solution, such as wealth, affluence, family composition, rurality and industry, are essential to understanding risk; these explanatory risk themes give a detailed insight into, and increase the understanding of, each geographic location.  Every neighbourhood of the UK has been analysed and given a set of scores that uniquely describes each location across the range of factors in the DNA Dimensions product. This helps to understand:

  • The make-up of the local area
  • Affluence
  • Property turnover
  • Levels of urbanisation/rurality
  • Housing type
  • Life stage
  • Occupation
  • Employment

The scores can be easily included in risk pricing and rating models to increase accuracy and to fill gaps where insurers have little or no experience data.  Our initial tests against experience data have shown DNA Dimensions to add considerable value to risk pricing models, and indicate its potential to drive better risk selection and enhance underwriting performance.
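The orthogonal-scores idea behind the product can be sketched with a toy principal component analysis. The variables and values below are invented for illustration and bear no relation to the actual DNA Dimensions inputs; the point is simply that projecting standardised data onto the eigenvectors of its correlation matrix yields mutually uncorrelated component scores.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented neighbourhood-level variables; income and house price are correlated.
income = rng.normal(30, 5, 1000)
house_price = 5 * income + rng.normal(0, 10, 1000)
density = rng.normal(50, 10, 1000)
X = np.column_stack([income, house_price, density])

# Standardise, then project onto the eigenvectors of the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
scores = Z @ vecs                       # the orthogonal component scores
c = np.corrcoef(scores, rowvar=False)   # off-diagonal correlations are ~0
```

Because the components are uncorrelated, each score carries distinct information, which is what makes such scores convenient additional factors in a pricing model.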

Business Insight is focused on providing the insurance industry with innovative products that add value and drive business growth. Business Insight invests a significant amount in Research and Development every year and our expertise in statistics, big data processing as well as knowledge of insurance has ensured DNA Dimensions is relevant, precise and effective as an external data feed.

If you would like to find out more please get in touch via your Account Manager or contact our support team on 01926 421408.


The Great Storm of 1987 – 30 years on

After the devastating effects of Storms Harvey, Irma and Maria on the US and Caribbean islands, we revisit the great storm of October 1987.  Experts are already saying that Harvey, Irma and Maria could end up being three of the costliest storms in modern times: AIR Worldwide has put potential insured losses for the three storms at an astonishing $155bn in total. We are lucky in the UK that storms of this type do not hit our shores; indeed, major storms causing losses in excess of £1bn are rare events here. On the 16th of October 2017, it will be thirty years since the Great Storm of 1987.

Referred to in the industry as ‘87-J’, the storm took everyone by surprise and at the time, was classed as the UK’s worst storm since 1703. It still remains one of the most severe and costliest windstorms the UK has ever experienced. One in six households made a claim at the time and losses to the industry for commercial and residential cover exceeded £1.3bn.

Striking in the middle of the night, the 1-in-200-year storm left behind a trail of damage and devastation across the South East of England and Northern France, with 18 people losing their lives and extensive damage to property and infrastructure. Many houses were without power for several days and fallen trees blocked roads and caused travel chaos. An estimated 15 million trees were uprooted and Sevenoaks famously became ‘One Oak’.

The worst affected areas were parts of Greater London, the Home Counties and the East of England. The South East of England experienced unusually strong wind gusts in excess of 81 mph lasting for 3 to 4 hours and gusts of up to 122 mph were recorded at Gorleston, Norfolk.

The exact path and severity of the storm were very difficult to predict using the forecasting methods and data available at the time.  The Met Office’s Michael Fish faced a backlash for dismissing a viewer who had asked about whether the UK could expect a hurricane but at the time it was hard to forecast the precise path the storm would take. The path of the storm and the direction of the wind were very unusual; running from south to north, with the storm striking the more densely populated areas of the South of England.  The South of England has higher concentrations of sums insured and this resulted in a large loss for the Insurance Industry. Subsequently, changes were made to the way forecasts are produced and the National Severe Weather Warning Service was created.

A better insight into windstorm risk

Data modelling and analytical tools to help underwrite and price property risks accurately for natural perils have come a long way since 1987 when data on individual properties was scarce and geographic risk assessed by postal district. Insurers are now much better equipped to gain an in-depth understanding of risk exposure with access to risk models that are based on up-to-date, accurate information and that take account of changing risk patterns.

Business Insight’s ‘Storm Insight’ risk rating model is based on extensive research, huge volumes of explanatory input data and cutting-edge analytical techniques. Storm Insight utilises the largest source of storm claims information available in the UK, detailed property vulnerability data for every street and over 100 million historic windspeed data points recorded in urban areas across the UK.  We also have access to an archive of actual storm event footprints over the last 150 years to gain insight into rare events such as the 87-J Storm.

What would the industry loss be if 87-J were to happen again?

In 1987, the losses from the great storm of 16th October exceeded £1 billion in insured losses to domestic property, alongside significant damage to commercial property. Things have moved on since then in terms of housing development, levels of affluence and insured values at risk. Over the last 30 years there has been significant housing development across the South of England in areas that were in the path of the 1987 storm.

Official figures from the ONS show the number of residential properties in England increased by 28% between 1987 and 2017; in London (Outer and Inner) the increase has been 32%. Coupled with that, prices have more than doubled over the last thirty years through inflation and, perhaps more significantly, wealth across the South East of England and London has increased enormously. Many more properties across the housing stock have been extended than in 1987 and the total insured value at risk is an order of magnitude higher. The level of wealth is also far higher, with one in ten households now reported as having assets worth more than £1 million.

If the UK were to encounter the same storm again in October 2017, the loss to the UK Insurance Industry would not be in the same league as recently reported losses in the USA and Caribbean though it would still break all previous UK records. In our view, it is likely that losses to the UK insurance industry for such an event would exceed £6bn.

AI and machine learning: things to consider

Companies are investing heavily in artificial intelligence and machine learning techniques.  Harnessing the value from data available internally and externally has become a business-critical capability for insurers. 

Using sophisticated methods and algorithms, machine learning uses automation to find patterns in data, not always obvious to the human eye. Data can be mined from a variety of sources to help insurers build a fuller picture of their customers and machine learning can be used in all areas of an insurer’s business from claims processing and underwriting to fraud detection.

An advantage of machine learning is that algorithms can analyse huge amounts of information quickly, and solutions can be recalibrated and redeployed rapidly by automating a process without introducing human error or bias. The desire to uncover hidden patterns and discover something the rest of the market is missing is a key driver for many companies, though it is easy to be seduced by the technology and by the fear of being left behind. There are pitfalls to avoid, and it is all too easy to concentrate on the technology and lose sight of other, perhaps more important, pieces of the jigsaw.

Neural Networks
Business Insight has been researching machine learning techniques and has developed its own AI platform that can take large volumes of records across many variables as data feeds before iteratively learning from the data, uncovering hidden patterns and forming an optimal solution. The software can take a vast number of input data points and hundreds of corresponding risk factors per case before constructing a more accurate estimate of risk. The main advantage of the neural network platform we have developed is that it can potentially offer significant improvements in predictive accuracy compared to statistical data models. There can also be significant savings in time to rebuild and redeploy by the reduction in human involvement.

Traditional statistical methods require intensive manual pre-processing of input data to identify perceived potential interactions between variables, whereas a neural network needs minimal data preparation: interactions between variables drop out automatically, saving a considerable amount of time in model building. That said, you need to ensure you are not blindly seduced by the technology, as other issues are just as important when analysing large databases.
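The point about manual pre-processing can be made concrete with a toy example. Below, a target driven purely by an interaction between two rating factors is invisible to a linear model with main effects only, until the analyst adds the x1*x2 term by hand; a neural network would discover the same structure automatically. All data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.uniform(-1, 1, 1000)
x2 = rng.uniform(-1, 1, 1000)
y = x1 * x2 + rng.normal(0, 0.01, 1000)  # risk driven purely by an interaction

def r_squared(X, y):
    """Least-squares fit of y on the columns of X, returning R^2."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

ones = np.ones_like(x1)
r2_main = r_squared(np.column_stack([ones, x1, x2]), y)            # main effects only
r2_inter = r_squared(np.column_stack([ones, x1, x2, x1 * x2]), y)  # interaction added by hand
```

The main-effects model explains almost none of the variation, while the hand-crafted interaction term captures it almost perfectly; the cost of traditional methods is that an analyst must know (or guess) which such terms to build.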

Pearls of wisdom
Here are a few observations from what we have learned over the years that may seem blindingly obvious yet often get ignored, specifically:

1) Focus first on data quality
The validity, veracity and completeness of the underlying data you are feeding into the system is paramount. Whether internal data or external data feeds, data quality is essential. The saying ‘garbage in, garbage out’ applies: if the data you are using is riddled with inaccuracies or out of date, hidden patterns are not ‘gems’ of knowledge but costly blind alleys.  Quality external data is becoming more easily accessible to the insurance market and investing in the best quality data will pay dividends over the long term.

2) Ensure the relevance of your input data for what you are trying to achieve
If you are asking the system to predict a particular target outcome, you should ask: is the data you are utilising fit for purpose? Is it relevant and sufficiently meaningful? And is it representative of what you are trying to achieve?

3) Ensure you have the relevant knowledge and expertise to maximise the results
Though the technology is readily available, people with a deep knowledge base, domain expertise and experience in this area are hard to find in the insurance market. A deep understanding of the market and the data, and experience of why certain risk drivers behave as they do, is often underestimated.

The winners in the market will be those that focus not on the technology in isolation but also on the data, both internal and external, and that attract the best talent with the relevant domain knowledge and expertise to maximise value. Those that invest in the technology, the people and the appropriate data assets will drive their businesses forward in the years to come.


Product Update – Data Dimensions

‘Data Dimensions’ is the latest data product launched by Business Insight.

Using principal component analysis across a vast database of demographic variables, ‘Data Dimensions’ is a suite of orthogonal or uncorrelated scores by postcode describing different demographic features such as wealth, affluence, family composition, rurality and industry.

Every neighbourhood in the UK has a different set of scores that uniquely describes each location across the range of factors in the ‘Data Dimensions’ product. The scores can be easily included in risk pricing and rating models to increase accuracy and to fill gaps where insurers have little or no experience data. Our initial tests against experience data for both motor and household have shown ‘Data Dimensions’ to add considerable value for risk selection, underwriting and pricing.

For more details or to see a demonstration, contact the sales team on 01926 421408.

The insurance sector and GDPR implications

Technology is connecting us in ways not seen before. Over a third of the world’s population use social media platforms such as Facebook and there are currently more mobile devices than people on the planet.  The avalanche of data being created not only brings with it analytical challenges to find value but also concerns relating to privacy, who owns the data we generate and a perceived over-reliance on automatic decision making.

The EU’s General Data Protection Regulation (GDPR) due to come into effect in May 2018 is an attempt to address some of the concerns and brings considerable change for European-based organisations in terms of capturing, processing and using personal data. Some of the changes might be viewed as draconian and could have a major impact on the use of data in the insurance industry.

Personal data is defined as “any data that directly or indirectly identifies or makes identifiable a data subject, such as names, identification numbers, location data and online identifiers.”

Designed to improve protection for consumers, the new legislation focuses on how personal data is collected and processed and how long it is held for, and includes more obligations around transparency and portability.  Under the new rules, breaches must be reported within 72 hours and organisations face tougher penalties for non-compliance of up to 4% of global annual turnover or €20 million, whichever is greater.

Consent to process the data
Insurance by its very nature involves collecting large amounts of personal data on customers. Under the new regulations, organisations will need to show how and when consent was obtained for processing the data.

Consent must be explicitly obtained rather than assumed, and it must be obtained for each specific purpose the data is used for.  This means that data collected in one area of the business cannot be used in another area unless explicitly agreed upfront by the customer.

This could be a problem for insurance companies as often data collected at underwriting and claims stages is then used for a variety of different purposes including fraud prevention, marketing, claims management and risk profiling.  Also, some of the individual data collected via credit agencies or aggregators and then reused for another purpose such as the real-time rating and pricing of insurance could potentially fall into this category.

Time limits and erasure
To ensure that data is not held on to for any longer than necessary, use of personal data should be limited to the specific purpose for which the processing was intended.  This change is likely to impact the insurance industry which up to now has sought to hold on to personal data for as long as possible to maximise potential use.  For example, data in relation to historical claims experience.

Customers will have the right to demand that insurers delete their personal data where it is no longer required for its original purpose, or where they have withdrawn their consent.

Right to data portability
Individuals will be able to request copies of their personal data to make transferring to another provider easier. The regulations specify that the data needs to be in a commonly-used format.  This might be problematic for insurers and intermediaries where data may be held on separate systems or in different formats.

Profiling
The GDPR provides safeguards for individuals against decisions based solely on automated processing which includes ‘profiling’. Profiling is defined as “any form of automated processing of personal data consisting of the use of personal data to evaluate personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements.”

This new right is significant to the insurance industry as the underwriting process relies heavily on the pooling of information, building generalised models of risk to estimate future claims propensity and the profiling of individuals.   There are also other areas where decisions are made based on processes that are automated including claims analysis, fraud prevention and marketing.

Exemptions
The right does not apply to profiling using personal data if any resulting decision is deemed necessary for entering into, or the performance of, a contract; the GDPR states that the contract needs to be between the data controller and the data subject. It is not clear what happens when the processing concerns a third party’s personal data. Many insurance policies involve processing a third party’s personal data in the form of a beneficiary under a policy, for example a second driver under a motor policy.

The other exemption is if the data has been anonymised – as this is no longer classed as personal data because the person cannot be identified from the data itself.

As far as profiling activities for underwriting – this is likely to be permissible as it can be considered necessary for the performance of a contract.  However, profiling for marketing purposes will not be exempt.

How does the Regulation affect the use of big data?
The process of combining large amounts of data from disparate sources and analysing it by automatic or semi-automatic means to spot trends and find hidden patterns and anomalies has a number of benefits for the insurance industry.  These include a greater understanding of risk across a book of business, more accurate pricing and improved competitiveness.  Data providers such as Business Insight are all undoubtedly giving GDPR some thought and building in technology to ensure their data products will be compliant, or at least they should be.

Business Insight
At Business Insight, we invest a significant amount in research and development every year and continually look to future-proof our products.   We use a range of advanced analytical, statistical and mathematical techniques in researching and building models from large data sets, which helps guard against inaccuracies and errors.

We build models from data that has already been anonymised using various anonymisation techniques such as Bayesian Inference Swapping.  We have also been developing methodologies and IP to improve the accuracy and robustness of our perils risk models as well as ensuring compliance with the forthcoming GDPR legislation.  Our next generation of perils models and solution will be unveiled in the Summer.
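As a rough illustration of the swapping idea, the sketch below exchanges a sensitive attribute between randomly chosen pairs of records, preserving the overall distribution while breaking the link to individual rows. This is a much-simplified stand-in, not Bayesian Inference Swapping itself, which selects swap partners statistically rather than at random; all field names and values are hypothetical.

```python
import random

def swap_attribute(records, key, frac=0.2, seed=0):
    """Return copies of `records` with `key` values swapped between random pairs.

    The multiset of values is unchanged, so aggregate statistics survive,
    but roughly `frac` of rows no longer carry their original value.
    """
    rng = random.Random(seed)
    idx = list(range(len(records)))
    rng.shuffle(idx)
    out = [dict(r) for r in records]
    for k in range(int(len(records) * frac / 2)):
        i, j = idx[2 * k], idx[2 * k + 1]
        out[i][key], out[j][key] = out[j][key], out[i][key]
    return out

records = [{"id": n, "income": 20 + n} for n in range(10)]
swapped = swap_attribute(records, "income", frac=0.4, seed=1)
```

The distribution of incomes is identical before and after, yet an attacker can no longer be confident that any given row’s value belongs to that individual.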

Challenges ahead
In summary, the GDPR brings with it quite a few changes and challenges to the way data is collected, processed and stored.  Insurance organisations should be taking the time now to review their data management practices and systems to ensure compliance.  New technologies emerging will only serve to increase the pace of data generation and collection.  A lot of thought will need to be given by companies to ensure they remain compliant in terms of what they currently do and new solutions they are thinking of implementing.

In terms of the application of GDPR to big data, there are going to be obstacles to overcome as the legislation will force more of a balance between the potential benefits of analytics and protecting an individual’s right to privacy.  This could have a big impact in some areas and limit some of the analysis currently undertaken.  Whether Brexit eventually results in some of the legislation being softened remains to be seen, though with GDPR taking effect in May next year you will need to start thinking about the implications sooner rather than later.

5 key benefits of data enrichment for the Insurance Industry

Data enrichment provides both insurers and brokers with an opportunity to leverage the vast amount of information they already have and combine it with external data sources to improve business acquisition and enable them to assess and price risk more accurately at the point of quote.

In the past, insurers and brokers had little choice but to rely on information collected at the point of quotation, most often provided by the proposer.  But now with increasing levels of new business being shopped for and written online, there is access to a wealth of public and private data which includes data relating to the individual, their location, property, demographic and lifestyle information.

This data can be used to try and predict customer behaviour, analyse trends, uncover new patterns and improve risk exposure.  Real-time data validation at the point of quote allows additional facts relevant to the risk to be discovered. This has a number of key benefits for insurers and brokers which include:

  1. Increased fraud detection rates

Insurers are experiencing unprecedented levels of application fraud activity. ABI research shows that in 2014 insurers uncovered 212,000 attempted dishonest applications for motor insurance, equivalent to just over 4,000 every week.  Statistics show that drivers who lie on their initial application are 66% more likely to make a claim in the future, so the more focus insurers and brokers can put on their initial assessment of drivers, the better.  Patterns, trends and anomalies can be spotted more quickly, and cost savings can be made through earlier assessment of fraud and identification of early cancellation cases.

  2. Improved competitive position

Data enrichment helps to provide insurers with a single customer view by combining public and private data with quote intelligence.  Insurers are therefore able to assess their customer base more accurately and be more selective in terms of the risks they want to underwrite, avoiding poor-performing risks and more easily identifying their best customers and those with the highest lifetime value for improved profitability.

  3. Enhanced customer loyalty

Data enrichment can provide insurers and brokers with a richer, deeper understanding of their existing customers. Adding valuable business data to individual records in your database can transform your customer data into customer intelligence. A wider knowledge of your customers’ behaviour and lifestyle means that products can be specifically tailored thereby enhancing customer loyalty and retention.

  4. Greater cross-selling opportunities

A better understanding of your customers leads to more relevant targeting and more opportunity to cross-sell complementary products.  By verifying the customer is who they say they are at point of quote and assessing their credit worthiness, those customers with a higher propensity to purchase add-ons can be identified.

  5. Reduced costs in settling claims

The claims process is time-consuming and demands a lot of resources.  Assessing the propensity of the customer to fulfil their credit commitments at application stage means that scrutiny of the data at claims stage is reduced, enabling claims to be dealt with more quickly, requiring less time and fewer resources to settle, and ultimately improving profitability.
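At its simplest, the point-of-quote enrichment underpinning the benefits above is a keyed join between the incoming quote and external reference data. The sketch below is illustrative only: all field names, postcodes and scores are invented, and a production feed would be a service lookup rather than an in-memory dictionary.

```python
# Hypothetical postcode-level external scores (illustrative values only).
external_scores = {
    "AB1 2CD": {"theft_score": 0.72, "affluence": 0.40},
    "EF3 4GH": {"theft_score": 0.31, "affluence": 0.95},
}

def enrich_quote(quote):
    """Attach any external postcode-level scores to a raw quote record.

    Unknown postcodes pass through unchanged rather than failing the quote.
    """
    scores = external_scores.get(quote["postcode"], {})
    return {**quote, **scores}

quote = {"quote_id": 101, "postcode": "AB1 2CD", "cover": "contents"}
enriched = enrich_quote(quote)
```

The enriched record carries both the proposer-supplied fields and the externally sourced risk scores, ready for real-time rating.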

The future of data enrichment
In today’s technologically driven society, new ways of exploiting data to gain competitive advantage and new data sources will always be found. Insurers will continue to embrace new data sources and the greater visibility and insight this brings.

Business Insight has a range of products designed to support quote enrichment, risk selection and claims validation as well as the pricing and underwriting of insurance.  We have recently built our own data hub and will be launching the next generation of high resolution property level geographic risk models next year. This will allow users access to more accurate perils information at the point of quote. More details to follow on this in our next newsletter.

Using big data in the UK to classify residential neighbourhoods

Big Data analytics is having an impact in many areas of industry. In the recent race for the US Presidency, it played a key part in Trump’s success. While the media in the US were consistently predicting a Clinton victory, behind the scenes the Trump campaign is reported to have employed an army of data scientists to crunch huge amounts of social media data using Artificial Intelligence (AI) and machine learning techniques to work out who the marginal voters were. Looking at what people were doing, saying and communicating, they homed in on the issues that mattered most to particular individuals, classified them, and worked out what messages to target them with. Whether or not this was decisive is unclear, though it will certainly have had some influence on the numbers and the Trump victory.

Modern analytics allows us to combine large amounts of data from many different sources and use machine learning and AI techniques to convert it into meaningful insights on which to base informed decisions. Business Insight has built its own AI machine learning platform, ‘PERSPECTIVE’, to crunch huge amounts of data and produce estimates of risk or likely outcomes. Big data analytics has a number of benefits for insurers: it helps them understand customers better, and it improves pricing, rating and underwriting through a greater understanding of risk.  It can also tease out hidden patterns and provide insights that might otherwise stay hidden.

The amount of data may have increased enormously in recent times, but making sense of large volumes of demographic data and pigeonholing people is nothing new.  A geodemographic classification assigns geographic areas to categories based on their similarities across a vast range of different variables. It is a structured way of making sense of complex and very large socio-economic datasets.
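Grouping areas by similarity across many variables is, at heart, a clustering problem. The sketch below uses a minimal k-means over made-up area features; the area names, the three features and all the values are illustrative assumptions, not Resonate’s actual inputs or method:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: cluster feature vectors into k groups."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        assign = [
            min(range(k),
                key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centroids[c])))
            for pt in points
        ]
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [pt for pt, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return assign

# Hypothetical, normalised socio-economic features per small area:
# (median income, share of detached housing, share of households renting)
areas = {
    "area_A": (0.90, 0.80, 0.10),
    "area_B": (0.85, 0.75, 0.15),  # affluent-looking pair
    "area_C": (0.20, 0.10, 0.80),
    "area_D": (0.25, 0.15, 0.85),  # deprived-looking pair
}
labels = kmeans(list(areas.values()), k=2)
print(dict(zip(areas, labels)))
```

A production classification would use hundreds of variables and far more sophisticated calibration, but the output is the same shape: every area assigned to one of a fixed set of neighbourhood types.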

Streets and neighbourhoods can be classified into types such as ‘Affluent Achievers’ or ‘Comfortable Greys’ in wealthy areas through to types such as ‘Breadline & Benefits’ in more deprived areas. The products are based on the assumption that people living in similar housing and sharing similar characteristics across a range of factors relating to age, affluence, family composition and life stage are likely to have similar wants and needs and to exhibit similar behaviour.

Geodemographic classifications have been used in industry for a very long time. Their origins in the UK can be traced back to Charles Booth, who analysed the 1891 UK Census and produced a classification of streets throughout London with neighbourhood types such as ‘Lowest Class. Vicious, Semi Criminal’ – not a label that would be palatable nowadays, even if accurate. With the arrival of the computer and growing access to large amounts of data, commercial classifications emerged in the 1970s: PRIZM, developed by Claritas in the USA, and ACORN (A Classification of Residential Neighbourhoods), developed by CACI in the UK. These first versions were built from Census data and used by marketing departments to target product offerings more accurately. Since then the complexity, the number of different data sources used and the level of granularity have all increased dramatically, as has the range of commercial applications, from advertising and target marketing through to analysing crime patterns and health resource requirements.

There are many different ‘lifestyle’ or ‘neighbourhood’ classifications, though most are general purpose, i.e. they have been built with no specific industry or use in mind, so they can be useful across a range of industry sectors and purposes. In contrast, the ‘Resonate’ classification developed by Business Insight has been built with the insurance industry in mind, so for underwriting and pricing insurance, Resonate offers more discrimination than the general-purpose systems available in the market.

With the ever-increasing volumes of data and available computing power, risk models and lifestyle classifications are becoming more focused and more accurate.  Clever use of data and analytics can give companies a competitive edge, help ensure you are not selected against and, as we have seen, sometimes lead to surprising results!

Click here for more details on Resonate or contact us at 01926 421408.

BREXIT? A view on the result using RESONATE Lifestyle data

With the Brexit vote about to take place this week, we thought it would be interesting – and a bit of fun – to use our RESONATE geodemographic classification system to give an indication of the likely result.

Looking at every type of neighbourhood in the UK, we estimated which way each lifestyle category is likely to vote, based on media reporting and analysis. This was then scaled by the distribution of voters within each category in every street across the United Kingdom.
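The scaling step described above amounts to a share-weighted average of the category-level estimates. A toy sketch, using entirely made-up category names, Leave propensities and electorate shares (not the figures behind the actual analysis):

```python
# Hypothetical lifestyle categories, each with an assumed Leave propensity
# and an assumed share of the electorate (shares sum to 1.0).
categories = {
    "Traditional Families":     {"leave": 0.58, "share": 0.22},
    "Rural Wealthy":            {"leave": 0.47, "share": 0.18},
    "Modest-Means Families":    {"leave": 0.55, "share": 0.25},
    "Young Urban Renters":      {"leave": 0.35, "share": 0.35},
}

# National estimate: each category's propensity weighted by its voter share.
leave_share = sum(c["leave"] * c["share"] for c in categories.values())
print(f"Predicted Leave vote: {leave_share:.1%}")
```

With real category propensities and street-level voter distributions plugged in, the same weighted sum produces the national prediction quoted below.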

The analysis seems to indicate that the battlegrounds of undecided voters are concentrated in areas categorised as ‘Mature Families & Traditional Values’, ‘Wealthy Families in Village, Small Towns and Rural Locations’ and ‘Modern Families, Modest Means’.

Targeted communications aimed at these neighbourhoods might have reaped better rewards for either side’s campaign. The analysis predicted a 53% vote to Leave. Although a high turnout is expected, there will be variation in turnout across the geodemographic categories, which has not been considered, and there will also be some statistical error in the analysis, so the vote could still go either way.

For more information about RESONATE Lifestyle & demographic data, contact us on 01926 421408.