Product Focus – DNA Dimensions – Uncovering the DNA of every street

DNA Dimensions is the latest in our suite of Risk Insight© products. It has been designed to provide insurers with a Detailed Neighbourhood Analysis (DNA) across a range of demographic themes, delivering deep insight at a level of granularity that improves pricing models and risk selection capability.

DNA Dimensions is a set of orthogonal, or uncorrelated, risk scores explaining the variation across the vast range of demographic data sources held by Business Insight, including the latest Census information and geodemographic, environmental and spatial data. DNA Dimensions provides a unique set of scores across a range of themes for every postcode in the UK. These scores can be fed directly into insurer pricing models to explain more of the variation in the pattern of risk and improve the accuracy of risk pricing.

DNA Dimensions applies a statistical technique called ‘principal component analysis’ to the full range of demographic data assets within Business Insight to uncover the underlying dimensions present down every street. The themes output by the solution, such as wealth, affluence, family composition, rurality and industry, are essential to understanding risk. These explanatory themes also give a detailed insight into, and increase the understanding of, each geographic location. Every neighbourhood of the UK has been analysed and given a set of scores that uniquely describes the location across the range of factors in the DNA Dimensions product, helping to understand:

  • The make-up of the local area
  • Affluence
  • Property turnover
  • Levels of urbanisation/rurality
  • Housing type
  • Life stage
  • Occupation
  • Employment
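
As a rough illustration of how principal component analysis yields a set of uncorrelated scores of this kind, the sketch below runs PCA on synthetic data. The variable counts and the data are invented for illustration; this is not the actual DNA Dimensions build.

```python
import numpy as np

# Illustrative PCA sketch: derive uncorrelated "dimension" scores from a set
# of correlated demographic variables. All data here is synthetic.
rng = np.random.default_rng(0)

# Synthetic matrix: 500 postcodes x 6 correlated demographic variables
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 6))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 6))

# Centre the data and derive the principal components via SVD
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T          # one score per postcode per component

# The resulting score columns are orthogonal, so their correlation
# matrix is (numerically) diagonal
corr = np.corrcoef(scores, rowvar=False)
off_diag = corr - np.diag(np.diag(corr))
print(float(np.abs(off_diag).max()))
```

Because the scores are uncorrelated by construction, each one can enter a pricing model as an independent explanatory factor without introducing collinearity.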

The scores can be easily included in risk pricing and rating models to increase accuracy and to fill gaps where insurers have little or no experience data. Our initial tests against experience data have shown DNA Dimensions to add considerable value to risk pricing models and indicated its potential to drive better risk selection and enhance underwriting performance.

Business Insight is focused on providing the insurance industry with innovative products that add value and drive business growth. Business Insight invests a significant amount in Research and Development every year, and our expertise in statistics and big data processing, combined with our knowledge of insurance, has ensured DNA Dimensions is relevant, precise and effective as an external data feed.

If you would like to find out more please get in touch via your Account Manager or contact our support team on 01926 421408.

 

The Great Storm of 1987 – 30 years on

After the devastating effects of Storms Harvey, Irma and Maria on the US and Caribbean islands, we revisit the Great Storm of October 1987. Experts are already saying that Harvey, Irma and Maria could end up being three of the costliest storms in modern times, with AIR Worldwide putting potential insured losses for the three storms at an astonishing $155bn in total. We are lucky in the UK that storms of this type do not hit our shores; indeed, major storms causing losses in excess of £1bn are rare events here. On 16th October 2017, it will be thirty years since the Great Storm of 1987.

Referred to in the industry as ‘87-J’, the storm took everyone by surprise and was classed at the time as the UK’s worst storm since 1703. It remains one of the most severe and costliest windstorms the UK has ever experienced: one in six households made a claim, and losses to the industry across commercial and residential cover exceeded £1.3bn.

Striking in the middle of the night, the 1-in-200-year storm left behind a trail of damage and devastation in the South East of England and Northern France, with 18 people losing their lives and extensive damage to property and infrastructure. Many houses were without power for several days, and fallen trees blocked roads and caused travel chaos. An estimated 15 million trees were uprooted, and Sevenoaks famously became ‘One Oak’ when six of the seven oaks on the town’s Vine cricket ground were blown down.

The worst affected areas were parts of Greater London, the Home Counties and the East of England. The South East of England experienced unusually strong wind gusts in excess of 81 mph lasting for 3 to 4 hours and gusts of up to 122 mph were recorded at Gorleston, Norfolk.

The exact path and severity of the storm were very difficult to predict using the forecasting methods and data available at the time. The Met Office’s Michael Fish faced a backlash for dismissing a viewer who had asked whether the UK could expect a hurricane, but at the time it was hard to forecast the precise path the storm would take. The path of the storm and the direction of the wind were very unusual, running from south to north, with the storm striking the more densely populated areas of the South of England. These areas carry higher concentrations of sums insured, and this resulted in a large loss for the insurance industry. Subsequently, changes were made to the way forecasts are produced and the National Severe Weather Warning Service was created.

A better insight into windstorm risk

Data modelling and analytical tools to help underwrite and price property risks accurately for natural perils have come a long way since 1987 when data on individual properties was scarce and geographic risk assessed by postal district. Insurers are now much better equipped to gain an in-depth understanding of risk exposure with access to risk models that are based on up-to-date, accurate information and that take account of changing risk patterns.

Business Insight’s ‘Storm Insight’ risk rating model is based on extensive research, huge volumes of explanatory input data and cutting-edge analytical techniques. Storm Insight utilises the largest source of storm claims information available in the UK, detailed property vulnerability data for every street and over 100 million historic windspeed data points recorded in urban areas across the UK. We also have access to an archive of actual storm event footprints over the last 150 years, giving insight into rare events such as the 87-J Storm.

What would the industry loss be if 87-J were to happen again?

In 1987 the Great Storm of 16th October resulted in over £1 billion in insured losses to domestic property, as well as significant damage to commercial property. Things have moved on since then in terms of housing development, levels of affluence and insured values at risk. Over the last 30 years, there has been significant housing development across the South of England in areas that lay in the path of the 1987 storm.

Official figures from the ONS show the number of residential properties in England increased by 28% between 1987 and 2017; in London (Outer and Inner) the increase has been 32%. Coupled with that, inflation has more than doubled prices over the last thirty years and, perhaps more significantly, wealth across the South East of England and London has increased enormously. Many more properties across the housing stock have been extended than in 1987, and the total insured value at risk is an order of magnitude higher. The level of wealth is also far higher, with one in ten households now reported as having assets worth more than £1 million.

If the UK were to encounter the same storm again in October 2017, the loss to the UK Insurance Industry would not be in the same league as recently reported losses in the USA and Caribbean though it would still break all previous UK records. In our view, it is likely that losses to the UK insurance industry for such an event would exceed £6bn.
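
The back-of-envelope arithmetic behind an estimate of this kind can be sketched as follows. The individual factors below are rounded, illustrative assumptions drawn from the figures above, not outputs of an actual loss model.

```python
# Illustrative scaling of the 1987 insured loss to 2017 terms.
# Each factor is a rounded assumption for the sketch, not a model output.
loss_1987 = 1.3e9          # approx. total insured loss in 1987 (GBP)

inflation_factor = 2.6     # prices have more than doubled over 30 years
housing_growth = 1.28      # ~28% more residential properties (ONS)
exposure_uplift = 1.4      # extensions, contents and higher insured values

loss_2017 = loss_1987 * inflation_factor * housing_growth * exposure_uplift
print(f"Scaled 2017 loss: £{loss_2017 / 1e9:.1f}bn")
```

Under these assumptions the scaled loss comes out a little above £6bn, consistent with the view expressed above.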

AI and machine learning: things to consider

Companies are investing heavily in artificial intelligence and machine learning techniques.  Harnessing the value from data available internally and externally has become a business-critical capability for insurers. 

Using sophisticated methods and algorithms, machine learning automates the search for patterns in data that are not always obvious to the human eye. Data can be mined from a variety of sources to help insurers build a fuller picture of their customers, and machine learning can be used in all areas of an insurer’s business, from claims processing and underwriting to fraud detection.

An advantage of machine learning is that algorithms can analyse huge amounts of information quickly. Solutions can be recalibrated and redeployed rapidly by automating a process without introducing human error or bias. The desire to uncover hidden patterns and discover something the rest of the market is missing is a key driver for many companies, though it is easy to be seduced by the technology and driven by the fear of being left behind. There are pitfalls to avoid, and it is all too easy to concentrate on the technology and lose sight of other, perhaps more important, pieces of the jigsaw.

Neural Networks
Business Insight has been researching machine learning techniques and has developed its own AI platform that can take large volumes of records across many variables as data feeds before iteratively learning from the data, uncovering hidden patterns and forming an optimal solution. The software can take a vast number of input data points and hundreds of corresponding risk factors per case before constructing a more accurate estimate of risk. The main advantage of the neural network platform we have developed is that it can potentially offer significant improvements in predictive accuracy compared to statistical data models. There can also be significant savings in time to rebuild and redeploy by the reduction in human involvement.

Traditional statistical methods require intensive manual pre-processing of input data to identify perceived potential interactions between variables, whereas a neural network needs minimal data preparation: interactions between variables drop out automatically, which saves a considerable amount of time in model building. That said, you do need to ensure you are not blindly seduced by the technology, as other issues are just as important when analysing large databases.
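
The point about interactions can be seen in a toy example. The sketch below trains a tiny neural network, written from scratch in NumPy, on the XOR problem, whose target depends purely on the interaction between its two inputs. This is a generic illustration of the technique, not the PERSPECTIVE platform itself; a linear model given only the raw inputs cannot fit this target at all, while the network learns it with no manual feature engineering.

```python
import numpy as np

# Toy neural network learning a pure interaction effect (XOR) without any
# hand-crafted interaction terms. Purely illustrative; not PERSPECTIVE.
rng = np.random.default_rng(42)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # depends only on the interaction

# One hidden layer of 8 tanh units, logistic output, full-batch gradient descent
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
lr = 0.5
losses = []

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                      # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # predicted probabilities
    losses.append(float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()))

    grad_out = (p - y) / len(X)                   # d(loss)/d(logit)
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(axis=0)

print(f"cross-entropy loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

With enough iterations the loss falls steadily and the predictions approach the XOR pattern, which is exactly the automatic discovery of an interaction that would otherwise require manual pre-processing.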

Pearls of wisdom
Here are a few observations from what we have learned over the years that may seem blindingly obvious yet often get ignored, specifically:

1) Focus first on data quality
The validity, veracity and completeness of the underlying data you feed into the system are paramount. Whether it is internal data or external data feeds, data quality is essential. The saying ‘garbage in, garbage out’ holds: hidden patterns are not ‘gems’ of knowledge but costly blind alleys if the data you are using is riddled with inaccuracies or out of date. Quality external data is becoming more easily accessible to the insurance market, and investing in the best quality data will pay dividends over the long term.

2) Ensure the relevance of your input data for what you are trying to achieve
If you are asking the system to predict a particular target outcome you should ask: is the data you are utilising fit for purpose, is it relevant or sufficiently meaningful, and is it representative of what you are trying to achieve?

3) Ensure you have the relevant knowledge and expertise to maximise the results
Though the technology is readily available, people with a deep knowledge base, domain expertise and experience in this area are not easily found in the insurance market. A deep understanding of the market and the data, and experience of why certain risk drivers arise, is often underestimated.

The winners in the years to come will be those able to address these points, focusing not on the technology in isolation but also on the data, both internal and external, and on attracting the best talent with the relevant domain knowledge and expertise to maximise value.

 

Product Update – Data Dimensions

‘Data Dimensions’ is the latest data product recently launched by Business Insight.

Using principal component analysis across a vast database of demographic variables, ‘Data Dimensions’ is a suite of orthogonal or uncorrelated scores by postcode describing different demographic features such as wealth, affluence, family composition, rurality and industry.

Every neighbourhood in the UK has a different set of scores that uniquely describes each location across the range of factors in the ‘Data Dimensions’ product. The scores can be easily included in risk pricing and rating models to increase accuracy and to fill gaps where insurers have little or no experience data. Our initial tests against experience data for both motor and household have shown ‘Data Dimensions’ to add considerable value for risk selection, underwriting and pricing.

For more details or to see a demonstration, contact the sales team on 01926 421408

The insurance sector and GDPR implications

Technology is connecting us in ways not seen before. Over a third of the world’s population use social media platforms such as Facebook and there are currently more mobile devices than people on the planet.  The avalanche of data being created not only brings with it analytical challenges to find value but also concerns relating to privacy, who owns the data we generate and a perceived over-reliance on automatic decision making.

The EU’s General Data Protection Regulation (GDPR) due to come into effect in May 2018 is an attempt to address some of the concerns and brings considerable change for European-based organisations in terms of capturing, processing and using personal data. Some of the changes might be viewed as draconian and could have a major impact on the use of data in the insurance industry.

Personal data is defined as “any data that directly or indirectly identifies or makes identifiable a data subject, such as names, identification numbers, location data and online identifiers.”

Designed to improve protection for consumers, the new legislation focuses on how personal data is collected and processed and how long it is held for, and includes more obligations around transparency and portability. Under the new rules, breaches must be reported within 72 hours, and organisations face tougher penalties for non-compliance of up to 4% of global annual turnover or €20m, whichever is greater.

Consent to process the data
Insurance by its very nature involves collecting large amounts of personal data on customers. Under the new regulations, organisations will need to show how and when consent was obtained for processing the data.

Consent must be explicitly obtained rather than assumed, and it must be obtained for each specific purpose for which the data is used. This means that data collected in one area of the business cannot be used in another area unless explicitly agreed upfront by the customer.

This could be a problem for insurance companies, as data collected at underwriting and claims stages is often then used for a variety of different purposes, including fraud prevention, marketing, claims management and risk profiling. Some of the individual data collected via credit agencies or aggregators and then reused for another purpose, such as the real-time rating and pricing of insurance, could also potentially fall into this category.

Time limits and erasure
To ensure that data is not held on to for longer than necessary, use of personal data should be limited to the specific purpose for which the processing was intended. This change is likely to impact the insurance industry, which up to now has sought to hold on to personal data, such as historical claims experience, for as long as possible to maximise its potential use.

Customers will have the right to demand that insurers delete their personal data where it is no longer required for its original purpose, or where they have withdrawn their consent.

Right to data portability
Individuals will be able to request copies of their personal data to make transferring to another provider easier. The regulations specify that the data needs to be in a commonly-used format.  This might be problematic for insurers and intermediaries where data may be held on separate systems or in different formats.

Profiling
The GDPR provides safeguards for individuals against decisions based solely on automated processing which includes ‘profiling’. Profiling is defined as “any form of automated processing of personal data consisting of the use of personal data to evaluate personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements.”

This new right is significant to the insurance industry as the underwriting process relies heavily on the pooling of information, building generalised models of risk to estimate future claims propensity and the profiling of individuals.   There are also other areas where decisions are made based on processes that are automated including claims analysis, fraud prevention and marketing.

Exemptions
The right does not apply to profiling using personal data if any resulting decision is deemed necessary for entering into, or the performance of, a contract between the organisation and the individual. The GDPR states that the contract needs to be between the data controller and the data subject; it is not clear what happens when the processing concerns a third party’s personal data. Many insurance policies involve processing a third party’s personal data in the form of a beneficiary under the policy, for example a second driver under a motor policy.

The other exemption is if the data has been anonymised – as this is no longer classed as personal data because the person cannot be identified from the data itself.

As far as profiling for underwriting is concerned, this is likely to be permissible as it can be considered necessary for the performance of a contract. However, profiling for marketing purposes will not be exempt.

How does the Regulation affect the use of big data?
The process of combining large amounts of data from disparate sources and analysing it by automatic or semi-automatic means to spot trends and find hidden patterns and anomalies has a number of benefits for the insurance industry.  These include a greater understanding of risk across a book of business, more accurate pricing and improved competitiveness.  Data providers such as Business Insight are all undoubtedly giving GDPR some thought and building in technology to ensure their data products will be compliant, or at least they should be.

Business Insight
At Business Insight, we invest a significant amount in research and development every year and continually look to future-proof our products. We use a range of advanced analytical, statistical and mathematical techniques in researching and building models from large data sets, which helps guard against inaccuracies and errors.

We build models from data that has already been anonymised using various anonymisation techniques such as Bayesian Inference Swapping. We have also been developing methodologies and IP to improve the accuracy and robustness of our perils risk models, as well as ensuring compliance with the forthcoming GDPR legislation. Our next generation of perils models and solutions will be unveiled in the summer.

Challenges ahead
In summary, the GDPR brings with it quite a few changes and challenges to the way data is collected, processed and stored.  Insurance organisations should be taking the time now to review their data management practices and systems to ensure compliance.  New technologies emerging will only serve to increase the pace of data generation and collection.  A lot of thought will need to be given by companies to ensure they remain compliant in terms of what they currently do and new solutions they are thinking of implementing.

In terms of the application of GDPR to big data, there are going to be obstacles to overcome as the legislation will force more of a balance between the potential benefits of analytics and protecting an individual’s right to privacy.  This could have a big impact in some areas and limit some of the analysis currently undertaken.  Whether Brexit eventually results in some of the legislation being softened remains to be seen, though with GDPR taking effect in May next year you will need to start thinking about the implications sooner rather than later.

5 key benefits of data enrichment for the Insurance Industry

Data enrichment provides both insurers and brokers with an opportunity to leverage the vast amount of information they already hold and combine it with external data sources to improve business acquisition and enable them to assess and price risk more accurately at the point of quote.

In the past, insurers and brokers had little choice but to rely on information collected at the point of quotation, most often provided by the proposer.  But now with increasing levels of new business being shopped for and written online, there is access to a wealth of public and private data which includes data relating to the individual, their location, property, demographic and lifestyle information.

This data can be used to predict customer behaviour, analyse trends, uncover new patterns and better manage risk exposure. Real-time data validation at the point of quote allows additional facts relevant to the risk to be discovered. This has a number of key benefits for insurers and brokers, including:
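
In practice, point-of-quote enrichment amounts to merging the proposer's quote record with external reference data keyed on something like the postcode. The sketch below shows the idea; the postcodes, field names and values are hypothetical examples, not a real data product.

```python
# Sketch of point-of-quote enrichment: merging a proposer's quote record
# with an external postcode-level dataset. All fields are hypothetical.
external_postcode_data = {
    "CV32 5AA": {"flood_score": 2, "urbanisation": "town", "affluence_band": "B"},
    "NR31 6DX": {"flood_score": 7, "urbanisation": "coastal", "affluence_band": "D"},
}

def enrich_quote(quote: dict) -> dict:
    """Return the quote merged with any external data held for its postcode."""
    lookup = external_postcode_data.get(quote.get("postcode", ""), {})
    return {**quote, **lookup}

quote = {"proposer": "J Smith", "postcode": "NR31 6DX", "cover": "buildings"}
enriched = enrich_quote(quote)
print(enriched["flood_score"])  # 7
```

The enriched record then carries both the proposer-supplied answers and the externally validated facts into rating, fraud screening or claims checks.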

  1. Increased fraud detection rates

Insurers are experiencing unprecedented levels of application fraud activity. ABI research shows that in 2014 insurers uncovered 212,000 attempted dishonest applications for motor insurance, equivalent to just over 4,000 every week. Statistics show that drivers who lie on their initial application are 66% more likely to make a claim in the future, so the more focus insurers and brokers can put on their initial assessment of drivers, the better. Patterns, trends and anomalies can be spotted more quickly, and cost savings can be made through earlier assessment of fraud and identification of early cancellation cases.

  2. Improved competitive position

Data enrichment helps to provide insurers with a single customer view by combining public and private data with quote intelligence. Insurers are therefore able to assess their customer base more accurately and be more selective in the risks they want to underwrite, avoiding poor-performing risks and more easily identifying their best customers and those with the highest lifetime value for improved profitability.

  3. Enhanced customer loyalty

Data enrichment can provide insurers and brokers with a richer, deeper understanding of their existing customers. Adding valuable business data to individual records in your database can transform your customer data into customer intelligence. A wider knowledge of your customers’ behaviour and lifestyle means that products can be specifically tailored thereby enhancing customer loyalty and retention.

  4. Greater cross-selling opportunities

A better understanding of your customers leads to more relevant targeting and more opportunity to cross-sell complementary products. By verifying that the customer is who they say they are at the point of quote and assessing their creditworthiness, those customers with a higher propensity to purchase add-ons can be identified.

  5. Reduced costs in settling claims

The claims process is time-consuming and resource-intensive. Assessing at application stage the customer’s propensity to fulfil their credit commitments reduces the scrutiny of data needed at claims stage, enabling claims to be dealt with more quickly, with less time and fewer resources spent on settlement, ultimately improving profitability.

The future of data enrichment
In today’s technologically driven society, new ways of exploiting data to gain competitive advantage and new data sources will always be found. Insurers will continue to embrace new data sources and the greater visibility and insight this brings.

Business Insight has a range of products designed to support quote enrichment, risk selection and claims validation as well as the pricing and underwriting of insurance.  We have recently built our own data hub and will be launching the next generation of high resolution property level geographic risk models next year. This will allow users access to more accurate perils information at the point of quote. More details to follow on this in our next newsletter.

Using big data in the UK to classify residential neighbourhoods

Big Data analytics is having an impact in many areas of industry. In the recent race for the US Presidency, it is reported to have played a key part in Trump’s success. While the media in the US was consistently predicting a Clinton victory, behind the scenes the Trump campaign reportedly employed an army of data scientists to crunch huge amounts of social media data using artificial intelligence (AI) and machine learning techniques to work out who the marginal voters were. Looking at what people were doing, saying and communicating, they homed in on the key issues that mattered most to particular individuals, classified them, and worked out what messages to target them with. Whether or not this was decisive is unclear, though it will certainly have had some influence on the numbers and the Trump victory.

Modern analytics allows us to combine large amounts of data from lots of different sources and use machine learning and AI techniques to convert this vast amount of data into meaningful insights to base informed decisions upon. Business Insight has built its own AI machine learning platform called ‘PERSPECTIVE’ to crunch huge amounts of data to produce estimates of risk or likely outcomes. Big data analytics in the insurance industry has a number of benefits for insurers; one of which is helping to understand customers better, others include helping to improve pricing, rating and underwriting through a greater understanding of risk.  It can also tease out hidden patterns and provide insights that may otherwise stay hidden.

The amount of data may have increased enormously in recent times though making sense of large volumes of demographic data and pigeonholing people is not something new.  A geodemographic classification assigns geographic areas to categories based on the similarities across a vast range of different variables. It is a structured way of making sense of complex and very large socio-economic datasets.

Streets and neighbourhoods can be classified into types such as ‘Affluent Achievers’ or ‘Comfortable Greys’ in wealthy areas through to types such as ‘Breadline & Benefits’ in more deprived areas. The products are based on the assumption that people living in similar housing and sharing similar characteristics across a range of factors relating to age, affluence, family composition and life stage are likely to have similar wants, needs and exhibit similar behaviour.

Geodemographic classifications have been around and used in industry for a very long time. The origins in the UK can be traced back to Charles Booth, who analysed the 1891 UK Census and produced a classification of streets throughout London with neighbourhood types such as ‘Lowest Class. Vicious, Semi Criminal’ – not labels that would be palatable nowadays, even if accurate. With the arrival of the computer and increasing access to large amounts of data, commercial classifications emerged in the 1970s, with PRIZM developed by Claritas in the USA and ACORN (A Classification of Residential Neighbourhoods) developed by CACI in the UK. The first versions, built from Census data, were used by marketing departments to target product offerings more accurately. Since then the complexity, the number of data sources used and the level of granularity have increased dramatically, as has the range of commercial uses, from advertising and target marketing through to analysing crime patterns and health resource requirements.

There are many different ‘lifestyle’ or ‘neighbourhood’ classifications, though most are general purpose, i.e. they have been built with no specific industry or use in mind and so can be useful across a range of sectors and purposes. In contrast, the ‘Resonate’ classification developed by Business Insight has been built with the insurance industry in mind, so for underwriting and pricing insurance, Resonate will offer more discrimination than the general-purpose systems available in the market. With the ever-increasing volumes of data and available computing power, risk models and lifestyle classifications are becoming more focused and more accurate.
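
The core mechanic of any geodemographic classification, grouping areas that look alike across many variables, can be sketched with a simple clustering algorithm. The example below runs k-means on synthetic data; the variables, data and cluster count are invented for illustration and do not reflect the method behind Resonate.

```python
import numpy as np

# Toy k-means sketch of grouping areas into "neighbourhood types" by
# similarity. Synthetic data only; not the Resonate methodology.
rng = np.random.default_rng(1)

# 300 synthetic areas described by two standardised variables,
# e.g. an affluence score and an age-profile score, drawn around
# three latent neighbourhood types
true_centres = np.array([[-2.0, 0.0], [2.0, 2.0], [2.0, -2.0]])
areas = np.vstack([c + rng.normal(0, 0.4, (100, 2)) for c in true_centres])

# Standard k-means: assign each area to its nearest centre, then move each
# centre to the mean of its assigned areas, and repeat
k = 3
centres = areas[rng.choice(len(areas), size=k, replace=False)]
for _ in range(20):
    dists = np.linalg.norm(areas[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centres = np.array([areas[labels == j].mean(axis=0)
                        if np.any(labels == j) else centres[j]  # keep empty centres
                        for j in range(k)])

print(np.bincount(labels, minlength=k))  # areas assigned to each type
```

Commercial classifications do the same thing at vastly greater scale, over hundreds of variables, and then attach descriptive labels such as ‘Affluent Achievers’ to the resulting clusters.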

Clever use of data and analytics can give companies a competitive edge, help to ensure you are not selected against and, as we have seen, sometimes lead to surprising results!

Click here for more details on Resonate or contact us at 01926 421408.

BREXIT? A view on the result using RESONATE Lifestyle data

With the Brexit vote about to take place this week, we thought it would be interesting, and also a bit of fun, to use our RESONATE geodemographic classification system to give an indication of the likely result.

Looking at every type of neighbourhood in the UK, we estimated which way each lifestyle category is likely to vote based on media reporting and analysis. This was then scaled by the distribution of voters within each category in every street across the United Kingdom.

The analysis seems to indicate that the battlegrounds of undecided voters are concentrated in areas categorised as ‘Mature Families & Traditional Values’, ‘Wealthy Families in Village, Small Towns and Rural Locations’ and ‘Modern Families, Modest Means’.

Targeted communications aimed at these neighbourhoods might have reaped better rewards for either side in their respective campaigns. The analysis predicted a 53% vote to Leave. Although a high turnout is expected, there will be variation across the geodemographic categories that has not been considered, as well as some statistical error in the analysis, so it could still be a vote either way.

For more information about RESONATE Lifestyle & demographic data, contact us on 01926 421408.

Product Focus – BRICKS

Do you know how accurate the reinstatement values of the buildings insurance policies on your book are? Do you know which are under-insured?

BRICKS© is a new model that calculates the reinstatement values to rebuild typical properties by postcode unit and address across the UK.  Based on detailed research, reliable data and a sophisticated quantity surveyor pricing model, BRICKS© provides insurance professionals with accurate rebuilding estimates.

The model estimates the rebuilding costs across a vast range of different property types spanning one bedroom bungalows through to six bedroom detached properties. The demographics, property characteristics and level of affluence in each postcode are all assessed to gauge the quality as well as the size of a total rebuild.
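
A heavily simplified version of this kind of estimate can be sketched as a base rate per property type adjusted by location and build-quality factors. Every figure, category and factor below is invented for illustration; the actual BRICKS© quantity surveyor pricing model is far more detailed.

```python
# Hypothetical reinstatement-cost sketch: base rate per property type,
# adjusted by regional and quality factors. All figures are invented.
BASE_COST = {                       # illustrative rebuild cost per m^2 (GBP)
    "terraced": 1400,
    "semi-detached": 1500,
    "detached": 1650,
    "bungalow": 1550,
}
REGION_FACTOR = {"london": 1.35, "south_east": 1.20, "north": 0.95}
QUALITY_FACTOR = {"standard": 1.0, "high": 1.25}

def reinstatement_estimate(prop_type, floor_area_m2, region, quality="standard"):
    """Rough rebuild cost: base rate x area x regional factor x quality factor."""
    return (BASE_COST[prop_type] * floor_area_m2
            * REGION_FACTOR[region] * QUALITY_FACTOR[quality])

est = reinstatement_estimate("detached", 140, "south_east", "high")
print(f"£{est:,.0f}")
```

A real model replaces these flat factors with postcode-level demographics and surveyed build-cost data, which is what allows it to gauge quality as well as size.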

For an insurance intermediary, this can help in advising clients on an appropriate minimum rebuilding cost with a quick, readily available estimate of the level of cover sufficient for the risk.

For the insurer, it can help ensure they are more competitive on risks where current reinstatement values are too high, and that they are not missing out on additional premium where reinstatement values are too low. For reinsurance, it also provides a more accurate assessment of the aggregate risk across your book and a more accurate measure of exposure to feed into property catastrophe models.

It has been tested, validated and subsequently licensed by several insurers, who found a striking uplift when testing BRICKS© against their own experience data, as well as added value over similar products available in the market.

Easy to use and supplied in a number of formats to fit seamlessly into your existing systems, BRICKS© allows you to simply input the location and property details and quickly discover what the typical rebuilding cost or reinstatement value should be for that particular property.

Benefits include:

  • Gain a quick, economical and precise check for under-insurance.
  • Advise your customers on an appropriate level of cover as a basis for your quotes to ensure the best service is being delivered to your client base.
  • Find out where policyholders are under-insured to justifiably propose an increase in cover on renewal.
  • Use more accurate rebuilding cost estimates to understand the risks on your book in more depth, helping with reinsurance needs and Solvency II requirements.
  • Discover where you need to adjust your notional sums insured for bedroom rated products to make sure you are competitive in cases where your current estimates are too high.

For more information on how BRICKS can add value, please get in touch with us on 01926 421408.

How insurers can assess flood risk more accurately

The impact of Storm Desmond on the 4th and 5th December 2015, with gale-force winds and unprecedented rainfall, resulted in severe flooding in Cumbria and the North of England. UK insurers will be faced with a large bill as a result, with initial market loss estimates put at up to £500 million.

The bad news for insurers is that these extreme weather events are not going to go away. With around 5,200 residential properties flooded, the estimated bill to the insurance industry for household claims alone is thought to be around £174 million. The climate models produced to date consistently indicate that we will experience more severe weather conditions in the future. So what can insurers do to mitigate their flood risk exposure?

Property risk in the insurance industry depends on a range of factors linked to the physical location. The local environment, the types and construction of buildings, local crime rates, the demographic make-up of the population, physical hazards such as flooding, storm or extreme cold weather – all need to be considered when assessing each risk.

There have been some significant developments over the last few years in the data and tools available, and insurance companies are now able to make insightful decisions based on reliable data and risk mapping software. Things have moved on from rating at postcode level to more sophisticated methods that allow rating at individual address level, enabling insurers to understand their exposure in much greater depth.

Business Insight’s risk mapping tool, Location Matters©, combines state-of-the-art risk mapping technology with best-of-breed perils and geodemographic data to provide insurers with powerful insight and a deeper understanding of geographic risk and the make-up of the local area.

It is the only mapping software designed specifically for the insurance industry that features a complete set of best-of-breed perils risk models, including the market-leading Business Insight data models for the UK property insurance market and JBA Flood data.

Location Matters© provides today’s insurance professional with the highest-resolution data, giving greater insight into risk, helping to drive more profitable risk selection and improving exposure management through insight into accumulations of risk across the book of business. It can also be used after an event by claims departments to assess the validity of individual claims and to allocate resources more effectively.
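The accumulation analysis mentioned above amounts to grouping policies by a location-derived attribute and totalling the sums insured. A minimal sketch, assuming invented policies and flood-risk bands (this is not how Location Matters© works internally, just the underlying idea):

```python
from collections import defaultdict

# Hypothetical book of business: (postcode, flood_risk_band, sum_insured_gbp).
# All entries are invented for illustration.
policies = [
    ("CA11 7EH", "high", 250_000),
    ("CA11 7EH", "high", 310_000),
    ("CV32 5AA", "low", 180_000),
    ("LA9 4HE", "medium", 220_000),
]

# Total the sums insured within each flood-risk band to surface accumulations.
exposure_by_band = defaultdict(float)
for postcode, band, sum_insured in policies:
    exposure_by_band[band] += sum_insured

for band in ("high", "medium", "low"):
    print(f"{band:>6}: £{exposure_by_band[band]:,.0f}")
```

The same grouping applied at address-level resolution, and against perils data rather than a coarse band, is what turns a list of policies into a picture of concentrated exposure.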

We are still learning about the future implications of climate change, but one thing seems certain – weather conditions look unlikely to stabilise. The potential influence of climate change cannot be ignored, as even small increases in the frequency and severity of events can translate into significant numbers of claims.

So it is more important than ever for insurers to invest in technology and data models based on up-to-date, reliable information that takes changing risk patterns into account, to gain a deeper insight into risk.

Find out more about Location Matters© here.