Product Update – Data Dimensions

‘Data Dimensions’ is the latest data product launched by Business Insight.

Using principal component analysis across a vast database of demographic variables, ‘Data Dimensions’ is a suite of orthogonal or uncorrelated scores by postcode describing different demographic features such as wealth, affluence, family composition, rurality and industry.

Every neighbourhood in the UK has a different set of scores that uniquely describes each location across the range of factors in the ‘Data Dimensions’ product. The scores can be easily included in risk pricing and rating models to increase accuracy and to fill gaps where insurers have little or no experience data. Our initial tests against experience data for both motor and household have shown ‘Data Dimensions’ to add considerable value for risk selection, underwriting and pricing.
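The idea of turning many correlated demographic variables into a small set of uncorrelated scores can be illustrated with a minimal PCA sketch. This is a generic illustration using synthetic data, not Business Insight’s actual pipeline; the variable names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical postcode-level demographic variables (rows = postcodes).
raw = rng.normal(size=(500, 8))
raw[:, 1] += 0.8 * raw[:, 0]          # introduce correlation between variables

# Standardise, then project onto principal components via SVD.
X = (raw - raw.mean(axis=0)) / raw.std(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt.T                      # one uncorrelated score per component

# Off-diagonal covariances of the scores are (numerically) zero,
# i.e. the scores are orthogonal, as described above.
cov = np.cov(scores, rowvar=False)
off_diag = cov - np.diag(np.diag(cov))
print(np.abs(off_diag).max() < 1e-10)  # → True
```

Because the scores are uncorrelated, each one can enter a rating model as an independent factor without double-counting the underlying demographics.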

For more details or to see a demonstration, contact the sales team on 01926 421408.

Flood, building on flood plains and the profile of those at risk

It is estimated that 1 in 6 properties in Great Britain – 4.7 million in total – are currently at risk of flooding, with 2.7 million at risk from rivers and the sea alone. Between 2001 and 2011, around 200,000 new homes were built on land with a significant chance of flooding from a river or the sea. During the 1990s, this figure was even higher, as there was less focus on flood risk and no obligation on planners to carry out a flood risk analysis at the time.

Despite the devastating effects of last winter’s storms and the subsequent costs to the insurance industry, building residential properties on flood plains continues, although admittedly not in the same volumes.

Recent figures obtained by the i newspaper under the Freedom of Information Act show permission has been given to build more than 1,200 new homes on flood plains despite official objections from the Environment Agency about the risk of flooding on such sites. With all the publicity and available data relating to flood risk, it is hard to believe that construction even at these levels is allowed to proceed, at least without an obligation on builders to ensure that properties are built to be flood resilient.

New housing built in areas thought to be protected by flood defences may also be more at risk than first thought. Flood defences are built to withstand events of a certain magnitude, e.g. a flood with an estimated return period of 1 in 200 years. Yet these estimates rely on extreme value theory modelled from relatively small data samples, and are therefore sensitive to the underlying assumptions. Whatever the views of the remaining climate-change sceptics in senior roles, a changing climate undermines the accuracy of these models and means that defences may be more vulnerable than when first built. A good recent example is Carlisle, which flooded in 2005 (1,925 homes and businesses affected) when its flood defences were breached. The defences were improved at a cost of £38 million, yet they failed again in 2015 following a more extreme event than had been considered in the planning.
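The sensitivity described above can be seen in a minimal sketch of an extreme-value fit. Here we fit a Gumbel distribution (a simple member of the extreme-value family) by the method of moments to a small synthetic record of annual maxima and read off the 1-in-200-year level; the data and constants are illustrative only, not any real river gauge.

```python
import math
import random

# Synthetic annual maximum river levels (metres) -- a small sample,
# as is typical of real flood records.
random.seed(1)
annual_max = [6.0 + random.gauss(0, 0.6) for _ in range(40)]

def gumbel_return_level(sample, period):
    """Method-of-moments Gumbel fit; return the 1-in-`period`-year level."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    scale = math.sqrt(6 * var) / math.pi
    loc = mean - 0.5772 * scale          # Euler-Mascheroni constant
    p = 1 - 1 / period                   # non-exceedance probability
    return loc - scale * math.log(-math.log(p))

level_200 = gumbel_return_level(annual_max, 200)
# Refitting on only half the record shifts the tail estimate --
# small samples make 1-in-200-year levels sensitive to the data used.
level_200_half = gumbel_return_level(annual_max[:20], 200)
```

A defence designed to `level_200` offers less protection than planned if the true distribution has shifted since the record was collected, which is the concern raised above.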

Profiling those areas hardest hit by flooding
We used our geodemographic risk profiling tool Resonate© to analyse the demographic profile of those affected by the winter floods of 2015 in Carlisle and parts of Cumbria. Our analysis revealed an over-representation of flooded properties in working-class and disadvantaged rural areas.

Further analysis of these areas revealed that a large number of the flooded properties were from the Resonate lifestyle group ‘Rural & Village Survivors’, and those worst affected were predominantly from ‘Blue Collar Heartlands’, which is characterised by blue-collar workers in pre-war terraced properties, where the proportion of terraced housing is almost thirteen times the UK average. There is a high proportion of this type of neighbourhood in Carlisle.

Looking at all the areas across the UK with a high risk of flooding reveals an over-representation of older, disadvantaged and more vulnerable neighbourhoods. In the future, we will no doubt continue to see occurrences similar to Carlisle, with poorer and more deprived neighbourhoods disproportionately hit.

Conclusion
As long as there is still demand for new houses, building on flood plains will continue. Demand for new housing is particularly strong in the South East, including areas where flood defences do exist, though climate change may limit the level of protection envisaged when some of these defences were built. A geodemographic analysis of the make-up of the high-risk flood areas is quite startling – older, more disadvantaged and more vulnerable members of society dominate.

This highlights the important role that insurance plays and how the availability of affordable flood insurance for everyone is essential.  The introduction of Flood Re goes some way towards offering flood-prone properties a degree of cover but does not yet guarantee affordable insurance for everyone. The Government will need to put more investment in maintaining and improving flood defences and will need to look at helping make properties in the highest risk areas more resilient to damage from flooding.

The insurance sector and GDPR implications

Technology is connecting us in ways not seen before. Over a third of the world’s population use social media platforms such as Facebook, and there are currently more mobile devices than people on the planet. The avalanche of data being created brings not only analytical challenges in finding value but also concerns relating to privacy, who owns the data we generate, and a perceived over-reliance on automated decision-making.

The EU’s General Data Protection Regulation (GDPR) due to come into effect in May 2018 is an attempt to address some of the concerns and brings considerable change for European-based organisations in terms of capturing, processing and using personal data. Some of the changes might be viewed as draconian and could have a major impact on the use of data in the insurance industry.

Personal data is defined as “any data that directly or indirectly identifies or makes identifiable a data subject, such as names, identification numbers, location data and online identifiers.”

Designed to improve protection for consumers, the new legislation focuses on how personal data is collected and processed and how long it is held for, and includes more obligations around transparency and portability. Under the new rules, breaches must be reported within 72 hours, and organisations face tougher penalties for non-compliance of up to 4% of global annual turnover or €20m, whichever is greater.

Consent to process the data
Insurance by its very nature involves collecting large amounts of personal data on customers. Under the new regulations, organisations will need to show how and when consent was obtained for processing the data.

Consent must be explicitly obtained rather than assumed, and it must be obtained for each specific purpose. This means that data collected in one area of the business cannot be used in another unless the customer has explicitly agreed upfront.

This could be a problem for insurance companies as often data collected at underwriting and claims stages is then used for a variety of different purposes including fraud prevention, marketing, claims management and risk profiling.  Also, some of the individual data collected via credit agencies or aggregators and then reused for another purpose such as the real-time rating and pricing of insurance could potentially fall into this category.

Time limits and erasure
To ensure that data is not held for longer than necessary, use of personal data should be limited to the specific purpose for which the processing was intended. This change is likely to impact the insurance industry, which up to now has sought to hold on to personal data for as long as possible to maximise its potential use – historical claims experience data, for example.

Customers will have the right to demand that insurers delete their personal data where it is no longer required for its original purpose, or where they have withdrawn their consent.

Right to data portability
Individuals will be able to request copies of their personal data to make transferring to another provider easier. The regulations specify that the data needs to be in a commonly-used format.  This might be problematic for insurers and intermediaries where data may be held on separate systems or in different formats.

Profiling
The GDPR provides safeguards for individuals against decisions based solely on automated processing which includes ‘profiling’. Profiling is defined as “any form of automated processing of personal data consisting of the use of personal data to evaluate personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements.”

This new right is significant to the insurance industry as the underwriting process relies heavily on the pooling of information, building generalised models of risk to estimate future claims propensity and the profiling of individuals.   There are also other areas where decisions are made based on processes that are automated including claims analysis, fraud prevention and marketing.

Exemptions
The right does not apply to profiling using personal data if any resulting decision is necessary for entering into, or the performance of, a contract between the data controller and the data subject. It is not clear what happens when the processing concerns a third party’s personal data. Many insurance policies involve processing a third party’s personal data in the form of a beneficiary under the policy – for example, a second driver under a motor policy.

The other exemption is if the data has been anonymised – as this is no longer classed as personal data because the person cannot be identified from the data itself.

As far as profiling for underwriting is concerned, this is likely to be permissible as it can be considered necessary for the performance of a contract. However, profiling for marketing purposes will not be exempt.

How does the Regulation affect the use of big data?
The process of combining large amounts of data from disparate sources and analysing it by automatic or semi-automatic means to spot trends and find hidden patterns and anomalies has a number of benefits for the insurance industry. These include a greater understanding of risk across a book of business, more accurate pricing and improved competitiveness. Data providers such as Business Insight should all be giving GDPR careful thought and building technology into their data products to ensure compliance.

Business Insight
At Business Insight, we invest a significant amount in research and development every year and continually look to future-proof our products. We use a range of advanced, postgraduate-level analytical, statistical and mathematical techniques in researching and building models from large data sets, which helps guard against inaccuracies and errors.

We build models from data that has already been anonymised using techniques such as Bayesian Inference Swapping. We have also been developing methodologies and IP to improve the accuracy and robustness of our perils risk models, as well as ensuring compliance with the forthcoming GDPR legislation. Our next generation of perils models and solutions will be unveiled in the summer.
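To give a flavour of the swapping family of anonymisation techniques, here is a deliberately simple rank-swapping sketch. It is an illustration of the general idea only – not Business Insight’s Bayesian Inference Swapping method – and the income figures are made up.

```python
import random

def rank_swap(values, window=3, seed=0):
    """Illustrative rank swapping: each value is exchanged with another
    value of nearby rank, preserving the overall distribution while
    breaking the link between a record and its exact value."""
    rng = random.Random(seed)
    order = sorted(range(len(values)), key=values.__getitem__)
    swapped = list(values)
    for pos in range(0, len(order) - 1, 2):
        j = min(pos + rng.randint(1, window), len(order) - 1)
        a, b = order[pos], order[j]
        swapped[a], swapped[b] = swapped[b], swapped[a]
    return swapped

incomes = [21_000, 24_500, 30_000, 32_000, 41_000, 58_000, 90_000]
anon = rank_swap(incomes)
# The multiset of values survives intact; individual links do not.
print(sorted(anon) == sorted(incomes))  # → True
```

Because the distribution is preserved, aggregate models built from the swapped data remain useful, while no individual record can be tied back to its true value.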

Challenges ahead
In summary, the GDPR brings with it quite a few changes and challenges to the way data is collected, processed and stored. Insurance organisations should be taking the time now to review their data management practices and systems to ensure compliance. Emerging technologies will only increase the pace of data generation and collection, and companies will need to give a lot of thought to remaining compliant, both in what they currently do and in any new solutions they are considering.

In terms of the application of GDPR to big data, there are going to be obstacles to overcome, as the legislation will force more of a balance between the potential benefits of analytics and protecting an individual’s right to privacy. This could have a big impact in some areas and limit some of the analysis currently undertaken. Whether Brexit eventually results in some of the legislation being softened remains to be seen, though with GDPR taking effect in May next year, organisations need to start thinking about the implications sooner rather than later.

Location Matters – the next step forward for underwriting UK property insurance

The increasing challenges faced by insurers include driving business growth in a highly competitive market and ensuring customer loyalty.  Remaining competitive involves optimising underwriting performance, an in-depth understanding of exposure to risk and more accurate pricing. What if there was a tool that could help you do all this?

Location Matters© is Business Insight’s powerful new visualisation and risk mapping tool, giving insurers a unique real-time view of perils risk exposure by location. Using the latest mapping technology, it combines market-leading property risk models and demographic data into a single, easy-to-use, affordable decision system.

Through interactive maps displaying property location, risk, perils, policies and claims data, Location Matters© is designed to help insurers with underwriting decisions at point-of-sale as well as accumulation and exposure management.

Property risk depends on a range of factors linked to location, including the local environment, the types and construction of buildings, local crime rates, the demographic make-up of the area and physical hazards such as flooding or storm. By viewing and analysing customers against hazard data by location, underwriters and pricing analysts gain a deeper understanding of risk exposure, insight into accumulations of risk across their entire book of business and, as a result, more profitable risk selection.

Location Matters© is the next step forward for underwriting UK property insurance. It brings together best-of-breed perils and geodemographic data to visualise risk at property level, and allows an underwriter to log in via a web browser from any location and interrogate a postcode (or address) to gain a deeper understanding of geographic risk and the make-up of the local area.

Being able to visualise risk at such a granular level provides greater insight and accuracy for underwriters, and also helps strengthen your market position through a more in-depth view of the risk price as you write the business.

The risk mapping software also has a number of other benefits. For marketing departments, these include an accurate, in-depth understanding of their target market to generate new pipelines, and a clearer view of where to focus marketing campaigns. For claims departments, it can be used to assess the validity of individual claims and to prioritise claims handling resources, which in turn helps strengthen customer loyalty by focusing efforts on legitimate claims and improving retention.

Location Matters© – for a more profitable risk selection and greater exposure management.

Find out more here

Long range Winter forecast 2016/17

The past few days have been distinctly chilly and have fuelled speculation of a harsh winter to come. After last year’s mild and very wet winter, we look at early indications of whether that speculation is justified.

There is an art to reading and understanding the seasonal forecasts issued by the various weather services of the world. Very few of the available forecasts use the same metrics, making consensus very difficult.

The Met Office released their 3-month outlook at the beginning of November, and in it they highlighted the risk of a cold start to the winter, but they were quick to point out that “This does not necessarily imply that the UK will experience cold and snow – in fact, the most likely outcome is for conditions to be relatively normal on average over the next 3 months.”

We asked our data partners at Weathernet if there were any indicators to suggest we are heading for the severe winter the press is speculating about.  The Weathernet team advise that beyond two weeks ahead, all forecasts should be treated as very speculative.

However, they report that cold days – and night-time frosts – are certainly set to persist for at least another week. According to Steve Roberts of Weathernet, this is due to a combination of factors: ENSO (the El Niño Southern Oscillation) in a neutral state, the QBO (Quasi-Biennial Oscillation) in its easterly phase, very warm sea surface temperatures (SSTs) around Newfoundland, and the record lack of Arctic ice. So the odds are already stacked significantly in favour of a December that is considerably colder (and drier) than normal.

Beyond then, from late January into February, things are less clear, or certain – but there are some grounds to believe conditions might revert to stormy and wet, leaving winter 2016-17 as a whole only a little colder and drier.

If we do see temperatures as low as those of winter 2010/11, the insurance industry should brace itself for a large number of freeze claims.

5 key benefits of data enrichment for the Insurance Industry

Data enrichment provides both insurers and brokers with an opportunity to leverage the vast amount of information they already have and combine it with external data sources to improve business acquisition and enable them to more accurately assess and price risk at the point of quote.

In the past, insurers and brokers had little choice but to rely on information collected at the point of quotation, most often provided by the proposer.  But now with increasing levels of new business being shopped for and written online, there is access to a wealth of public and private data which includes data relating to the individual, their location, property, demographic and lifestyle information.

This data can be used to predict customer behaviour, analyse trends, uncover new patterns and better manage risk exposure. Real-time data validation at the point of quote allows additional facts relevant to the risk to be discovered. This has a number of key benefits for insurers and brokers, including:

  1. Increased fraud detection rates

Insurers are experiencing unprecedented levels of application fraud activity. ABI research shows that in 2014 insurers uncovered 212,000 attempted dishonest applications for motor insurance, equivalent to just over 4,000 every week. Statistics show that drivers who lie on their initial application are 66% more likely to make a claim in the future, so the more focus insurers and brokers can put on their initial assessment of drivers, the better. Patterns, trends and anomalies can be spotted more quickly, and cost savings can be made through earlier assessment of fraud and identification of early cancellation cases.

  2. Improved competitive position

Data enrichment helps to provide insurers with a single customer view by combining public and private data with quote intelligence. Insurers are therefore able to more accurately assess their customer base and be more selective in the risks they underwrite, avoiding poorly performing risks and more easily identifying their best customers and those with the highest lifetime value, for improved profitability.

  3. Enhanced customer loyalty

Data enrichment can provide insurers and brokers with a richer, deeper understanding of their existing customers. Adding valuable business data to individual records in your database can transform your customer data into customer intelligence. A wider knowledge of your customers’ behaviour and lifestyle means that products can be specifically tailored thereby enhancing customer loyalty and retention.

  4. Greater cross-selling opportunities

A better understanding of your customers leads to more relevant targeting and more opportunity to cross-sell complementary products.  By verifying the customer is who they say they are at point of quote and assessing their credit worthiness, those customers with a higher propensity to purchase add-ons can be identified.

  5. Reduced costs in settling claims

The claims process is time-consuming and resource-intensive. Assessing a customer’s propensity to fulfil their credit commitments at the application stage means that scrutiny of the data at the claims stage is reduced, enabling claims to be dealt with more quickly, with less time and fewer resources spent on settlement, ultimately improving profitability.
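The point-of-quote enrichment described above amounts to merging the proposer’s details with external location-level data. A minimal sketch, with hypothetical field names and values (the lifestyle groups borrowed from the Resonate examples elsewhere in this newsletter):

```python
# Hypothetical external postcode-level data (e.g. from a provider feed).
postcode_data = {
    "CV32 5AA": {"flood_score": 0.82, "lifestyle_group": "Rural & Village Survivors"},
    "SW1A 1AA": {"flood_score": 0.10, "lifestyle_group": "Affluent Achievers"},
}

def enrich_quote(quote, external=postcode_data):
    """Merge the details captured at point of quote with external
    location data keyed on postcode; unknown postcodes pass through."""
    enriched = dict(quote)
    enriched.update(external.get(quote["postcode"], {}))
    return enriched

quote = {"name": "A. Smith", "postcode": "CV32 5AA", "ncd_years": 5}
print(enrich_quote(quote)["flood_score"])  # → 0.82
```

In production the external lookup would be a real-time service call rather than an in-memory dictionary, but the shape of the operation – quote fields plus location-derived fields into one record for rating – is the same.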

The future of data enrichment
In today’s technologically driven society, new ways of exploiting data to gain competitive advantage and new data sources will always be found. Insurers will continue to embrace new data sources and the greater visibility and insight this brings.

Business Insight has a range of products designed to support quote enrichment, risk selection and claims validation as well as the pricing and underwriting of insurance.  We have recently built our own data hub and will be launching the next generation of high resolution property level geographic risk models next year. This will allow users access to more accurate perils information at the point of quote. More details to follow on this in our next newsletter.

Using big data in the UK to classify residential neighbourhoods

Big Data analytics is having an impact across many industries. In the recent race for the US Presidency, it is reported to have played a key part in Trump’s success. While the US media consistently predicted a Clinton victory, behind the scenes the Trump campaign reportedly employed an army of data scientists to crunch huge amounts of social media data using artificial intelligence (AI) and machine learning techniques to work out who the marginal voters were. Looking at what people were doing, saying and communicating, they homed in on the key issues that mattered most to particular individuals, classified them, and worked out which messages to target them with. Whether or not this was decisive is unclear, though it will certainly have had some influence on the numbers and the Trump victory.

Modern analytics allows us to combine large amounts of data from many different sources and use machine learning and AI techniques to convert it into meaningful insights on which to base informed decisions. Business Insight has built its own AI machine learning platform, ‘PERSPECTIVE’, to crunch huge amounts of data and produce estimates of risk or likely outcomes. Big data analytics has a number of benefits for insurers: it helps them understand customers better and improves pricing, rating and underwriting through a greater understanding of risk. It can also tease out hidden patterns and provide insights that might otherwise stay hidden.

The amount of data may have increased enormously in recent times, but making sense of large volumes of demographic data and pigeonholing people is nothing new. A geodemographic classification assigns geographic areas to categories based on their similarities across a vast range of different variables. It is a structured way of making sense of complex and very large socio-economic datasets.

Streets and neighbourhoods can be classified into types such as ‘Affluent Achievers’ or ‘Comfortable Greys’ in wealthy areas through to types such as ‘Breadline & Benefits’ in more deprived areas. The products are based on the assumption that people living in similar housing and sharing similar characteristics across a range of factors relating to age, affluence, family composition and life stage are likely to have similar wants, needs and exhibit similar behaviour.
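Grouping areas with similar characteristics into neighbourhood types is, at heart, a clustering problem. The sketch below runs a minimal k-means over synthetic area-level variables; it is a toy illustration of the idea, not the Resonate methodology, and the variables, values and fixed two-cluster initialisation are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical area-level variables: [median income (£k), % terraced, mean age].
affluent = rng.normal([55.0, 10.0, 45.0], 2.0, size=(30, 3))
deprived = rng.normal([18.0, 65.0, 38.0], 2.0, size=(30, 3))
areas = np.vstack([affluent, deprived])

def kmeans(X, iters=10):
    """Minimal two-cluster k-means: assign each area to its nearest
    centroid, then move each centroid to the mean of its areas.
    Initialisation is fixed (first and last rows) to keep the sketch
    deterministic."""
    centroids = X[[0, -1]].astype(float)
    for _ in range(iters):
        labels = ((X[:, None, :] - centroids) ** 2).sum(axis=2).argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return labels

labels = kmeans(areas)
# The two synthetic neighbourhood types separate cleanly into two clusters,
# which could then be named (e.g. 'Affluent Achievers' vs 'Breadline & Benefits').
```

Commercial classifications use far more variables, far more clusters and more sophisticated algorithms, but the underlying principle – similar areas grouped, then labelled – is the one shown here.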

Geodemographic classifications have been used in industry for a very long time. Their origins in the UK can be traced back to Charles Booth, who analysed the 1891 UK Census and produced a classification of streets throughout London with neighbourhood types such as ‘Lowest Class. Vicious, Semi Criminal’ – not labels that would be palatable nowadays, even if accurate. With the arrival of the computer and increasing access to large amounts of data, commercial classifications emerged in the 1970s, with PRIZM developed by Claritas in the USA and ACORN (A Classification of Residential Neighbourhoods) developed by CACI in the UK. These first versions were built from Census data and used by marketing departments to target product offerings more accurately. Since then, the complexity, the number of data sources used and the level of granularity have all increased dramatically, as has the range of commercial uses, from advertising and target marketing through to analysing crime patterns and health resource requirements.

There are many different ‘lifestyle’ or ‘neighbourhood’ classifications, though most are general purpose, i.e. built with no specific industry or use in mind, so they can be useful across a range of sectors and purposes. In contrast, the ‘Resonate’ classification developed by Business Insight has been built with the insurance industry in mind, so for underwriting and pricing insurance, Resonate offers more discrimination than the general-purpose systems available in the market.

With ever-increasing volumes of data and available computing power, risk models and lifestyle classifications are becoming more focused and more accurate. Clever use of data and analytics can give companies a competitive edge, help ensure you are not selected against and, as we have seen, sometimes lead to surprising results!

Click here for more details on Resonate or contact us at 01926 421408.

The next big climate risk?

1816 – the year with no summer, dramatic climate change and a worldwide recession. What caused it, and could it happen again?

Exactly 200 years ago there was a dramatic change in the earth’s climate. It snowed heavily in July, the River Thames was frozen over, crops failed and there was a worldwide recession. Similar events are believed to have also brought about a dramatic change in climate during the Middle Ages often referred to as the ‘Little Ice Age’. What caused this and more importantly could it happen again?

Scientists believe that the ‘Little Ice Age’ was caused by the cooling effect of large volcanic eruptions, or ‘super eruptions’. The last recorded supervolcanic eruption was 201 years ago, in 1815, at Mount Tambora on the island of Sumbawa, Indonesia. It had a massive impact on the world economy through the loss of human life and damage to property, machinery and agriculture, and it severely affected the world’s climate for many years afterwards.

Tens of thousands of people were killed by the eruption, and in subsequent months thousands more died in the surrounding areas from starvation and disease following the resulting crop failures. Twenty-four hours after the eruption, the ash cloud that had formed is reported to have covered an area approximately the size of Australia. This ash cloud took years to clear, changing the climate dramatically and causing a ‘volcanic winter’ that blocked out the sun for six to eight years.

With the resulting drop in temperatures, 1816 became known as the ‘Year Without a Summer’, with snow drifts on hills until late July; it is reported that the Thames was completely frozen over in September. A worldwide recession followed. A similar event happening today would be catastrophic.

Yellowstone National Park in Wyoming, USA, is home to one of a number of active supervolcanoes, although its last eruption is believed to have been over 70,000 years ago. The United States Geological Survey (USGS) conducted a study on Yellowstone using a program called ‘Ash3d’ to model the effects of a super eruption, focusing on how much ash would fall, how far it would travel and the major effects it would have on infrastructure.

The study found that cities up to 300 miles from Yellowstone would be covered by up to three feet of ash. With the sun unable to penetrate the thick blanket of ash and particles in the atmosphere, the average global temperature would drop by an estimated 10°C for about a decade, which would have a dramatic impact on Earth. Scientists think that a succession of large volcanic eruptions had a similar impact on the climate in the Middle Ages, when very severe winters were more frequent.

NASA have also been using state-of-the-art climate models to simulate the response to a major volcanic eruption. They found that some types of evergreen and deciduous trees virtually disappeared for a number of years due to the lack of sunlight. However, despite the scaremongering in the press, the research showed the Earth’s climate to be more resilient to a supervolcanic eruption than once thought, returning to near-normal conditions within a decade in most simulations.

Scientists are continuing to monitor the pressure of underground magma.  From these observations, they have concluded that a large scale eruption is not imminent.  Using various factors and calculations, they suggest a confidence of at least 99.9% that 21st century society will not experience a Yellowstone super eruption.

Insurance and fire risk – 350 years on from the Great Fire of London

September 2016 marks the 350th anniversary of the Great Fire of London. The fire, which started in the early hours of Sunday 2nd September 1666 in Pudding Lane and burned for several days, devastated London.

Over 13,000 buildings were destroyed in the fire, including many homes, commercial buildings and other well-known landmarks such as St. Paul’s Cathedral, the Royal Exchange and Newgate Prison.  Miraculously, there was little loss of human life.

As the long and arduous task of rebuilding London commenced, a number of changes were made to the law to try to ensure that London would never again face such devastation from fire, and Parliament set up the Fire Court.

The Court was established to settle differences arising between landlords and tenants in relation to burnt buildings and decide who should pay.  A year later, physician Nicholas Barbon set up the first insurance company, the Fire Office, whose sole purpose was to insure houses against loss due to fire.

The ABI have calculated that if that particular area of London were to be hit by a similar fire today, repairing the damage caused would cost somewhere in the region of £37 billion.

The insurance industry has come a long way since 1667 but is still dependent on a proper understanding of risks. With ABI figures showing that the average claim for domestic fire damage is around £11,000 and the average claim for commercial fire around £25,000, fire is an important peril for insurance companies to consider.

To help insurance companies better understand their exposure to fire claims and likely accumulations of risk in urban locations, Business Insight has a range of data enrichment models and a mapping and accumulation management application called ‘Location Matters’. The Fire Insight data enrichment models help to assess the relative risk and variation of deliberate and accidental fire claims across the UK; both for commercial property insurance and for home insurance. The models utilise highly complex computer algorithms and vast quantities of data relating to residential and commercial property, the local environment and the demographic make-up by area to estimate risk more precisely.

Accumulation management with ‘Location Matters’ enables an insurer to monitor policy accumulations by location to gain greater insight and understanding of risk exposure, allowing insurers to answer the question ‘should another Great Fire ever happen in London again, what is my probable maximum loss’?
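The accumulation question above comes down to summing exposure by location zone and finding the largest concentration. A deliberately crude sketch with made-up policy records – real PML models would also apply damage ratios and event footprints rather than assume total loss of a zone:

```python
from collections import defaultdict

# Hypothetical policy records: (postcode area, sum insured in £).
policies = [
    ("EC3", 2_500_000), ("EC3", 1_800_000), ("EC4", 900_000),
    ("CA1", 400_000), ("CA1", 350_000), ("EC3", 3_100_000),
]

# Accumulate total sum insured per zone.
accumulation = defaultdict(int)
for zone, sum_insured in policies:
    accumulation[zone] += sum_insured

# Approximate the probable maximum loss as the largest single-zone
# accumulation (i.e. every policy in the worst zone is a total loss).
pml_zone = max(accumulation, key=accumulation.get)
print(pml_zone, accumulation[pml_zone])  # → EC3 7400000
```

Monitoring this figure as policies are written is what lets an underwriter answer the ‘what is my probable maximum loss?’ question before an accumulation becomes uncomfortable.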

To find out more contact our sales team on 01926 421408.

Product Focus – Escape of Water

Escape of Water (Non-Freeze) claims currently account for around 25% of domestic claim costs, so having an accurate measure of escape of water risk is vital for insurers.

The cost of insurance claims resulting from escape of water – such as plumbing equipment failure, burst pipes and leaks – can be significant. Business Insight’s Escape of Water (Non-Freeze) model has been designed to predict the relative risk of escape of water claims across the UK.

Working closely with a number of insurers and data partners, Business Insight has utilised PhD-level mathematical modelling to analyse highly detailed datasets against historic claims patterns and estimate risk by postcode. Over 100 million data records, 26 million properties, 1.7 million postcodes and heavy computing power have resulted in the most detailed project of its kind undertaken into this insured peril in the UK insurance industry.

Comprehensive information relating to property, the typical demographic make-up of the street and other key predictors has been combined to more precisely calculate the risk of an escape of water claim.  The output provides insurers with a deeper insight into the risk of an escape of water claim for enhanced risk selection and better pricing accuracy.
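The idea of combining property and demographic predictors into a single claim-risk estimate can be illustrated with a logistic-style score. This is a generic sketch, not Business Insight’s model; the predictors and weights are entirely hypothetical.

```python
import math

# Hypothetical fitted weights for a handful of illustrative predictors.
weights = {
    "intercept": -3.0,
    "has_old_plumbing": 1.2,
    "num_bathrooms": 0.4,
    "terraced": -0.2,
}

def claim_probability(features):
    """Combine predictors linearly, then squash through a logistic
    function to obtain an escape-of-water claim probability."""
    z = weights["intercept"] + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

base = claim_probability({"has_old_plumbing": 0, "num_bathrooms": 1, "terraced": 0})
risky = claim_probability({"has_old_plumbing": 1, "num_bathrooms": 3, "terraced": 0})
relative_risk = risky / base   # how many times riskier than the base property
```

The ratio `relative_risk` is the kind of output that can be fed into rating as a relative factor, which is how the model output described above would typically be consumed.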

The model has been independently validated by a number of insurers against their experience data and has shown a high degree of predictive discrimination and potential for use as a rating factor.

Benefits include:

  • Better assessment of risk by location.
  • More precise pricing and rating.
  • Gaining insight into postcode areas where you have no experience data.
  • Discovering where you need to modify your rates to reduce exposure in higher risk areas and to optimise your profitability.

To find out more, please contact us on 01926 421408.