What Insurers Should Really Feel Bad About

Martina Conlon

We work with lots of insurer IT departments that seem to feel bad about quite a few things. They are embarrassed that they have four batch policy systems in production, that they don’t use fraud analytics, or that they order replacement parts for a few of their servers on eBay. The most common one I hear lately is that they don’t have a big data program, an understanding of big data, or a business community that would know what to do with it.

Well, stop feeling bad – this is far more common in our industry than you think. There are about 600 P&C insurers, and many of them share these same or similar issues – even some of the top 20 players.

What you should feel bad about is if you have not outlined the technology vision for your company to advance beyond these limitations and address business and IT needs in the longer term. Whether you have a transformation project in progress, on deck or already complete, it is critical to outline the to-be state of your technology and how you plan to get there. Work with the business to define a technology strategy that is driven by business goals and IT guiding principles. Document it, even on one page, and communicate it throughout the organization. A shared understanding of the to-be technology state will help govern technology projects over time, and will help develop better IT/business alignment.

And if you have already defined your technology vision, even at the highest level, feel good.

Research Council: Big Data’s Big Challenges

Martina Conlon

At last week’s Research Council meeting, I led a spirited discussion on big data and analytics in insurance. As in every big data discussion, we spent a healthy amount of time on the definition itself; as a group of technology professionals, we obsessed over it, with a number of participants offering variations. Volume, velocity, and variety were recurring terms in most of them.

We advised the group to think more broadly, to consider big data as the opportunity to leverage existing and new data in considerable volumes without the technological barriers that existed just a few years ago. We reviewed Novarica research on the adoption of big data and predictive analytics, and had an active discussion on what the attendees were implementing or planning in the space. Several carriers are focusing on telematics, working with partners in the planning or pilot phase. Quite a few are using web activity and click streams for customer service and marketing support, while others are assembling holistic customer views or pairing prospects to products by using matching models and historical underwriting data. One insurer shared their experience in piloting big data technology for management and analysis of system log files. Most smaller and midsize carrier attendees are still focusing on core system replacements and have yet to invest in big data.

Regardless of the state of big data within participants’ organizations, most agreed that building the business case will continue to be a big challenge until the market leaders effectively communicate the benefits and make the initiatives table stakes rather than an optional R&D effort.

HBR on Big Data and Data-Driven Leadership v. HIPPOs

Matthew Josefowicz

If you’re thinking about how Big Data and Analytics will impact your organization, the single biggest issue is whether your company’s leadership will embrace data-driven decision-making to drive change.

As we put it in our recent Big Data and Analytics Report:

Follow business need. While technologists and data specialists can get excited about the potential value of integrating new data sources and analytics capabilities, analytics initiatives that are not requested by the business will fail because business will not take advantage of them. This was the ignominious end of many data warehousing projects in the late 90’s and early 2000’s, and the scars are still fresh for many companies. The first step for insurance data and analytics specialists (beyond R&D initiatives like pilot programs or proof of concept experiments) must be to encourage business executives to think about the potential value of the outcome, and gauge the willingness of senior leaders to drive change.

Today I came across a great blog post at Harvard Business Review by Andrew McAfee and Erik Brynjolfsson of MIT on the changes to management culture required to thrive in a Big Data world.

It contrasts data-driven leadership with traditional models of following the HIPPO (Highest Paid Person’s Opinion). I highly recommend this post, especially the discussion that starts on the bottom of page 3 and continues through the end. http://hbr.org/2012/10/big-data-the-management-revolution/ar/1

If you’re unfamiliar with Brynjolfsson and McAfee’s work, I also recommend their recent book, Race Against the Machine.

If I had five million dollars, I’d buy you much better data and analytics capabilities…

Matthew Josefowicz

Ever wonder what insurer CIOs would do with an extra $5M in next year’s IT budget? So did we. So we asked 120 of them. Here’s what they said:

  • Despite the intense focus on data and analytics in insurance IT spending and project prioritization, more than 30% of insurer CIOs would spend more in this area if they had the budget. Clearly, this will be an area of continued high investment across the industry for the next few years.
  • There is a huge backlog of demand for core systems projects at insurers under $1 billion in annual premium, and a general belief that these projects can be accomplished for investments in the single-digit millions. If business conditions improve and budgets can be made available, we expect continued high volumes of activity in this area.
  • 1 in 5 insurers would invest in improved mobile capabilities, to improve both customer and agent experience.
  • Many CIOs have ideas on how to use technology to move the business forward that may not be captured in the traditional governance and prioritization process. Business leaders should make sure they are asking the question and using the answers to deepen their understanding of their companies’ challenges.
  • Money is not the only scarce resource CIOs need – in many cases, business time and attention is even more scarce than cash. As one respondent put it: “No matter how much more money we had, the organization could not support the delivery of any more services. The capacity for the business support of IT projects is all but tapped out at this point.”

Check out our new report, which contains over 100 verbatim responses from insurer CIOs to this question.

The Economist on Big Data in Insurance

Matthew Josefowicz

The Economist has a new article on the transformational role of third-party data in insurance, Insurance data: Very personal finance. It’s a good article and worth a look just to see how this issue is being presented in the general business press.

For more on this, see our new reports and blog posts on this topic.

New Report and Webinar on Data Services for Insurers

Martina Conlon

Like many other industries, insurance is undergoing a paradigm shift, from operating in an age of information scarcity to one of information super-abundance. Old data is moving faster, new data is proliferating, and internal data is more accessible. Insurers no longer need to devote significant resources to gathering data about prospective risks and claims. Instead, more data than ever is now available on demand, and at lower cost, from commercial sources. In addition to claims, credit, consumer, and cost information, we can now collect information on buying behaviors, geospatial and location information, social media and internet usage, and more. Our electronic trails have been digitized, formatted, standardized, analyzed, and modeled, and are up for sale. As intimidating as this may sound to the individual, it is a great opportunity for insurers to use this data and insight to make better-informed business decisions.

In our new report, Novarica Market Navigator: Data Services for US Insurers 2012 (Q1), we profile 15 vendors that offer a broad range of data to US insurers. More and more frequently, the CIOs we work with are being challenged to incorporate this external data into core systems and BI environments. This report is designed to give them an overview of the types of data, cost models, and technical details of major service providers.
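
As a concrete illustration of what that integration work looks like, here is a minimal sketch of enriching an internal prospect record from a commercial data service. The endpoint, field names, and authentication scheme are all hypothetical; every real data provider has its own API and licensing terms.

```python
import requests

# Hypothetical data service endpoint and credentials -- real providers differ.
DATA_SERVICE_URL = "https://dataservice.example.com/v1/prospects"
API_KEY = "YOUR_API_KEY"

def enrich_prospect(prospect):
    """Fill gaps in an internal prospect record from a commercial data source."""
    response = requests.get(
        DATA_SERVICE_URL,
        params={"name": prospect["name"], "zip": prospect["zip"]},
        headers={"Authorization": "Bearer " + API_KEY},
        timeout=10,
    )
    response.raise_for_status()
    external = response.json()
    # Prefer internally captured values; use the external source for gaps only.
    for field in ("credit_tier", "prior_claims", "year_built"):
        prospect.setdefault(field, external.get(field))
    return prospect

print(enrich_prospect({"name": "Acme Hardware LLC", "zip": "02109"}))
```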

I’ll be presenting and discussing our research in this area on a webinar on Monday, March 26 at 1 pm ET. Pre-register here.

On Data Standards – Benefits and Challenges

Martina Conlon

The standard ACORD XML transaction format provides big benefits to the industry. It’s a major piece of the solution for enabling automated new business processes, providing significant efficiencies to carriers in personal lines and small-to-mid commercial markets. Adoption has been strong among portal and AMS transport vendors, which has helped drive carrier adoption. Unfortunately, core system vendors are hesitant to adopt the standards, which stunts general adoption. Plug-and-play integration, SOA, and external interfaces could be accelerated and enabled if most vendors chose to support these standards, or even a mapping to them.

But ACORD XML also presents some challenges to the industry. Any insurance data standard is going to need to be extended as the value and accessibility of new data elements are uncovered. Carriers face the choice of extending the standard themselves or going through the lengthier formal extension process with the standards body. Accessible and affordable education and support could help carriers make well-designed extensions quickly. If a carrier chooses to add “vehicle color” as an underwriting attribute, it would be great if they could contact a support resource to be advised on the best placement.
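
To make the extension pattern concrete, here is a minimal sketch of the namespace approach. The element names and URIs below are illustrative only, not taken from the actual ACORD schema; the point is that a carrier-specific attribute like vehicle color lives in the carrier’s own namespace, so parsers that only understand the base standard can safely ignore it.

```python
import xml.etree.ElementTree as ET

# Illustrative namespaces -- the real ACORD schema defines its own URIs and structure.
ACORD_NS = "http://www.ACORD.org/standards/PC_Surety/ACORD1/xml/"
CARRIER_NS = "http://example-carrier.com/acord-extensions"  # hypothetical
ET.register_namespace("", ACORD_NS)
ET.register_namespace("ext", CARRIER_NS)

vehicle = ET.Element(f"{{{ACORD_NS}}}PersVeh")
ET.SubElement(vehicle, f"{{{ACORD_NS}}}Manufacturer").text = "Honda"
ET.SubElement(vehicle, f"{{{ACORD_NS}}}Model").text = "Accord"
# The carrier-specific attribute lives in the extension namespace, so tools
# that only understand the base standard can ignore it rather than reject it.
ET.SubElement(vehicle, f"{{{CARRIER_NS}}}VehColor").text = "Blue"

print(ET.tostring(vehicle, encoding="unicode"))
```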

Some standards (or starter data models) lack depth in certain lines of business or subject areas. Knowing that a standard is comprehensive inspires confidence among carriers and vendors, but too many standards organizations and model vendors release these assets prematurely. Upon review, carriers identify the holes, label the assets well-intentioned but incomplete, and don’t bother to revisit the standard or models at a later time.

So, with these challenges, what would help drive the use of data standards and industry models? Finish them before you release them, and then provide accessible, affordable support.

Perhaps more dependence on applications deployed in the cloud or delivered as SaaS will also help drive adoption. For example, Informatica just introduced an integration service that translates ACORD XML to salesforce.com. As integration needs increase, especially around very generic applications, data standards have a better chance of being embraced.
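
The mechanics of such a translation are, at heart, field mapping. The sketch below is not a description of Informatica’s actual service; the ACORD-style paths and Salesforce-style field names are assumptions. But it shows why a stable standard makes this kind of bridge practical to build and maintain: the mapping table only has to be written once.

```python
import xml.etree.ElementTree as ET

# Assumed mapping from ACORD-style XML paths to Salesforce-style field names.
FIELD_MAP = {
    "InsuredOrPrincipal/GeneralPartyInfo/NameInfo/CommlName/CommercialName": "Name",
    "InsuredOrPrincipal/GeneralPartyInfo/Communications/PhoneInfo/PhoneNumber": "Phone",
}

def acord_to_salesforce(acord_xml):
    """Flatten an ACORD-style XML fragment into a flat, CRM-ready record."""
    root = ET.fromstring(acord_xml)
    record = {}
    for path, sf_field in FIELD_MAP.items():
        node = root.find(path)
        if node is not None and node.text:
            record[sf_field] = node.text
    return record

sample = """<ACORD><InsuredOrPrincipal><GeneralPartyInfo>
  <NameInfo><CommlName><CommercialName>Acme Hardware LLC</CommercialName></CommlName></NameInfo>
  <Communications><PhoneInfo><PhoneNumber>617-555-0100</PhoneNumber></PhoneInfo></Communications>
</GeneralPartyInfo></InsuredOrPrincipal></ACORD>"""

print(acord_to_salesforce(sample))  # {'Name': 'Acme Hardware LLC', 'Phone': '617-555-0100'}
```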

First steps towards 2-question underwriting…

Matthew Josefowicz

Fascinating article in the WSJ last week about an analytical program that lets life insurers build underwriting models based on public and commercially available consumer information rather than questions they actually ask insureds. If you haven’t read it, I highly recommend it.

This is a great early example of how paradigm shifts in information technology have the potential to completely revolutionize pieces of the insurance business.

As I’ve been saying in presentations and webinars for a while, insurer underwriters in the near future will rely less on information provided by the prospect or agent and more on information already extant from third-party data sources, with the result that they’ll be able to shift resources from the gather phase to the analyze phase of the information life-cycle.

I now believe more than ever that we’re not far from the days of 2-question underwriting: What is your Social Security number, and can I look up everything else about you?

Business Intelligence Challenges: Déjà vu all over again

Martina Conlon

Back in the consulting heyday of the ’90s, I led many data warehousing efforts where one of the biggest challenges we found was source system data quality. At one healthcare insurer we uncovered many data issues, like the use of a birth date of 01-01-11 for any member who qualified for Medicare coverage. They were willing to give up the important data point of member age in order to track additional insurance coverage — the affected volume was insignificant, but it was one example of the common data quality issues uncovered at the time.

Fast-forward twenty years. I am with Novarica and have just completed a survey and report on the state and challenges of BI today, and I find that source system data quality is still a top challenge. Why is this still such a problem? Much of the data in today’s insurance processes comes from external data sources that provide clean and consistently formatted data through web services. The trend is towards more and more data being sourced from third parties rather than from the people who participate in the transactions (brokers, carrier staff, customers). Many carriers now have robust agent portals in place that enforce data integrity checks, formatting standards, and other validations. Also, a lot of insurance data has already been exposed outside the carrier organization in agency downloads and print documents, where it is regularly reviewed and corrected if needed. Standards like the ACORD XML transaction format have provided us with standard definitions, vocabulary, and formats for exchange. All of these things should have improved data quality.

So where’s the problem? It is likely the same old suspects we had twenty years ago. Some of it is probably bad data entry at some point in the process — garbage in, garbage out. However, I believe that many of the issues are due to “workarounds” used to overcome limitations of legacy source systems. Carriers still bastardize their data by reusing a data element for some unintended purpose, like the healthcare insurer using a dummy birth date to indicate Medicare coverage. This leads us to yet another reason to migrate to modern, extensible core systems, where you can easily add Medicare coverage data and retain the birth date.
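
These overloaded fields are easy to catch with a simple profiling pass before the data reaches the warehouse. A minimal sketch, using the dummy birth date from the example above (rendered here as 1911-01-01):

```python
from collections import Counter

def find_suspect_sentinels(records, field, threshold=0.02):
    """Flag values that repeat far more often than organic data plausibly would,
    a telltale sign that the field is being reused as a hidden flag."""
    counts = Counter(r[field] for r in records if r.get(field) is not None)
    total = sum(counts.values())
    return [value for value, n in counts.items() if n / total >= threshold]

# Forty members share the Medicare dummy date; twenty have organic birth dates.
members = [{"birth_date": "1911-01-01"}] * 40 + \
          [{"birth_date": "19%d-06-%02d" % (60 + i, i + 1)} for i in range(20)]
print(find_suspect_sentinels(members, "birth_date"))  # ['1911-01-01']
```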

BI has come a long way in the past 20 years – the technology offers powerful capabilities in reporting, analytics, and predictive modeling. When it’s coupled with accurate, easy-to-access source data, it can help carriers develop and adjust business strategy quickly. A modern core system can inherently improve business processes and shorten product time to market, but the accessible and extensible data it provides can also deliver business insights and competitive advantage. If you’d like to read more about the state of BI and carriers’ challenges and plans today, check out our new report.

The clock is ticking on cost-basis reporting…

Thanks to the 2008 Emergency Economic Stabilization Act, at the start of 2011 many financial firms will be required to report the cost basis of publicly traded securities to both taxpayers and the IRS. To comply, firms such as broker-dealers and fund custodians will need to update their systems. This new law was created by a serendipitous combination: the IRS was concerned that the cost-basis numbers investors put on their Schedule D’s were, perhaps, a little too high, while the federal government’s thirst for revenue grew ever more insatiable. The IRS estimated that under-reporting was costing some $7 billion annually. Voila – tax cost basis reporting was born.

Many firms have been trying to figure out how to build their own systems, or planning on letting their custodians handle it. Wrong and wronger. Others were waiting for detailed IRS regulations. Wrongest. The reality of tax lot accounting, corporate actions, and wash sales has overwhelmed most in-house IT departments. In-house solutions are estimated to be running in the $3 million to $5 million range. Plus, the law clearly states that the broker-dealers themselves will be ultimately responsible, not the custodians.
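
To see why firms underestimated the work, consider just the FIFO lot-matching slice of the problem, before corporate actions and wash-sale adjustments even enter the picture. A minimal sketch, not a compliant implementation:

```python
from collections import deque

def fifo_cost_basis(buys, sell_qty):
    """Match a sale against open tax lots first-in-first-out and return the
    cost basis of the shares sold. buys: list of (quantity, price) lots."""
    lots = deque(buys)
    basis = 0.0
    remaining = sell_qty
    while remaining > 0:
        if not lots:
            raise ValueError("selling more shares than are held")
        qty, price = lots.popleft()
        used = min(qty, remaining)
        basis += used * price
        remaining -= used
        if used < qty:  # partial lot consumed: put the remainder back in front
            lots.appendleft((qty - used, price))
    return basis

# 100 @ $10, then 100 @ $12; selling 150 consumes the first lot and half the second.
print(fifo_cost_basis([(100, 10.0), (100, 12.0)], 150))  # 1600.0
```

Now layer on stock splits, mergers, wash-sale deferrals, and multiple accounting methods per account, and the $3 million to $5 million in-house estimates start to make sense.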

Firms looking at purchased solutions (which include SunGard’s new Cost Basis Reporting Engine, Scivantage’s Maxit, and WKFS’ GainsKeeper) need to get busy selecting a vendor and testing it out. Ditto for those firms that are truly well along in building an in-house solution. But firms that are late to the party and behind the curve should get out the checkbook and get ready to write a large check to either a vendor or to the IRS (up to $350,000 for unintentional errors, more for failure to comply).

Because on January 2, 2011, broker-dealers had better be tracking individual tax lot cost basis details. In 2012 it will be mutual fund companies, with everyone else on the system by 2013. Happy New Year, y’all.
