Commercial/Specialty Underwriting Automation: Cui Bono?

Matthew Josefowicz

Good article recently in I&T on Commercial Insurers and Underwriting Automation, covering some recent studies by various industry analysts. Here’s a quote:

“Complex risks are still very much hand-underwritten and will be for the foreseeable future,” says Matt Josefowicz, managing director at Novarica. “It’s all about empowering those underwriters with more communications tools and more data. A lot of the tech investment for underwriting in the specialty and large commercial side involves bringing all the information needed to make decisions to the underwriter’s fingertips as quickly as possible.”

Complex-risk underwriters present a challenge when implementing new technologies, Josefowicz explains. “The individual underwriting desks have a lot of political power,” he says. When dealing with high-value cases, these experts have a great deal of specialized knowledge and tend to call the shots on which technologies they want to use.

One of the main questions in automating commercial and specialty lines is Cui bono? – “to whose benefit?” As we discussed in our report on Centralized and Federated IT Models, it’s hard to drive IT strategy centrally when the political power in an organization is federated. Commercial and Specialty CIOs need to work closely with their business leaders to make sure they are addressing their key data and technology issues. If the P&L heads can’t be convinced of the local value of an IT initiative, appeals to a weak central power are rarely successful.

For more on business and technology trends in Specialty Lines, see our recent report.

Straight Through Chicago

Last week’s LIMRA/LOMA Retirement Conference in Chicago provided an interesting overview of what is happening in the industry today. Jim McCool from Charles Schwab noted the importance of carriers moving to establish trust with consumers, and the need to de-clutter and simplify products and business models. He highlighted the example of Apple as a company that has taken a potentially complex space and made it elegantly simple, with a terrific user experience that inspires trust and confidence.

This built nicely on a presentation I had the opportunity to deliver at the conference on Straight Through Processing.

The reality in the United States is that 10,000 Baby Boomers are now reaching retirement every day, something that will persist for the foreseeable future. The opportunity for carriers to prepare for this is now. Further, with low interest rates and continued cost pressure, finding ways to reduce operational expenses while improving customer experience (for both agents and customers) is critical.

Another reality is that customer experiences are increasingly being set by companies like Apple, Facebook, Google and Amazon. They have perfected ways to make complex things simple, easy to use, innovative and “delightful” to customers. With expectations set there, business practices that are dependent on paper and rooted in the 1950s are increasingly arcane and inaccessible to agents and customers alike. The need to drive toward electronic applications and electronic signatures is crucial for carriers across lines of business. It is both a crucial step toward better customer experience now … and a precursor to being able to deliver on meaningful mobile capabilities.

This was an opportunity to highlight findings from a recent Electronic Signatures Executive Brief we published.
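To make the e-signature point concrete, here is a minimal sketch of what capturing a signing event might look like, assuming a simple approach of hashing the signed document and recording signer metadata. The field names and file name are illustrative placeholders, not any particular vendor’s API:

    import hashlib
    import json
    from datetime import datetime, timezone

    def record_esignature(pdf_bytes, signer_name, signer_email):
        """Create a tamper-evident record of a signing event.

        The document hash lets either party later demonstrate that the
        signed application was not altered after signature.
        """
        return {
            "document_sha256": hashlib.sha256(pdf_bytes).hexdigest(),
            "signer_name": signer_name,
            "signer_email": signer_email,
            "signed_at_utc": datetime.now(timezone.utc).isoformat(),
        }

    # Hypothetical usage: hash an annuity application and log the event
    with open("annuity_application.pdf", "rb") as f:
        event = record_esignature(f.read(), "Jane Doe", "jane@example.com")
    print(json.dumps(event, indent=2))

In practice carriers would layer authentication, consent capture, and a full audit trail on top of this skeleton, but even at this level it is clear why an electronic record is easier to validate, store, and process straight through than a wet-ink signature on paper.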

When asked whether there is a potential crisis due to aging in the producer community, the executive panelists at the conference’s main session noted that there is. Allianz, Schwab and Wells Fargo all acknowledged the problem and highlighted approaches they are taking to prepare for a new generation of advisors.

In some places, the agent/advisor community is actually aging faster than the general population at large. This also highlights the importance of creating better and more compelling user experiences for both producers and end clients. Moving to simplify business processes, allowing for the electronic execution of transactions and “going mobile” are all key to this. Carriers will continue to need to compete for advisor “mind share,” which will require experiences that are concurrently compelling to multiple generations of users. All of this, of course, ties back to the Hot Topics we see for insurers in the near future.

The Apple analogy continues to resonate, particularly if carriers want to truly remain relevant in a highly competitive environment.

While there are certainly complexities inherent to the life insurance, annuity and retirement plans segments of financial services, the future is clear: STP is moving from being innovative to becoming a “cost of doing business”. Hope is not a strategy and indecision is not a winning game plan.

New Brief: Wearable Technology and Insurance

Tom Benton

Over the last two years, fitness tracking bands, smartwatches and Google Glass have fueled the next wave of consumer electronics: wearable technology. Financial services firms and insurers are already starting to find innovative ways to use wearables. In my new brief, Wearable Technology and Insurance, I outline three key capabilities and some examples of how these enable innovative applications for insurers and financial services firms.

In some respects, “wearables” are not new – after all, the Dick Tracy comic strip introduced its iconic “wrist radio” just after World War II. What is new is that smartphone adoption and more efficient, smaller batteries are enabling new devices and applications.

I currently have two wearables on my wrist – a fitness tracking band (the Fitbit Flex that I have been wearing since June 2013) and a smartwatch (a Pebble – I was one of the 69,000 or so who backed the project on Kickstarter back in May 2012, but I started wearing it regularly earlier this year). I am seeing more and more people wearing these devices, and with the recent introduction of Android Wear, Google’s extension to the Android operating system for wearable devices, we can expect 2014 to be the “year of the wearable”.

As wearables gain adoption by consumers, innovative insurers will find ways to use them in engaging customers.  Others should consider how wearables will fit into mobile and customer communication strategies.  Wearables are on the way – how can you leverage them for customer interactions?  Read the brief and let me know your ideas.
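As a thought experiment, here is a rough sketch of how an insurer might consume wearable data for a wellness-style program. The endpoint, token, response schema and credit rule below are all hypothetical placeholders; real platforms such as Fitbit publish their own OAuth-based APIs with different shapes:

    import requests

    # Hypothetical wearable-platform endpoint and token; real vendors
    # each expose their own OAuth-based APIs with different schemas.
    API_URL = "https://api.example-wearable.com/v1/users/me/steps"
    TOKEN = "replace-with-oauth-token"

    def weekly_steps():
        """Fetch the last seven days of step counts (illustrative schema)."""
        resp = requests.get(
            API_URL,
            headers={"Authorization": "Bearer " + TOKEN},
            params={"period": "7d"},
            timeout=10,
        )
        resp.raise_for_status()
        return [day["steps"] for day in resp.json()["days"]]

    def wellness_credit(steps, daily_goal=10000):
        """Toy premium-credit rule: 0.5% per goal-meeting day, capped at 3%."""
        days_met = sum(1 for s in steps if s >= daily_goal)
        return min(days_met * 0.5, 3.0)

    print("Credit this week: %.1f%%" % wellness_credit(weekly_steps()))

The point is not the specific rule but the plumbing: once a device feed is available, tying it to a customer interaction or incentive is a small amount of code.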

Security Update: XP and Heartbleed

Tom Benton

Two interesting security items are in the news this week. One is Microsoft’s end of support on April 8 for the XP operating system, introduced nearly 13 years ago and originally slated to lose support in 2007 when Windows Vista was introduced (see Rob McIsaac’s post on this here).

In several discussions I had earlier this year with insurers, few were concerned about the XP support issue, but some admitted they had XP clients running and no immediate plan to upgrade or replace them. However, CIOs may want to consider finding and resolving XP issues, since some XP machines may only be running because a critical software application will not run on later versions of Windows. These need to be identified and their risks mitigated. Also, even though Microsoft has extended malware security support for XP until 2015, XP still presents a security risk due to old software (as far back as IE 8 or so, for example) and will likely be targeted by auditors.
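Finding those machines can start with something as simple as the asset inventory. As a minimal sketch, assuming a CSV export with hypothetical hostname and os_version columns, a script like this flags the XP stragglers:

    import csv

    def find_xp_hosts(inventory_csv):
        """Return hostnames whose OS field indicates Windows XP."""
        xp_hosts = []
        with open(inventory_csv, newline="") as f:
            for row in csv.DictReader(f):
                if "windows xp" in row["os_version"].lower():
                    xp_hosts.append(row["hostname"])
        return xp_hosts

    hosts = find_xp_hosts("asset_inventory.csv")
    print("%d XP machines still on the network:" % len(hosts))
    for h in hosts:
        print(" -", h)

The real work, of course, is the follow-up: determining which of those machines exist only to run an application that won’t move forward, and deciding whether to upgrade, virtualize, or isolate each one.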

The other issue is the Heartbleed vulnerability in OpenSSL. Apparently this vulnerability has been in place for some time, and it threatens passwords and other sensitive information sent via Internet communications. This could include SSL-protected websites maintained by insurers, so CIOs need to work with CISOs and system administrators to ensure sites are updated to fix the issue.
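For reference, Heartbleed (CVE-2014-0160) affects OpenSSL versions 1.0.1 through 1.0.1f; 1.0.1g and the older 0.9.8/1.0.0 branches are not vulnerable. A quick sanity check of the OpenSSL build a given Python environment links against might look like the sketch below, though production servers should each be verified (or simply patched) individually:

    import re
    import ssl

    # Heartbleed affects OpenSSL 1.0.1 through 1.0.1f only. This checks
    # the OpenSSL build Python itself links against, reported in a
    # string such as "OpenSSL 1.0.1e 11 Feb 2013".
    version = ssl.OPENSSL_VERSION
    match = re.search(r"1\.0\.1([a-z]?)\b", version)
    vulnerable = bool(match) and match.group(1) in list("abcdef") + [""]

    print(version)
    print("Potentially vulnerable to Heartbleed:", vulnerable)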

By now, insurance carrier CIOs should have asked whether OpenSSL is in use on any of their company-owned servers and by their service providers. They also should have put a plan in place to update the affected sites, developed a communication plan for employees and any external users of those sites, and clearly communicated the status to internal staff and external stakeholders. However, there are two more steps that should be taken at this time.

First, this is a good opportunity to educate employees about good IT security practices in general, including the need for strong passwords and taking care with providing information online.  However, note that for this particular vulnerability, it is not advised for users to rush into changing passwords on all websites, since this will not be effective until each site applies an update to OpenSSL.  CNET is keeping a current list of the update status for 100 popular sites.  Mashable has a good article on what users should do to protect themselves from Heartbleed.

Next, take time to review IT Security policies and procedures. As mentioned in my update on security earlier this year, vulnerabilities and threats are constantly being identified, so CIOs and CISOs need to consider more frequent audits and outside help with securing networks and servers. Choosing to ignore or minimize the importance of the end of XP support or the Heartbleed issue may lead to unpleasant consequences and difficult conversations to explain them. Also, review your overall IT security plan to make sure it is up to date and that any weaknesses are being addressed.

Ironically, while the end of XP support was a long-known issue with plenty of time to prepare, Heartbleed was unknown but in existence for a long time (reportedly two years), attracting little notice until it was revealed this week. CIOs and CISOs need to be prepared to respond to many different types of threats, making solid IT Security planning a key imperative for insurance IT leaders.

Crowdsourcing Predictive Models: Who Wins?

Martina Conlon

Analysis of data, creation of predictive models, and the ability to take action based on the outcome of those models have always been at the core of the insurance industry. However, there seems to be a surge of interest in the predictive modeling space right now from our research council and clients. Carriers are realizing how effective these scoring models can be across the insurance lifecycle and want in.

From our research (http://www.novarica.com/analytics_big_data_2013/) we know that predictive models can help marketing departments with lead development, cross-selling and campaign targeting. R&D departments can use custom or standard predictive models as part of rate making, and distribution can use predictive models to better target prospective agents and geographies. Predictive risk models can improve underwriting consistency and transparency, automate segments of the underwriting process, and ensure that the right underwriter sees the right submissions, all while driving profitability. Using predictive models for claims triage and expert adjuster assignment can have a big impact on claims severity. Insurers use scoring models to gain insight into which claims are candidates for fraud investigation, subrogation, litigation, and settlement, as well as for more accurate and automated loss reserving.
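As an illustration of the kind of scoring model involved, here is a minimal sketch of a claims-triage classifier. The features and data are synthetic stand-ins; a production model would be trained on historical claims with known fraud or subrogation outcomes:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for claim features (e.g., claim amount,
    # days to report, prior-claim count) and known outcomes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=1000) > 1.5).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    # Route the highest-scoring claims to expert adjusters / SIU review.
    scores = model.predict_proba(X_test)[:, 1]
    review_queue = np.argsort(scores)[::-1][:20]
    print("Test-set claims flagged for review:", review_queue)

The modeling itself is the easy part; assembling clean historical data and getting the business to act on the scores are where most of the effort (and the obstacles discussed below) lie.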

Although opportunities abound with predictive models, obstacles slow down adoption, especially for small and mid-size insurers. Potentially high costs combined with uncertain returns, priority given to other projects, limited internal data volume, the lack of data scientists, and the lack of business sponsorship are among the biggest challenges. Luckily, the vendor community serving the space is active and expanding, and it is here to help insurers overcome these obstacles. A variety of insurance-specific data warehouses, analytics tools, third-party data and predictive models can be purchased. Actuarial and specialized consulting firms offer data scientists with insurance domain experience to provide the expertise that insurers lack in house. These vendors are also communicating their successes to business stakeholders, and those stakeholders are paying attention.

And then today, a colleague asked me, “Have you heard of Kaggle?” Kaggle is a predictive analytics solution provider for the energy industry, but it also hosts a marketplace for data science competitions across all industries, along with data science forums and job boards. Allstate has an open competition with them for development of a model to predict which quote/coverage options a prospect will purchase when presented with several alternatives. Data scientists from across the world are working on this right now, competing for $50,000 in prize money. Allstate conducted a similar competition last year around claims, with a much smaller prize, where it gained substantial benefits and insights from the submitted models, feedback and concepts. And many other Kaggle competitions have no cash prize at all, just recognition within the community or a job offer.

So one may think: here’s an option to make predictive modeling more accessible to smaller and mid-size insurers. But to date, crowdsourcing of predictive models has been most successful at companies that already have strong analytics practices. According to industry press, Allstate’s predictive modeling team felt that the infusion of new ideas and approaches was extremely valuable and enabled them to significantly improve their models. Unfortunately, Kaggle won’t likely be a silver bullet for smaller insurers; it doesn’t offer solutions to many of the obstacles mentioned above. However, it does offer one more way for small companies to gain access to a predictive modeling community and skilled data science resources – which may level the playing field just a little bit.

Goodbye, Old Friend

Rob McIsaac

Well, we are at the end of a mighty 13-year run. Microsoft will be pulling the plug on Windows XP life support early next month. All indications are that this is no April Fool’s joke.

All indications also are that someone would have to be fooling themselves to think that continuing to use it now would be a good idea. I have a solitary machine running the venerable OS. It refuses to run Win-7, and so the end has come. In a few weeks the network interface will be disabled and it will revert to being a glorified, isolated word processor. The sneakernet lives on via a hacker-proof thumb drive.

Of course I’m fortunate. I only have one machine to worry about and no dedicated apps that are tied to antique software stack components. The Windows 8.1 machine I’m now running is great and wasn’t much more expensive than two years’ worth of extended support from Redmond. Most insurance carriers don’t have such an easy set of choices.

The migration from Windows NT to XP was slow and painful, carrying with it some notable challenges and costs. But that journey wasn’t engineered in the shadow of a financial crisis with a long, lingering hangover; it simply faced normal cost headwinds and the technical challenges of non-portable code. The contrast between old and new was also stark, with improvements galore, which generally excited users.

This time around, the improvements are significant but harder to see from the UI. In fact the UI is polarizing, so it alone does not create a big push for adoption. Perhaps worse, given the long and successful run of XP, is the sheer number of applications that run on it natively and won’t transition smoothly or cheaply. Reliance on old browser versions, old software, old databases and other incompatibilities makes it daunting to explain why migration is a good idea. It also makes the transition expensive to execute.

Good luck making all those old Access databases, for example, work in a new environment.

Of course, hand wringing won’t be helpful. Developing and delivering on a migration plan, in concert with key vendors, is really the only possible path forward. A range of solutions is possible, including isolating equipment and virtualizing some “legacy” applications. There won’t be a silver bullet on this, however. This issue, along with a range of other security-related concerns, was highlighted in a recent executive brief (IT Security Issues Update) published by my colleague Tom Benton.

One thing for CIOs and their teams to consider, if they haven’t done it already, is building an educational program around this issue. Making remediation part of a broader effort to improve functionality, reduce risk and reduce support cost over time can also help win critical organizational and executive support. Mixing in some sugar may make it easier to swallow some strong medicine. This one is worth it, since failing to address it now could lead to a much bigger problem in the not-so-distant future.

New Report: Insurer IT Services Providers

Thuy Osman

Rob McIsaac and I recently published a Novarica Market Navigator report on Insurer IT Services Providers. The report gives an overview of some of the major IT services providers to North American insurers and contains a brief profile of each provider, including information about the company’s experience with different types of clients in different functional areas. Providers profiled in the report are: Accenture, Agile Technologies, Capgemini, CastleBay Consulting, CGI, Cognizant, CSC, Dell Services, Deloitte, Edgewater, EY, HCL, HP, HTC, IBM, iGATE Patni, Infosys, L&T Infotech, MajescoMastek, MphasiS, msg global solutions, NIIT Technologies, NTT Data, PwC, Return on Intelligence, Slalom Consulting, Syntel, TCS, ValueMomentum, Vertex, Virtusa, Wipro and Zensar.

With the market becoming more competitive, having a technology partner that can provide the right level of resources to support business initiatives is a crucial tool for CIOs. Novarica’s recent report Insurance IT Outsourcing Update (January 2014), based on a survey of 95 insurer CIOs, found that outsourcing is a part of nearly every insurer CIO’s toolset. 85% of respondents report at least some IT outsourcing. Instead of simply outsourcing for cost reduction, which was the trend in the past, insurers are now outsourcing to meet peaks in demand, get specialized skills and enable new capabilities.

This makes it even more important for CIOs to evaluate service providers not only on the number of resources available, but also on the type of skills and level of experience the provider has in a particular functional area. Careful evaluation will ensure that CIOs find the right partner to support the organization’s strategy for growth going forward.

Please note that this report is focused on North America, and presents only North American (US/Canada) resources and client experience numbers from these vendors, most of which are global. Each profile gives a summary of the provider’s capabilities and experience to help insurers sort through their many potential partner options, and Novarica’s team can help insurers assess potential partners in more detail through our retained advisory service.

Unintended Consequences

Rob McIsaac

An article in this week’s Business Week magazine reminded me of the impact that unintended consequences can have on projects and programs throughout financial services. Regardless of how one views the Affordable Care Act, one of the clear points of debate and discussion has related to the speed and breadth of program adoption. Following the decidedly flawed rollout of the federal exchanges late last year, it seemed that the pace of “uptake” would be considerably slower than program sponsors had anticipated.

A reality now, however, is that adoption is running along at a pretty good clip. One of the drivers helping this along is professional tax preparation companies like Jackson-Hewitt and H&R Block (http://www.businessweek.com/articles/2014-02-20/obamacares-arms-length-allies-h-and-r-block-and-jackson-hewitt).

How can that be?

Well, it turns out that most of the info required to apply for health insurance is included in the tax return preparation process. Once the IRS forms are completed, it only takes about 6 minutes to apply for insurance, which these companies offer as a service to their customers. It isn’t a political judgment on their end, just actions driven by the economics associated with providing good service to their customers. As they are quick to point out, it is also part of the tax code, and their job is to get the best returns possible for their clients.

Didn’t see that one coming!

Which raises the question: how many other good things, arriving as unintended consequences, do we miss in managing technology for insurance carriers? Once, when deploying customer service capabilities for life and annuity clients at a major carrier, we also deployed the portal into call centers to allow CSRs to see the same screens that customers could. Nice idea. The unintended consequence, however, was that we dramatically improved our business continuity capabilities.

How? We achieved this by taking a variety of systems on different platforms and with different user experiences, and browser-enabling them. It was a remarkable, if accidental, addition to our technology capabilities, which created real and measurable long-term value.

Are there other unintended consequences that can appear as technology projects unfold? Absolutely there are. One of the challenges that CIOs and their teams need to keep in mind is how they can foster the appropriate level of situational awareness, and openness of mind, to recognize these opportunities when they emerge.

This isn’t to say that all unintended consequences are “good things”, of course. There are many instances where the result of changes turns into poorly performing systems or capabilities that fall far short of the original objectives framed during the design phase of an initiative.

The key point remains, however. Not all surprises have to be portents of bad news. As new technologies emerge and business processes evolve, there may be new opportunities to see multiplied value for carriers making the appropriate investments in new and innovative ideas. This is potentially an area for IT organizations to add significant, strategic value.

The Target Data Breach and Lessons for Insurer CIOs

Rob McIsaac

While the Target stores “hack” was not the biggest in recent history, it is certainly one of the most visible and offers some important object lessons for insurance company CIOs. As we have now learned, malicious software was installed on company servers in late 2013, providing a gateway through which hackers were able to gather significant personal and confidential data on Target’s customers. The theft of this data has had significant adverse consequences for the company’s earnings, the trust of its customers, and the career plans of a number of high-profile individuals. The full scope of the damage will be hard to quantify, and the period over which recovery takes place will likely be measured in years rather than days or months. This is all pretty significant for a company that appears to have been ahead of the curve in thinking about these issues and had made significant investments in both people and technology to inoculate itself against attacks prior to last year’s holiday shopping season.

What went wrong?

The origins of the issue of course extend well beyond Target’s environment. The debit and credit card infrastructure in the United States is substantially behind what is used in other markets (e.g., Europe). This problem has been well known for a number of years, but participants in the ecosystem (e.g., retailers, card processors, banks) believed that this was someone else’s problem, certainly not their own. Since the issues emerged most obviously during the Great Recession, the natural tendency was to kick the issue down the road and hope, in the meantime, that the security risks could be otherwise managed. The sense seemed to be that it would take some future seminal event to create a cause for action. That time may be upon us with this incident, although that is little consolation to impacted customers. As noted in a recent Business Week article, Target had actually made significant investments to get ahead of security concerns. Unfortunately, it also appears that they made notable errors in implementation, which allowed the events to unfold with shockingly adverse consequences.

Are insurance companies immune to these kinds of events? Of course not … and they will increasingly face these types of challenges as transaction volumes grow and the speed of transaction processing accelerates. Implementing tools that allow for transaction analysis of varying types is increasingly the domain of all players in financial services, banks and insurance companies included. As these systems are deployed, one of the issues CIOs and their teams face is the need to install them so that they operate effectively within their own environments. Failing to tune these systems may produce false positives that overwhelm the staff responsible for final intervention in determining whether or not activity is malicious. This issue applies across all monitoring systems used by carriers, including things such as financial transactions, e-mail traffic and security access. Just having expensive software installed is of little comfort if it has not been effectively tuned and is not being appropriately monitored in order to assess results. Fine-tuning these systems can be as much art as science, but it is critical if a financial institution is going to achieve a project’s operational objectives.
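A toy calculation shows why that tuning matters so much. At retail-scale transaction volumes, even a modest false-positive rate swamps the analysts; the score distribution and thresholds below are invented purely to illustrate the arithmetic:

    import numpy as np

    # Simulate a day of anomaly scores at high transaction volume.
    rng = np.random.default_rng(42)
    daily_txns = 1_000_000
    scores = rng.beta(1, 20, size=daily_txns)  # stand-in score distribution

    # Small shifts in the alerting threshold change the daily alert
    # volume by orders of magnitude.
    for threshold in (0.30, 0.40, 0.50):
        alerts = int((scores > threshold).sum())
        print("threshold=%.2f: %7d alerts/day to triage" % (threshold, alerts))

Set the threshold too low and the monitoring team drowns in hundreds of alerts a day; set it too high and the one alert that matters never fires. There is no substitute for calibrating against the institution’s own traffic.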

One of the interesting aspects of the Target case is that these were apparently lessons that the company had learned. The solution they implemented for transaction monitoring was very sophisticated and the company had spent considerable money both implementing it and in creating supporting infrastructure to analyze the results. According to the BW article, they had created a sizable organization in India to monitor the activity. The article goes on to note that this team did their job, effectively creating the appropriate level of alert and communication back to the corporate home office. The corporate home office was also well equipped to handle these types of incidents having created a command center that was specifically designed to deal with security related incidents.

For some unknown reason, however, the communications between the teams in India and the United States went unnoticed or were ignored. What has become clear in recent weeks is that the failure had nothing to do with technology and everything to do with the processes, and the human beings managing them, that were created to support that technology.

There are a range of possibilities for why this happened; speculation on the causes is of little value at this point. The main message is that any process is only as good as its weakest link. This is hardly news, but it does reiterate the importance of real testing, in real-world circumstances, to understand how components will react and interact. The parallels between incident management, as this case illustrates, and disaster recovery events are reasonably significant. Practice exercises get people and technology working together to understand both the “happy path” toward resolving issues and the areas where real-world circumstances may deviate from a carefully crafted script. The magic for IT organizations can be in understanding those unanticipated events, and in creating the mental shelf space for teams to deal with them by freeing them from needing to focus on relatively simple or mundane tasks. It remains to be seen exactly how the process in this incident broke down.

One element to consider is how tools are used in separate geographic locations, as well as how communication loops can be “closed” to assure confirmation of high-priority concerns. Another is the difference in cultural norms that can exist between teams in different parts of a country (to say nothing of different parts of the world) when they are attempting to share information. The nuance and subtlety embedded in the English language are important for CIOs and their teams to consider, particularly as they move to take advantage of resources in different geographic locations. While this is hardly news to many organizations, the importance of making sure it is well understood throughout the chain of command is directly correlated with the practical importance of the information being shared.

Target is hardly the first company in financial services to learn this lesson, but their logo made for a Business Week cover that will be hard to forget.

For insurance company CIOs, this is also a reminder that there are a range of security threats which need to be dealt with in the near term. With Windows XP now in the final weeks of support, many CIOs face the unenviable task of selecting from a series of less-than-optimal choices for risk mitigation. Doing nothing is not an option! Recently, my colleague Tom Benton published a brief on IT Security issues specifically related to insurance carriers. In light of the targeting of Target, this is something that carries renewed importance for all of us.

Information security is a messy business. It is not something that can be ignored, given the costs in both financial and reputational terms. Even as plans for expanding channels and touch points became clear in our Survey of 2014 Carrier IT Budgets, the security challenges may be expanding faster than overall spending levels. We live in interesting times. Good hunting!

The UK CIO agenda is still solidly focused on digital transformation

Catherine Stagg-Macey

After spending some time out of the analyst sector in insurance, I am now back with a remit covering P&C insurance in the UK and Europe. With some time away, I was intrigued as to what might have changed in my absence. Always the optimist, I was expecting some cool and exciting things to have happened.

Perhaps they have – I’ve only been on the job two weeks, so I will get back to you on the cool and the exciting (I can say that telematics is hotting up in the UK, so watch for more on that). What I do notice from my conversations is that the theme of previous years persists – how to digitally enable the internal and external face of the insurer.

All the major insurers are invested in this, with many focused initially on the claims area. By my count, 50% of the top 20 insurers have made their choice of core systems vendor. Direct Line Group is close to completing its 5-year claims consolidation program. Aviva is early in the process of consolidating all of its business onto a single PAS and claims solution. Zurich Financial Services UK is mid-process in its claims consolidation. Companies like TowerGate and LV= have already tackled the claims legacy estate and are shifting focus to front-end administration. This week saw another insurer, Admiral, announce (http://online.wsj.com/article/PR-CO-20140314-903268.html) that it will implement a new core PAS/claims system.

A cautionary note for suppliers in this market: these programs suck up significant money and organizational capacity. One CIO said to me, “We are absolutely maxed out on our capacity for change – and my staff are exhausted.” This CIO’s company is still facing several more years of transformation.

This theme of digital transformation persists in the UK as it does for much of the global insurance market. The time of core system replacement has arrived and is here to stay for several years. This isn’t the cool and sexy stuff of social media or big data (which may well have to wait for some change capacity to be freed up). This is the mundane but necessary and complex task of using technology to expand the reach and performance of the insurer. The time for legacy is over.