Commercial/Specialty Underwriting Automation: Cui Bono?

Matthew Josefowicz

There was a good article recently in I&T on Commercial Insurers and Underwriting Automation, covering some recent studies by various industry analysts. Here’s a quote:

“Complex risks are still very much hand underwritten and will be for the foreseeable future,” says Matt Josefowicz, managing director at Novarica. “It’s all about empowering those underwriters with more communications tools and more data. A lot of the tech investment for underwriting in the specialty and large commercial side involves bringing all the information needed to make decisions to the underwriter’s fingertips as quickly as possible.”

Complex-risk underwriters present a challenge when implementing new technologies, Josefowicz explains. “The individual underwriting desks have a lot of political power,” he says. When dealing with high-value cases, these experts have a great deal of specialized knowledge and tend to call the shots on which technologies they want to use.

One of the main questions in automating commercial and specialty lines is Cui Bono? – “to whose benefit?” As we discussed in our report on Centralized and Federated IT Models, it’s hard to drive IT strategy centrally when the political power in an organization is federated. Commercial and specialty CIOs need to work closely with their business leaders to make sure they are addressing their key data and technology issues. If the P&L heads can’t be convinced of the local value of an IT initiative, appeals to a weak central power are rarely successful.

For more on business and technology trends in Specialty Lines, see our recent report.

Crowdsourcing Predictive Models: Who Wins?

Martina Conlon

Analysis of data, creation of predictive models, and the ability to take action based on the outcome of those models have always been at the core of the insurance industry. Right now, however, there is a surge of interest in the predictive modeling space from our research council and clients. Carriers are realizing how effective these scoring models can be across the insurance lifecycle and want in.

From our research, we know that predictive models can help marketing departments with lead development, cross-selling, and campaign targeting. R&D departments can use custom or standard predictive models as part of rate making, and distribution can use predictive models to better target prospective agents and geographies. Predictive risk models can improve underwriting consistency and transparency, automate segments of the underwriting process, and ensure that the right underwriter sees the right submissions, all while driving profitability. Using predictive models for claims triage and expert adjuster assignment can have a big impact on claims severity. Insurers also use scoring models to gain insight into which claims are candidates for fraud investigation, subrogation, litigation, and settlement, as well as for more accurate and automated loss reserving.
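
To make the claims-triage scoring idea above concrete, here is a minimal sketch of how a fraud-propensity score might route claims. The features, weights, and threshold are illustrative assumptions invented for this example, not any carrier’s actual model; a real model would be fit to historical claims data by a data science team.

```python
import math

# Illustrative (made-up) coefficients for a claims-triage scoring model.
WEIGHTS = {
    "claim_amount_k": 0.08,      # claim size in $000s
    "days_to_report": 0.05,      # lag between loss and first report
    "prior_claims": 0.40,        # claimant's prior claim count
    "attorney_involved": 0.90,   # 1 if an attorney is already involved
}
INTERCEPT = -3.0

def fraud_score(claim):
    """Return a 0-1 propensity score for referral to investigation."""
    z = INTERCEPT + sum(WEIGHTS[k] * claim.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

def triage(claim, threshold=0.5):
    """Route high-scoring claims to an SIU referral, the rest to fast track."""
    return "refer_to_siu" if fraud_score(claim) >= threshold else "fast_track"

routine = {"claim_amount_k": 2, "days_to_report": 1,
           "prior_claims": 0, "attorney_involved": 0}
suspect = {"claim_amount_k": 40, "days_to_report": 30,
           "prior_claims": 3, "attorney_involved": 1}

print(triage(routine))  # a small, promptly reported claim fast-tracks
print(triage(suspect))  # a large, late-reported claim with red flags is referred
```

The value is less in the arithmetic than in the consistency: every claim gets the same scoring logic, and the threshold becomes a tunable business decision rather than an individual adjuster’s judgment call.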

Although the opportunities abound with predictive models, obstacles slow down adoption, especially for small and mid-size insurers. Potentially high cost combined with uncertain return, priority given to other projects, limited internal data volume, the lack of data scientists, and the lack of business sponsorship are among the biggest challenges. Luckily, the vendor community serving the space is active and expanding, and it is ready to help insurers overcome these obstacles. A variety of insurance-specific data warehouses, analytics tools, third-party data sources, and predictive models can be purchased. Actuarial and specialized consulting firms offer data scientists with insurance domain experience to provide the expertise you lack in house. These vendors are also communicating their successes to business stakeholders, and those stakeholders are paying attention.

And today a colleague asked me, “Have you heard of Kaggle?” Kaggle is a predictive analytics solution provider for the energy industry, but it also hosts a marketplace for data science competitions across all industries, along with data science forums and job boards. Allstate has an open competition with them to develop a model predicting which quote/coverages a prospect will purchase when presented with several options. Data scientists from around the world are working on this right now, competing for $50,000 in prize money. Allstate ran a similar claims-focused competition last year with a much smaller prize, and gained substantial benefits and insights from the submitted models, feedback, and concepts. Many other Kaggle competitions have no cash prize at all, just recognition within the community or a job offer.

So one might think: here’s an option to make predictive modeling more accessible to smaller and mid-size insurers. But to date, crowdsourcing of predictive models has been most successful for companies that already have a strong analytics practice. According to industry press, Allstate’s predictive modeling team felt that the infusion of new ideas and approaches was extremely valuable and enabled them to significantly improve their models. Unfortunately, Kaggle won’t likely be a silver bullet for smaller insurers; it doesn’t offer solutions to many of the obstacles mentioned above. However, it does offer one more way for small companies to gain access to a predictive modeling community and skilled data science resources – which may level the playing field just a little bit.

Goodbye, Old Friend

Rob McIsaac

Well, we are at the end of a mighty 13-year run. Microsoft will be pulling the plug on Windows XP life support early next month. All indications are that this is no April Fool’s joke.

All indications also are that someone would have to be fooling themselves to think that continuing to use it now would be a good idea. I have a solitary machine running the venerable OS. It refuses to run Windows 7, and so the end has come. In a few weeks the network interface will be disabled and it will revert to being a glorified, isolated word processor. The sneaker network lives on via a hacker-proof thumb drive.

Of course, I’m fortunate. I only have one machine to worry about and no dedicated apps that are tied to antique software stack components. The Windows 8.1 machine I’m now running is great and wasn’t much more expensive than two years’ worth of extended support from Redmond. Most insurance carriers don’t have such an easy set of choices.

The migration from Windows NT to XP was slow and painful, carrying with it some notable challenges and costs. But that journey wasn’t engineered in the shadow of a financial crisis with a long, lingering hangover; it simply faced normal cost headwinds and technical challenges on non-portable code. The contrast between old and new was also stark, with improvements galore, which generally excited users.

This time around, the improvements are significant but harder to see from the UI. In fact the UI is polarizing, so it alone does not create a big push to bring it into use. Perhaps worse, given the long and successful run of XP, is the sheer number of applications that run on it natively and won’t transition smoothly or cheaply. Reliance on old browser versions, old software, old databases, and other incompatibilities makes it daunting to explain why migration is a good idea. It also makes the transition expensive to execute.

Good luck making all those old Access databases, for example, work in a new environment.

Of course, hand-wringing won’t be helpful. Developing and delivering on a migration plan, in concert with key vendors, is really the only viable path forward. A range of solutions is possible, including isolating equipment and virtualizing some “legacy” applications. There won’t be a silver bullet here, however. This issue, along with a range of other security-related concerns, was highlighted in a recent executive brief (IT Security Issues Update) published by my colleague Tom Benton.

One thing for CIOs and their teams to consider, if they haven’t done so already, is building an educational program around this issue. Making remediation part of a broader effort to improve functionality, reduce risk, and reduce support costs over time can also help win critical organizational and executive support. Mixing in some sugar may make it easier to swallow strong medicine. This one is worth it, since failing to address it now could lead to a much bigger problem in the not-so-distant future.

New Report: Insurer IT Services Providers

Thuy Osman

Rob McIsaac and I recently published a Novarica Market Navigator report on Insurer IT Services Providers. The report gives an overview of some of the major IT services providers to North American insurers and contains a brief profile of each provider, including information about the company’s experience with different types of clients in different functional areas. Providers profiled in the report are: Accenture, Agile Technologies, Capgemini, CastleBay Consulting, CGI, Cognizant, CSC, Dell Services, Deloitte, Edgewater, EY, HCL, HP, HTC, IBM, iGATE Patni, Infosys, L&T Infotech, MajescoMastek, MphasiS, msg global solutions, NIIT Technologies, NTT Data, PwC, Return on Intelligence, Slalom Consulting, Syntel, TCS, ValueMomentum, Vertex, Virtusa, Wipro and Zensar.

With the market becoming more competitive, having a technology partner that can provide the right level of resources to support business initiatives is a crucial tool for CIOs. Novarica’s recent report Insurance IT Outsourcing Update (January 2014), based on a survey of 95 insurer CIOs, found that outsourcing is a part of nearly every insurer CIO’s toolset. 85% of respondents report at least some IT outsourcing. Instead of simply outsourcing for cost reduction, which was the trend in the past, insurers are now outsourcing to meet peaks in demand, get specialized skills and enable new capabilities.

This makes it even more important for CIOs to evaluate service providers not only on the number of resources available, but the type of skills and level of experience the provider has in a particular functional area. Careful evaluation will ensure that CIOs find the right partner to support the organization’s strategy for growth going forward.

Please note that this report is focused on North America, and presents only North American (US/Canada) resources and client experience numbers from these vendors, most of which are global. Each profile gives a summary of the provider’s capabilities and experience to help insurers sort through their many potential partner options, and Novarica’s team can help insurers assess potential partners in more detail through our retained advisory service.

Unintended Consequences

Rob McIsaac

An article in this week’s Business Week magazine reminded me of the impact that unintended consequences can have on projects and programs throughout financial services. Regardless of how one views the Affordable Care Act, one of the clear points of debate and discussion has related to the speed and breadth of program adoption. Following the decidedly flawed rollout of the Federal exchanges late last year, it seemed that the pace of “uptake” would be considerably slower than program sponsors had anticipated.

A reality now, however, is that adoption is running along at a pretty good clip. One of the drivers helping this along is professional tax preparation companies like Jackson-Hewitt and H&R Block.

How can that be?

Well, it turns out that most of the information required to apply for health insurance is included in the tax return preparation process. Once the IRS forms are completed, it only takes about six minutes to apply for insurance, which these companies offer as a service to their customers. It isn’t a political judgment on their end, just actions driven by the economics of providing good service to their customers. As they are quick to point out, it is also part of the tax code, and their job is to get the best returns possible for their clients.

Didn’t see that one coming!

This raises the question: how many other good things, which may be unintended consequences, do we miss in managing technology for insurance carriers? Once, when deploying customer service capabilities for life and annuity clients at a major carrier, we also deployed the portal into call centers to allow CSRs to see the same screens that customers could. Nice idea. The unintended consequence, however, was that we dramatically improved our business continuity capabilities.

How? We achieved this by taking a variety of systems on different platforms and with different user experiences, and browser enabling them. It was a remarkable, if accidental, addition to our technology capabilities, which created real and measurable long term value.

Are there other unintended consequences that can appear as technology projects unfold? Absolutely there are. One of the challenges that CIOs and their teams need to keep in mind is how they can foster the appropriate level of situational awareness, and openness of mind, to recognize these opportunities when they emerge.

This isn’t to say that all unintended consequences are “good things”, of course. There are many instances where the result of changes turns into poorly performing systems or capabilities that fall far short of the original objectives framed during the design phase of an initiative.

The key point remains, however. Not all surprises have to be portents of bad news. As new technologies emerge and business processes evolve, there may be new opportunities to see multiplied value for carriers making the appropriate investments in new and innovative ideas. This is potentially an area for IT organizations to add significant, strategic, value.

The Target Data Breach and Lessons for Insurer CIOs

Rob McIsaac

While the Target stores “hack” was not the biggest in recent history, it is certainly one of the most visible, and it offers some important object lessons for insurance company CIOs. As we have now learned, malicious software was installed on company servers in late 2013, providing a gateway through which hackers were able to gather significant personal and confidential data on Target’s customers. The theft of this data has had significant adverse consequences for the company’s earnings, the trust of its customers, and the career plans of a number of high-profile individuals. The full scope of the damage will be hard to quantify, and the period over which recovery takes place will likely be measured in years rather than days or months. This is all pretty significant for a company that appears to have been ahead of the curve in thinking about these issues and had made significant investments in both people and technology to inoculate itself against attacks prior to last year’s holiday shopping season.

What went wrong?

The origins of the issue, of course, extend well beyond Target’s environment. The debit and credit card infrastructure in the United States is substantially behind what is used in other markets (e.g., Europe). This problem has been well known for a number of years, but participants in the ecosystem (e.g., retailers, card processors, banks) believed that this was someone else’s problem, certainly not their own. Since the issues emerged most obviously during the Great Recession, the natural tendency was to kick the issue down the road and hope, in the meantime, that the security risks could be otherwise managed. The sense seemed to be that it would take some future, seminal event to create a cause for action. That time may be upon us with this incident, although that is little consolation to impacted customers. As noted in a recent Business Week article, Target had actually made significant investments to get ahead of security concerns. Unfortunately, it also appears that they made notable errors in implementation, which allowed the events to unfold with shockingly adverse consequences.

Are insurance companies immune to these kinds of events? Of course not … and they will increasingly face these types of challenges as transaction volumes grow and the speed of transaction processing accelerates. Implementing tools that allow for transaction analysis of varying types is increasingly the domain of all players in financial services, banks and insurance companies included. As these systems are deployed, one of the issues CIOs and their teams face is the need to install them so that they operate effectively within their own environments. Failing to tune these systems may produce false positives that overwhelm the staff responsible for final intervention in determining whether or not activity is malicious. This issue applies across all monitoring systems used by carriers, including financial transactions, e-mail traffic, and security access. Just having expensive software installed is of little comfort if it has not been effectively tuned and is not being appropriately monitored to assess results. Fine-tuning these systems can be as much art as science, but it is critical if a financial institution is going to achieve a project’s operational objectives.
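
The tuning problem described above can be sketched in a few lines. The scores, labels, and thresholds below are fabricated for illustration; a real exercise would replay a carrier’s own historical alerts against candidate cutoffs before going live:

```python
# Toy threshold tuning for a transaction-monitoring system.
# Each tuple is (anomaly_score, was_actually_malicious) from past alerts.
alerts = [
    (0.95, True), (0.90, True), (0.85, False), (0.80, True),
    (0.60, False), (0.55, False), (0.40, False), (0.30, False),
    (0.20, False), (0.10, False),
]

def alert_stats(threshold):
    """Count alerts fired and false positives at a given score cutoff."""
    fired = [(score, bad) for score, bad in alerts if score >= threshold]
    false_pos = sum(1 for _, bad in fired if not bad)
    return len(fired), false_pos

# An untuned (low) threshold buries analysts in noise; a tuned one
# keeps the true hits while cutting the false-positive load.
for t in (0.25, 0.75):
    fired, fp = alert_stats(t)
    print(f"threshold={t}: {fired} alerts fired, {fp} false positives")
```

In this toy data, both cutoffs catch all three genuine incidents, but the higher threshold cuts the false positives from five to one. That difference is what determines whether the intervention team can actually read every alert or learns to ignore the stream.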

One of the interesting aspects of the Target case is that these were apparently lessons that the company had learned. The solution they implemented for transaction monitoring was very sophisticated and the company had spent considerable money both implementing it and in creating supporting infrastructure to analyze the results. According to the BW article, they had created a sizable organization in India to monitor the activity. The article goes on to note that this team did their job, effectively creating the appropriate level of alert and communication back to the corporate home office. The corporate home office was also well equipped to handle these types of incidents having created a command center that was specifically designed to deal with security related incidents.

For some unknown reason, however, the communications between the teams in India and the United States went unnoticed or were ignored. What has become clear in recent weeks is that the failure had nothing to do with the technology and everything to do with the processes, and the human beings managing them, that were created to support that technology.

There are a range of possibilities for why this happened; speculation on those causes is of little value at this point. The main message is that any process is only as good as its weakest link. This is hardly news, but it does reiterate the importance of real testing, in real-world circumstances, to understand how components will react and interact. The parallels between incident management, as this case illustrates, and disaster recovery events are reasonably significant. Practice exercises get people and technology working together to understand both the “happy path” toward resolving issues and the areas where real-world circumstances may deviate from a carefully crafted script. The magic for IT organizations can be in understanding those unanticipated events, and in creating the mental shelf space for teams to deal with them by freeing them from relatively simple or mundane tasks. It remains to be seen exactly how the process broke down in this incident.

One element to consider is how tools are used in separate geographic locations, as well as how communication loops can be “closed” to confirm receipt of high-priority concerns. Another is the difference in cultural norms that can exist between teams in different parts of a country (to say nothing of different parts of the world) when they attempt to share information. The nuance and subtlety embedded in the English language are important for CIOs and their teams to consider, particularly as they move to take advantage of resources in different geographic locations. While this is hardly news to many organizations, the importance of making sure it is well understood throughout the chain of command is directly correlated with the practical importance of the information being shared.

Target is hardly the first company in financial services to learn this lesson, but their logo made for a Business Week cover that will be hard to forget.

For insurance company CIOs, this is also a reminder that there are a range of security threats that need to be dealt with in the near term. With Windows XP now in its final weeks of support, many CIOs face the unenviable task of selecting from a series of less-than-optimal choices for risk mitigation. Doing nothing is not an option! Recently, my colleague Tom Benton published a brief on IT security issues specifically related to insurance carriers. In light of the targeting of Target, this is something that carries renewed importance for all of us.

Information security is a messy business. It is not something that can be ignored, given the costs in both financial and reputational terms. Even as plans for expanding channels and touch points became clear in our Survey of 2014 Carrier IT Budgets, the security challenges may be expanding faster than overall spending levels. We live in interesting times. Good hunting!

More Health Insurance Website Woes… and Lessons Learned

Tom Benton

By now we are all painfully aware of the problems with the launch of the Federal health insurance website. NBC News recently quoted a government spokeswoman saying that the website successfully completes transactions only nine times out of ten.

The Federal website is not the only exchange with serious problems. The Washington Post is reporting that over the weekend (on February 23), the state of Maryland fired the vendor building its online health insurance marketplace. Reportedly, Maryland chose the vendor in early 2012 in part due to its track record developing software that processes Medicare and Medicaid claims, reasoning that since the vendor planned to use off-the-shelf components, the project would be more likely to meet the tight deadlines – it had to be operational on October 1, 2013 for the initial enrollment period ending March 31, 2014.

However, problems started almost immediately. A Washington Post investigation found that state officials failed to take action when an outside auditor warned that early deadlines were being missed and that Maryland had not left enough time to adequately “complete, verify and test” the system. On top of that, the contracted vendor, based in North Dakota, hired another vendor, based in Maryland, for all the technical services related to operating the website, for nearly the same amount as originally contracted to develop the site. The second vendor proved not to have enough resources and hired multiple subcontractors. The two vendors are now suing each other.

Some of the issues revealed by this unfortunate situation are ones commonly seen in insurance core system projects.  Among them:

  • Unrealistic expectations – state officials set an aggressive timeline and did not allow enough time for testing once early deadlines were missed, according to auditors.
  • Poor communication – Maryland claims that the vendor misled them about the maturity of the available software, and the vendor hired a subcontractor that did not clearly understand the resource requirements, leading the subcontractor to hire multiple sub-subcontractors.
  • Poor governance – the project lacked overall accountability and did not have a process in place to reassess the project to correct issues raised during the outside audit.
  • Lack of attention on testing – though not specifically mentioned in the article, the auditors questioned the time allowed for testing.  As with most major projects, testing was likely trimmed back and not adequate for validating the major functions of the system.

The result of the poor execution of the project is that Maryland is now forced to find a vendor to rescue the system, while at the same time processing applications through more manual work and increased call center support. State officials recently revealed that Medicaid participants cannot be properly identified by the system, costing the state over $30M. As any CIO who has lived through a failed implementation will tell you, you will always spend more money fixing issues than you would have spent taking the time and resources to get them right in the first place.

CIO Wish List

Matthew Josefowicz

If you subscribe to Best’s Review, check out this month’s Technology article on pages 70-72, which features interviews with CIOs Kate Miller of Unum, Greg Tranter of Hanover, Michael Fergang of Grange,  Rick Roy of CUNA Mutual, and myself, as well as stats from our US Insurer IT Budgets and Projects 2014 report.

The article notes, “Data, analytics, mobile, and self-servicing capabilities are among the items on insurance chief information officers’ wish lists.”

Here are a couple of my quotes from the article:

Underinvested business resources needed to develop and implement new systems continue to plague carriers, [Josefowicz] said. “Even if you have all the IT dollars you want, you can’t deliver an effective system unless you have a time investment from users and those who will benefit from it.”

The question shouldn’t be how much are carriers spending on IT, “but rather what effect is it having and how is it driving down the overall expenses and the expansion of the business,” [Josefowicz] added. “It’s a challenge for many business leaders to think that way because they’re used to viewing IT as purely an expense. But it’s really an enabler, just as expert staff is an enabler.”

Check out the full article in the December issue of Best’s Review.

New Article: Grandma Wants Online Billpay

Rob Rubin

In my most recent weekly post, I discuss which checking account features are most important to those who request “senior” accounts. Low fees and convenient branch access are the primary drivers of which accounts seniors select, but their interest in online billpay is evident. While most would expect that seniors don’t use digital channels for banking, our research shows that they do use online banking features, though they are less likely to use their mobile phones to perform banking activities.

See the full article here.

CIO To Do List: Have Better Meetings

Tom Benton

A common complaint I heard as a CIO from my staff was that we had “too many meetings.” In addition to a weekly staff meeting, we had project reviews, meetings with vendors, lunch meetings to review issues, performance review meetings, HR benefits meetings, and, yes, even meetings to discuss agendas for upcoming important meetings.  It felt at times like we were in the business of producing meetings instead of solutions for our business and services for our customers.

Our weekly staff meeting had become a time of going around the circle with my direct reports and some others from their teams reviewing every project and issue, but we rarely would start on time or complete the meeting within an hour. It was the least favorite part of everyone’s week. After reading some articles on effective meetings, I began having “stand up” meetings (no chairs), shortened the time to a half hour, started on time with no exceptions allowed and limited topics to only those that required collaboration within IT. In short, we got focused on meeting an objective in each staff meeting: making sure that everyone was getting whatever help they needed on their projects and other issues.

Some of the most effective CEOs hold meetings differently in order to get results. Jeff Bezos keeps an empty chair in the room to remind attendees that the customer should always be the focus. Richard Branson bans PowerPoint presentations, and sometimes holds his meetings poolside or at some other non-boardroom location. Steve Jobs was famous for sending away the person least able to contribute to the meeting. (These and other top leadership meeting tips are available online.)

Another article I came across lists 12 helpful tips for making your meetings more effective. The tips include not only the standard “have an agenda” and “start on time” suggestions, but also the ideas of “stand up” meetings, starting at an odd time, and bringing a token of some kind to pass around to the next speaker. Try something new with your meetings, but keep a focus on getting results from them. Your staff will thank you.