Matthew Josefowicz on reports that Aon is in talks to sell its employee benefits outsourcing group.
We have written previously about the ever-increasing importance of data in insurance. A related area of interest to insurers is the growth of predictive analytics. Modern predictive analytics solutions can provide deep insight into a wide range of business areas, such as underwriting risk, product profitability, and financial projections. However, the maturity and adoption of predictive analytics solutions vary widely among insurers. As more carriers prioritize data strategy, usage of this potentially disruptive technology will grow rapidly.

Data is a major component of Novarica's "Hot Topics" for insurers, which include social, mobile, analytics, big data, cloud, digital, and Internet of Things/drones. Data is being used to speed up underwriting (e.g., external third-party data such as prescription information and driving telematics), improve actuarial models (e.g., data collected from drones and the National Weather Service), and streamline claims processing (e.g., data generated from commercial vehicles and health devices). Over 25% of insurers ran big data programs last year to gain insights from large volumes of data with high variety (structured and unstructured) and velocity.

This article from the New York Times discusses the growing concern of regulators, mostly in Europe and the UK, that access to large amounts of data may ultimately decrease competition by freezing out smaller firms that cannot access as much data as large firms like Amazon, Google, and Facebook. The article mentions the case of IBM, which is combining internal data with customer data to train its Watson AI software for a wide variety of tasks in fields ranging from medicine to finance. Some insurance carriers are working with IBM's Watson to develop underwriting, claims, and actuarial modeling capabilities. Data will continue to grow in importance even as it grows in volume.
It is inevitable that regulators will start looking more at data and access to it as we move forward into the 2020s.
An interesting article came out over the weekend that delves into the consolidation that has taken place among publicly traded life insurance companies, and contrasts this trend with the relatively stable number of mutual carriers that are in the market today. We are now the better part of two decades past the period when there was a significant demutualization effort which included notable, name-brand, national carriers. In that period, we have weathered multiple recessions, one of them the worst economic downturn since the 1930s, and emerged into a world that has experienced persistent low interest rates. Taken as a whole, these factors have produced a series of economic outcomes which were outside of the planning corridors that many carriers executed against. As the article suggests, carriers face some very interesting challenges going forward. For those with long tail liabilities such as life and annuity contracts, the conflicts associated with quarterly earnings reports and maximizing shareholder value appear to be particularly daunting.
There is more to this story, however, which may suggest some additional advantages for mutual carriers. Almost without exception, life carriers are grappling with aging technology platforms which may date as far back as the Kennedy administration. The blocks of business on these platforms are themselves old, and may be closed to new business. But because they were at the heart of these businesses over multiple decades they have become, through the magic of cost accounting, blocks of business which absorb significant overhead for carriers. For many companies, these platforms represent a significant drag in terms of being able to implement new products and services effectively. At the same time, however, these platforms, if they are walled off, can become quite stable and relatively inexpensive to operate. This can meaningfully influence both operational and financial outcomes for carriers.
We recently unearthed a 1995 chronicle from MIT which provides a fascinating view of the first 35 years of policy administration system use in North America. The fact that many of the systems deemed to be aging in that 22-year-old report are still in use by carriers should give some cause for concern!
In any case, as carriers plot their technology strategy for the future, addressing these old systems and the blocks of business running on them will become increasingly critical. The investments and planning horizons required to make these efforts successful may be easier for mutually owned companies to commit to than for their publicly traded competitors, given their respective focus on long- versus short-term results.
Even as market competitive threats loom large, it is not just a technology challenge that many life insurance carriers face. There is an accounting and a reporting issue which carriers would be well advised to consider as they put their strategic plans in place.
Just before the end of the last calendar year, the New York State Department of Financial Services announced changes to its new cybersecurity regulations, pushing back the effective date from January 2017 to March 2017. In December, we held a working group on the then-imminent regulations, which were due to become effective on January 1, 2017, with no penalties for noncompliance until July 1, 2017. One attendee, who had participated in a number of recent AIA calls and an in-person meeting on the law, said that New York State was considering an additional six-month delay (beyond the six months after the law takes effect) before mandating deployment of multi-factor authentication, which was a huge issue for most carriers. Under that draft, encryption in transit and at rest would not be required for five years, although compensating controls would be expected in the interim.

The conversation covered the cost to comply, how to decide what to deploy versus what can be skipped, and whether cloud increases or decreases risk. There was a discussion of the "accumulation risk" posed by cloud: a hack of a cloud provider could automatically trigger a security event for everyone in that cloud. There was also a large discussion of the responsibilities of carrier partners, whether MGAs and agents on the distribution side or outsourcers and other service providers on the service side. There was a clear consensus that the carrier is responsible for security if it manufactures the products that provide coverage, even if someone else has the right to underwrite and bind the policy. We had a good conversation about what will need to be reported to the CEO and board (a high-level dashboard supported by details). There were areas of concern around reporting, which would need to include both successful and unsuccessful security events. Things like attempted phishing attacks through email, even if blocked at the firewall, would need to be reported under the regulations.
There was also a discussion of European security laws and how they overlap with or differ from the New York State regulations. The revised regulations responded to these types of concerns and ease some specific timelines and requirements, especially around encrypting data and multi-factor authentication. They also provide more time for compliance, expanding the transition window from six months to as long as two years for most items. The effective date will now be March 1, 2017. Although the easing of the regulations will take some pressure off, the need to do a NIST assessment and the requirement to put in place proper technical solutions, processes, procedures, metrics, and reporting all remain.
Cybersecurity is back in the news this week, with Yahoo's announcement that more than 1 billion user accounts, many of them containing sensitive information, were compromised in a 2013 cyber attack. Recently, Novarica held a working group on the new cybersecurity regulations that go into force on January 1, 2017 in New York State. The regulation was drafted from the NAIC Cybersecurity Task Force's Insurance Data Security Model Law but in many cases goes further than that draft law did. The new standards will apply to insurers offering licensed products in New York State. While some proposed requirements are general best practices most insurers have already established, others will require carriers to implement significant changes. Although financial and insurance institutions have until June 2017 to comply, carriers are already considering the upcoming shifts in resources and strategies. The regulations will mandate:
- Annual submission of a written statement to the Department certifying compliance, with all supporting data, records and schedules maintained for five years.
- Regular cybersecurity awareness training for all personnel, updated to reflect the annual risk assessment.
- Appointing a Chief Information Security Officer.
- Documentation of “areas, systems, or processes that require material improvement, updating or redesign” along with planned and in-progress efforts toward remediation.
- Employment of cybersecurity personnel who must attend regular update and training sessions.
- Establishing cybersecurity policies to address areas like access controls and identity management, business continuity and disaster recovery, capacity and performance planning, customer data privacy, data governance and classification, incident response, information security, physical security and environmental controls, risk assessment, systems and application development and quality assurance, systems and network monitoring and security, and vendor and third-party service provider management.
- Review of these policies by the board of directors or a similar governing body, and approval by a senior officer.
- Establishing and maintaining cybersecurity programs to:
  - identify internal and external risks
  - implement defensive infrastructure, policies, and procedures
  - detect incidents
  - respond to detected or identified incidents to mitigate their impact
  - recover from incidents and restore normal operations
  - fulfill regulatory reporting requirements
Most of the carriers present at the working group focused on the compliance expectations for vendors and third-party service providers. If partners do not comply with the regulations, the carriers manufacturing the products will be liable. It is unclear today whether carriers can recover penalties from the MGAs, agents, and partners if a security breach was due to that agent's or partner's lack of compliance with the law.
Another area of focus was encryption. In the current draft of the legislation, carriers will have up to five years to implement encryption of nonpublic information both in transit and at rest. Many participants saw this as an onerous task, as PII data is already difficult to manage. Although the clause allows for "compensating controls" to stand in place of encryption leading up to the five-year mark, carriers are already apprehensive about the burden of such a large undertaking. In a similar context, multi-factor authentication will be required as well, but a one-year extension is being considered.
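The regulation does not prescribe any particular multi-factor mechanism, but time-based one-time passwords (TOTP, RFC 6238) are one common second factor, and the core algorithm fits in a few lines of standard-library Python. The sketch below is illustrative only; the secret shown is the RFC's published test key, not anything mandated by the NYS rule.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: 20-byte ASCII secret at time 59 yields "94287082"
print(totp(b"12345678901234567890", digits=8, now=59))  # → 94287082
```

A server verifies by computing the same code from its stored copy of the secret and comparing; in practice one also accepts the adjacent time windows to tolerate clock drift.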
Some attending carriers with operations in Europe and the UK raised concerns about how the cybersecurity legislation will affect international relationships. However, while there are some differences between the NYS regulation and the GDPR (General Data Protection Regulation), we don't expect these differences to drastically impact carriers' ongoing technology activities.
Many carriers discussed the security and reliability of cloud. While some saw cloud as an additional risk, others saw it as a faster, more seamless way to fortify cybersecurity. There was a general concern that because cloud providers' data centers house multiple "tenants," there is a risk of data being exposed. There was a discussion of the "accumulation risk" posed by cloud: a hack of a cloud provider could automatically trigger a security event for everyone in that cloud. However, other attendees suggested that because it is easier to add a security tool to a cloud solution, the risk of data exposure is mitigated.
Happy Holidays & Happy New Year!!!!
One of the more rapidly developing issues facing IT organizations is how and when to best leverage cloud-based services. Many carriers have moved important (but non-production) workloads into cloud-based options, including application development and testing. This has given carriers the opportunity to try the services out on important work, even while grappling with some of the more daunting issues related to establishing guidelines for using cloud-based resources; the most common issue CIOs and their teams need to address is security. Of course, as many carriers explore top-tier cloud providers (e.g., Amazon, Microsoft, Google) more deeply, they find that the security these providers offer can exceed that of a carrier-managed environment. Armed with this insight, more carriers are moving mission-critical workloads into cloud-based offerings for an ever-increasing list of functions, including email platforms, HR solutions, financial systems, claims processing, and underwriting.

At our recent special interest group meeting for Group/Voluntary Benefits carriers, we noted the degree to which cloud-based solutions were not only gaining traction but looking increasingly attractive to carriers in this space. That all provides a good backdrop for this week's story highlighting Liberty Mutual Benefits' successful implementation of a core systems suite on Amazon's AWS offering. We fully expect this type of deployment to gain greater traction in 2017 for a variety of reasons. We're a long way past wondering whether cloud security issues can be "solved." In the future, it is unlikely that running a corporate data center effectively will be an IT skill set that creates competitive differentiation for insurance carriers. Yet another example of the future already being here…but not evenly distributed.
There's a lot of exploration and discussion of cyber liability insurance in the media. It's not often that the industry has an entirely new line of business to sell, especially one as in demand as this, so it makes sense that insurers are aggressively pursuing the opportunity. On the other hand, most insurers realize that their knowledge of how to rate and underwrite cyber risk is still immature, and are therefore proceeding with caution. (It's interesting to see so many technology security officers, long relegated to a subset of the IT organization, suddenly thrust into the thick of product development and underwriting, though that's a different topic.) Most people agree that there are potentially catastrophic risks for companies with poor cyber security, and that it's only a matter of time before another Target-like hack costs a company hundreds of millions of dollars (or more). Without a proper way of assessing that risk, many cyber policies on the market today carry strict limits, meaning a policyholder will be protected from a million-dollar loss from a hacking event but not from a larger one.
My concern is that the cyber liability market may face events that behave more like a true catastrophe, such as a hurricane or flood, than like a costly event at a single company. Take, for example, the recent IoT-powered denial of service attacks that crippled much of the internet. This was a relatively benign event in the grand scheme of things, but it exposed the fact that potential cyber backdoors are much more widespread than we realized. Major hacking events often piggyback on "zero day" exploits: vulnerabilities that are widespread across systems but as yet unknown to users and vendors. The issue with such vulnerabilities is that they expose not just one company with a poor security process, but many companies, even those that have taken best-practice security measures, all of whom rely on a trusted vendor or technology.
In the wake of a major zero day exploit in any common platform technology, it’s possible that thousands of companies would be compromised at the same time. Hence the comparison to a hurricane or flood: the big risk isn’t one company with a major cyber liability, but rather all of those companies to whom insurers have sold limited cyber policies filing claims at once. Suddenly a block of limited-coverage policies becomes a massive exposure for the insurer.
Should cyber liability insurers be attempting to limit certain exposures, for example balancing the number of clients who rely on Windows servers vs Linux servers, much the same way a property insurer will limit geographic exposure? Insurers are already developing better methods to judge the risk of an individual cyber liability policyholder, but it’s also important for insurers to look at their entire cyber book of business and assess the potential for cat-like behavior across all of them.
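One way to make that book-level view concrete is to bucket policy limits by shared technology dependency, much as a property insurer sums exposure by coastal zone. The Python sketch below is a minimal illustration; the policy records, technology names, and limits are all hypothetical.

```python
from collections import defaultdict

# Hypothetical policy records: each insured's limit plus the platform
# technologies it depends on. Names and figures are illustrative only.
policies = [
    {"insured": "Acme Retail", "limit": 1_000_000, "stack": {"windows_server", "iis"}},
    {"insured": "Beta Health", "limit": 2_000_000, "stack": {"linux", "openssl"}},
    {"insured": "Gamma Bank",  "limit": 1_500_000, "stack": {"windows_server", "openssl"}},
]

def exposure_by_technology(book):
    """Sum policy limits across every insured that shares a technology.

    A zero-day in any one bucket could trigger claims from the whole
    bucket at once: the cat-like accumulation scenario."""
    totals = defaultdict(int)
    for policy in book:
        for tech in policy["stack"]:
            totals[tech] += policy["limit"]
    return dict(totals)

print(exposure_by_technology(policies))
```

Even this toy book shows the effect: a single shared dependency can concentrate far more limit than any one policy, which is exactly the aggregation a cat-style review is meant to surface.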
This past Friday (10/21), many of you may have noticed problems connecting to Twitter or streaming Spotify, and probably know by now that the cause was the latest internet attack. It was a DDoS (distributed denial of service) attack in which thousands of computing devices infected with the Mirai botnet code targeted Dyn's internet servers. Dyn is a Domain Name System (DNS) management services provider used by Spotify, GitHub, and other popular internet sites.
The internet experiences an increasing number of DDoS attacks; some estimates put the figure at over 124,000 per week against enterprise networks. What was different about this attack was that the Mirai botnet compromises common Internet of Things (IoT) devices, such as internet-enabled DVRs (digital video recorders), security webcams, and poorly secured internet routers. The basic plan of attack is to log in using the standard administrator accounts and passwords provided by device manufacturers and rarely changed by consumers. (In most cases, an easy fix is to reboot the device and change the administrator password.)
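Mirai's scanning amounts to trying a short dictionary of factory-default username/password pairs against each device it finds. The same idea can be turned around defensively: audit your own device inventory for credentials still on such a list. The Python sketch below is illustrative only; the credential list and device records are hypothetical.

```python
# A small sample of factory-default credential pairs of the kind Mirai
# scans for. A real audit would use a much larger published list.
FACTORY_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("root", "12345"),
}

def flag_default_credentials(devices):
    """Return the names of devices still using a known factory default."""
    return [d["name"] for d in devices
            if (d["username"], d["password"]) in FACTORY_DEFAULTS]

# Hypothetical inventory: one device never reconfigured, one hardened.
devices = [
    {"name": "lobby-camera", "username": "admin", "password": "admin"},
    {"name": "office-dvr",   "username": "admin", "password": "Q7!kz9#mTe"},
]

print(flag_default_credentials(devices))  # → ['lobby-camera']
```

Any device the audit flags is a candidate for the easy fix noted above: reboot it and change the administrator password.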
Security blogger Bruce Schneier recently wrote that "someone is learning how to take down the Internet." He and other security experts warn that this could be just the beginning of more frequent and sophisticated attacks that are larger and more damaging to internet site accessibility.
In Novarica’s report on IoT, Wearables and Customer Service, we mentioned five challenges for consumer adoption – security was an important one: “Adoption depends on building user trust and avoiding potentially hazardous hacking of devices, especially for automobile operation, home security, and drone operation. Security thus far has been the responsibility of device manufacturers, who may neglect it in order to keep prices competitive.”
As carriers begin to consider how to provide IoT devices and wearables to consumers and use the resulting data, security should remain a priority so that both consumer data and the carrier's brand are protected. Carriers should press device manufacturers to improve IoT security, and CIOs should consult with their CISOs or security consultants about how to be better prepared for future DDoS attacks.