While the Target stores “hack” was not the biggest breach in recent history, it is certainly one of the most visible and offers some important object lessons for insurance company CIOs. As we have now learned, malicious software was installed on company servers in late 2013, providing a gateway through which hackers were able to gather significant personal and confidential data on Target’s customers. The theft of this data has had significant adverse consequences for the company’s earnings, the trust of its customers, and the career plans of a number of high-profile individuals. The full scope of the damage will be hard to quantify, and the period over which recovery takes place will likely be measured in years rather than days or months. This is all pretty significant for a company that appears to have been ahead of the curve in thinking about these issues and had made significant investments in both people and technology to inoculate itself against attacks prior to last year’s holiday shopping season.
What went wrong?
The origins of the issue, of course, extend well beyond Target’s environment. The debit and credit card infrastructure in the United States is substantially behind what is used in other markets (e.g., Europe). This problem has been well known for a number of years, but participants in the ecosystem (e.g., retailers, card processors, banks) believed that it was someone else’s problem, certainly not their own. Since the issues emerged most obviously during the Great Recession, the natural tendency was to kick the issue down the road and hope, in the meantime, that the security risks could be otherwise managed. The sense seemed to be that some future, seminal event would be needed to create a cause for action. That time may be upon us with this incident, although that is little consolation to impacted customers. As noted in a recent Business Week article, Target had actually made significant investments to get ahead of security concerns. Unfortunately, it also appears that the company made notable errors in implementation, which allowed the events to unfold with shockingly adverse consequences.
Are insurance companies immune to these kinds of events? Of course not … and they will increasingly face these types of challenges as transaction volumes grow and the speed of transaction processing accelerates. Implementing tools that allow for transaction analysis of varying types is increasingly the domain of all players in financial services, banks and insurance companies included. As these systems are deployed, one of the issues CIOs and their teams face is the need to install them so that they operate effectively within their own environments. Failing to tune these systems may produce false positives that overwhelm the staff responsible for determining whether or not activity is malicious. This issue applies across all monitoring systems used by carriers, including financial transactions, e-mail traffic and security access. Just having expensive software installed is of little comfort if it has not been effectively tuned and is not being appropriately monitored. Fine-tuning these systems can be as much art as science, but it is critical if a financial institution is going to achieve a project’s operational objectives.
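To make the tuning point concrete, the sketch below (in Python) shows the kind of score-and-threshold logic that sits at the heart of many monitoring tools. The risk signals, weights and threshold here are hypothetical, not any vendor’s actual rules; the point is that the threshold must be calibrated against a carrier’s own transaction baseline, or analysts drown in false positives.

```python
# Minimal sketch of alert-threshold tuning. Field names, weights, and the
# threshold value are hypothetical and would need calibration against each
# carrier's own transaction baseline.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    origin_country: str
    card_present: bool

def risk_score(txn: Transaction, baseline_avg_amount: float) -> float:
    """Combine simple risk signals into a single score between 0 and ~1."""
    score = 0.0
    if txn.amount > 3 * baseline_avg_amount:  # unusually large transaction
        score += 0.4
    if txn.origin_country != "US":            # out-of-footprint origin
        score += 0.3
    if not txn.card_present:                  # card-not-present risk
        score += 0.2
    return score

# An untuned threshold (say, 0.2) floods the review queue with false
# positives; raising it after studying real traffic keeps alerts reviewable.
ALERT_THRESHOLD = 0.6

def should_alert(txn: Transaction, baseline_avg_amount: float) -> bool:
    return risk_score(txn, baseline_avg_amount) >= ALERT_THRESHOLD

# Example: a large, foreign, card-not-present transaction scores 0.9.
txn = Transaction(amount=5200.0, origin_country="RO", card_present=False)
print(should_alert(txn, baseline_avg_amount=120.0))  # True
```

The “art” the paragraph above describes is exactly the choice of those weights and that threshold: set them from vendor defaults and the system technically works, but only tuning against real traffic makes it operationally useful.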
One of the interesting aspects of the Target case is that these were apparently lessons the company had learned. The transaction monitoring solution it implemented was very sophisticated, and the company had spent considerable money both implementing it and creating the supporting infrastructure to analyze the results. According to the BW article, Target had created a sizable organization in India to monitor the activity. The article goes on to note that this team did its job, raising the appropriate alerts and communicating them back to the corporate home office. The home office was also well equipped to handle these types of incidents, having created a command center specifically designed to deal with security-related incidents.
For some unknown reason, however, the communications between the teams in India and the United States went unnoticed or were ignored. What has become clear in recent weeks is that the failure had nothing to do with the technology and everything to do with the process, and the human beings managing the process, that were created to support that technology.
There are a range of possibilities for why this happened; speculation on those causes is of little value at this point. The main message is that any process is only as good as its weakest link. This is hardly news, but it does reiterate the importance of real testing, in real-world circumstances, to understand how components will react and interact. The parallels between incident management, as this case illustrates, and disaster recovery events are significant. Practice exercises get people and technology working together to understand both the “happy path” toward resolving issues and the areas where real-world circumstances may deviate from a carefully crafted script. The magic for IT organizations can be in understanding those unanticipated events, and in creating the mental shelf space for teams to deal with them by freeing teams from needing to focus on relatively simple or mundane tasks. It remains to be seen exactly how the process in this incident broke down.
One element to consider is how tools are used in separate geographic locations, as well as how communication loops can be “closed” to assure confirmation of high-priority concerns; a sketch of what that might look like follows below. Another is the difference in cultural norms that can exist between teams in different parts of a country (to say nothing of different parts of the world) when they are attempting to share information. The nuance and subtlety embedded in the English language are important for CIOs and their teams to consider, particularly as they move to take advantage of resources in different geographic locations. While this is hardly news to many organizations, the importance of making sure it is well understood throughout the chain of command is directly correlated with the practical importance of the information being shared.
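As an illustration of “closing the loop,” here is a minimal Python sketch of an alert handoff in which silence is treated as failure: a high-priority alert is not considered delivered until someone explicitly acknowledges it, and otherwise it escalates up the chain. The team names, timeout and escalation chain are hypothetical, not a description of Target’s actual process.

```python
# Sketch of a closed-loop alert handoff: silence is treated as failure and
# the alert moves up the escalation chain until someone takes ownership.
# All names and timeouts below are hypothetical.

import time
from datetime import datetime

ESCALATION_CHAIN = ["offshore-soc", "home-office-soc", "incident-commander"]
ACK_TIMEOUT_SECONDS = 15 * 60  # hypothetical 15-minute acknowledgment window

def notify(recipient: str, alert_id: str) -> None:
    # In practice this would page someone or open a ticket; here we just log.
    print(f"{datetime.now():%H:%M:%S} alert {alert_id} sent to {recipient}")

def acknowledged(recipient: str, alert_id: str, timeout: int) -> bool:
    # Placeholder: poll the ticketing or paging system for an explicit ack
    # within `timeout` seconds. Returning False simulates the failure mode
    # in which a message is sent but never confirmed.
    time.sleep(0)  # no real waiting in this sketch
    return False

def escalate(alert_id: str) -> None:
    for recipient in ESCALATION_CHAIN:
        notify(recipient, alert_id)
        if acknowledged(recipient, alert_id, ACK_TIMEOUT_SECONDS):
            return  # loop closed: someone has taken ownership
    print(f"alert {alert_id} exhausted the chain; trigger out-of-band review")

escalate("TXN-ANOMALY-001")
```

The design point is simply that delivery and acknowledgment are separate events; a process that records only the former can look healthy right up until the moment it matters.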
Target is hardly the first company in financial services to learn this lesson, but its logo made for a Business Week cover that will be hard to forget.
For insurance company CIOs, this is also a reminder that there are a range of security threats that need to be dealt with in the near term. With Windows XP now in its final weeks of support, many CIOs face the unenviable task of selecting from a series of less-than-optimal choices for risk mitigation. Doing nothing is not an option! Recently, my colleague Tom Benton published a brief on IT security issues specifically related to insurance carriers. In light of the targeting of Target, this is something that carries renewed importance for all of us.
Information security is a messy business. It is not something that can be ignored, given the costs in both financial and reputational terms. Even as plans for expanding channels and touch points become clear in our Survey of 2014 Carrier IT Budgets, the security challenges may be expanding faster than overall spending levels. We live in interesting times. Good hunting!